Sample records for maximum regional compression

  1. High precision Hugoniot measurements of D2 near maximum compression

    Benage, John; Knudson, Marcus; Desjarlais, Michael


    The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards, these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these differences show up in the deuterium Hugoniot at ~30-40 GPa, near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot, taking advantage of advancements in the platform and standards to obtain data with significantly higher precision than in previous studies. These new data may make it possible to distinguish among the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparisons to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
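
The shock states discussed in this record are governed by the Rankine-Hugoniot jump conditions. As a minimal illustration (the input values below are hypothetical round numbers, not the Sandia measurements), pressure and compression follow directly from the measured shock and particle velocities:

```python
# Rankine-Hugoniot jump conditions across a steady shock (textbook relations,
# not the analysis of the paper above; input values are illustrative only).
def hugoniot_state(rho0, us, up, p0=0.0):
    """Pressure and compression behind a shock of velocity `us` driving
    particle velocity `up` into material of initial density `rho0`."""
    p = p0 + rho0 * us * up        # momentum conservation: P - P0 = rho0*Us*up
    eta = us / (us - up)           # mass conservation: rho/rho0 = Us/(Us - up)
    return p, eta

# hypothetical strong shock in liquid deuterium (rho0 ~ 170 kg/m^3)
p, eta = hugoniot_state(rho0=170.0, us=20e3, up=15e3)   # ~51 GPa, ~4-fold
```

The ~4-fold compression of this invented example is of the same order as the near-maximum compression discussed in the abstract.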

  2. The maximum force in a column under constant speed compression

    Kuzkin, Vitaly A


    Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters in which the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotonic...
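
The static reference load in this record is the Euler critical force. A minimal sketch (hypothetical column properties; pinned-pinned effective-length factor K = 1):

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Static Euler buckling load P_cr = pi^2 * E * I / (K*L)^2."""
    return math.pi**2 * E * I / (K * L)**2

# hypothetical steel rod: E = 200 GPa, circular section of radius 1 cm, length 1 m
I = math.pi * 0.01**4 / 4.0                      # second moment of area
P_cr = euler_critical_load(E=200e9, I=I, L=1.0)  # ~15.5 kN
```

The paper's question is then at what compression speed the measured peak force stops exceeding this static value.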

  3. Regions of constrained maximum likelihood parameter identifiability

    Lee, C.-H.; Herget, C. J.


    This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. The system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computational procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.

  4. Unsupervised regions of interest extraction for color image compression

    Xiaoguang Shao; Kun Gao; Lili Lü; Guoqiang Ni


    A novel unsupervised approach for regions of interest (ROI) extraction is proposed that combines a modified visual attention model with a clustering analysis method. A non-uniform color image compression algorithm then compresses the ROI and the other regions with different compression ratios through the JPEG image compression algorithm. The reconstruction algorithm of the compressed image is similar to that of the JPEG algorithm. Experimental results show that the proposed method has better performance in terms of compression ratio and fidelity compared with other traditional approaches.
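
The core idea, compressing the ROI more gently than the background, can be sketched with a toy two-step quantizer standing in for the full JPEG pipeline (the step sizes and ROI mask below are arbitrary assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)

roi = np.zeros(img.shape, dtype=bool)
roi[16:48, 16:48] = True                  # hypothetical extracted ROI

def quantize(x, step):
    """Coarser steps stand in for a higher JPEG compression ratio."""
    return np.round(x / step) * step

out = np.where(roi, quantize(img, 4.0), quantize(img, 32.0))

err_roi = np.abs(out - img)[roi].mean()   # fine quantization: low distortion
err_bg = np.abs(out - img)[~roi].mean()   # coarse quantization: high distortion
```

The ROI retains much lower distortion while the background absorbs most of the rate savings, which is the trade-off the abstract describes.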

  5. Design of reinforced concrete walls cast in place for the maximum normal stress of compression

    T. C. Braguim

    It is important to evaluate which design models are safe and appropriate for the structural analysis of buildings constructed with the concrete-wall system. In this work, a simple numerical model that represents the walls with frame elements is compared, in terms of the maximum normal compressive stress, with a much more robust and refined model that represents the walls with shell elements. The design for the normal compressive stress is carried out for both cases, based on NBR 16055, to determine whether the wall thickness initially adopted is sufficient.

  6. Region-Based Image-Fusion Framework for Compressive Imaging

    Yang Chen


    A novel region-based image-fusion framework for compressive imaging (CI) and its implementation scheme are proposed. Unlike previous works on conventional image fusion, we consider both the compression capability on the sensor side and intelligent understanding of the image contents in the fusion. First, compressed sensing theory and normalized-cut theory are introduced. Then the region-based image-fusion framework for compressive imaging is proposed and its corresponding fusion scheme is constructed. Experimental results demonstrate that the proposed scheme delivers superior performance over traditional compressive image-fusion schemes in terms of both objective metrics and visual quality.

  7. Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity

    Ortiz, A; Puso, M A; Sukumar, N


    Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we first present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch-test satisfaction to machine precision. Second, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed-strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small-strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
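
For readers unfamiliar with maximum-entropy approximants, a minimal 1D sketch follows: Gaussian prior weights and a Newton solve for the single Lagrange multiplier (the node set and locality parameter are arbitrary choices, not the paper's settings):

```python
import math

def maxent_basis(nodes, x, beta=2.0, iters=50):
    """1D local maximum-entropy shape functions at point x (sketch).
    phi_i ~ w_i * exp(lam*(x_i - x)), with the multiplier lam found by Newton
    iteration so the first-moment constraint sum_i phi_i*(x_i - x) = 0 holds."""
    w = [math.exp(-beta * (xi - x) ** 2) for xi in nodes]
    lam = 0.0
    for _ in range(iters):
        z = [wi * math.exp(lam * (xi - x)) for wi, xi in zip(w, nodes)]
        s = sum(z)
        phi = [zi / s for zi in z]
        f = sum(p * (xi - x) for p, xi in zip(phi, nodes))
        fp = sum(p * (xi - x) ** 2 for p, xi in zip(phi, nodes)) - f * f
        lam -= f / fp                 # Newton step on the dual problem
    return phi

phi = maxent_basis([0.0, 1.0, 2.0, 3.0], x=1.3)
```

Partition of unity holds by the normalization, and the converged constraint gives exact linear reproduction, the two properties a Galerkin meshfree basis needs.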

  8. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. (Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543, United States); Vay, Jean-Luc (Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, United States); Friedman, Alex (Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550, United States)


    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt; that is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced by the overlapping of different parts of the pulse near the focal plane. Examples of errors that vary slowly and rapidly compared with the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed, while the central part of the compressed pulse is determined by the thermal spread. The scaling law for the maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse then compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both the thermal spread and the velocity errors. The effects of the...
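
The headline scaling, that the compression ratio is set by the relative tilt error, can be checked with a toy drift calculation (all parameters hypothetical; cold beam, no space charge, uniform random tilt error):

```python
import numpy as np

rng = np.random.default_rng(1)
n, L, T = 20000, 1.0, 10.0           # slices, pulse length, nominal drift time

z0 = np.linspace(0.0, L, n)
v_ideal = (L / 2.0 - z0) / T         # linear tilt: every slice reaches L/2 at T
eps = 0.01                           # 1% error on the head-to-tail tilt
dv = L / T                           # head-to-tail velocity difference
v = v_ideal + eps * dv * (rng.random(n) - 0.5)

# minimum bunch length over drift times near the nominal focus
lengths = [np.ptp(z0 + v * t) for t in np.linspace(0.9 * T, 1.1 * T, 201)]
compression = L / min(lengths)       # ~1/eps, i.e. on the order of 100
```

With a 1% relative error the achievable compression saturates near 100, consistent with the statement in the abstract.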

  9. Plasma compression in magnetic reconnection regions in the solar corona

    Provornikova, Elena; Lukin, Vyacheslav S


    It has been proposed that particles bouncing between magnetized flows converging in a reconnection region can be accelerated by the first-order Fermi mechanism. Analytical considerations of this mechanism have shown that the spectral index of the accelerated particles is related to the total plasma compression within the reconnection region, similarly to the case of the diffusive shock acceleration mechanism. As a first step toward investigating the efficiency of Fermi acceleration in reconnection regions in producing hard energy spectra of particles in the solar corona, we explore the degree of plasma compression that can be achieved at reconnection sites. In particular, we aim to determine the conditions for strong compressions to form. Using a two-dimensional resistive MHD numerical model we consider a set of magnetic field configurations where magnetic reconnection can occur, including a Harris current sheet, a force-free current sheet, and two merging flux ropes. Plasma parameters are taken to be characteristic of t...
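
The link between compression and spectral index mentioned here is, in the test-particle picture, the standard first-order Fermi result; a one-liner (not the paper's MHD calculation):

```python
def energy_spectral_index(r):
    """Power-law index gamma in N(E) ~ E^-gamma for first-order Fermi
    acceleration at total compression ratio r (test-particle result)."""
    return (r + 2.0) / (r - 1.0)

# a strong-shock-like compression of 4 gives the canonical gamma = 2
gamma4 = energy_spectral_index(4.0)
```

Larger compression ratios give smaller gamma, i.e. harder spectra, which is why the paper looks for conditions under which strong compressions form.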

  10. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    Shen, Hua


    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
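
The role of a slope limiter can be illustrated with the classic minmod construction: limited piecewise-linear reconstructions keep interface values inside the local bounds of the cell averages (a generic sketch, not the CE/SE-specific condition derived in the paper):

```python
def minmod(a, b):
    """Zero at extrema, otherwise the smaller-magnitude one-sided slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_interfaces(u):
    """(left, right) interface values of a minmod-limited linear
    reconstruction in each interior cell."""
    out = []
    for i in range(1, len(u) - 1):
        s = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
        out.append((u[i] - 0.5 * s, u[i] + 0.5 * s))
    return out

u = [0.0, 1.0, 5.0, 2.0, 8.0, 3.0, 3.0]
faces = limited_interfaces(u)
```

Keeping reconstructed values inside the local data range is the discrete analogue of the maximum principle the scheme is built to satisfy.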

  11. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex


    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
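
The geometric-mean scaling stated here can be written out explicitly; the abstract does not give the overall prefactor, so it is left as an order-one constant:

```python
import math

def max_compression_ratio(eps_tilt, eps_thermal, c0=1.0):
    """Scaling C_max ~ c0 / sqrt(eps_tilt * eps_thermal) for large voltage
    errors (eps_tilt: relative velocity-modulation error; eps_thermal:
    relative intrinsic energy spread; c0: unknown order-one prefactor)."""
    return c0 / math.sqrt(eps_tilt * eps_thermal)

# e.g. 1% tilt error and 0.01% intrinsic spread -> C_max ~ 1000 (times c0)
C = max_compression_ratio(1e-2, 1e-4)
```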

  13. Compressed sensing with side information on the feasible region

    Rostami, Mohammad


    This book discusses compressive sensing in the presence of side information. Compressive sensing is an emerging technique for efficiently acquiring and reconstructing a signal. Interesting instances of Compressive Sensing (CS) can occur when, apart from sparsity, side information is available about the source signals. The side information can be about the source structure, distribution, etc. Such cases can be viewed as extensions of the classical CS. In these cases we are interested in incorporating the side information to either improve the quality of the source reconstruction or decrease the number of samples required for accurate reconstruction. In this book we assume availability of side information about the feasible region. The main applications investigated are image deblurring for optical imaging, 3D surface reconstruction, and reconstructing spatiotemporally correlated sources. The author shows that the side information can be used to improve the quality of the reconstruction compared to the classic...

  14. Regional growth curves for Norway for annual daily maximum floods; Regional flomfrekvensanalyse for norske vassdrag

    Saelthun, Nils Roar (ed.); Tveito, Ole Einar; Boensnes, Truls Erik; Roald, Lars A.


    This report establishes new regional growth curves for Norway for annual daily maximum floods, based on the generalized extreme value (GEV) distribution. The parameters of the regional distributions were estimated by the probability-weighted moment method. The regions were established by hierarchical cluster analysis, and the homogeneity of each region was examined using Wiltshire's R-test. New regional formulae were established linking the mean annual flood to basin characteristics. The results have been compared to the previous set of regional growth curves and regional formulae predicting the mean annual flood. The relation between peak flood and daily values has been examined, and a formula for predicting peak flood quantiles for a given daily flood quantile has been developed. 22 refs., 24 figs., 8 tabs.
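
A regional growth curve of this kind is just the GEV quantile function with regionally pooled parameters, scaled by the at-site index (mean annual) flood. A sketch with invented parameters, not the Norwegian estimates:

```python
import math

def gev_quantile(T, xi, alpha, k):
    """GEV quantile for return period T years (Hosking's convention, k != 0):
    x(F) = xi + (alpha/k)*(1 - (-ln F)^k) with F = 1 - 1/T."""
    F = 1.0 - 1.0 / T
    return xi + (alpha / k) * (1.0 - (-math.log(F)) ** k)

# growth factors (quantile / index flood) for a hypothetical region,
# parameters chosen so the distribution mean is close to 1
growth = {T: gev_quantile(T, xi=0.84, alpha=0.30, k=-0.10)
          for T in (10, 50, 100, 500)}
```

Multiplying these dimensionless factors by a site's mean annual flood gives the design flood for each return period.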

  15. Fast Intra and Inter Prediction Mode Decision of H.264/AVC for Medical Image Compression Based on Region of Interest

    Mehdi Jafari


    This paper aims at applying H.264 to medical video compression and improving H.264 compression performance with better perceptual quality and low coding complexity. In order to achieve higher compression of medical video while maintaining high image quality in the region of interest, with low coding complexity, we propose a new model using H.264/AVC that applies lossless compression in the region of interest and very high-rate, lossy compression in the other regions. This paper proposes a new method to achieve fast intra and inter prediction mode decision, based on coarse macroblocks for the intra and inter prediction mode decision of the background region and finer macroblocks for the region of interest. Also, the macroblocks of the background region are encoded with the maximum quantization parameter allowed by H.264/AVC in order to maximize the number of null coefficients. Experimental results show that the proposed algorithm achieves a higher compression rate on medical videos with higher quality in the region of interest and low coding complexity when compared to our previous algorithm and other standard algorithms reported in the literature.
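
Per macroblock, the quantization assignment described above reduces to a simple map (a schematic only: H.264's QP range is 0-51, and truly lossless coding of the ROI would use the standard's lossless mode rather than QP 0; the frame size and ROI are invented):

```python
import numpy as np

QP_MAX = 51                          # coarsest quantization allowed by H.264/AVC

mb_rows, mb_cols = 9, 11             # e.g. a 176x144 frame in 16x16 macroblocks
roi = np.zeros((mb_rows, mb_cols), dtype=bool)
roi[3:6, 4:8] = True                 # hypothetical diagnostically relevant region

# near-lossless quantization inside the ROI, maximum QP elsewhere
qp_map = np.where(roi, 0, QP_MAX)
```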

  16. Regional Frequency Analysis of Annual Maximum Rainfall in Monsoon Region of Pakistan using L-moments

    Amina Shahzadi; Ahmad Saeed Akhter; Betul Saf


    The estimation of the magnitude and frequency of extreme rainfall is of immense importance for making decisions about hydraulic structures such as spillways, dikes and dams. The main objective of this study is to find the best-fit distributions for annual maximum rainfall data on a regional basis in order to estimate extreme rainfall events (quantiles) for various return periods. This study is carried out using the index flood method with the L-moments of Hosking and Wallis (1997). The study is based on 23 ...

  17. Stop search in the compressed region via semileptonic decays

    Cheng, Hsin-Chia; Gao, Christina; Li, Lingfeng; Neill, Nicolás A.


    In supersymmetric extensions of the Standard Model, the superpartners of the top quark (stops) play a crucial role in addressing the naturalness problem. For direct pair production of stops with each stop decaying into a top quark plus the lightest neutralino, the standard stop searches have difficulty finding the stop for a compressed spectrum, where the mass difference between the stop and the lightest neutralino is close to the top quark mass, because the events look too similar to the large tt̄ background. With an additional hard ISR jet, the two neutralinos from the stop decays are boosted in the opposite direction and can give rise to some missing transverse energy, which may be used to distinguish the stop decays from the backgrounds. In this paper we study the semileptonic decay of such signal events for the compressed mass spectrum. Although the neutrino from the W decay also produces some missing transverse energy, its momentum can be reconstructed from kinematic assumptions and mass-shell conditions. It can then be subtracted from the total missing transverse momentum to obtain the neutralino contribution. Because it suffers from smaller backgrounds, we show that the semileptonic decay channel has a better discovery reach than the fully hadronic decay channel along the compressed line m_stop − m_neutralino ≈ m_t. With 300 fb⁻¹, the 13 TeV LHC can discover the stop up to 500 GeV, covering the most natural parameter space region.
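
The neutrino-momentum reconstruction mentioned here is the standard W mass-shell trick: with a massless lepton and the missing transverse momentum attributed to the neutrino, the longitudinal component solves a quadratic. A sketch in GeV units (the event below is invented):

```python
import math

MW = 80.4  # assumed W boson mass in GeV

def neutrino_pz(lep, met_x, met_y, mw=MW):
    """Roots of (p_lep + p_nu)^2 = mw^2 for the neutrino pz, treating both
    the lepton and the neutrino as massless.  lep = (E, px, py, pz).
    If the discriminant is negative, both roots collapse to the real part."""
    E, px, py, pz = lep
    mu = mw * mw / 2.0 + px * met_x + py * met_y
    pt2 = px * px + py * py
    a = mu * pz / pt2
    disc = a * a - (E * E * (met_x**2 + met_y**2) - mu * mu) / pt2
    if disc < 0.0:
        return a, a
    r = math.sqrt(disc)
    return a - r, a + r

# invented semileptonic event: massless lepton plus missing transverse momentum
lep = (50.0, 30.0, 20.0, math.sqrt(50.0**2 - 30.0**2 - 20.0**2))
solutions = neutrino_pz(lep, met_x=-30.0, met_y=-20.0)
```

Each real root reproduces the W mass when recombined with the lepton; the two-fold ambiguity is usually resolved by further event kinematics.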

  18. Regional Frequency Analysis of Annual Maximum Rainfall in Monsoon Region of Pakistan using L-moments

    Amina Shahzadi


    The estimation of the magnitude and frequency of extreme rainfall is of immense importance for making decisions about hydraulic structures such as spillways, dikes and dams. The main objective of this study is to find the best-fit distributions for annual maximum rainfall data on a regional basis in order to estimate extreme rainfall events (quantiles) for various return periods. This study is carried out using the index flood method with the L-moments of Hosking and Wallis (1997). The study is based on 23 sites of rainfall which are divided into three homogeneous regions. The collective results of the L-moment ratio diagram, Z-statistic and AWD values show the GLO, GEV and GNO distributions to be the best fit for all three regions, with PE3 in addition for region 3. On the basis of relative RMSE, for regions 1 and 2, GLO, GEV and GNO produce approximately the same relative RMSE for return periods up to 100, while GNO produces a smaller relative RMSE for large return periods of 500 and 1000; for large return periods, GNO could therefore be the best distribution. For region 3, GLO, GEV, GNO and PE3 have approximately the same relative RMSE for return periods up to 100, while for large return periods of 500 and 1000, PE3 could be best on the basis of smaller relative RMSE.
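
The L-moment fitting step used in both rainfall studies can be sketched end-to-end for the GEV case, using sample probability-weighted moments and Hosking's rational approximation for the shape parameter (a textbook sketch, not the authors' code):

```python
import math
import random

def gev_lmoment_fit(data):
    """Fit GEV parameters (xi, alpha, k) from the first three sample
    L-moments, using Hosking's approximation for the shape parameter k."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(j * x[j] for j in range(n)) / (n * (n - 1))
    b2 = sum(j * (j - 1) * x[j] for j in range(n)) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    t3 = l3 / l2                                   # sample L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    g = math.gamma(1.0 + k)
    alpha = l2 * k / ((1.0 - 2.0 ** (-k)) * g)
    xi = l1 - alpha * (1.0 - g) / k
    return xi, alpha, k

# sanity check on a synthetic GEV sample drawn by inverse transform
rng = random.Random(42)
xi0, a0, k0 = 0.0, 1.0, 0.15
sample = [xi0 + a0 * (1.0 - (-math.log(rng.random())) ** k0) / k0
          for _ in range(20000)]
xi, alpha, k = gev_lmoment_fit(sample)
```

L-moment estimators are the standard choice in regional frequency analysis because they are far less sensitive to outliers than ordinary moments.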

  19. Hybrid Energy Storage System Based on Compressed Air and Super-Capacitors with Maximum Efficiency Point Tracking (MEPT)

    Lemofouet, Sylvain; Rufer, Alfred

    This paper presents a hybrid energy storage system mainly based on compressed air, in which the storage and withdrawal of energy are done under maximum-efficiency conditions. As these maximum-efficiency conditions impose the level of converted power, an intermittent time-modulated operation mode is applied to the thermodynamic converter to obtain a variable converted power. A smoothly variable output power is achieved with the help of a supercapacitive auxiliary storage device used as a filter. The paper describes the concept of the system, the power-electronic interfaces and especially the Maximum Efficiency Point Tracking (MEPT) algorithm and the strategy used to vary the output power. In addition, the paper introduces more efficient hybrid storage systems where the volumetric air machine is replaced by an oil-hydraulics and pneumatics converter used under isothermal conditions. Practical results are also presented, recorded from a low-power air motor coupled to a small DC generator, as well as from a first prototype of the hydro-pneumatic system. Some economic considerations are also made, through a comparative cost evaluation of the presented hydro-pneumatic systems and a lead-acid battery system, in the context of a stand-alone photovoltaic home application. This evaluation confirms the cost effectiveness of the presented hybrid storage systems.

  20. Knock-Limited Performance of Triptane and Xylidines Blended with 28-R Aviation Fuel at High Compression Ratios and Maximum-Economy Spark Setting

    Held, Louis F.; Pritchard, Ernest I.


    An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
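
The fuel-economy gain from raising the compression ratio can be anchored by the ideal air-standard Otto cycle; this idealization predicts roughly an 11% gain from a compression ratio of 6.9 to 10.0, while the measured ~17% also reflects the maximum-economy spark setting and real-cycle effects:

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal air-standard Otto-cycle thermal efficiency: 1 - r^(1 - gamma)."""
    return 1.0 - r ** (1.0 - gamma)

# fractional reduction in fuel consumption going from CR 6.9 to CR 10.0
gain = 1.0 - otto_efficiency(6.9) / otto_efficiency(10.0)   # ~0.11
```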

  1. Implementation of transformed lenses in bed of nails reducing refractive index maximum value and sub-unity regions.

    Prado, Daniel R; Osipov, Andrey V; Quevedo-Teruel, Oscar


    Transformation optics with quasi-conformal mapping is applied to design a Generalized Maxwell Fish-eye Lens (GMFEL) which can be used as a power splitter. The flattened focal line obtained as a result of the transformation allows the lens to adapt to planar antenna feeding systems. Moreover, sub-unity refraction index regions are reduced because of the space compression effect of the transformation, reducing the negative impact of removing those regions when implementing the lens. A technique to reduce the maximum value of the refractive index is presented to compensate for its increase because of the transformation. Finally, the lens is implemented with the bed of nails technology, employing a commercial dielectric slab to improve the range of the effective refractive index. The lens was simulated with a 3D full-wave simulator to validate the design, obtaining an original and feasible power splitter based on a dielectric lens.

  2. Gradient Compression Garments as a Countermeasure to Post-Space Flight Orthostatic Intolerance: Potential Interactions with the Maximum Absorbency Garment

    Lee, S. M. C.; Laurie, S. S.; Macias, B. R.; Willig, M.; Johnson, K.; Stenger, M. B.


    Astronauts and cosmonauts may experience symptoms of orthostatic intolerance during re-entry, landing, and for several days post-landing following short- and long-duration spaceflight. Presyncopal symptoms have been documented in approximately 20% of short-duration and greater than 60% of long-duration flyers on landing day specifically during 5-10 min of controlled (no countermeasures employed at the time of testing) stand tests or 80 deg head-up tilt tests. Current operational countermeasures to orthostatic intolerance include fluid loading prior to and whole body cooling during re-entry as well as compression garments that are worn during and for up to several days after landing. While both NASA and the Russian space program have utilized compression garments to protect astronauts and cosmonauts traveling on their respective vehicles, a "next-generation" gradient compression garment (GCG) has been developed and tested in collaboration with a commercial partner to support future space flight missions. Unlike previous compression garments used operationally by NASA that provide a single level of compression across only the calves, thighs, and lower abdomen, the GCG provides continuous coverage from the feet to below the pectoral muscles in a gradient fashion (from approximately 55 mm Hg at the feet to approximately 16 mmHg across the abdomen). The efficacy of the GCG has been demonstrated previously after a 14-d bed rest study without other countermeasures and after short-duration Space Shuttle missions. Currently the GCG is being tested during a stand test following long-duration missions (6 months) to the International Space Station. While results to date have been promising, interactions of the GCG with other space suit components have not been examined. Specifically, it is unknown whether wearing the GCG over NASA's Maximum Absorbency Garment (MAG; absorbent briefs worn for the collection of urine and feces while suited during re-entry and landing) will

  3. On the magnitude of temperature decrease in the equatorial regions during the Last Glacial Maximum

    王宁练 (Wang Ninglian); 姚檀栋 (Yao Tandong); 施雅风 (Shi Yafeng); L. G. Thompson; J. Cole-Dai; P.-N. Lin; M. E. Davis


    Based on temperature changes revealed by various palaeothermometric proxy indices, it is found that the magnitude of temperature decrease increased with altitude in the equatorial regions during the Last Glacial Maximum. The direct cause of this phenomenon was a change in the temperature lapse rate, which was about (0.1±0.05)℃/100 m larger in the equatorial regions during the Last Glacial Maximum than at present. Moreover, the analyses show that CLIMAP possibly underestimated the sea-surface temperature decrease in the equatorial regions during the Last Glacial Maximum.

  4. Non-uniformly under-sampled multi-dimensional spectroscopic imaging in vivo: maximum entropy versus compressed sensing reconstruction.

    Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert


    The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by applying non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets, then reconstructing the missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and under-sampling factors from 1.25× to 5×. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method.
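
A minimal compressed-sensing reconstruction of the kind compared here can be sketched with ISTA (iterative soft-thresholding) on a randomly under-sampled linear model; this is a generic l1 solver sketch, not the Cambridge algorithm or the EP-COSI pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 120, 256, 5                      # measurements, signal length, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = 3.0 + rng.standard_normal(k)
y = A @ x_true                             # under-sampled measurements

# ISTA: gradient step on ||Ax - y||^2 / 2 followed by the l1 proximal map
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of gradient
x = np.zeros(n)
for _ in range(1500):
    z = x - step * (A.T @ (A @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The same principle, sparsity-promoting regularization restoring missing samples, underlies both the CS and MaxEnt reconstructions the paper compares.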

  5. The Maximum Free Magnetic Energy Allowed in a Solar Active Region

    Moore, Ronald L.; Falconer, David A.


    Two whole-active-region magnetic quantities that can be measured from a line-of-sight magnetogram are ^LWL_SG, a gauge of the total free energy in an active region's magnetic field, and ^Lθ, a measure of the active region's total magnetic flux. From these two quantities measured from 1865 SOHO/MDI magnetograms that tracked 44 sunspot active regions across the 0.5 R_Sun central disk, together with each active region's observed production of CMEs, X flares, and M flares, Falconer et al (2009, ApJ, submitted) found that (1) active regions have a maximum attainable free magnetic energy that increases with the magnetic size ^Lθ of the active region, (2) in (Log ^LWL_SG, Log ^Lθ) space, CME/flare-productive active regions are concentrated in a straight-line main sequence along which the free magnetic energy is near its upper limit, and (3) X and M flares are restricted to large active regions. Here, from (a) these results, (b) the observation that even the greatest X flares produce at most only subtle changes in active region magnetograms, and (c) measurements from MSFC vector magnetograms and from MDI line-of-sight magnetograms showing that practically all sunspot active regions have nearly the same area-averaged magnetic field strength, θ/A ≈ 300 G, where θ is the active region's total photospheric flux of field stronger than 100 G and A is the area of that flux, we infer that (1) the maximum allowed ratio of an active region's free magnetic energy to its potential-field energy is 1, and (2) any one CME/flare eruption releases no more than a small fraction (less than 10%) of the active region's free magnetic energy. This work was funded by NASA's Heliophysics Division and NSF's Division of Atmospheric Sciences.

  6. Stop Search in the Compressed Region via Semileptonic Decays

    Cheng, Hsin-Chia; Li, Lingfeng; Neill, Nicolas A


    In supersymmetric extensions of the Standard Model, the superpartners of the top quark (stops) play a crucial role in addressing the naturalness problem. For direct pair-production of stops with each stop decaying into a top quark plus the lightest neutralino, the standard stop searches have difficulty finding the stop for a compressed spectrum, where the mass difference between the stop and the lightest neutralino is close to the top quark mass, because the events look too similar to the large $t\\bar{t}$ background. With an additional hard ISR jet, the two neutralinos from the stop decays are boosted in the opposite direction and can give rise to some missing transverse energy. This may be used to distinguish the stop decays from the backgrounds. In this paper we study the semileptonic decay of such signal events for the compressed mass spectrum. Although the neutrino from the $W$ decay also produces some missing transverse energy, its momentum can be reconstructed from the kinematic assumptions and ma...

  7. Remote sensing image compression for deep space based on region of interest

    王振华; 吴伟仁; 田玉龙; 田金文; 柳健


    A major limitation for deep space communication is the limited bandwidth available. The downlink rate using X-band from an L2 halo orbit is estimated to be only 5.35 GB/d. However, the Next Generation Space Telescope (NGST) will produce about 600 GB/d. Clearly, the volume of data to downlink must be reduced by at least a factor of 100. One solution is to encode the data using very low bit rate image compression techniques. A very low bit rate image compression method based on region of interest (ROI) has been proposed for deep space images. Conventional image compression algorithms, which encode the original data without any data analysis, maintain very good detail but do not achieve high compression ratios, while modern image compression methods with semantic organization can reach compression ratios in the hundreds but cannot preserve much detail. Algorithms based on region of interest, inheriting from these two previous approaches, offer good semantic features and high fidelity, and are therefore suitable for applications at a low bit rate. The proposed method extracts the region of interest by texture analysis after a wavelet transform and attains optimal local quality with bit rate control. The results show that our method can maintain more detail in the ROI than a general image compression algorithm (SPIHT), at the cost of sacrificing the quality of the uninteresting areas.
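
    The ROI principle described here, spending bits on the interesting region and fewer elsewhere, can be sketched independently of the wavelet and texture-analysis machinery. The toy below (hypothetical 8×8 image and ROI mask, not the paper's coder) applies a fine quantization step inside the ROI and a coarse one outside:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy image
roi = np.zeros((8, 8), dtype=bool)
roi[2:6, 2:6] = True                                    # hypothetical region of interest

q = np.where(roi, 4.0, 32.0)        # fine step in ROI, coarse step in background
recon = np.round(img / q) * q       # quantize, then dequantize

err_roi = np.abs(recon - img)[roi].max()
err_bg = np.abs(recon - img)[~roi].max()
print(err_roi, err_bg)              # ROI error bounded by 2, background by 16
```

    In a real coder the unequal steps (or unequal bit-plane depths, as in ROI-capable wavelet coders) are applied to transform coefficients rather than raw pixels, but the rate/fidelity trade-off is the same.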

  8. Operational forecasting of daily temperatures in the Valencia Region. Part I: maximum temperatures in summer.

    Gómez, I.; Estrela, M.


    Extreme temperature events have a great impact on human society. Knowledge of summer maximum temperatures is very useful for both the general public and organisations whose workers have to operate in the open, e.g. railways, roadways, tourism, etc. Moreover, summer maximum daily temperatures are considered a parameter of interest and concern, since persistent heat-waves can affect areas as diverse as public health and energy consumption. Thus, accurate forecasting of these temperatures could help to predict heat-wave conditions and permit the implementation of strategies aimed at minimizing the negative effects that high temperatures have on human health. The aim of this work is to evaluate the skill of the RAMS model in determining daily maximum temperatures during summer over the Valencia Region. For this, we have used the real-time configuration of this model currently running at the CEAM Foundation. To carry out the model verification process, we have analysed not only the global behaviour of the model for the whole Valencia Region, but also its behaviour at the individual stations distributed within this area. The study has been performed for the summer forecast period of 1 June - 30 September, 2007. The results obtained are encouraging and indicate good agreement between the observed and simulated maximum temperatures. Moreover, the model captures the temperatures in the extreme heat episodes quite well. Acknowledgement. This work was supported by "GRACCIE" (CSD2007-00067, Programa Consolider-Ingenio 2010), by the Spanish Ministerio de Educación y Ciencia, contract number CGL2005-03386/CLI, and by the Regional Government of Valencia Conselleria de Sanitat, contract "Simulación de las olas de calor e invasiones de frío y su regionalización en la Comunidad Valenciana" ("Heat wave and cold invasion simulation and their regionalization at Valencia Region"). The CEAM Foundation is supported by the Generalitat Valenciana and BANCAIXA (Valencia, Spain).

  9. Regional maximum rainfall analysis using L-moments at the Titicaca Lake drainage, Peru

    Fernández-Palomino, Carlos Antonio; Lavado-Casimiro, Waldo Sven


    The present study investigates the application of the index-flood L-moments-based regional frequency analysis procedure (RFA-LM) to the annual maximum 24-h rainfall (AM) of 33 rainfall gauge stations (RGs) to estimate rainfall quantiles at the Titicaca Lake drainage (TL). The study region was chosen because it is characterised by frequent floods that affect agricultural production and infrastructure. First, detailed quality analyses and verification of the RFA-LM assumptions were conducted. For this purpose, different tests for outlier verification, homogeneity, stationarity, and serial independence were employed. The application of the RFA-LM procedure then allowed us to consider the TL as a single, hydrologically homogeneous region in terms of its maximum rainfall frequency. That is, this region can be modelled by a generalised normal (GNO) distribution, chosen according to the Z test for goodness of fit, the L-moments (LM) ratio diagram, and an additional evaluation of the precision of the regional growth curve. Due to the low density of RGs in the TL, it was important to produce maps of the AM design quantiles estimated using RFA-LM; therefore, the ordinary Kriging interpolation (OK) technique was used. These maps will be a useful tool for determining the different AM quantiles at any point of interest for hydrologists in the region.
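
    For readers unfamiliar with the machinery, sample L-moments are computed from probability-weighted moments of the ordered data; the ratios t = λ2/λ1 (L-CV) and t3 = λ3/λ2 (L-skewness) drive the heterogeneity test and growth-curve selection in RFA-LM. A minimal sketch with a made-up annual-maximum series (illustrative values, not TL data):

```python
import numpy as np

def sample_l_moments(data):
    """First three sample L-moments via unbiased probability-weighted moments
    (Hosking & Wallis convention)."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0   # l1, l2, l3

# hypothetical annual-maximum 24-h rainfall series for one gauge (mm)
am = [41.2, 55.0, 38.7, 62.3, 47.5, 70.1, 44.9, 58.8, 51.3, 66.0]
l1, l2, l3 = sample_l_moments(am)
print("L-CV   t  =", l2 / l1)   # dispersion, used in the heterogeneity test
print("L-skew t3 =", l3 / l2)   # shapes the regional distribution choice
```

    In the index-flood step, site quantiles are then obtained by scaling the dimensionless regional growth curve by each site's index value λ1.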

  10. Application of region selective embedded zerotree wavelet coder in CT image compression.

    Li, Guoli; Zhang, Jian; Wang, Qunjing; Hu, Cungang; Deng, Na; Li, Jianping


    Compression is necessary in medical image preservation because of the huge data quantities involved. Medical images differ from common images because of their own characteristics; for example, part of the information in a CT image is useless, and storing it wastes resources. A region-selective EZW coder is proposed in which only the useful part of the image is selected and compressed; tests on a CT image give good results.

  11. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    Bulgakov, V. K.; Strigunov, V. V.


    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
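
    The structure of such an algorithm can be seen on a toy problem: Pontryagin's principle turns "minimize ∫ (x² + u²)/2 dt subject to ẋ = u, x(0) = x0" into a two-point boundary value problem for the state x and costate p, with optimal control u* = -p and transversality condition p(T) = 0. A sketch using SciPy's collocation BVP solver (the problem and numbers are illustrative, not the paper's macroeconomic model):

```python
import numpy as np
from scipy.integrate import solve_bvp

T, x0 = 2.0, 1.0

def rhs(t, y):
    x, p = y
    return np.vstack([-p,     # state:   dx/dt = u* = -p (from dH/du = 0)
                      -x])    # costate: dp/dt = -dH/dx = -x

def bc(ya, yb):
    return np.array([ya[0] - x0,   # initial state x(0) = x0
                     yb[1]])       # transversality p(T) = 0

t = np.linspace(0, T, 50)
sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
x_num = sol.sol(t)[0]
x_exact = x0 * np.cosh(T - t) / np.cosh(T)   # closed form for this toy problem
print(np.max(np.abs(x_num - x_exact)))
```

    Nonlinear macroeconomic dynamics replace the closed form with an iterative (e.g. shooting or collocation) solve, but the state/costate BVP structure is the same.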

  12. Novel region-based image compression method based on spiking cortical model

    Rongchang Zhao; Yide Ma


    To achieve a high compression ratio as well as high-quality reconstructed images, an effective image compression scheme named irregular segmentation region coding based on spiking cortical model (ISRCS) is presented. This scheme is region-based and mainly focuses on two issues. First, an appropriate segmentation algorithm is developed to partition an image into irregular regions and tidy contours, where the crucial regions corresponding to objects are retained and many tiny parts are eliminated; the irregular regions and contours are then coded using different methods. The other issue is the coding method for contours, where an efficient and novel chain code is employed. The scheme seeks a compromise between the quality of the reconstructed images and the compression ratio. Experiments show higher performance compared with other compression technologies, in terms of higher quality of reconstructed images, higher compression ratio, and less time consumed.
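
    The abstract's contour coder uses its own chain code; the classic idea it builds on is the Freeman 8-direction chain code, which stores each step between successive contour pixels as a 3-bit direction symbol instead of an (x, y) pair. A minimal sketch (not the paper's specific code):

```python
# 8-connected neighbor offsets, indexed 0..7 (the Freeman directions)
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_encode(points):
    """Encode a pixel contour as (start point, list of direction symbols)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS.index((x1 - x0, y1 - y0)))
    return points[0], codes

def chain_decode(start, codes):
    """Rebuild the contour by walking the direction symbols from the start."""
    pts = [start]
    for c in codes:
        dx, dy = DIRS[c]
        x, y = pts[-1]
        pts.append((x + dx, y + dy))
    return pts

contour = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2), (0, 1)]
start, codes = chain_encode(contour)
assert chain_decode(start, codes) == contour   # lossless round trip
print(codes)
```

    Entropy-coding the direction symbols (or differences between them) then yields the actual bit savings.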

  13. Level spacing of U(5) \\leftrightarrow SO(6) transitional region with maximum likelihood estimation method

    Jafarizadeh, M A; Sabric, H; Malekic, B Rashidian


    In this paper, a systematic study of the quantum phase transition within the U(5) \\leftrightarrow SO(6) limits is presented in terms of an infinite-dimensional algebraic technique in the IBM framework. Energy level statistics are investigated with the Maximum Likelihood Estimation (MLE) method in order to characterize the transitional region. Eigenvalues of these systems are obtained by solving Bethe-Ansatz equations, with least-squares fitting to experimental data to obtain the constants of the Hamiltonian. Our results verify the dependence of the Nearest Neighbor Spacing Distribution (NNSD) parameter on the control parameter (c_{s}) and also display the chaotic behavior of transitional regions in comparison with both limits. In order to compare our results for the two limits with both GUE and GOE ensembles, we have suggested a new NNSD distribution and have obtained better KLD distances for the new distribution compared with others in both limits. Also, in the case of N\\to\\infty, the total boson number dependence displays the univ...

  14. Regional analysis of annual maximum rainfall using TL-moments method

    Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd


    Information on the distribution of rainfall amounts is of great importance for the design of water-related structures. One of the concerns of hydrologists and engineers is the choice of probability distribution for modeling regional data. In this study, regional frequency analysis using L-moments is first revisited. Subsequently, an alternative regional frequency analysis using the TL-moments method is employed, and the results from the two methods are compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor, Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. The TL-moment ratio diagram and the Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. The GLO and GEV distributions were identified as the most suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the method of TL-moments was more efficient for lower quantile estimation compared with the L-moments.

  15. Deterministic single soliton generation and compression in microring resonators avoiding the chaotic region

    Jaramillo-Villegas, Jose A; Wang, Pei-Hsun; Leaird, Daniel E; Weiner, Andrew M


    A path within the parameter space of phase detuning and pump power is demonstrated in order to obtain a single cavity soliton (CS) with certainty in SiN microring resonators in the anomalous dispersion regime. Once the single CS state is reached, it is possible to continue a path to compress it, broadening the corresponding single FSR frequency Kerr comb. This behavior is first obtained by identifying the regions in the parameter space via numerical simulations of the Lugiato-Lefever equation (LLE), and second, defining a path from the stable modulation instability (SMI) region to the stable cavity solitons (SCS) region avoiding the chaotic and unstable regions.
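
    Mapping out these regions rests on numerical integration of the LLE, commonly by split-step Fourier integration. The sketch below uses the normalized equation with illustrative detuning/pump values (not the paper's SiN parameters); with these values the field simply relaxes from a noisy flat state to the lower-branch homogeneous state, and tracing paths through the (detuning, pump) plane as described above would build on exactly this kind of integrator:

```python
import numpy as np

# Normalized Lugiato-Lefever equation, anomalous dispersion:
#   dpsi/dt = -(1 + i*alpha)*psi + i*|psi|^2*psi + i*d2psi/dtheta^2 + F
n, dt, steps = 256, 1e-3, 20000
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer mode numbers

alpha, F = 3.0, 2.0                                  # detuning and pump (illustrative)
lin = np.exp(dt * (-(1 + 1j * alpha) - 1j * k**2))   # exact linear propagator per step

rng = np.random.default_rng(1)
psi = 0.5 + 0.01 * rng.normal(size=n)                # noisy flat initial field
for _ in range(steps):
    psi = np.fft.ifft(lin * np.fft.fft(psi))         # linear step (Fourier domain)
    psi = psi * np.exp(dt * 1j * np.abs(psi)**2)     # Kerr nonlinear phase rotation
    psi = psi + dt * F                               # pump drive
print(np.max(np.abs(psi)))
```

    Sweeping alpha and F along a prescribed path, and monitoring the intracavity waveform, reproduces the kind of region identification (CW, MI, chaos, solitons) the abstract describes.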

  16. Hydrological and vegetation shifts in the Wallacean region of central Indonesia since the Last Glacial Maximum

    Wicaksono, Satrio A.; Russell, James M.; Holbourn, Ann; Kuhnt, Wolfgang


    Precipitation is the most important variable of Indonesian climate, yet there are substantial uncertainties about past and future hydroclimate dynamics over the region. This study explores vegetation and rainfall and associated changes in atmospheric circulation during the past 26,000 years in Wallacea, a biogeographical area in central Indonesia, wedged between the Sunda and Sahul shelves and known for its exceptionally high rainforest biodiversity. We use terrestrial plant biomarkers from sediment cores retrieved from Mandar Bay, off west Sulawesi, to reconstruct changes in Wallacean vegetation and climate since the Last Glacial Maximum (LGM). Enriched leaf wax carbon isotope (δ13Cwax) values recorded in Mandar Bay during the LGM, together with other regional vegetation records, document grassland expansion, implying a regionally dry, and possibly more seasonal, glacial climate. Depleted leaf wax deuterium isotope (δDwax) values in Mandar Bay during the LGM, and low reconstructed precipitation isotope compositions from nearby sites, reveal an intensified Austral-Asian summer monsoon circulation and a southward shift of the mean position of the Intertropical Convergence Zone, likely due to strong southern hemisphere summer insolation and the presence of large northern hemisphere ice sheets. Mandar Bay δ13Cwax was anti-correlated with δDwax during the LGM and the last deglaciation, but was positively correlated during most of the Holocene, indicating time-varying controls on the isotopic composition of rainfall in this region. The inundation event of the Sunda Shelf and in particular the opening of the Java Sea and Karimata Strait between 9.4 and 11.1 thousand years ago might have provided new moisture sources for regional convection and/or influenced moisture source trajectories, providing the trigger for shifts in atmospheric circulation and the controls on precipitation isotope compositions from the LGM to the Holocene.

  17. Best fitting distributions for the standard duration annual maximum precipitations in the Aegean Region

    Halil Karahan


    Knowing the properties (amount, duration, intensity, spatial and temporal variation, etc.) of precipitation, which is the primary input to water resources, is required for the planning, design, construction and operation studies of various sectors such as water resources, agriculture, urbanization, drainage, flood control and transportation. Executing the mentioned practices requires reliable and realistic estimations based on existing observations, and the first step of making a reliable estimation is to test the reliability of those observations. In this study, the Kolmogorov-Smirnov, Anderson-Darling and Chi-Square goodness-of-fit tests were applied to determine which distribution the measured standard-duration maximum precipitation values (for the years 1929-2005) fit at the meteorological stations operated by the Turkish State Meteorological Service (DMİ) located in the city and town centers of the Aegean Region. While all the observations fit the GEV distribution according to the Anderson-Darling test, short, mid-term and long duration precipitation observations generally fit the GEV, Gamma and Log-normal distributions according to the Kolmogorov-Smirnov and Chi-square tests. To determine the parameters of the chosen probability distribution, the maximum likelihood (LN2, LN3, EXP2, Gamma3), probability-weighted moments (LP3, Gamma2), L-moments (GEV) and least squares (Weibull2) methods were used according to the different distributions.
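
    A goodness-of-fit workflow of this kind is easy to reproduce with SciPy: fit a GEV by maximum likelihood, then apply a Kolmogorov-Smirnov test. The sketch uses synthetic data in place of the DMİ records; note that testing against parameters fitted from the same sample makes the standard KS p-value optimistic (the Lilliefors effect), a caveat that applies to any such workflow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# hypothetical standard-duration annual-maximum precipitation sample (mm)
sample = stats.genextreme.rvs(c=-0.1, loc=40, scale=10, size=77, random_state=rng)

# Fit a GEV by maximum likelihood, then test the fit with Kolmogorov-Smirnov.
c, loc, scale = stats.genextreme.fit(sample)
ks = stats.kstest(sample, "genextreme", args=(c, loc, scale))
print(f"shape={c:.3f}  KS statistic={ks.statistic:.3f}  p={ks.pvalue:.3f}")
```

    The Anderson-Darling variant (`stats.anderson` for the supported families, or a parametric bootstrap for the GEV) weights the distribution tails more heavily, which is why the abstract's AD and KS conclusions can differ.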

  18. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan


    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
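
    The MAP step at the heart of the model can be illustrated without the level set machinery: each pixel is assigned the tissue class that maximizes log prior plus Gaussian log-likelihood. A toy 1-D sketch with made-up class means and variances (not values from the paper):

```python
import numpy as np

# Two hypothetical tissue classes with Gaussian intensity models
mu = np.array([60.0, 120.0])      # class means
var = np.array([90.0, 160.0])     # class variances
prior = np.array([0.5, 0.5])      # equal class priors

intens = np.array([55.0, 64.0, 118.0, 90.0, 125.0])   # toy pixel intensities

# log p(class | I) = log prior - (I - mu)^2 / (2 var) - 0.5 log var + const
logpost = (np.log(prior)
           - (intens[:, None] - mu) ** 2 / (2 * var)
           - 0.5 * np.log(var))
labels = logpost.argmax(axis=1)   # MAP label per pixel
print(labels)
```

    The paper's contribution is to make mu and var local (estimated in a neighborhood of each pixel, with a multiplicative bias field) and to encode the resulting MAP criterion as a level set energy, so segmentation and bias estimation evolve together.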

  19. Probabilistic tsunami hazard assessment for the Makran region with focus on maximum magnitude assumption

    Hoechner, Andreas; Babeyko, Andrey Y.; Zamora, Natalia


    Despite having been rather seismically quiescent for the last decades, the Makran subduction zone is capable of hosting destructive earthquakes and tsunami. In particular, the well-known thrust event in 1945 (Balochistan earthquake) led to about 4000 casualties. Nowadays, the coastal regions are more densely populated and vulnerable to similar events. Furthermore, some recent publications discuss rare but significantly larger events at the Makran subduction zone as possible scenarios. We analyze the instrumental and historical seismicity at the subduction plate interface and generate various synthetic earthquake catalogs spanning 300 000 years with varying magnitude-frequency relations. For every event in the catalogs we compute estimated tsunami heights and present the resulting tsunami hazard along the coasts of Pakistan, Iran and Oman in the form of probabilistic tsunami hazard curves. We show how the hazard results depend on variation of the Gutenberg-Richter parameters and especially maximum magnitude assumption.
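
    The synthetic-catalog step rests on drawing magnitudes from a (truncated) Gutenberg-Richter law, which inverse-CDF sampling makes a one-liner. A sketch with illustrative parameters (not the Makran values):

```python
import numpy as np

# Truncated Gutenberg-Richter: N(>=M) ~ 10^(-b*M) between m_min and m_max.
def sample_gr(n, b=1.0, m_min=5.0, m_max=9.0, rng=None):
    """Draw n magnitudes by inverting the truncated exponential CDF."""
    rng = rng or np.random.default_rng()
    u = rng.random(n)
    beta = b * np.log(10)
    c = 1 - np.exp(-beta * (m_max - m_min))   # truncation normalization
    return m_min - np.log(1 - u * c) / beta

rng = np.random.default_rng(7)
catalog = sample_gr(100_000, b=1.0, m_max=8.5, rng=rng)
print(catalog.min(), catalog.max())           # all magnitudes within [5.0, 8.5]
```

    Varying b and m_max across catalogs, then pushing each event through a tsunami height model, is what produces the family of hazard curves the abstract describes.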

  20. Simultaneous Solar Maximum Mission (SMM) and Very Large Array (VLA) observations of solar active regions

    Willson, Robert F.


    Very Large Array observations at 20 cm wavelength can detect the hot coronal plasma previously observed at soft x ray wavelengths. Thermal cyclotron line emission was detected at the apex of coronal loops where the magnetic field strength is relatively constant. Detailed comparison of simultaneous Solar Maximum Mission (SMM) satellite and VLA data indicates that physical parameters such as electron temperature, electron density, and magnetic field strength can be obtained, but that some coronal loops remain invisible in either spectral domain. The unprecedented spatial resolution of the VLA at 20 cm wavelength showed that the precursor, impulsive, and post-flare components of solar bursts originate in nearby, but separate, loops or systems of loops. In some cases preburst heating and magnetic changes are observed in loops tens of minutes prior to the impulsive phase. Comparisons with soft x ray images and spectra and with hard x ray data specify the magnetic field strength and emission mechanism of flaring coronal loops. At the longer 91 cm wavelength, the VLA detected extensive emission interpreted as a hot 10^5 K interface between cool, dense H alpha filaments and the surrounding hotter, rarefied corona. Observations at 91 cm also provide evidence for time-correlated bursts in active regions on opposite sides of the solar equator; they are attributed to flare triggering by relativistic particles that move along large-scale, otherwise-invisible magnetic conduits that link active regions in opposite hemispheres of the Sun.

  1. Opening Up the Compressed Region of Top Squark Searches at 13 TeV LHC.

    An, Haipeng; Wang, Lian-Tao


    Light top superpartners play a key role in stabilizing the electroweak scale in supersymmetric theories. For R-parity conserving supersymmetric models, traditional searches are not sensitive to the compressed regions. In this Letter, we propose a new method targeting this region, with the top squark and neutralino mass splitting ranging from $m_{\\tilde{t}} - m_{\\chi} \\gtrsim m_t$ down to about 20 GeV. In particular, we focus on the signal process in which a pair of top squarks is produced in association with a hard jet, and we define a new observable $R_M$ whose distribution has a peak in this compressed region. The position of the peak is closely correlated with $m_{\\tilde{t}}$. We show that for the 13 TeV LHC with a luminosity of 3000 fb$^{-1}$, this analysis can extend the reach of the top squark in the compressed region to $m_{\\tilde{t}}$ around 800 GeV.

  2. Regional variations in the compressive properties of lumbar vertebral trabeculae. Effects of disc degeneration

    Keller, T.S.; Hansson, T.H.; Abram, A.C.; Spengler, D.M.; Panjabi, M.M. (Orthopaedic Biomechanics Lab, Nashville, TN (USA))


    The compressive mechanical properties of human lumbar vertebral trabeculae were examined on the basis of anatomic origin, bone density, and intervertebral disc properties. Trabecular bone compressive strength and stiffness increased with increasing bone density, with bone density proportional to the one-half power of both strength and stiffness. Regional variations within each segment were found, the most prevalent differences occurring in regions of bone overlying the disc nucleus in comparison with bone overlying the disc anulus. For normal discs, the ratio of the strength of bone overlying the disc nucleus to that of bone overlying the disc anulus was 1.25, decreasing to 1.0 for moderately degenerated discs. These results suggest that an interdependency of trabecular bone properties and intervertebral disc properties may exist.

  3. Fast-PPP assessment in European and equatorial region near the solar cycle maximum

    Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume


    The Fast Precise Point Positioning (Fast-PPP) is a technique to provide quick high-accuracy navigation with ambiguity fixing capability, thanks to an accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focused on user-domain performance. Recently, a mature evolution of the technique consists of a dual service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focused on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by the Fast-PPP with the capability of global undifferenced ambiguity fixing, thanks to the determination of the fractional part of the ambiguities. The core of the Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of, or better than, 1 Total Electron Content Unit (TECU), improving on the widely-accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in the modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with

  4. Joint development normal to regional compression during flexural-flow folding: the Lilstock buttress anticline, Somerset, England

    Engelder, Terry; Peacock, David C. P.


    Alpine inversion in the Bristol Channel Basin includes reverse-reactivated normal faults with hanging wall buttress anticlines. At Lilstock Beach, joint sets in Lower Jurassic limestone beds cluster about the trend of the hinge of the Lilstock buttress anticline. In horizontal and gently north-dipping beds, J3 joints (295-285° strike) are rare, while other joint sets indicate an anticlockwise sequence of development. In the steeper south-dipping beds, J3 joints are the most frequent in the vicinity of the reverse-reactivated normal fault responsible for the anticline. The J3 joints strike parallel to the fold hinge, and their poles tilt to the south when bedding is restored to horizontal. This southward tilt aims at the direction of σ1 for Alpine inversion. Finite-element analysis is used to explain the southward tilt of J3 joints that propagate under a local σ3 in the direction of σ1 for Alpine inversion. Tilted principal stresses are characteristic of limestone-shale sequences that are sheared during parallel (flexural-flow) folding. Shear tractions on the dipping beds generate a tensile stress in the stiffer limestone beds even when remote principal stresses are compressive. This situation favors the paradoxical opening of joints in the direction of the regional maximum horizontal stress. We conclude that J3 joints propagated during the Alpine compression that caused the growth of the Lilstock buttress anticline.

  5. Maximum Regional Emission Reduction Potential in Residential Sector Based on Spatial Distribution of Population and Resources

    Winijkul, E.; Bond, T. C.


    In the residential sector, the major activities that generate emissions are cooking and heating, and fuels ranging from traditional (wood) to modern (natural gas, or electricity) are used. Direct air pollutant emissions from this sector are low when natural gas or electricity are the dominant energy sources, as is the case in developed countries. However, in developing countries, people may rely on solid fuels and this sector can contribute a large fraction of emissions. The magnitude of the health loss associated with exposure to indoor smoke, as well as its concentration among rural populations in developing countries, has recently put preventive measures high on the agenda of international development and public health organizations. This study focuses on these developing regions: Central America, Africa, and Asia. Current and future emissions from the residential sector depend on both fuel and cooking device (stove) type. Availability of fuels, stoves, and interventions depends strongly on spatial distribution. However, regional emission calculations do not consider this spatial dependence. Fuel consumption data are presented at the country level, without information about where different types of fuel are used. Moreover, information about stove types that are currently used and could be used in the future is not available. In this study, we first spatially allocate current emissions within the residential sector. We use Geographic Information System maps of temperature, electricity availability, forest area, and population to determine the distribution of fuel types and availability of stoves. Within each country, consumption of different fuel types, such as fuelwood, coal, and LPG, is distributed among different area types (urban, peri-urban, and rural). Then, the cleanest stove technologies which could be used in the area are selected based on the constraints of each area, i.e. availability of resources. Using this map, the maximum emission reduction compared with

  6. Export production in the New Zealand region since the Last Glacial Maximum

    Durand, Axel; Chase, Zanna; Noble, Taryn L.; Bostock, Helen; Jaccard, Samuel L.; Kitchener, Priya; Townsend, Ashley T.; Jansen, Nils; Kinsley, Les; Jacobsen, Geraldine; Johnson, Sean; Neil, Helen


    Increased export production (EP) in the Subantarctic Zone (SAZ) of the Southern Ocean due to iron fertilisation has been proposed as a key mechanism for explaining carbon drawdown during the last glacial maximum (LGM). This work reconstructs marine EP since the LGM at four sites around New Zealand. For the first time in this region, 230-Thorium-normalised fluxes of biogenic opal, carbonate, excess barium, and organic carbon are presented. In Subtropical Waters and the SAZ, these flux variations show that EP has not changed markedly since the LGM. The only exception is a site currently north of the subtropical front. Here we suggest the subtropical front shifted over the core site between 18 and 12 ka, driving increased EP. To understand why EP remained mostly low and constant elsewhere, lithogenic fluxes at the four sites were measured to investigate changes in dust deposition. At all sites, lithogenic fluxes were greater during the LGM compared to the Holocene. The positive temporal correlation between the Antarctic dust record and lithogenic flux at a site in the Tasman Sea shows that regionally, increased dust deposition contributed to the high glacial lithogenic fluxes. Additionally, it is inferred that lithogenic material from erosion and glacier melting deposited on the Campbell Plateau during the deglaciation (18-12 ka). From these observations, it is proposed that even though increased glacial dust deposition may have relieved iron limitation within the SAZ around New Zealand, the availability of silicic acid limited diatom growth and thus any resultant increase in carbon export during the LGM. Therefore, silicic acid concentrations have remained low since the LGM. This result suggests that both silicic acid and iron co-limit EP in the SAZ around New Zealand, consistent with modern process studies.

  7. Level Set Segmentation of Medical Images Based on Local Region Statistics and Maximum a Posteriori Probability

    Wenchao Cui


    Full Text Available This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In the level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
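The local MAP step underlying this energy can be illustrated with a minimal sketch (a simplification for illustration, not the authors' level-set implementation): each pixel is assigned to the tissue class whose local Gaussian statistics, estimated in a sliding window, give the highest posterior. The window size, class count, and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_map_labels(image, labels, size=7, n_classes=2):
    """One MAP relabeling pass using local Gaussian statistics per class.

    Illustrative only: the paper embeds these statistics in a level-set
    energy with bias-field estimation; here we simply assign each pixel
    to the class with the highest local Gaussian log-posterior.
    """
    log_post = np.full((n_classes,) + image.shape, -np.inf)
    for k in range(n_classes):
        mask = (labels == k).astype(float)
        frac = uniform_filter(mask, size)                 # local class fraction (prior)
        mean = uniform_filter(mask * image, size) / np.maximum(frac, 1e-6)
        var = uniform_filter(mask * image**2, size) / np.maximum(frac, 1e-6) - mean**2
        var = np.maximum(var, 1e-6)
        # log posterior = log prior + Gaussian log-likelihood (Bayes' rule)
        log_post[k] = (np.log(np.maximum(frac, 1e-6))
                       - 0.5 * np.log(2 * np.pi * var)
                       - (image - mean) ** 2 / (2 * var))
    return np.argmax(log_post, axis=0)
```

Because the statistics are local, the assignment adapts to slowly varying intensity inhomogeneity, which is the intuition the full model formalizes with an explicit bias field.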

  8. Deterministic single soliton generation and compression in microring resonators avoiding the chaotic region.

    Jaramillo-Villegas, Jose A; Xue, Xiaoxiao; Wang, Pei-Hsun; Leaird, Daniel E; Weiner, Andrew M


    A path within the parameter space of detuning and pump power is demonstrated in order to obtain a single cavity soliton (CS) with certainty in SiN microring resonators in the anomalous dispersion regime. Once the single CS state is reached, it is possible to continue a path to compress it, broadening the corresponding single free spectral range (FSR) Kerr frequency comb. The first step to achieve this goal is to identify the stable regions in the parameter space via numerical simulations of the Lugiato-Lefever equation (LLE). Later, using this identification, we define a path from the stable modulation instability (SMI) region to the stable cavity solitons (SCS) region avoiding the chaotic and unstable regions.
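The numerical simulations used to map the parameter space can be sketched with a standard split-step Fourier integrator for the normalized LLE, dψ/dt = −(1 + iΔ)ψ + i|ψ|²ψ + i∂²ψ/∂θ² + F; the pump F, detuning Δ, grid size, and step count below are illustrative choices, not the paper's values.

```python
import numpy as np

def lle_step(psi, F, delta, dt, k2):
    """One Strang split-step of the normalized Lugiato-Lefever equation.

    Linear part (loss, detuning, second-order dispersion) is applied exactly
    in Fourier space; the Kerr phase rotation and pump act in the azimuthal
    domain. Sketch only; production solvers add adaptive stepping.
    """
    lin = np.exp((dt / 2) * (-(1 + 1j * delta) - 1j * k2))
    psi = np.fft.ifft(lin * np.fft.fft(psi))
    psi = psi * np.exp(dt * 1j * np.abs(psi) ** 2) + dt * F  # Kerr + pump
    return np.fft.ifft(lin * np.fft.fft(psi))

# Illustrative parameters (assumed, not from the paper)
N = 256
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=theta[1] - theta[0]) * 2 * np.pi   # integer mode numbers
psi = 0.5 + 0.1 * np.cos(theta)                            # weak initial field
for _ in range(2000):
    psi = lle_step(psi, F=1.4, delta=2.0, dt=0.01, k2=k ** 2)
```

Scanning F and Δ with such an integrator, and classifying the long-time field (homogeneous, modulated, chaotic, solitonic), is how stability charts of the kind described above are typically assembled.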

  9. Prevention of thromboembolism following elective hip surgery. The value of regional anesthesia and graded compression stockings

    Wille-Jørgensen, P; Christensen, S W; Bjerg-Nielsen, A


    Ninety-eight patients scheduled for elective hip arthroplasty, receiving either general or regional anesthesia and graded compression stockings as the only thromboprophylactic treatment, were screened for postoperative deep-venous thrombosis with 99mTc-plasmin scintimetry. The diagnosis of deep-venous thrombosis was established by phlebography and the diagnosis of pulmonary embolism by pulmonary perfusion and ventilation scintigraphy. Of 65 patients surgically treated under general anesthesia, 20 (31%) developed deep-venous thrombosis and six developed pulmonary embolism. Of 33 patients surgically treated using regional anesthesia, three (9%) developed deep-venous thrombosis and one developed a pulmonary embolus. The number of patients developing deep-venous thrombosis was significantly lower in the group receiving regional anesthesia compared with the group receiving general anesthesia. The results …

  10. Influencing Factors of Compression Strength of Asphalt Mixture in Cold Region

    韦佑坡; 马骉; 司伟


    Aimed at the low-temperature climate of cold regions, indoor uniaxial compression tests were performed on asphalt mixtures to analyse the influence of temperature, asphalt-aggregate ratio, asphalt type, and aggregate gradation on compressive strength. The results show that (1) compressive strength decreases as temperature rises; (2) comparing mixtures with different nominal maximum aggregate sizes, the compressive strength of SBR-modified AC-16 is higher than that of AC-13; (3) there exists an optimum asphalt-aggregate ratio, between about 6.0% and 7.0%, at which compressive strength reaches its maximum; (4) the low-temperature compressive performance of the SBR-modified asphalt mixture is clearly superior to that of the 130# road petroleum asphalt mixture; (5) the logarithm of compressive strength is related to temperature and asphalt-aggregate ratio by a two-variable linear function. Correlation analysis in SPSS further indicates that, among the influencing factors, temperature and asphalt type affect compressive strength most strongly.
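The two-variable linear relation between the logarithm of compressive strength, temperature, and asphalt-aggregate ratio can be fitted by ordinary least squares; the data values below are hypothetical placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical uniaxial test data: temperature (deg C), asphalt-aggregate
# ratio (%), and measured compressive strength (MPa). Illustrative only.
T = np.array([-20, -20, -10, -10, 0, 0, 10, 10], dtype=float)
r = np.array([5.5, 6.5, 5.5, 6.5, 5.5, 6.5, 5.5, 6.5])
s = np.array([38.0, 41.0, 30.0, 33.0, 22.0, 25.0, 15.0, 17.0])

# Fit ln(strength) = a + b*T + c*r by least squares
A = np.column_stack([np.ones_like(T), T, r])
coef, *_ = np.linalg.lstsq(A, np.log(s), rcond=None)
a, b, c = coef  # expect b < 0 (hotter -> weaker), c > 0 over this range
```

A negative b and positive c reproduce the reported trends: strength falls as temperature rises and, below the optimum ratio, rises with asphalt-aggregate ratio.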

  11. Regions of constrained maximum likelihood parameter identifiability. [of discrete-time nonlinear dynamic systems with white measurement errors

    Lee, C.-H.; Herget, C. J.


    This short paper considers the parameter-identification problem of general discrete-time, nonlinear, multiple input-multiple output dynamic systems with Gaussian white distributed measurement errors. Knowledge of the system parameterization is assumed to be available. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems.

  12. Probable Maximum Precipitation (PMP) over mountainous region of Cameron Highlands- Batang Padang Catchment of Malaysia

    Sidek, L. M.; Mohd Nor, M. D.; Rakhecha, P. R.; Basri, H.; Jayothisa, W.; Muda, R. S.; Ahmad, M. N.; Razad, A. Z. Abdul


    The Cameron Highland Batang Padang (CHBP) catchment, situated on the main mountain range of Peninsular Malaysia, is of large economic importance, with a series of three dams (Sultan Abu Bakar, Jor and Mahang) currently operating for water resources and hydropower development. Predicting design storm rainfall values for different return periods, including PMP values, can be useful for reviewing the adequacy of the current spillway capacities of these dams. In this paper, design storm rainfalls for various return periods and PMP values for rainfall stations in the CHBP catchment have been computed for three durations: 1, 3, and 5 days. The maximum 1-day, 3-day, and 5-day PMP values are found to be 730.08 mm, 966.17 mm, and 969.0 mm respectively, at station 4513033 (Gunung Brinchang). The PMP values obtained were compared with results of a previous study undertaken by NAHRIM. The highest ratios of 1-day, 3-day, and 5-day PMP to the highest observed rainfall are 2.30, 1.94, and 1.82 respectively, showing that the ratio tends to decrease as duration increases. Finally, temporal patterns for 1-day, 3-day, and 5-day storms have been developed based on observed extreme rainfall at station 4513033 (Gunung Brinchang) for the generation of the Probable Maximum Flood (PMF) in dam-break analysis.

  13. Integrated modeling for optimized regional transportation with compressed natural gas fuel

    Hossam A. Gabbar


    Full Text Available Transportation represents major energy consumption, with fuel as its primary energy source. Recent developments in vehicle technology have revealed possible economic gains from using natural gas as a fuel source instead of traditional gasoline. Several fuel alternatives exist, such as electricity, which shows potential for long-term future transportation; however, moving away from the current situation, in which gasoline vehicles dominate, carries a high cost compared with compressed natural gas vehicles. This paper presents a modeling and simulation methodology to optimize transportation performance based on a quantitative study of the risk-based performance of regional transportation. An emission estimation method is demonstrated and used to optimize transportation strategies based on life-cycle costing. Different fuel supply scenarios are synthesized and evaluated, showing the strategic value of natural gas as a fuel supply.

  14. Quality ratings of frequency-compressed speech by participants with extensive high-frequency dead regions in the cochlea.

    Salorio-Corbetto, Marina; Baer, Thomas; Moore, Brian C J


    The objective was to assess the degradation of speech sound quality produced by frequency compression for listeners with extensive high-frequency dead regions (DRs). Quality ratings were obtained using values of the starting frequency (Sf) of the frequency compression both below and above the estimated edge frequency, fe, of each DR; thus, the value of Sf often fell below the lowest value currently used in clinical practice. Several compression ratios (CRs) were used for each value of Sf. Stimuli were sentences processed via a prototype hearing aid based on the Phonak Exélia Art P. Five participants (eight ears) with extensive high-frequency DRs were tested. Reductions of sound quality produced by frequency compression were small to moderate. Ratings decreased significantly with decreasing Sf and increasing CR; the mean ratings were lowest for the lowest Sf and highest CR. Ratings varied across participants, with one participant rating frequency compression lower than no frequency compression even when Sf was above fe. Frequency compression degraded sound quality somewhat for this small group of participants with extensive high-frequency DRs; the degradation was greater for lower values of Sf relative to fe, and for greater values of CR. Results varied across participants.
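The Sf/CR parameterization can be illustrated with the usual piecewise-linear idealization of frequency compression; note this is a generic textbook mapping, an assumption for illustration, and commercial schemes such as the one in the prototype hearing aid typically compress in the log-frequency domain instead.

```python
def compress_frequency(f_in, sf, cr):
    """Map an input frequency (Hz) under idealized frequency compression.

    Components below the starting frequency `sf` pass unchanged; components
    above it are compressed toward `sf` by the compression ratio `cr`.
    Generic illustration, not Phonak's proprietary scheme.
    """
    if f_in <= sf:
        return f_in
    return sf + (f_in - sf) / cr
```

With sf = 1500 Hz and cr = 2, a 3500 Hz component maps to 2500 Hz, moving high-frequency speech cues into a region below the dead region's edge frequency.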

  15. An exploratory study of spatial annual maximum of monthly precipitation in the northern region of Portugal

    Prata Gomes, D.; Neves, M. M.; Moreira, E.


    Adequately analyzing and modeling extreme rainfall events is of great importance because of the effects that their magnitude and frequency can have on human life, agricultural productivity, and economic activity, among others. A single extreme event may affect several locations, and their spatial dependence has to be taken into account appropriately. Classical geostatistics is a well-developed field for dealing with location-referenced data, but it is largely based on Gaussian processes and distributions, which are not appropriate for extremes. In this paper, an exploratory study of the annual maximum of monthly precipitation recorded in the northern area of Portugal from 1941 to 2006 at 32 locations is performed. The aim is to apply max-stable processes, a natural extension of multivariate extremes to the spatial set-up, to briefly describe the models considered, and to estimate the parameters required to simulate prediction maps.

  16. Lightweight Object Tracking in Compressed Video Streams Demonstrated in Region-of-Interest Coding

    Rik Van de Walle


    Full Text Available Video scalability is a recent video coding technology that allows content providers to offer multiple quality versions from a single encoded video file in order to target different kinds of end-user devices and networks. One form of scalability utilizes the region-of-interest concept, that is, the possibility to mark objects or zones within the video as more important than the surrounding area. The scalable video coder ensures that these regions-of-interest are received by an end-user device before the surrounding area and preferably in higher quality. In this paper, novel algorithms are presented making it possible to automatically track the marked objects in the regions of interest. Our methods detect the overall motion of a designated object by retrieving the motion vectors calculated during the motion estimation step of the video encoder. Using this knowledge, the region-of-interest is translated, thus following the objects within. Furthermore, the proposed algorithms allow adequate resizing of the region-of-interest. By using the available information from the video encoder, object tracking can be done in the compressed domain and is suitable for real-time and streaming applications. A time-complexity analysis is given for the algorithms proving the low complexity thereof and the usability for real-time applications. The proposed object tracking methods are generic and can be applied to any codec that calculates the motion vector field. In this paper, the algorithms are implemented within MPEG-4 fine-granularity scalability codec. Different tests on different video sequences are performed to evaluate the accuracy of the methods. Our novel algorithms achieve a precision up to 96.4%.
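The core idea of reusing the encoder's motion vectors can be sketched as follows; the array layout, macroblock units, and function name are assumptions for illustration, and the paper's ROI-resizing step is omitted.

```python
import numpy as np

def track_roi(roi, mv_field):
    """Translate a region-of-interest using encoder motion vectors.

    roi: (x, y, w, h) in macroblock units; mv_field: H x W x 2 array of
    per-macroblock motion vectors (dx, dy) taken from the encoder's motion
    estimation step. Simplified sketch of compressed-domain tracking: the
    overall object motion is the mean vector inside the ROI.
    """
    x, y, w, h = roi
    block = mv_field[y:y + h, x:x + w]          # vectors inside the ROI
    dx, dy = block.reshape(-1, 2).mean(axis=0)  # overall object motion
    return (int(round(x + dx)), int(round(y + dy)), w, h)
```

Because only the motion vector field is read, no pixel decoding is needed, which is what makes this kind of tracking cheap enough for real-time and streaming use.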

  18. Measurement of SUVs-Maximum for Normal Region Using VOI in PET/MRI and PET/CT

    Jeong Kyu Park


    Full Text Available The purpose of this research is to establish an overall data set associated with the VOI (volume of interest), which is available for simultaneous assessment of PET/MRI and PET/CT regardless of the use of contrast media. The participants in this investigation were 26 healthy examinees in Korea. Maximum SUV (standardized uptake value) evaluation for whole-body F-18 FDG (fluorodeoxyglucose) PET/MRI images using VOIs of normal regions exhibited a very significant difference from that for whole-body F-18 FDG PET/CT images (significant probability value (P0.8). It is shown that one needs to decide SUVs-maximum for PET/MRI with a reduction of 25.0~26.4% from their evaluated value, and with a reduction of 28.8~29.4% in the same situation but with the use of contrast media. The use of SUVLBM-maximum (SUV Lean Body Mass-maximum) is very advantageous to medical doctors and researchers in reading overall PET/CT and PET/MRI images, considering its convenience and efficiency. We expect that this research enhances the level of early-stage accurate diagnosis with whole-body PET/MRI and PET/CT images.

  19. Comparison of annual maximum series and partial duration series methods for modeling extreme hydrologic events: 2. Regional modeling

    Madsen, Henrik; Pearson, Charles P.; Rosbjerg, Dan


    Two regional estimation schemes, based on, respectively, partial duration series (PDS) and annual maximum series (AMS), are compared. The PDS model assumes a generalized Pareto (GP) distribution for modeling threshold exceedances corresponding to a generalized extreme value (GEV) distribution for annual maxima. First, the accuracy of PDS/GP and AMS/GEV regional index-flood T-year event estimators are compared using Monte Carlo simulations. For estimation in typical regions assuming a realistic degree of heterogeneity, the PDS/GP index-flood model is more efficient. The regional PDS and AMS procedures are subsequently applied to flood records from 48 catchments in New Zealand. To identify homogeneous groupings of catchments, a split-sample regionalization approach based on catchment characteristics is adopted. The defined groups are more homogeneous for PDS data than for AMS data; a two-way grouping based on annual average rainfall is sufficient to attain homogeneity for PDS, whereas a further partitioning is necessary for AMS. In determination of the regional parent distribution using L-moment ratio diagrams, PDS data, in contrast to AMS data, provide an unambiguous interpretation, supporting a GP distribution.
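The two at-site estimation schemes can be sketched with SciPy's extreme-value distributions; the synthetic flow record, threshold quantile, and return period below are illustrative assumptions, not the paper's New Zealand data or its regional index-flood procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
flows = rng.gumbel(loc=100.0, scale=25.0, size=50 * 365)  # synthetic daily flows

# AMS/GEV: fit a GEV distribution to annual maxima (blocks of 365 values)
ams = flows.reshape(50, 365).max(axis=1)
gev_shape, gev_loc, gev_scale = stats.genextreme.fit(ams)

# PDS/GP: fit a generalized Pareto distribution to threshold exceedances
threshold = np.quantile(flows, 0.995)
exceed = flows[flows > threshold] - threshold
gp_shape, gp_loc, gp_scale = stats.genpareto.fit(exceed, floc=0.0)

# 100-year event quantile from the AMS/GEV model
q100_ams = stats.genextreme.ppf(1 - 1 / 100, gev_shape, gev_loc, gev_scale)
```

The PDS fit uses many more data points per record (all exceedances rather than one maximum per year), which is one intuition behind the higher efficiency reported for the PDS/GP estimator.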

  20. Age- and sex-related regional compressive strength characteristics of human lumbar vertebrae in osteoporosis

    Márta Kurutz


    Full Text Available Objective: To obtain the compressive load-bearing and energy absorption capacity of lumbar vertebrae of the osteoporotic elderly for everyday medical practice in terms of simple diagnostic data, such as computed tomography (CT), densitometry, age, and sex. Methods: Compressive tests of 54 osteoporotic cadaver vertebrae L1 and L2, 16 male and 38 female (age range 43–93, mean age 71.6 ± 13.3 years, mean bone mineral density (BMD) 0.377 ± 0.089 g/cm2, mean T-score −5.57 ± 0.79, Z-score −4.05 ± 0.77), were investigated. Based on the load-displacement diagrams and the measured geometrical parameters of the vertebral bodies, proportional, ultimate and yield stresses and strains, Young's modulus, ductility and energy absorption capacity were determined. Three vertebral regions were distinguished (superior, central and inferior), and certain parameters were calculated for the upper/lower intermediate layers as well. Cross-sectional areas and certain bone tissue parameters were determined by image analysis of CT pictures of the vertebrae. Sex- and age-related decline functions and trends of strength characteristics were determined. Results: Size-corrected failure load was 15%–25% smaller in women; proportional and ultimate stresses were about 30%–35% smaller for women in any region, and 20%–25% higher in central regions for both sexes. Young's moduli were about 30% smaller in women in any region, and 20%–25% smaller in the central region for both sexes. Small strains were higher in males, large strains were higher in females; namely, proportional strains were …

  1. Performance of IRI-2012 model during a deep solar minimum and a maximum year over global equatorial regions

    Kumar, Sanjay


    This paper examines the prediction capability of the latest version of the International Reference Ionosphere (IRI-2012) model in predicting the total electron content (TEC) over seven different equatorial regions across the globe during a very low solar activity phase (2009) and a high solar activity phase (2012). This has been carried out by comparing the ground-based Global Positioning System (GPS)-derived VTEC with that from the IRI-2012 model. The observed GPS-TEC shows the presence of a winter anomaly which is prominent during the solar maximum year 2012 and disappears during the solar minimum year 2009. The monthly and seasonal means of the IRI-2012 model TEC with the IRI-NeQ topside have been compared with the GPS-TEC, and our results show that the monthly and seasonal mean values of the IRI-2012 model overestimate the observed GPS-TEC at all the equatorial stations. The discrepancy (or overestimation) in the IRI-2012 model is larger during the solar maximum year 2012 than during the solar minimum year 2009. This contradicts the results recently presented by Tariku (2015) over the equatorial regions of Uganda. The discrepancy is largest during the December solstice and smallest during the March equinox. The magnitude of the discrepancy in the IRI-2012 model shows a longitudinal dependence, maximizing in the western longitude sector during both 2009 and 2012. The significant discrepancy observed during the solar minimum year 2009 could be attributed to the larger difference between the F10.7 flux and the EUV flux (26-34 nm) during the low solar activity period 2007-2009 than during the high solar activity period 2010-2012. This suggests that, to represent the solar activity impact in the IRI model, implementation of new solar activity indices is required for better performance.

  2. Identifying genomic regions for fine-mapping using genome scan meta-analysis (GSMA) to identify the minimum regions of maximum significance (MRMS) across populations.

    Cooper, Margaret E; Goldstein, Toby H; Maher, Brion S; Marazita, Mary L


    In order to detect linkage of the simulated complex disease Kofendrerd Personality Disorder across studies from multiple populations, we performed a genome scan meta-analysis (GSMA). Using the 7-cM microsatellite map, nonparametric multipoint linkage analyses were performed separately on each of the four simulated populations to determine p-values. The genome of each population was divided into 20-cM bin regions, and each bin was rank-ordered based on the most significant linkage p-value for that population in that region. The bin ranks were then averaged across all four studies to determine the most significant 20-cM regions over all studies. Statistical significance of the averaged bin ranks was determined from a normal distribution of randomly assigned rank averages. To narrow the region of interest for fine-mapping, the meta-analysis was repeated two additional times, with each of the 20-cM bins offset by 7 cM and 13 cM, respectively, creating regions of overlap with the original method. The 6-7 cM shared regions, where the highest averaged 20-cM bins from each of the three offsets overlap, designate the minimum region of maximum significance (MRMS). Application of the GSMA-MRMS method revealed genome-wide significance (p-values refer to the average rank assigned to the bin) at regions including or adjacent to all of the simulated disease loci, including chromosome 1 (p-value < 0.05 for 7-14 cM, the region adjacent to D4). This GSMA analysis approach demonstrates the power of linkage meta-analysis to detect multiple genes simultaneously for a complex disorder. The MRMS method enhances this powerful tool to focus on more localized regions of linkage.
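The rank-averaging core of GSMA can be sketched as follows; the array shapes and example p-values are hypothetical, and the 7/13-cM offset and MRMS overlap stages are omitted.

```python
import numpy as np

def gsma_bin_ranks(pvalues_per_study):
    """Average per-study bin ranks, as in genome scan meta-analysis (GSMA).

    pvalues_per_study: S x B array holding, for each of S studies, the most
    significant linkage p-value in each of B genome bins. Within each study,
    bins are rank-ordered (rank B = most significant); ranks are then
    averaged across studies. Simplified sketch of the procedure above.
    """
    p = np.asarray(pvalues_per_study, dtype=float)
    S, B = p.shape
    ranks = np.empty_like(p)
    for s in range(S):
        order = np.argsort(-p[s])           # descending p: least significant first
        ranks[s, order] = np.arange(1, B + 1)
    return ranks.mean(axis=0)
```

Bins whose averaged rank is high across all studies are the candidates whose significance is then assessed against the null distribution of randomly assigned rank averages.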

  3. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun- Gui-Guang region, Mainland China

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi


    Precipitation interpolation has been an active area of research for many years and is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous region), Mainland China, was taken into consideration for spatial interpolation. A multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method, which incorporates hard and soft data through probability density functions, was precise. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.

  5. Periodic traveling compression regions during quiet geomagnetic conditions and their association with ground Pi2

    A. Keiling


    Full Text Available Recently, Keiling et al. (2006) showed that periodic (~90 s) traveling compression regions (TCRs) during a substorm had properties of Pi2 pulsations, prompting them to call this type of periodic TCRs "lobe Pi2". It was further shown that time-delayed ground Pi2 had the same period as the lobe Pi2 located at 16 RE, and it was concluded that both were remotely driven by periodic, pulsed reconnection in the magnetotail. In the study reported here, we give further evidence for this association by reporting additional periodic TCR events (lobe Pi2s) at 18 RE, all of which occurred in succession during a geomagnetically very quiet, non-substorm period. Each quiet-time periodic TCR event occurred during an interval of small H-bay-like ground disturbance (<40 nT). Such disturbances have previously been identified as poleward boundary intensifications (PBIs). The small H bays were superposed by Pi2s. These ground Pi2s are compared to the TCRs in the tail lobe (Cluster) and to both magnetic pulsations and flow variations at 9 RE inside the plasma sheet (Geotail). The main results of this study are: (1) Further evidence is given that periodic TCRs in the tail lobe at distances of 18 RE and ground Pi2 are related phenomena. In particular, it is shown that both had the same periodicity and occurred simultaneously (allowing for propagation time delays), strongly suggesting that both had the same periodic source. Since the TCRs were propagating Earthward, this source was located in the outer magnetosphere beyond 18 RE. (2) The connection of periodic TCRs and ground Pi2 also exists during very quiet geomagnetic conditions with PBIs present, in addition to the previous result (Keiling et al., 2006), which showed this connection during substorms. (3) Combining (1) and (2), we conclude that the frequency of PBI-associated Pi2 is controlled in the outer magnetosphere as opposed to the …

  6. Region of interest extraction for lossless compression of bone X-ray images.

    Kazeminia, S; Karimi, N; Soroushmehr, S M R; Samavi, S; Derksen, H; Najarian, K


    For several decades, digital X-ray imaging has been one of the most important tools for medical diagnosis. With the advent of distance medicine and the use of big data in this respect, efficient storage and online transmission of these images is becoming an essential requirement; limited storage space and limited transmission bandwidth are the main challenges. The most efficient image compression methods are lossy, whereas the information in medical images should be preserved without change; hence, lossless compression methods are necessary for this purpose. In this paper, a novel method is proposed to eliminate the non-ROI data from bone X-ray images, since background pixels do not contain any valuable medical information. The proposed method is based on histogram dispersion. The ROI is separated from the background and compressed with a lossless method to preserve the medical information of the image. Compression ratios of the implemented results show that the proposed algorithm is capable of effectively reducing the statistical and spatial redundancies.
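A minimal stand-in for the ROI-extraction idea, assuming the background forms the dominant dark histogram peak (the paper's histogram-dispersion criterion is more elaborate), followed by generic lossless (deflate) coding of the cropped region:

```python
import numpy as np
import zlib

def extract_and_compress(image, n_bins=64):
    """Crop the ROI of an X-ray image and losslessly compress it.

    Assumption for this sketch: background pixels cluster in one dominant
    histogram peak darker than the anatomy. We threshold just above that
    peak, crop the bounding box of the remaining pixels, and deflate the
    raw bytes of the ROI.
    """
    hist, edges = np.histogram(image, bins=n_bins)
    background_bin = np.argmax(hist)            # dominant peak = background
    thresh = edges[background_bin + 1]
    ys, xs = np.where(image > thresh)
    roi = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return roi, zlib.compress(roi.astype(np.uint8).tobytes(), 9)
```

Dropping the background before entropy coding removes spatial redundancy that contributes nothing diagnostically, which is the effect the paper measures through its compression ratios.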

  7. Impact of Freeze-Thaw Cycles on Compressive Characteristics of Asphalt Mixture in Cold Regions

    SI Wei; LI Ning; MA Biao; REN Junping; WANG Hainian; HU Jian


    Low average temperature, large temperature differences and continual freeze-thaw (F-T) cycles have significant impacts on the mechanical properties of asphalt pavement. An F-T cycle test was applied to illustrate the mixtures' compressive characteristics. An exponential model was applied to analyze the variation of compressive characteristics with F-T cycles; a loss-ratio model and a Logistic model were used to present the deterioration trend with increasing F-T cycles. ANOVA was applied to show the significance of F-T cycles and asphalt-aggregate ratio. The experimental results show that compressive strength and resilient modulus decline with increasing F-T cycles; the degradation is sharp during the initial F-T cycles and becomes gentle after 8 cycles. ANOVA results show that F-T cycles and asphalt-aggregate ratio have significant influence on the compressive characteristics. The exponential, loss-ratio and Logistic models fit the test data with statistical significance and well reflect the degradation trend of the compressive characteristics of asphalt mixture with increasing F-T cycles.
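The exponential degradation model can be fitted with nonlinear least squares; the strength values below are hypothetical, chosen only to mimic the sharp-then-gentle decline described above, and are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical compressive strength (MPa) after n freeze-thaw cycles
cycles = np.array([0, 1, 2, 4, 6, 8, 12, 16], dtype=float)
strength = np.array([40.0, 36.5, 34.0, 31.0, 29.5, 28.6, 28.2, 28.0])

def exp_decay(n, s_inf, a, k):
    """Sharp initial loss leveling off at a residual strength s_inf."""
    return s_inf + a * np.exp(-k * n)

popt, _ = curve_fit(exp_decay, cycles, strength, p0=(28.0, 12.0, 0.3))
s_inf, a, k = popt  # k > 0 gives the sharp-then-gentle degradation shape
```

The fitted asymptote s_inf quantifies the plateau the test data reach after about 8 cycles, while k measures how quickly the early degradation occurs.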

  8. Profile of malignant spinal cord compression: One year study at regional cancer center

    Malik Tariq Rasool


    Results: Most of the patients were in the age group of 41–60 years and there was no gender preponderance. Female breast cancer was the most common incident malignancy (15.5%), followed by multiple myeloma, lung, and prostatic carcinoma. The lower dorsal spine was the most common site of compression (35%), followed by the lumbar (31%) and mid-dorsal (26%) spine. Seventy patients (91%) had cord compression subsequent to bone metastasis, whereas the other patients had leptomeningeal metastasis. In 31 (40%) patients, spinal cord compression was the presenting symptom. Overall, only 26 patients had motor improvement after treatment. Conclusion: Grade of power before treatment was predictive of response to treatment and of the overall outcome of motor and sensory functions. Neurodeficit of more than 10 days' duration was associated with poor neurological outcome.

  9. Regional Analysis of Precipitation by Means of Bivariate Distribution Adjusted by Maximum Entropy; Analisis regional de precipitacion con base en una distribucion bivariada ajustada por maxima entropia

    Escalante Sandoval, Carlos A.; Dominguez Esquivel, Jose Y. [Universidad Nacional Autonoma de Mexico (Mexico)


    The principle of maximum entropy (POME) is used to derive an alternative method of parameter estimation for the bivariate extreme-value distribution with Gumbel marginals. A simple algorithm for this parameter estimation technique is presented. The method is applied to analyze the 24-hour maximum precipitation in a region of Mexico, and the resulting design events are compared with those obtained by the maximum likelihood procedure. According to the results, the proposed technique is a suitable option to consider when performing frequency analysis of precipitation with small samples.
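The bivariate POME estimators themselves are beyond a short sketch, but the Gumbel marginal and its design events can be illustrated with standard method-of-moments estimators and the return-period quantile. These are textbook formulas, not the authors' POME procedure.

```python
import math

def gumbel_moments_fit(sample):
    """Method-of-moments estimates (mu, beta) for a Gumbel (EV1) distribution:
    mean = mu + gamma * beta, std = pi * beta / sqrt(6), gamma = Euler-Mascheroni."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta
    return mu, beta

def gumbel_quantile(mu, beta, T):
    """Design event for return period T: non-exceedance probability F = 1 - 1/T."""
    p = 1.0 - 1.0 / T
    return mu - beta * math.log(-math.log(p))
```

For small samples, the choice between moments, maximum likelihood and POME estimators is exactly what the record's comparison addresses.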

  10. Lossless compression of RNAi fluorescence images using regional fluctuations of pixels.

    Karimi, Nader; Samavi, Shadrokh; Shirani, Shahram


    RNA interference (RNAi) is considered one of the most powerful genomic tools; it enables drug discovery studies and the understanding of complex cellular processes through high-content screens. This field of study, the subject of the 2006 Nobel Prize in medicine, has drastically changed conventional methods of gene analysis. RNAi experiments produce large numbers of images. Although a number of capable special-purpose methods have been proposed recently for processing RNAi images, there is no customized compression scheme for them; highly proficient tools are therefore required to compress these images. In this paper, we propose a new efficient lossless compression scheme for RNAi images, built around a new predictor specifically designed for them. We show that pixels can be classified into three categories based on their intensity distributions. Using this classification of pixels, based on the intensity fluctuations among the neighbors of a pixel, a context-based method is designed. Comparisons of the proposed method with existing state-of-the-art lossless compression standards and well-known general-purpose methods demonstrate its efficiency.
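The general idea of a context-based predictor driven by neighborhood fluctuation can be sketched as follows. Everything here is hypothetical: the thresholds, the three-way split and the fallback predictors are illustrative stand-ins, not the authors' actual scheme (the MED fallback is borrowed from JPEG-LS).

```python
import numpy as np

def classify_context(w, n, nw, ne, t_flat=2.0, t_edge=10.0):
    """Classify a pixel's causal neighborhood (west, north, north-west, north-east)
    into one of three categories by the spread of neighbor intensities."""
    fluct = np.std([w, n, nw, ne])
    if fluct < t_flat:
        return "flat"
    if fluct < t_edge:
        return "textured"
    return "edge"

def predict_pixel(w, n, nw, ne, t_flat=2.0, t_edge=10.0):
    """Pick a predictor per context: averaging in flat areas, a MED-style
    (median edge detector) predictor where fluctuation suggests an edge."""
    ctx = classify_context(w, n, nw, ne, t_flat, t_edge)
    if ctx == "flat":
        return (w + n + nw + ne) / 4.0
    if ctx == "textured":
        return (w + n) / 2.0
    # Edge context: MED predictor
    if nw >= max(w, n):
        return min(w, n)
    if nw <= min(w, n):
        return max(w, n)
    return w + n - nw
```

In a real coder the prediction residuals would then be entropy-coded with per-context statistics.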

  11. Effects of augmented trunk stabilization with external compression support on shoulder and scapular muscle activity and maximum strength during isometric shoulder abduction.

    Jang, Hyun-jeong; Kim, Suhn-yeop; Oh, Duck-won


    The aim of the present study was to investigate the effects of augmented trunk stabilization with external compression support (ECS) on the electromyography (EMG) activity of shoulder and scapular muscles and on shoulder abductor strength during isometric shoulder abduction. Twenty-six women volunteered for the study. Surface EMG was used to monitor the activity of the upper trapezius (UT), lower trapezius (LT), serratus anterior (SA), and middle deltoid (MD), and shoulder abductor strength was measured using a dynamometer during three experimental conditions: (1) no external support (condition 1), (2) pelvic support (condition 2), and (3) pelvic and thoracic support (condition 3) in an active therapeutic movement device. EMG activities were significantly lower for the UT and higher for the MD during condition 3 than during condition 1, and shoulder abductor strength was significantly higher during condition 3 than during condition 1. These findings suggest that ECS is useful for reducing the muscle effort of the UT during isometric shoulder abduction and for increasing shoulder abductor strength. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. A topological restricted maximum likelihood (TopREML) approach to regionalize trended runoff signatures in stream networks

    M. F. Müller


    We introduce TopREML as a method to predict runoff signatures in ungauged basins. The approach is based on linear mixed models with spatially correlated random effects. The nested nature of streamflow networks is taken into account by using water balance considerations to constrain the covariance structure of runoff and to account for the stronger spatial correlation between flow-connected basins. The restricted maximum likelihood (REML) framework generates the best linear unbiased predictor (BLUP) of both the predicted variable and the associated prediction uncertainty, even when incorporating observable covariates into the model. The method was successfully tested in cross-validation analyses on mean streamflow and runoff frequency in Nepal (sparsely gauged) and Austria (densely gauged), where it matched the performance of comparable methods in predicting the considered runoff signature while significantly outperforming them in predicting the associated modeling uncertainty. TopREML's ability to combine deterministic and stochastic information to generate BLUPs of the prediction variable and its uncertainty makes it a particularly versatile method that can readily be applied both in densely gauged basins, where it takes advantage of spatial covariance information, and in data-scarce regions, where it can rely on covariates, which are increasingly observable thanks to remote sensing technology.
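TopREML's contribution is constraining the covariance along the stream network; the BLUP machinery itself is generic. As a sketch of how an unbiasedness-constrained BLUP is computed (the ordinary-kriging form, not the paper's actual network-constrained model; the toy covariances are invented), one solves the KKT system:

```python
import numpy as np

def blup_weights(C, c0):
    """BLUP weights for an ungauged target site.
    C: n x n covariance among gauged basins; c0: covariances basin-to-target.
    Solves [[C, 1], [1^T, 0]] [lam; m] = [c0; 1], so the weights sum to 1
    (unbiasedness) while minimizing the prediction variance."""
    n = len(c0)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    b = np.append(c0, 1.0)
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]  # (weights, Lagrange multiplier)

# Toy example: basin 0 is more strongly correlated with the target than basin 1
C = np.array([[1.0, 0.2], [0.2, 1.0]])
c0 = np.array([0.8, 0.3])
lam, _ = blup_weights(C, c0)
```

The prediction is then the weighted sum of the gauged runoff signatures; REML enters when the covariance parameters themselves must be estimated.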

  13. L-Moments Regional Frequency Analysis Methodology Application in maximum rainfall values over the Bogota River's basin

    Romero, Claudia; Mesa, Duvan


    L-Moments Regional Frequency Analysis Methodology Application in maximum rainfall values over the Bogota River's basin. Claudia Patricia Romero Hernández, Duvan Javier Mesa Fernández, Universidad Santo Tomás, Colombia. The application area of this methodology is the Bogota River's basin, located in Cundinamarca, a Colombian department, with a total surface area of 589,143 hectares. The basin includes 19 sub-basins and is the most densely urbanized in the country. Including its metropolitan area, the region has a population of 9,000,000 inhabitants, approximately 23% of Colombia's population, and around 19% of the country's industries. The basin has shown a marked increase in the frequency of severe floods in recent years due to climatic variations. These climatic periods correspond to the weather pattern called the La Niña phenomenon (2010-2011), which affected 57,000 citizens in the department and 4,900 people directly in Bogota city, with an estimated economic damage of 277,121,052 USD. The Regional Frequency Analysis methodology is a statistical procedure that pools information from multiple samples into a single large sample, under the prior assumption that all of them come from the same probability model except for a scale factor. The samples are defined by a "regionalization" procedure known as the "Flood Index" (index-flood) method. This procedure groups several kinds of information that come from a common probability model, such as temperature, rainfall, and water flow; the model must be similar for all of the weather stations located in a homogeneous region. Maps for each of 4 return periods (5, 10, 50 and 100 years) were developed based on 120 weather stations located in the basin. The information used comes from median monthly rainfall data, based on historical series averaging between 30 and 40 years. An increase in the annual median rainfall was
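The sample L-moments at the heart of such a regional frequency analysis are computed from probability-weighted moments of the ordered sample. The sketch below is the standard estimator (Hosking-style unbiased form), not the authors' specific regional workflow:

```python
def sample_l_moments(data):
    """First three sample L-moments (l1, l2, l3) and L-skewness t3,
    from probability-weighted moments b0, b1, b2 of the ordered sample."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    # zero-based index i plays the role of (rank - 1)
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    l1 = b0                      # location
    l2 = 2 * b1 - b0             # scale
    l3 = 6 * b2 - 6 * b1 + b0    # third L-moment
    return l1, l2, l3, l3 / l2   # t3 = L-skewness
```

In the index-flood procedure, the at-site ratios t3 (and t4) are averaged over the homogeneous region to select and fit the regional growth curve.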

  14. A maximum likelihood QTL analysis reveals common genome regions controlling resistance to Salmonella colonization and carrier-state

    Thanh-Son Tran


    Background: The serovars Enteritidis and Typhimurium of the Gram-negative bacterium Salmonella enterica are significant causes of human food poisoning. Fowl carrying these bacteria often show no clinical disease, with detection only established post-mortem. Increased resistance to the carrier state in commercial poultry could improve food safety by reducing the spread of these bacteria in poultry flocks. Previous studies identified QTLs for both resistance to the carrier state and resistance to Salmonella colonization in the same White Leghorn inbred lines. Until now, none of the QTLs identified was common to the two types of resistance. All these analyses were performed using the F2 inbred or backcross option of the QTLExpress software, based on linear regression. In the present study, QTL analysis was carried out using maximum likelihood with the QTLMap software, in order to test the effect of the analysis method on QTL detection. We analyzed the same phenotypic and genotypic data as in previous studies, collected on 378 animals genotyped with 480 genome-wide SNP markers. To enrich these data, we added eleven SNP markers located within QTLs controlling resistance to colonization and looked for potential candidate genes co-localizing with QTLs. Results: In our case the QTL analysis method had an important impact on QTL detection. We were able to identify new genomic regions controlling resistance to the carrier state, in particular by testing for the existence of two segregating QTLs, but some of the previously identified QTLs were not confirmed. Interestingly, two QTLs were detected on chromosomes 2 and 3, close to the locations of the major QTLs controlling resistance to colonization and to candidate genes involved in the immune response identified in other, independent studies.
Conclusions: Due to the lack of stability of the QTLs detected, we suggest that interesting regions for further studies are those that were

  15. Adaptive region-growing with maximum curvature strategy for tumor segmentation in 18F-FDG PET

    Tan, Shan; Li, Laquan; Choi, Wookjin; Kang, Min Kyu; D'Souza, Warren D.; Lu, Wei


    Accurate tumor segmentation in PET is crucial in many oncology applications. We developed an adaptive region-growing (ARG) algorithm with a maximum curvature strategy (ARG_MC) for tumor segmentation in PET. The ARG_MC repeatedly applies a confidence connected region-growing algorithm with an increasing relaxing factor f. The optimal relaxing factor (ORF) is then determined at the transition point on the f-volume curve, where the volume just grows from the tumor into the surrounding normal tissues. The ARG_MC, along with five widely used algorithms, was tested on a phantom with 6 spheres at different signal-to-background ratios and on two clinical datasets including 20 patients with esophageal cancer and 11 patients with non-Hodgkin lymphoma (NHL). The ARG_MC did not require any phantom calibration or any a priori knowledge of the tumor or PET scanner. The identified ORF varied with tumor type (mean ORF = 9.61, 3.78 and 2.55, respectively, for the phantom, esophageal cancer, and NHL datasets), and varied from one tumor to another. For the phantom, the ARG_MC ranked second in segmentation accuracy with an average Dice similarity index (DSI) of 0.86, only slightly worse than Daisne's adaptive thresholding method (DSI = 0.87), which required phantom calibration. For both the esophageal cancer dataset and the NHL dataset, the ARG_MC had the highest accuracy, with an average DSI of 0.87 and 0.84, respectively. The ARG_MC was robust to parameter settings and region-of-interest selection, and it did not depend on scanners, imaging protocols, or tumor types. Furthermore, the ARG_MC makes no assumption about the tumor size or tumor uptake distribution, making it suitable for segmenting tumors with heterogeneous FDG uptake. In conclusion, the ARG_MC is accurate, robust and easy to use; it provides a highly promising tool for PET tumor segmentation in the clinic.
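A maximum-curvature criterion of the kind described can be sketched with finite differences on the f-volume curve. The example curve below (flat, then a sharp knee) is synthetic, and this is a generic curvature locator, not the authors' implementation:

```python
import numpy as np

def max_curvature_point(f, volume):
    """Locate the transition point on the f-volume curve as the point of
    maximum curvature kappa = |y''| / (1 + y'^2)^(3/2), via finite differences."""
    f = np.asarray(f, dtype=float)
    v = np.asarray(volume, dtype=float)
    d1 = np.gradient(v, f)
    d2 = np.gradient(d1, f)
    kappa = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    i = int(np.argmax(kappa))
    return f[i], v[i]

# Synthetic f-volume curve: stable tumor volume, then leakage into background at f = 5
f = np.linspace(0.0, 10.0, 101)
v = np.where(f <= 5.0, 10.0, 10.0 + 50.0 * (f - 5.0))
orf, vol_at_orf = max_curvature_point(f, v)
```

On real curves some smoothing of the volume samples before differentiation is usually needed.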

  16. Comment on "Electrostatic compressive and rarefactive shocks and solitons in relativistic plasmas occurring in polar regions of pulsar"

    Hafez, M. G.; Talukder, M. R.; Hossain Ali, M.


    The aim of this comment is to show that the solution of the KdVB equation used by Shah et al. (Astrophys. Space Sci. 335:529-537, 2011, doi: 10.1007/s10509-011-0766-y) is not correct; consequently, the numerical results predicted in that manuscript should not be relied upon for further investigations in a plasma laboratory. For this reason, we have employed the Bernoulli equation method to obtain the correct form of the analytical solution to this equation, which is appropriate for the study of electrostatic compressive and rarefactive shocks and solitons in relativistic plasmas occurring in the polar regions of pulsars.

  17. Prospective Out-of-ecliptic White-light Imaging of Interplanetary Corotating Interaction Regions at Solar Maximum

    Xiong, Ming; Davies, Jackie A.; Li, Bo; Yang, Liping; Liu, Ying D.; Xia, Lidong; Harrison, Richard A.; Keiji, Hayashi; Li, Huichao


    Interplanetary corotating interaction regions (CIRs) can be remotely imaged in white light (WL), as demonstrated by the Solar Mass Ejection Imager (SMEI) on board the Coriolis spacecraft and Heliospheric Imagers (HIs) on board the twin Solar TErrestrial RElations Observatory (STEREO) spacecraft. The interplanetary WL intensity, due to Thomson scattering of incident sunlight by free electrons, is jointly determined by the 3D distribution of electron number density and line-of-sight (LOS) weighting factors of the Thomson-scattering geometry. The 2D radiance patterns of CIRs in WL sky maps look very different from different 3D viewpoints. Because of the in-ecliptic locations of both the STEREO and Coriolis spacecraft, the longitudinal dimension of interplanetary CIRs has, up to now, always been integrated in WL imagery. To synthesize the WL radiance patterns of CIRs from an out-of-ecliptic (OOE) vantage point, we perform forward magnetohydrodynamic modeling of the 3D inner heliosphere during Carrington Rotation CR1967 at solar maximum. The mixing effects associated with viewing 3D CIRs are significantly minimized from an OOE viewpoint. Our forward modeling results demonstrate that OOE WL imaging from a latitude greater than 60° can (1) enable the garden-hose spiral morphology of CIRs to be readily resolved, (2) enable multiple coexisting CIRs to be differentiated, and (3) enable the continuous tracing of any interplanetary CIR back toward its coronal source. In particular, an OOE view in WL can reveal where nascent CIRs are formed in the extended corona and how these CIRs develop in interplanetary space. Therefore, a panoramic view from a suite of wide-field WL imagers in a solar polar orbit would be invaluable in unambiguously resolving the large-scale longitudinal structure of CIRs in the 3D inner heliosphere.

  18. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    De Queiroz, Ricardo; Chou, Philip A


    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time and, with the recent possibility of real-time capture and rendering, have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band; the Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
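The building block of such a region-adaptive hierarchical transform is a weight-adaptive, orthonormal Haar pair: when two occupied cells merge, the low-pass coefficient carries the weighted average (and the summed weight) up the hierarchy, while the high-pass detail is sent to the entropy coder. The sketch below shows one such butterfly under this assumption; it is a simplified illustration, not the paper's full transform.

```python
import math

def raht_butterfly(c1, w1, c2, w2):
    """One weight-adaptive Haar step. The 2x2 rotation [[a, b], [-b, a]] with
    a = sqrt(w1/(w1+w2)), b = sqrt(w2/(w1+w2)) is orthonormal, so energy is
    preserved. Returns (low, high, merged weight)."""
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    low = a * c1 + b * c2
    high = -b * c1 + a * c2
    return low, high, w1 + w2

def raht_inverse(low, high, w1, w2):
    """Invert one butterfly given the original weights (the transpose rotation)."""
    a = math.sqrt(w1 / (w1 + w2))
    b = math.sqrt(w2 / (w1 + w2))
    return a * low - b * high, b * low + a * high
```

With equal weights this reduces to the ordinary normalized Haar pair; unequal weights keep the transform adapted to how many points each octree node represents.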

  19. Time compression of soil erosion by the effect of largest daily event. A regional analysis of USLE database.

    Gonzalez-Hidalgo, J. C.; Batalla, R.; Cerda, A.; de Luis, M.


    When Thornes and Brunsden wrote in 1977, "How often one hears the researcher (and no less the undergraduate) complain that after weeks of observation 'nothing happened' only to learn that, the day after his departure, a flood caused unprecedented erosion and channel changes!" (Thornes and Brunsden, 1977, p. 57), they focused on two different problems in geomorphological research: the effects of extreme events and the temporal compression of geomorphological processes. Time compression is one of the main characteristics of erosion processes: an important fraction of the total soil eroded is produced in very short temporal intervals, i.e. in a few events mostly related to extreme events. From magnitude-frequency analysis we know that a few events, not necessarily extreme in magnitude, accomplish a large amount of geomorphological work. Last but not least, extreme isolated events are a classical issue in geomorphology because of their specific effects, and they receive constant attention, heightened at present by scenarios of global change. Nevertheless, the time compression of geomorphological processes can be studied not only through the analysis of extreme events and the traditional magnitude-frequency approach, but also through a complementary approach based on the effects of the largest events. The classical approach defines an extreme event as a rare event (identified by its magnitude and quantified by some deviation from a central value), while we define the largest events by rank, whatever their magnitude. In previous research on the time compression of soil erosion using the USLE soil erosion database (Gonzalez-Hidalgo et al., EGU 2007), we described a relationship between the total number of daily erosive events recorded per plot and the percentage contribution to total soil erosion of the n largest aggregated daily events. Here we offer a further refined analysis comparing different agricultural regions in the USA. To do so we have analyzed data from 594 erosion plots from USLE
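The quantity analyzed here, the share of total erosion contributed by the n largest daily events, reduces to a ranked cumulative sum. The numbers below are toy values chosen only to show the typical skew, not plot data from the USLE database:

```python
def largest_event_contribution(events, n):
    """Fraction of total soil loss contributed by the n largest daily events."""
    ranked = sorted(events, reverse=True)
    return sum(ranked[:n]) / sum(ranked)

# Hypothetical daily soil-loss record for one plot (arbitrary units)
events = [120.0, 3.0, 1.5, 40.0, 0.5, 2.0, 15.0, 0.8]
share = largest_event_contribution(events, 3)
```

Sweeping n from 1 upward yields the concentration curve whose shape the regional comparison is based on.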

  20. Mid-Latitude Pc1, 2 Pulsations Induced by Magnetospheric Compression in the Maximum and Early Recovery Phase of Geomagnetic Storms

    N. A. Zolotukhina; I.P. Kharchenko


    We investigate the properties of interplanetary inhomogeneities generating long-lasting mid-latitude Pc1, 2 geomagnetic pulsations. The data from the Wind and IMP 8 spacecraft, and from the Mondy and Borok mid-latitude magnetic observatories, are used in this study. The pulsations under investigation develop in the maximum and early recovery phase of magnetic storms; they have amplitudes from a few tens to several hundred pT and last more than seven hours. A close association of the increase (decrease) in solar wind dynamic pressure (Psw) with the onset or enhancement (attenuation or decay) of these pulsations has been established. Contrary to high-latitude phenomena, the interplanetary inhomogeneities responsible for the generation of long-lasting mid-latitude Pc1, 2 have a distinctive feature: the effect of the quasi-stationary negative Bz-component of the interplanetary magnetic field on the magnetosphere must extend over 4 hours. Only then are the Psw pulses able to excite the above-mentioned type of mid-latitude geomagnetic pulsations. Model calculations show that in the cases under study the plasmapause can form in the vicinity of the magnetic observatory. This implies that an intense ring current, resulting from enhanced magnetospheric convection, is necessary for Pc1, 2 excitation; furthermore, the existence of the plasmapause above the observation point (acting as a waveguide) is necessary for long-lasting Pc1 waves to reach the ground.

  1. [Surgical treatment of discogenic compression of the spinal cord in the thoracic region].

    Sulla, I; Faguĺa, J; Klímová, E; Mach, P


    The objective of the submitted work is to draw attention to some problems associated with the diagnosis and treatment of prolapsed sequestra of thoracic intervertebral discs. The investigated group comprised 9 subjects (4 women, 5 men) aged 33 to 67 years, operated on at the Neurosurgical Clinic in Kosice between January 1, 1982 and June 30, 2001 for compression of nervous structures in the thoracic portion of the spine by sequestra of intervertebral discs. The compression was manifested by back pain, a sensation of stiffening of the muscles of the lower extremities, altered sensitivity and, in all patients, impaired gait. Only one female patient developed urinary retention; another had painless paraparesis of the lower extremities, and her condition was therefore initially evaluated as a demyelination process. In three patients perimyelography was the only imaging examination used; in another two it was supplemented by CT. Four patients were examined by MRI, which proved to be the most suitable imaging method. In all subjects the clinical picture and examinations indicated a unilateral predominance of the affection. In five subjects the sequestrum of the intervertebral disc could be removed via laminectomy; in another four a transpedicular approach to the spinal canal was used successfully. In all patients the condition improved after surgery.

  2. Irregular Segmented Region Compression Coding Based on Pulse Coupled Neural Network

    MA Yi-de; QI Chun-liang; QIAN Zhi-bai; SHI Fei; ZHANG Bei-dou


    An irregular segmented region coding algorithm based on a pulse-coupled neural network (PCNN) is presented. PCNN has the properties of pulse coupling and a variable threshold, through which adjacent pixels with similar gray values can be activated simultaneously. PCNN therefore lends itself to regional segmentation: the details of the original image can be recovered by adjusting the parameters of the segmented images, while trivial segmented regions are avoided. For a better approximation of the irregular segmented regions, the Gram-Schmidt method, by which a group of orthonormal basis functions is constructed from a group of linearly independent initial basis functions, is adopted. Thanks to this orthonormal reconstruction method, the quality of the reconstructed image is greatly improved and progressive image transmission also becomes possible.
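The Gram-Schmidt step mentioned above can be sketched as follows. This is plain classical Gram-Schmidt over finite-dimensional vectors; the paper's initial basis functions defined over irregular regions are not reproduced here.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors: subtract each
    vector's projections onto the basis built so far, then normalize."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for q in basis:
            w = w - np.dot(w, q) * q  # remove the component along q
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# Three linearly independent vectors in R^3
Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

For numerical robustness on large bases, modified Gram-Schmidt or a QR factorization would normally be preferred.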

  3. IMP-8 observations of traveling compression regions: New evidence for near-earth plasmoids and neutral lines

    Slavin, J.A.; Lepping, R.P.; Baker, D.N. (NASA/Goddard Space Flight Center, Greenbelt, MD (USA))


    An examination of IMP-8 tail lobe magnetic field measurements has been conducted to determine whether the traveling compression region (TCR) phenomena detected by ISEE-3 in the distant geotail, believed to be caused by tailward-moving plasmoids, are present closer to the Earth. The study produced 16 examples of TCRs at distances of X = -31 to -37 R_E. For two events considered in detail, TCRs were observed in close association with substorm growth-phase signatures in the lobes. The lengths of these TCRs are estimated to be 8-12 R_E. The authors conclude that the IMP-8 TCR observations provide new evidence that small plasmoids and, hence, multiple reconnection neutral lines can sometimes exist earthward of X = -35 R_E.

  4. Sonoelastographic evaluation with the determination of compressibility ratio for symmetrical prostatic regions in the diagnosis of clinically significant prostate cancer

    Artur Przewor


    Aim: Sonoelastography is a technique that assesses tissue hardness/compressibility. The utility and sensitivity of the method in prostate cancer diagnostics were assessed against the current gold standard, i.e. systematic biopsy. Material and methods: The study involved 84 patients suspected of prostate cancer based on elevated PSA levels or abnormal digital rectal examination findings. Sonoelastography was used to evaluate the prostate gland. For regions with hardness two-fold greater than that of the symmetric prostate area (strain ratio > 2), targeted biopsy was used, followed by an ultrasound-guided 8- or 10-core systematic biopsy (regardless of the sonoelastography-indicated sites) as a reference point. Results: The mean age of patients was 69 years. PSA serum levels ranged between 1.02 and 885 ng/dl. The mean prostate volume was 62 ml (19–149 ml). Prostate cancer was found in 39 out of 84 individuals. Statistically significant differences in strain ratios between cancers and benign lesions were shown. Sonoelastography-guided biopsy revealed 30 lesions, for an overall sensitivity of 77% (sensitivity of the method, 81%). Sonoelastographic sensitivity increased with Gleason score: 6, 60%; 7, 75%; 8, 83%; 9/10, 100%. The estimated sensitivity of systematic biopsy was 92%. Conclusions: Sonoelastography shows higher diagnostic sensitivity in prostate cancer diagnostics than conventional imaging techniques, i.e. grey-scale TRUS and Doppler ultrasound. It allows the number of collected tissue cores to be reduced, limiting the incidence of complications as well as the costs involved. Sonoelastography with determination of the compressibility ratio for symmetrical prostatic regions may prove useful in the detection of clinically significant prostate cancer.

  5. Trigeminal Neuralgia: Evaluation of the Relationship Between the Region of Neuralgic Manifestation and the Site of Neurovascular Compression Under Endoscopy.

    Zhang, Wenhao; Chen, Minjie; Zhang, Weijie; Chai, Ying


    This study aimed to evaluate the relationship among the pain region, the branches of the trigeminal nerve, and the neurovascular compression (NVC) location. A total of 123 consecutive patients with trigeminal neuralgia (TN) underwent endoscope-assisted microvascular decompression according to positive preoperative tomographic angiography. V2 alone was involved in 51 cases and V3 alone in 64 cases. The location of NVC was classified into cranial, caudal, medial, or lateral sites; patients with multiple regions were recorded as medial + cranial, lateral + cranial, medial + caudal, or lateral + caudal. Twenty-eight (71.8%) of 39 patients with TN (V2) had their NVC at the medial site of the nerve. Twenty-seven (64.3%) of 42 patients with TN (V3) had their NVC at the lateral site of the nerve. This difference was statistically significant (P = 0.0011 […] NVC at the cranial site of the nerve. Thirty-four (69.4%) of 49 patients with TN (V3) had their NVC at the caudal site of the nerve; this difference was not statistically significant (P = 0.3097 > 0.01). Evaluation of the relationship between the pain region and the NVC location from endoscopic images taken during microvascular decompression is more accurate. The second branch is mostly distributed in the medial area, and the third branch mainly in the lateral area.

  6. From large scale gas compression to cluster formation in the Antennae overlap region

    Herrera, Cinthya N; Nesvadba, Nicole P H


    We present a detailed observational analysis of how merger-driven turbulence may regulate the star-formation efficiency during galaxy interactions and set the initial conditions for the formation of super star clusters. Using VLT/SINFONI, we obtained near-infrared imaging spectroscopy of a small region in the Antennae overlap region, coincident with the supergiant molecular cloud 2 (SGMC 2). We find extended H2 line emission across much of the 600 pc field of view, traced at sub-arcsecond spatial resolution. The data also reveal a compact H2 source with broad lines and a dynamical mass Mdyn ~ 10^7 Msun, which has no observable Brγ or K-band continuum emission and no obvious counterpart in the 6 cm radio continuum. Line ratios indicate that the H2 emission of both sources is powered by shocks, making these lines a quantitative tracer of the dissipation of turbulent kinetic energy. The turbulence appears to be driven by the large-scale gas dynamics, and not by feedback from star formation. We propose a scenario ...

  7. Maximum Fidelity

    Kinkhabwala, Ali


    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: how concordant is this distribution with the observed data? (3) Uncertainty: how concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions, called "maximum fidelity", is presented. Maximum fidelity is a strict frequentist approach fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and on critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  8. Crustal seismicity and the earthquake catalog maximum moment magnitudes (Mcmax) in stable continental regions (SCRs): correlation with the seismic velocity of the lithosphere

    Mooney, Walter D.; Ritsema, Jeroen; Hwang, Yong Keun


    A joint analysis of global seismicity and seismic tomography indicates that the seismic potential of continental intraplate regions is correlated with the seismic properties of the lithosphere. Archean and Early Proterozoic cratons with cold, stable continental lithospheric roots have fewer crustal earthquakes and a lower maximum earthquake catalog moment magnitude (Mcmax). The geographic distribution of thick lithospheric roots is inferred from the global seismic model S40RTS that displays shear-velocity perturbations (δVS) relative to the Preliminary Reference Earth Model (PREM). We compare δVS at a depth of 175 km with the locations and moment magnitudes (Mw) of intraplate earthquakes in the crust (Schulte and Mooney, 2005). Many intraplate earthquakes concentrate around the pronounced lateral gradients in lithospheric thickness that surround the cratons and few earthquakes occur within cratonic interiors. Globally, 27% of stable continental lithosphere is underlain by δVS≥3.0%, yet only 6.5% of crustal earthquakes with Mw>4.5 occur above these regions with thick lithosphere. No earthquakes in our catalog with Mw>6 have occurred above mantle lithosphere with δVS>3.5%, although such lithosphere comprises 19% of stable continental regions. Thus, for cratonic interiors with seismically determined thick lithosphere (1) there is a significant decrease in the number of crustal earthquakes, and (2) the maximum moment magnitude found in the earthquake catalog is Mcmax=6.0. We attribute these observations to higher lithospheric strength beneath cratonic interiors due to lower temperatures and dehydration in both the lower crust and the highly depleted lithospheric root.

  9. A role of vertical mixing on nutrient supply into the subsurface chlorophyll maximum in the shelf region of the East China Sea

    Lee, Keunjong; Matsuno, Takeshi; Endoh, Takahiro; Ishizaka, Joji; Zhu, Yuanli; Takeda, Shigenobu; Sukigara, Chiho


    In summer, Changjiang Diluted Water (CDW) expands over the shelf region of the northern East China Sea. Dilution of the low salinity water could be caused by vertical mixing through the halocline. Vertical mixing through the pycnocline can transport not only saline water, but also high nutrient water from deeper layers to the surface euphotic zone. It is therefore very important to quantitatively evaluate the vertical mixing to understand the process of primary production in the CDW region. We conducted extensive measurements in the region during the period 2009-2011. Detailed investigations of the relative relationship between the subsurface chlorophyll maximum (SCM) and the nitracline suggested that there were two patterns relating to the N/P ratio. Comparing the depths of the nitracline and SCM, it was found that the SCM was usually located from 20 to 40 m and just above the nitracline, where the N/P ratio within the nitracline was below 15, whereas it was located from 10 to 30 m and within the nitracline, where the N/P ratio was above 20. The large value of the N/P ratio in the latter case suggests the influence of CDW. Turbulence measurements showed that the vertical flux of nutrients with vertical mixing was large (small) where the N/P ratio was small (large). A comparison with a time series of primary production revealed a consistency with the pattern of snapshot measurements, suggesting that the nutrient supply from the lower layer contributes considerably to the maintenance of SCM.
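The vertical nutrient supply described above is essentially the turbulent diffusivity times the vertical nitrate gradient. The sketch below illustrates that calculation; the diffusivity, nitrate profile, and depths are illustrative values, not measurements from the study.

```python
import numpy as np

def nutrient_flux(kz, nitrate, depth):
    """Turbulent nutrient flux -Kz * dN/dz at each depth (mmol m-2 s-1)."""
    dndz = np.gradient(nitrate, depth)   # vertical nitrate gradient, mmol m-3 per m
    return -kz * dndz

depth = np.array([10.0, 20.0, 30.0, 40.0])    # m, increasing downward
nitrate = np.array([0.1, 0.5, 4.0, 8.0])      # mmol m-3, increasing with depth
kz = 1e-4                                     # m2 s-1, illustrative diffusivity
flux = nutrient_flux(kz, nitrate, depth)      # negative: directed up-gradient
```

With depth taken positive downward, the negative sign indicates transport from the nutrient-rich lower layer toward the euphotic zone.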

  10. Arctic Bowyery – the Use of Compression Wood in Bows in the Subarctic and Arctic Regions of Eurasia and America

    Marcus Lepola


This paper is a study of the traditional use of a special kind of wood in bow construction in Eurasia and North America. This wood, called compression wood and obtained from coniferous trees, has unique qualities that make it suitable for bow construction. Bows made using it have been referred to as Finno-Ugric bows, Sámi bows, two-wood bows, and Eurasian laminated bows. These bows appear to have developed from archaic forms of compression wood self bows made from a single piece of wood. Recently, features similar to those of the Eurasian compression wood bows have been discovered in bows originating from Alaska, and the use of compression wood for bow manufacture has been known to some Canadian Inuit groups. This paper addresses the origin and possible diffusion pattern of this innovation in bow technology in Eurasia and suggests a timeframe and a possible source for the transfer of this knowledge to North America. It also discusses the role of the Asiatic composite bow in the development of bows in Eurasia.

  11. Implementation of Monmonier's algorithm of maximum differences for the regionalization of forest tree populations as a basis for the selection of seed sources

    Ivetić V.


The regionalization of forest tree populations was researched using the example of beech, the species with the largest range and the widest ecological amplitude in Serbia. The implementation of Monmonier's algorithm of maximum differences to analyze spatial distances and the matrix of genetic distances generated by RAPD markers produced different results depending on how the genetic distances were handled, so data processing should be planned in accordance with the number of samples and their geographic locations. The analysis is simple and enables a good visualization of genetic variability barriers which, in combination with data on distribution and geographic barriers, can be used to recommend the transfer of forest tree reproductive material.
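Monmonier's algorithm seeds a barrier at the pair of geographically neighbouring populations with the largest genetic distance and then extends it across adjacent connections. A toy sketch of the seeding step, with invented populations and distances:

```python
def barrier_seed(neighbors, gdist):
    """Return the neighbouring pair with the maximum genetic distance."""
    return max(neighbors, key=lambda pair: gdist[frozenset(pair)])

# Hypothetical neighbour graph (e.g. from a Delaunay triangulation) and
# pairwise genetic distances; all values are illustrative.
neighbors = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
gdist = {frozenset(p): d for p, d in zip(neighbors, [0.12, 0.45, 0.20, 0.08])}

seed = barrier_seed(neighbors, gdist)   # the barrier starts across this edge
```

The full algorithm would continue the barrier along neighbouring edges in decreasing order of distance; only the maximum-difference seeding is shown here.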

  12. Period–color and Amplitude–color Relations at Maximum and Minimum Light for RR Lyrae Stars in the SDSS Stripe 82 Region

    Ngeow, Chow-Choong; Kanbur, Shashi M.; Bhardwaj, Anupam; Schrecengost, Zachariah; Singh, Harinder P.


    Investigation of period–color (PC) and amplitude–color (AC) relations at the maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V ‑ R)MACHO or (V ‑ I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u ‑ g)0, (g ‑ r)0, (r ‑ i)0, and (i ‑ z)0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g ‑ r)0 and (r ‑ i)0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
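In the linear case, a PC relation is a regression of a dereddened color at maximum or minimum light against log period. A minimal sketch with synthetic stars (the coefficients below are invented, not the paper's fitted values):

```python
import numpy as np

# Synthetic fundamental-mode stars: log10(period in days) and (g-r)0 at
# minimum light, generated from an assumed slope of 0.25 plus small scatter.
logP = np.array([-0.35, -0.30, -0.25, -0.20, -0.15])
gr_min = 0.45 + 0.25 * logP + np.array([0.01, -0.02, 0.0, 0.02, -0.01])

# Linear PC relation: (g-r)0,min = slope * logP + intercept
slope, intercept = np.polyfit(logP, gr_min, 1)
```

A flatter fitted slope at minimum light would be the signature of the HIF-photosphere interaction discussed in the abstract.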

  13. Development of a methodology to evaluate probable maximum snow accumulation using a regional climate model: application to Quebec, Canada, under changing climate conditions

    Klein, I. M.; Rousseau, A. N.; Gagnon, P.; Frigon, A.


Probable Maximum Snow Accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood. A robust methodology for evaluating the PMSA is imperative so the resulting spring probable maximum flood is neither overestimated, which would mean financial losses, nor underestimated, which could affect public safety. In addition, the impact of climate change needs to be considered, since it is known that solid precipitation in some Nordic landscapes will in all likelihood intensify over the next century. In this paper, outputs from different simulations produced by the Canadian Regional Climate Model are used to estimate PMSAs for southern Quebec, Canada (44.1°N - 49.1°N; 68.2°W - 75.5°W). Moisture maximization represents the core concept of the proposed methodology, with precipitable water as the key variable. Results of stationarity tests indicate that climate change will affect not only precipitation and temperature but also the monthly maximum precipitable water and the ensuing maximization ratio r. The maximization ratio r is used to maximize "efficient" snowfall events; it represents the 100-year precipitable water of a given month divided by the snowstorm precipitable water. A computational method was developed to maximize precipitable water using a non-stationary frequency analysis. The method was carefully adapted to the spatial and temporal constraints embedded in the resolution of the available simulation data. For example, for a given grid cell and time step, snow and rain may occur simultaneously. In this case, the focus is restricted to snow and snowstorm conditions only; rainfall, and humidity that could lead to rainfall, are neglected. Also, the temporal resolution cannot necessarily capture the full duration of actual snowstorms. The threshold for a snowstorm to be maximized and the duration resulting from the considered time steps are adjusted in order to obtain a high percentage of maximization ratios below
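The core moisture-maximization step is a simple scaling: a snowstorm's precipitation is multiplied by r, the 100-year precipitable water of the month divided by the storm's precipitable water. A sketch with illustrative numbers:

```python
def maximized_snowfall(snowfall_mm, pw_storm, pw_100yr):
    """Scale a snowfall event by the maximization ratio r = PW100 / PWstorm."""
    r = pw_100yr / pw_storm       # dimensionless maximization ratio
    return snowfall_mm * r, r

# Illustrative event: 35 mm of snow water equivalent, storm precipitable
# water 12 mm, 100-year monthly precipitable water 18 mm.
snow_max, r = maximized_snowfall(snowfall_mm=35.0, pw_storm=12.0, pw_100yr=18.0)
```

Ratios well above some ceiling would be screened out, which is why the methodology adjusts thresholds to keep most maximization ratios low.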

  14. Semiquantitative analysis of maximum standardized uptake values of regional lymph nodes in inflammatory breast cancer: is there a reliable threshold for differentiating benign from malignant?

    Carkaci, Selin; Adrada, Beatriz E; Rohren, Eric; Wei, Wei; Quraishi, Mohammad A; Mawlawi, Osama; Buchholz, Thomas A; Yang, Wei


The aim of this study was to determine an optimum standardized uptake value (SUV) threshold for identifying regional nodal metastasis on 18F-fluorodeoxyglucose (FDG) positron emission tomographic (PET)/computed tomographic (CT) studies of patients with inflammatory breast cancer. A database search was performed of patients newly diagnosed with inflammatory breast cancer who underwent 18F-FDG PET/CT imaging at the time of diagnosis at a single institution between January 1, 2001, and September 30, 2009. Three radiologists blinded to the histopathology of the regional lymph nodes retrospectively analyzed all 18F-FDG PET/CT images by measuring the maximum SUV (SUVmax) in visually abnormal nodes. The accuracy of 18F-FDG PET/CT image interpretation was correlated with histopathology when available. Receiver-operating characteristic curve analysis was performed to assess the diagnostic performance of PET/CT imaging. Sensitivity, specificity, positive predictive value, and negative predictive value were calculated using three different SUV cutoff values (2.0, 2.5, and 3.0). A total of 888 regional nodal basins, including bilateral axillary, infraclavicular, internal mammary, and supraclavicular lymph nodes, were evaluated in 111 patients (mean age, 56 years). Of the 888 nodal basins, 625 (70%) were negative and 263 (30%) were positive for metastasis. Malignant lymph nodes had significantly higher SUVmax than benign lymph nodes. Measuring the SUVmax of regional lymph nodes on 18F-FDG PET/CT imaging may help differentiate benign from malignant lymph nodes in patients with inflammatory breast cancer. An SUV cutoff of 2 provided the best accuracy in identifying regional nodal metastasis in this patient population. Copyright © 2012 AUR. Published by Elsevier Inc. All rights reserved.
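The cutoff analysis reduces to computing sensitivity and specificity of an SUVmax threshold against nodal histopathology. A sketch with synthetic nodes (not the study's data):

```python
def sens_spec(suv, malignant, cutoff):
    """Sensitivity and specificity of calling a node positive when SUVmax >= cutoff."""
    tp = sum(s >= cutoff and m for s, m in zip(suv, malignant))
    fn = sum(s < cutoff and m for s, m in zip(suv, malignant))
    tn = sum(s < cutoff and not m for s, m in zip(suv, malignant))
    fp = sum(s >= cutoff and not m for s, m in zip(suv, malignant))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic SUVmax values and histopathology labels, for illustration only.
suv       = [1.2, 1.8, 2.1, 2.6, 3.4, 4.0, 1.5, 2.2]
malignant = [False, False, True, True, True, True, False, False]

for cutoff in (2.0, 2.5, 3.0):
    se, sp = sens_spec(suv, malignant, cutoff)
```

Sweeping the cutoff in this way traces out the ROC curve used to pick the optimum threshold.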

  15. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)


The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 °C across arid areas, yet overestimated by around 2 °C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (below 70% captured). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
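A frequency-distribution "capture" score of the kind reported above can be read as the overlapping area of the observed and simulated histograms. The sketch below shows one such overlap metric on synthetic temperatures; the exact formula is an assumption, not necessarily the paper's definition.

```python
import numpy as np

def capture_fraction(obs, sim, bins):
    """Overlap of two normalized histograms: 1.0 means identical distributions."""
    h_obs, _ = np.histogram(obs, bins=bins)
    h_sim, _ = np.histogram(sim, bins=bins)
    p, q = h_obs / h_obs.sum(), h_sim / h_sim.sum()
    return np.minimum(p, q).sum()

rng = np.random.default_rng(0)
obs = rng.normal(25.0, 3.0, 1000)   # "observed" daily Tmax, degC
sim = rng.normal(26.0, 4.0, 1000)   # "simulated" Tmax with a warm bias
score = capture_fraction(obs, sim, bins=np.arange(10, 41))
```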

  16. "Compressed" Compressed Sensing

    Reeves, Galen


    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g. discrete valued vectors or large distortions) the number of samples can be decreased. Interestingly though, it is also shown that in many cases no reduction is possible.
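The "computationally simple thresholding estimator" idea can be sketched as correlating the samples with each column of the random projection matrix and keeping only the strong correlations. Dimensions, amplitudes, and the threshold below are illustrative, not the paper's constructions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                        # signal length, samples, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 5.0    # sparse unknown vector

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random linear projections
y = A @ x                                   # m < n samples

corr = A.T @ y                              # matched-filter statistic per entry
support = np.abs(corr) > 0.5 * np.abs(corr).max()   # crude support estimate
```

Prior knowledge of the distribution of x (e.g. discrete amplitudes) is what lets the sample count m shrink further in the cases the paper analyzes.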

  17. Three-dimensional display of peripheral nerves in the wrist region based on MR diffusion tensor imaging and maximum intensity projection post-processing

Ding, Wen Quan [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China)]; Zhou, Xue Jun [Department of Radiology, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China)]; Tang, Jin Bo [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China)]; Gu, Jian Hui [Department of Hand Surgery, Hand Surgery Research Center, Affiliated Hospital of Nantong University, Nantong, Jiangsu (China)]; Jin, Dong Sheng [Department of Radiology, Jiangsu Province Official Hospital, Nanjing, Jiangsu (China)]


Highlights: • 3D displays of peripheral nerves can be achieved by 2 MIP post-processing methods. • The median nerves' FA and ADC values can be accurately measured by using DTI6 data. • Adopting 6-direction DTI scan and MIP can evaluate peripheral nerves efficiently. - Abstract: Objectives: To achieve 3-dimensional (3D) display of peripheral nerves in the wrist region by using maximum intensity projection (MIP) post-processing methods to reconstruct raw images acquired by a diffusion tensor imaging (DTI) scan, and to explore its clinical applications. Methods: We performed DTI scans in 6 (DTI6) and 25 (DTI25) diffusion directions on 20 wrists of 10 healthy young volunteers, 6 wrists of 5 patients with carpal tunnel syndrome, 6 wrists of 6 patients with nerve lacerations, and one patient with neurofibroma. The MIP post-processing methods employed 2 types of DTI raw images: (1) single-direction and (2) T2-weighted trace. The fractional anisotropy (FA) and apparent diffusion coefficient (ADC) values of the median and ulnar nerves were measured at multiple testing sites. Two radiologists used custom evaluation scales to assess the 3D nerve imaging quality independently. Results: In both DTI6 and DTI25, nerves in the wrist region could be displayed clearly by the 2 MIP post-processing methods. The FA and ADC values were not significantly different between DTI6 and DTI25, except for the FA values of the ulnar nerves at the level of the pisiform bone (p = 0.03). As to the imaging quality of each MIP post-processing method, there were no significant differences between DTI6 and DTI25 (p > 0.05). The imaging quality of single-direction MIP post-processing was better than that from T2-weighted traces (p < 0.05) because of the higher nerve signal intensity. Conclusions: Three-dimensional displays of peripheral nerves in the wrist region can be achieved by MIP post-processing of single-direction images and T2-weighted trace images for both DTI6 and DTI25.
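MIP itself is simply a per-ray maximum along one axis of the image volume. A minimal sketch on a synthetic volume:

```python
import numpy as np

# Synthetic 3D volume (e.g. DTI-derived signal); one bright "nerve" voxel.
volume = np.zeros((4, 5, 6))
volume[2, 3, 1] = 7.0

# Maximum intensity projection along the first axis: each output pixel is
# the brightest voxel encountered along that ray.
mip = volume.max(axis=0)
```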

  18. Ultrasound beamforming using compressed data.

    Li, Yen-Feng; Li, Pai-Chi


The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks (for JPEG) or several tiles (for JPEG2000) before compression is applied. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression producing an average error lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8.
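The reported trade-off is between compression ratio and average error in dB relative to the original channel data. A sketch of the error metric, with a crude uniform quantizer standing in for JPEG/JPEG2000 and synthetic RF data:

```python
import numpy as np

def error_db(original, reconstructed):
    """Average reconstruction error in dB relative to signal power."""
    err = np.mean((original - reconstructed) ** 2)
    sig = np.mean(original ** 2)
    return 10 * np.log10(err / sig)   # more negative = higher fidelity

rng = np.random.default_rng(2)
rf = rng.normal(size=(128, 1024))      # 128 channels of synthetic RF samples

step = 0.25
rf_q = np.round(rf / step) * step      # stand-in lossy compression
e = error_db(rf, rf_q)                 # well below 0 dB for this step size
```

In the study the same kind of error measure, applied after beamforming, bounds how far the compression ratio can be pushed.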

  19. Temporal compression of soil erosion processes. A regional analysis of USLE database; La compresion temporal de los procesos de erosion del suelo. Un analisis regional de la base de datos USLE

    Gonzalez-Hidalgo, J. C.; Luis, M.; Lopez-Bermudez, F.


When John Thornes and Denis Brunsden wrote in 1977, "How often one hears the researcher (and no less the undergraduate) complain that after weeks of observation nothing happened only to learn that, the day after his departure, a flood caused unprecedented erosion and channel changes" (Thornes and Brunsden, 1977, p. 57), they were pointing to two important problems in geomorphology: extreme events and the time compression of geomorphological processes. Time compression is a fundamental characteristic of geomorphological processes, sometimes produced by extreme events. Extreme events are rare events, defined by deviation from mean values. But from magnitude-frequency analysis we know that a few events, not necessarily extreme, are able to produce a large amount of geomorphological work. Finally, the time compression of geomorphological processes can be studied through the analysis of the largest events defined by rank, not magnitude. We have analysed the effects of the largest events on total soil erosion using 594 erosion plots from the USLE database. The plots are located in different climate regions of the USA and have records of different lengths. On average, the 10 largest daily events contribute 60% of total soil erosion. There is a relationship between this percentage and the total number of daily erosive events recorded. The pattern seems to be independent of climate conditions. We discuss the nature of this relationship and its implications for soil erosion research. (Author) 17 refs.
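The rank-based statistic used above is straightforward: the share of total erosion contributed by the N largest daily events in a plot record. A sketch on a synthetic record:

```python
def largest_events_share(daily_erosion, n=10):
    """Fraction of total soil erosion carried by the n largest daily events."""
    total = sum(daily_erosion)
    top = sum(sorted(daily_erosion, reverse=True)[:n])
    return top / total

# Synthetic plot record: many small erosive days plus a few large events.
record = [0.1] * 300 + [2.0, 3.5, 5.0, 8.0, 12.0]
share = largest_events_share(record, n=10)
```

Here a handful of days dominates the total, the temporal compression the abstract describes.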

  20. Underground pumped hydro storage and compressed air energy storage: an analysis of regional markets and development potential



    The analysis had the following objectives: (1) a survey of the regional markets within the continental United States to identify three regions most suitable for UPHS and CAES; (2) a national survey with emphasis on the three selected regions to determine developmental potential and costs of UPHS and CAES; (3) determine cost effectiveness of UPHS and CAES and their market share in future electric systems; and (4) recommend research, development and demonstration work to realize the timely commercialization of UPHS and CAES system. (TFD)

  1. An audit of current practice and management of metastatic spinal cord compression at a regional cancer centre.

    Sui, J


Metastatic spinal cord compression (MSCC) is an oncological emergency requiring prompt recognition and management to preserve neurological function and mobility. We performed an audit to assess current practice of MSCC against current best practice as outlined by NICE. Our retrospective audit identified 10 patients from January to December 2009 with confirmed MSCC. The most common primary tumours were prostate 3 (30%), breast 3 (30%), and lung 2 (20%). Pain was the main presenting symptom 9 (90%), followed by weakness 7 (70%) and sensory changes 1 (10%). 5 (50%) had MRI within 24 hours and only 6 (60%) underwent full MRI scan. 8 (80%) had corticosteroids before MRI scan. 6 (60%) received radiotherapy within 24 hours. Only 4 (40%) were referred to orthopaedics, and none of these patients had been recommended surgery. Up to 14 days following radiological confirmation of MSCC, the number of patients who were unable to walk increased by 20%. Only 5 (50%) were discharged during this period of study. Our audit reported a number of variances in management compared to the NICE guideline. These can be improved by following a 'fast track' referral pathway and regular education for junior doctors and primary care doctors.

  2. Effect of cooler electrons on a compressive ion acoustic solitary wave in a warm ion plasma — Forbidden regions, double layers, and supersolitons

    Ghosh, S. S., E-mail: [Indian Institute of Geomagnetism, New Panvel, Navi Mumbai 410218 (India); Sekar Iyengar, A. N. [Plasma Physics Division, Saha Institute of Nuclear Physics, Kolkata 700064 (India)


It is observed that the presence of a minority component of cooler electrons in a three component plasma plays a deterministic role in the evolution of solitary waves, double layers, or the newly discovered structures called supersolitons. The inclusion of the cooler component of electrons in a single electron plasma produces a sharp increase in nonlinearity in spite of a decrease in the overall energy of the system. The effect maximizes at a certain critical value of the number density of the cooler component (typically 15%–20%), giving rise to a hump in the amplitude variation profile. For larger amplitudes, the hump leads to a forbidden region in the ambient cooler electron concentration, which dissociates the overall existence domain of solitary wave solutions into two distinct parameter regimes. It is observed that an inclusion of the cooler component of electrons as low as < 1% affects the plasma system significantly, resulting in compressive double layers. The solution is further affected by the cold to hot electron temperature ratio. In an adequately hotter bulk plasma (i.e., moderately low cold to hot electron temperature ratio), the parameter domain of compressive double layers is bounded by a sharp discontinuity in the corresponding amplitude variation profile, which may lead to supersolitons.

  3. Shocklets in compressible flows

    袁湘江; 男俊武; 沈清; 李筠


    The mechanism of shocklets is studied theoretically and numerically for the stationary fluid, uniform compressible flow, and boundary layer flow. The conditions that trigger shock waves for sound wave, weak discontinuity, and Tollmien-Schlichting (T-S) wave in compressible flows are investigated. The relations between the three types of waves and shocklets are further analyzed and discussed. Different stages of the shocklet formation process are simulated. The results show that the three waves in compressible flows will transfer to shocklets only when the initial disturbance amplitudes are greater than the certain threshold values. In compressible boundary layers, the shocklets evolved from T-S wave exist only in a finite region near the surface instead of the whole wavefront.

  4. Maximum information photoelectron metrology

    Hockett, P; Wollenhaupt, M; Baumert, T


Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  5. The maximum tolerated dose of gamma radiation to the optic nerve during γ knife radiosurgery in an animal study.

    Deng, Xingli; Yang, Zhiyong; Liu, Ruen; Yi, Meiying; Lei, Deqiang; Wang, Zhi; Zhao, Hongyang


The safety of gamma knife radiosurgery must be considered when treating pituitary adenomas. The aim of this study was to determine the maximum tolerated dose of radiation delivered by gamma knife radiosurgery to the optic nerves. An animal model designed to establish prolonged balloon compression of the optic chiasm and parasellar region was developed to mimic the optic nerve compression caused by pituitary adenomas. Twenty cats underwent surgery to place a balloon for the compression effect, and 20 cats in a sham operation group received microsurgery without any treatment. The effects of gamma knife irradiation at 10-13 Gy on normal (sham operation group) and compressed (optic nerve compression group) optic nerves were investigated by pattern visual evoked potential examination and histopathology. Gamma knife radiosurgery at 10 Gy had almost no effect. At 11 Gy, P100 latency was significantly prolonged and P100 amplitude was significantly decreased in compressed optic nerves, but there was little change in the normal optic nerves. Doses of 11 Gy and higher induced significant electrophysiological variations and degeneration of the myelin sheath and axons in both normal and compressed optic nerves. Compressed optic nerves are more sensitive to gamma knife radiosurgery than normal optic nerves. The minimum dose of gamma knife radiosurgery that causes radiation injury in normal optic nerves is 12 Gy; however, the minimum dose is 11 Gy in compressed optic nerves. Copyright © 2013 S. Karger AG, Basel.

  6. A high-dynamic range transimpedance amplifier with compression

    Mičušík, D.; Zimmermann, H.


This paper presents a transimpedance amplifier (TIA) with logarithmic compression of the input current signal. The presented TIA has two regions of operation: a linear one for small input current signals and a compression one for high input currents that could otherwise saturate the TIA. The measured -3 dB bandwidth in the linear region of operation is 102 MHz. The measured maximum input current overdrive is 20.5 mA; however, the maximum of the monotonic compression is approx. 8 mA. Using the compression technique we could achieve a low rms equivalent input noise current (~20.2 nA) within the measured bandwidth and with approx. 2 pF capacitance at the input. Thus the dynamic range at the input of the TIA is approx. 120 dB considering the maximal current overdrive. The proposed TIA represents the input stage of an optical receiver with an integrated differential 50 Ω output driver. The optical receiver occupies approx. 1.24 mm2 in 0.35 μm SiGe BiCMOS technology and consumes 78 mA from a 5 V supply.
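A transfer curve that is linear below a knee current and logarithmic above it can be sketched as follows; the gain and knee values are illustrative, not the measured device's parameters.

```python
import math

def tia_output(i_in, gain=1e4, i_knee=1e-5):
    """Output voltage: linear below i_knee, logarithmic compression above.

    The two branches meet continuously at i_in = i_knee.
    """
    if i_in <= i_knee:
        return gain * i_in
    return gain * i_knee * (1 + math.log(i_in / i_knee))

v_small = tia_output(1e-6)   # linear region
v_large = tia_output(8e-3)   # deep in the compression region
```

The logarithmic branch is what lets a milliamp-scale overdrive map into a volt-scale output swing, giving the wide input dynamic range.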

  7. Asymptotic bifurcation solutions for compressions of a clamped nonlinearly elastic rectangle: transition region and barrelling to a corner-like profile

    Dai, H H


    Buckling and barrelling instabilities in the uniaxial compressions of an elastic rectangle have been studied by many authors under lubricated end conditions. However, in practice it is very difficult to realize such conditions due to friction. Here, we study the compressions of a two-dimensional nonlinearly elastic rectangle under clamped end conditions.

  8. The pre-onset, transitional, and foot regions in resistance versus temperature behavior in high-Tc cuprates: Inferences regarding maximum Tc

    Vezzoli, G. C.; Burke, T.; Chen, M. F.; Craver, F.; Stanley, W.


We have studied the pre-onset deviation-from-linearity region, the transitional regime, and the foot region in the resistance versus temperature behavior of high-Tc oxide superconductors, employing time varying magnetic fields and carefully controlled precise temperatures. We have shown that the best value of Tc can be extrapolated from the magnetic field induced divergence of the resistance versus inverse absolute temperature data as derived from the transitional and/or foot regions. These data are in accord with results from previous Hall effect studies. The pre-onset region, however, shows a differing behavior (in R versus 1000/T as a function of B) which we believe links it to an incipient Cooper pairing that suffers a kinetic barrier opposing formation of a full supercurrent. This kinetic dependence is believed to be associated with the lifetime of the mediator particle. This particle is interpreted to be the virtual exciton formed from internal-field induced charge-transfer excitations which transiently neutralize the multivalence cations and establish bound holes on the oxygens.

  9. Pre-onset, transitional, and foot regions in resistance versus temperature behavior in high-Tc cuprates: Inferences regarding maximum Tc. Final report

    Vezzoli, G.C.; Burke, T.; Chen, M.F.; Craver, F.; Stanley, W.


We have studied the pre-onset deviation-from-linearity region, the transitional regime, and the foot region in the resistance versus temperature behavior of high-Tc oxide superconductors, employing time varying magnetic fields and carefully controlled precise temperatures. We have shown that the best value of Tc can be extrapolated from the magnetic field induced divergence of the resistance versus inverse absolute temperature data as derived from the transitional and/or foot regions. These data are in accord with results from previous Hall effect studies. The pre-onset region, however, shows a differing behavior (in R versus 1000/T as a function of B) which we believe links it to an incipient Cooper pairing that suffers a kinetic barrier opposing formation of a full supercurrent. This kinetic dependence is believed to be associated with the lifetime of the mediator particle. This particle is interpreted to be the virtual exciton formed from internal-field induced charge-transfer excitations which transiently neutralize the multivalence cations and establish bound holes on the oxygens.

  10. A THEMIS Survey of Flux Ropes and Traveling Compression Regions: Location of the Near-Earth Reconnection Site During Solar Minimum

    Imber, S. M.; Slavin, J. A.; Auster, H. U.; Angelopoulos, V.


A statistical study of flux ropes and traveling compression regions (TCRs) during the Time History of Events and Macroscale Interactions during Substorms (THEMIS) second tail season has been performed. A combined total of 135 flux ropes and TCRs in the range GSM X ≈ -14 to -31 R_E were identified, many of these occurring in series of two or more events separated by a few tens of seconds. Those occurring within 10 min of each other were combined into aggregated reconnection events. For the purposes of this survey, these are most likely the products of reconnection occurring simultaneously at multiple, closely spaced x-lines as opposed to statistically independent episodes of reconnection. The 135 flux ropes and TCRs were grouped into 87 reconnection events; of these, 28 were moving tailward and 59 were moving earthward. The average location of the near-Earth x-line determined from statistical analysis of these reconnection events is (X_GSM, Y*_GSM) = (-30 R_E, 5 R_E), where Y* includes a correction for the solar aberration angle. A strong east-west asymmetry is present in the tailward events, with >80% being observed at GSM Y* > 0. Our results indicate that the earthward flows are similarly asymmetric in the midtail region, becoming more symmetric inside -18 R_E. Superposed epoch analyses indicate that the occurrence of reconnection closer to the Earth, i.e., X > -20 R_E, is associated with elevated solar wind velocity and enhanced negative interplanetary magnetic field B_z. Reconnection events taking place closer to the Earth are also far more effective in producing geomagnetic activity, as judged by the AL index, than reconnection initiated beyond X ≈ -25 R_E.
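The aggregation rule, in which signatures observed within 10 minutes of each other are merged into a single reconnection event, can be sketched as follows (timestamps are invented):

```python
def aggregate(times_s, window_s=600):
    """Group sorted observation times into events separated by > window_s."""
    events, current = [], [times_s[0]]
    for t in times_s[1:]:
        if t - current[-1] <= window_s:
            current.append(t)       # same multi-x-line reconnection event
        else:
            events.append(current)  # gap too long: start a new event
            current = [t]
    events.append(current)
    return events

obs = [0, 40, 90, 2000, 2500, 9000]   # seconds; three bursts of activity
groups = aggregate(sorted(obs))
```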

  11. Wellhead compression

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)


    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, thereby regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, which reduce overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  12. Compressive beamforming

    Xenaki, Angeliki; Mosegaard, Klaus


    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  13. Maximum Autocorrelation Factorial Kriging

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.


    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...

  14. Region specific response of intervertebral disc cells to complex dynamic loading: an organ culture study using a dynamic torsion-compression bioreactor.

    Samantha C W Chan

    The spine is routinely subjected to repetitive complex loading consisting of axial compression, torsion, flexion and extension. Mechanical loading is one of the important causes of spinal diseases, including disc herniation and disc degeneration. It is known that static and dynamic compression can lead to progressive disc degeneration, but little is known about the mechanobiology of the disc subjected to combined dynamic compression and torsion. Therefore, the purpose of this study was to compare the mechanobiology of the intervertebral disc when subjected to combined dynamic compression and axial torsion, pure dynamic compression, or pure axial torsion using organ culture. We applied four different loading modalities [1. control: no loading (NL); 2. cyclic compression (CC); 3. cyclic torsion (CT); and 4. combined cyclic compression and torsion (CCT)] on bovine caudal disc explants using our custom-made dynamic loading bioreactor for disc organ culture. Loads were applied for 8 h/day and continued for 14 days, all at a physiological magnitude and frequency. Our results provided strong evidence that complex loading induced a stronger degree of disc degeneration compared to one-degree-of-freedom loading. In the CCT group, less than 10% of nucleus pulposus (NP) cells survived the 14 days of loading, while cell viabilities were maintained above 70% in the NP of the other three groups and in the annulus fibrosus (AF) of all groups. Gene expression analysis revealed a strong up-regulation in matrix genes and matrix remodeling genes in the AF of the CCT group. Cell apoptotic activity and glycosaminoglycan content were also quantified, but no statistically significant differences were found. Cell morphology in the NP of the CCT group was changed, as shown by histological evaluation. Our results stress the importance of complex loading on the initiation and progression of disc degeneration.

  15. Establishment of Maximum Voluntary Compressive Neck Tolerance Levels


    Cote, Michael; Buhrman, John; Bridges, Nathaniel; Pirnstill, Casey; Burneka, Chris; Plaga, John; Roush, Grant [Biosciences and Performance Division, Vulnerability Analysis Branch, July 2011]

  16. Saturn's dynamic magnetotail: A comprehensive magnetic field and plasma survey of plasmoids and traveling compression regions and their role in global magnetospheric dynamics

    Jackman, C. M.; Slavin, J. A.; Kivelson, M. G.; Southwood, D. J.; Achilleos, N.; Thomsen, M. F.; DiBraccio, G. A.; Eastwood, J. P.; Freeman, M. P.; Dougherty, M. K.; Vogt, M. F.


    We present a comprehensive study of the magnetic field and plasma signatures of reconnection events observed with the Cassini spacecraft during the tail orbits of 2006. We examine their "local" properties in terms of magnetic field reconfiguration and changing plasma flows. We also describe the "global" impact of reconnection in terms of the contribution to mass loss, flux closure, and large-scale tail structure. The signatures of 69 plasmoids, 17 traveling compression regions (TCRs), and 13 planetward moving structures have been found. The direction of motion is inferred from the sign of the change in the Bθ component of the magnetic field in the first instance and confirmed through plasma flow data where available. The plasmoids are interpreted as detached structures, observed by the spacecraft tailward of the reconnection site, and the TCRs are interpreted as the effects of the draping and compression of lobe magnetic field lines around passing plasmoids. We focus on the analysis and interpretation of the tailward moving (south-to-north field change) plasmoids and TCRs in this work, considering the planetward moving signatures only from the point of view of understanding the reconnection x-line position and recurrence rates. We discuss the location spread of the observations, showing that where spacecraft coverage is symmetric about midnight, reconnection signatures are observed more frequently on the dawn flank than on the dusk flank. We show an example of a chain of two plasmoids and two TCRs over 3 hours and suggest that such a scenario is associated with a single-reconnection event, ejecting multiple successive plasmoids. Plasma data reveal that one of these plasmoids contains H+ at lower energy and W+ at higher energy, consistent with an inner magnetospheric source, and the total flow speed inside the plasmoid is estimated with an upper limit of 170 km/s. We probe the interior structure of plasmoids and find that the vast majority of examples at Saturn

  17. Maximum stellar iron core mass

    F W Giacobbe


    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
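The quoted conversion between the core mass in kilograms and solar masses can be checked directly. A minimal sketch, assuming the standard nominal solar mass of 1.989 × 10^30 kg (a value not stated in the record itself):

```python
# Sanity check: convert the quoted maximum iron core mass to solar masses.
M_SUN_KG = 1.989e30     # nominal solar mass in kg (assumed standard value)
CORE_MASS_KG = 2.69e30  # maximum iron core mass quoted in the record

ratio = CORE_MASS_KG / M_SUN_KG
print(f"{ratio:.2f} solar masses")  # 1.35, matching the quoted figure
```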

  18. Compressive myelopathy in fluorosis: MRI

    Gupta, R.K. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Agarwal, P. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Kumar, S. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Surana, P.K. [Department of Neurology, SGPGIMS, Lucknow-226014 (India); Lal, J.H. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Misra, U.K. [Department of Neurology, SGPGIMS, Lucknow-226014 (India)


    We examined four patients with fluorosis, presenting with compressive myelopathy, by MRI, using spin-echo and fast low-angle shot sequences. Cord compression due to ossification of the posterior longitudinal ligament (PLL) and ligamentum flavum (LF) was demonstrated in one and ossification of only the LF in one. Marrow signal was observed in the PLL and LF in all the patients on all pulse sequences. In patients with compressive myelopathy secondary to ossification of PLL and/or LF, fluorosis should be considered as a possible cause, especially in endemic regions. (orig.). With 2 figs., 1 tab.

  19. Partial transparency of compressed wood

    Sugimoto, Hiroyuki; Sugimori, Masatoshi


    We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path through the sample, because the refractive indexes of the cell constituents and of the air in the lumina differ. In this study, wood compressed until the lumina closed exhibited optical transparency. Because such compression of wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  20. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu


    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus be subject to dual compression. This type of compression has typical manifestations on late venography and CT.

  1. Deformation Curve Characteristics of Rapeseeds and Sunflower Seeds Under Compression Loading

    Divišová M.


    The deformation curve characteristics of rapeseeds and sunflower seeds compressed using the ZDM 50-2313/56/18 equipment and varying vessel diameters (40, 60, 80, and 100 mm) were investigated. A maximum compressive force of 100 kN was applied to bulk oilseeds of rape and sunflower of measured height 20-80 mm, deformed at a speed of 60 mm·min⁻¹. The compression tests using vessel diameters of 40 and 60 mm showed a serration effect, while vessel diameters of 80 and 100 mm produced an increasing-function effect on the force-deformation characteristic curves. Clearly, the increasing-function effect described the region with oil flow, and the serration effect described the region without any oil flow. However, it was observed that the serration effect could be due to the higher compressive stress inside the smaller vessel diameters (40 and 60 mm) compared to the bigger vessel diameters (80 and 100 mm). Parameters such as deformation, deformation energy, and energy density were determined from the force-deformation curves showing both the increasing-function and serration effects. The findings of the study provide useful information for the determination of the specific compressive force and energy requirements for extracting maximum oil from oilseed crops such as rape and sunflower.
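The deformation energy referred to above is the area under the force-deformation curve. A minimal sketch of that calculation using trapezoidal integration; the sample points below are purely illustrative, not data from the study:

```python
def deformation_energy_joules(deformation_mm, force_kN):
    """Area under a force-deformation curve by the trapezoidal rule.

    With deformation in mm and force in kN, the product kN*mm equals J,
    so the result is directly in joules.
    """
    energy = 0.0
    for i in range(1, len(deformation_mm)):
        dx = deformation_mm[i] - deformation_mm[i - 1]
        energy += 0.5 * (force_kN[i] + force_kN[i - 1]) * dx
    return energy

# Hypothetical samples rising to the 100 kN maximum force used in the study
deformation = [0.0, 5.0, 10.0, 15.0, 20.0]  # mm
force = [0.0, 10.0, 30.0, 60.0, 100.0]      # kN
print(deformation_energy_joules(deformation, force))  # 750.0 (joules)
```

Dividing this energy by the deformed sample volume would give the energy density also mentioned in the record.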

  2. On Network Functional Compression

    Feizi, Soheil


    In this paper, we consider different aspects of the network functional compression problem where computation of a function (or, some functions) of sources located at certain nodes in a network is desired at receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions, particularly in terms of the network topology, the functions and the characteristics of the sources. In this paper, we present results that significantly relax these assumptions. Firstly, we consider this problem for an arbitrary tree network and asymptotically lossless computation. We show that, for depth one trees with correlated sources, or for general trees with independent sources, a modularized coding scheme based on graph colorings and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds. For a general tree network with independent sources, optimal computation to be performed at intermediate nodes is derived. We introduce a necessary and sufficient condition...

  3. Compressive Sensing Over Networks

    Feizi, Soheil; Effros, Michelle


    In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information theoretic techniques in source coding and channel coding. Our results provide an explicit trade-off between the rate and the decoding complexity. The key difference between compressive sensing and traditional information theoretic approaches lies at the decoding side. Although optimal decoders for recovering the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Here, by using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multi...

  4. Compression limits in cascaded quadratic soliton compression

    Bache, Morten; Bang, Ole; Krolikowski, Wieslaw;


    Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.

  5. Satellite data compression

    Huang, Bormin


    Satellite Data Compression covers recent progress in compression techniques for multispectral, hyperspectral and ultraspectral data. A survey of recent advances in the fields of satellite communications, remote sensing and geographical information systems is included. Satellite Data Compression, contributed by leaders in this field, is the first book available on satellite data compression. It covers onboard compression methodology and hardware developments in several space agencies. Case studies are presented on recent advances in satellite data compression techniques via various prediction-

  6. Maximum Autocorrelation Factorial Kriging

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete


    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...

  7. Shock compression of nitrobenzene

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi


    The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode despite the fact that its calculated heat of detonation, about 1 kcal/g, is similar to that of TNT. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments, with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so it is expected that nitrobenzene detonates in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparison of the Hugoniot and the isotherm shows that nitrobenzene is in the liquid phase under the shock conditions studied. From the expected phase diagram, shocked nitrobenzene seems to remain a metastable liquid within the solid-phase region of that diagram.

  8. Compressive Fatigue in Wood

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben;


    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square-wave-formed fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. It is suggested that accumulated creep be identified with damage, and a correlation is observed between stiffness reduction and accumulated creep. A failure model based on the total work during the fatigue life is rejected, and a modified work model based on elastic, viscous and non-recovered viscoelastic work is experimentally supported, with an explanation at a microstructural level...

  9. Efficient compression of molecular dynamics trajectory files.

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James


    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
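The interframe-predictor idea can be sketched as follows: predict each frame from the previously reconstructed frame and uniformly quantize the residuals to a fixed bit depth. This is a simplified illustration of the general technique, not the authors' codec; the bit depth and residual bound below are assumed parameters:

```python
def _levels(bits):
    return (1 << (bits - 1)) - 1  # symmetric signed code range

def quantize(res, bits, max_abs):
    """Uniformly quantize residuals, clipped to [-max_abs, max_abs]."""
    n = _levels(bits)
    s = n / max_abs
    return [max(-n, min(n, round(r * s))) for r in res]

def dequantize(codes, bits, max_abs):
    n = _levels(bits)
    return [c * max_abs / n for c in codes]

def encode(frames, bits=12, max_abs=1.0):
    """Linear interframe predictor: code residuals against the previously
    *reconstructed* frame so encoder and decoder stay in sync."""
    prev, out = [0.0] * len(frames[0]), []
    for frame in frames:
        codes = quantize([x - p for x, p in zip(frame, prev)], bits, max_abs)
        prev = [p + d for p, d in zip(prev, dequantize(codes, bits, max_abs))]
        out.append(codes)
    return out

def decode(coded, bits=12, max_abs=1.0):
    prev, out = [0.0] * len(coded[0]), []
    for codes in coded:
        prev = [p + d for p, d in zip(prev, dequantize(codes, bits, max_abs))]
        out.append(prev)
    return out
```

Because prediction is closed-loop, the per-coordinate error stays bounded by half a quantization step (max_abs / (2 · (2^(bits−1) − 1))) and does not accumulate across frames; the actual bound in the paper depends on their residual range, which is not reproduced here.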

  10. High-quality lossy compression: current and future trends

    McLaughlin, Steven W.


    This paper is concerned with current and future trends in the lossy compression of real sources such as imagery, video, speech and music. We put all lossy compression schemes into a common framework where each can be characterized in terms of three well-defined advantages: cell shape, region shape and memory advantages. We concentrate on image compression and discuss how new entropy-constrained trellis-based compressors achieve cell-shape, region-shape and memory gain, resulting in high fidelity and high compression.

  11. Acupuncture at the "Huatuojiaji" point affects nerve root regional interleukin-1 level in a rat model of lumbar nerve root compression

    Yaochi Wu; Junfeng Zhang; Chongmiao Wang; Yanyan Xie; Jinghui Zhou


    BACKGROUND: It has been shown that interleukin-1 (IL-1) may cause inflammatory reactions, which stimulate the nerve root of patients with lumbar intervertebral disc protrusion and lead to pain. Whether the clinical curative effects of acupuncture in the treatment of lumbar and leg pain are linked to an inhibition of local IL-1 secretion is unknown. OBJECTIVE: To assess the influence of acupuncture on IL-1, this study was designed to verify the effects of acupuncture at the "Huatuojiaji (Extra)" point on the nerve root in a rat model of lumbar nerve root compression, compared with administration of meloxicam, a non-steroidal anti-inflammatory drug. DESIGN, TIME AND SETTING: Randomized, controlled, molecular biology experiment, performed at the Experimental Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University, between September 2005 and April 2006. MATERIALS: Forty healthy adult Sprague Dawley rats of either gender were included in this study. The rats were randomly and evenly divided into four groups: normal control, model, acupuncture, and meloxicam. Lumbar nerve root compression was induced in the model, acupuncture, and meloxicam groups by inserting a specially made silicone rubber slice at the juncture of the L5 nerve root and the dural sac. The acupuncture needle (pattern number N3030, 30#, 1.5 inch) was purchased from Suzhou Medical Appliance Factory, China. The IL-1 enzyme-linked immunosorbent assay (ELISA) kit was purchased from Santa Cruz Biotechnology, Inc., USA. METHODS: The acupuncture group was acupunctured at the "Huatuojiaji" point, lateral to the compressed L5-6 nerve root, at a depth of 0.5 cm. There were two treatment courses, each of which involved seven 20-minute acupuncture sessions, one session a day. The meloxicam group was administered 3.75 mg/kg meloxicam intragastrically (5 mg meloxicam/10 mL physiological saline). Rats in the normal control group and model group received an

  12. Simulated variations of eolian dust from inner Asian deserts at the mid-Pliocene, last glacial maximum, and present day: contributions from the regional tectonic uplift and global climate change

    Shi, Zhengguo; Liu, Xiaodong; An, Zhisheng [Chinese Academy of Sciences, State Key Laboratory of Loess Quaternary Geology (SKLLQG), Institute of Earth Environment, Xi' an (China); Yi, Bingqi; Yang, Ping [Texas A and M University, College Station, TX (United States); Mahowald, Natalie [Cornell University, Ithaca, NY (United States)


    Northern Tibetan Plateau uplift and global climate change are regarded as two important factors responsible for a remarkable increase in dust concentration originating from inner Asian deserts during the Pliocene-Pleistocene period. Dust cycles during the mid-Pliocene, last glacial maximum (LGM), and present day are simulated with a global climate model, based on reconstructed dust source scenarios, to evaluate the relative contributions of the two factors to the increment of dust sedimentation fluxes. In the focused downwind regions of the Chinese Loess Plateau/North Pacific, the model generally produces a light eolian dust mass accumulation rate (MAR) of 7.1/0.28 g/cm{sup 2}/kyr during the mid-Pliocene, a heavier MAR of 11.6/0.87 g/cm{sup 2}/kyr at present, and the heaviest MAR of 24.5/1.15 g/cm{sup 2}/kyr during the LGM. Our results are in good agreement with marine and terrestrial observations. These MAR increases can be attributed to both regional tectonic uplift and global climate change. Comparatively, the climatic factors, including the ice sheet and sea surface temperature changes, have modulated the regional surface wind field and controlled the intensity of sedimentation flux over the Loess Plateau. The impact of the Tibetan Plateau uplift, which increased the areas of inland deserts, is more important over the North Pacific. The dust MAR has been widely used in previous studies as an indicator of inland Asian aridity; however, based on the present results, this interpretation requires caution: the MAR is controlled not only by the source areas but also by the surface wind velocity. (orig.)

  13. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  14. A method for regional frequency analysis of maximum daily rainfall: application in the Bolivian Andes

    José Antonio Luna Vera


    A regional frequency analysis of daily annual maximum rainfall series for an area with poor information is presented. The complex topography of the mountains and the highland region in the Cordillera de Los Andes, Bolivia, produces different patterns of daily rainfall. The combination of L-Moments and cluster analysis is adequate to identify the homogeneous regions of the annual maximum series. The work defines 4 homogeneous regions. Region 1 includes the stations located in the highlands and south-east. Region 2 covers the central highlands and the La Paz River Basin, consisting of inter-Andean basins. Region 3 clearly defines the Amazonian basin stations, and Region 4 is composed of stations located in the northern mountains. Several distributions were tested for the regional frequency analysis by applying the station-year technique; the best results were obtained with the Gumbel and Double Gumbel functions. Finally, the regional equations are expressed and compared with some at-site series from each region, in order to verify the applicability of the proposed methodology for hydrological design purposes.

  15. Compression of a bundle of light rays.

    Marcuse, D


    The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses the input angle of the useful rays must not exceed the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution.
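The acceptance limit described above follows from conservation of phase-space area (Liouville's theorem). A hedged sketch of that bound, assuming a 1-D aperture, unchanged refractive index, and compression ratio expressed as r = a_out / a_in ≤ 1 (conventions not stated in the record):

```python
import math

def max_input_angle_deg(ratio):
    """Phase-space (étendue) conservation for a 1-D ray compressor:
    a_in * sin(t_in) = a_out * sin(t_out).  With sin(t_out) <= 1 and
    compression ratio r = a_out / a_in, useful rays need sin(t_in) <= r.
    The 1-D reduction and sign conventions are simplifying assumptions.
    """
    return math.degrees(math.asin(min(1.0, ratio)))

limit = max_input_angle_deg(0.5)  # a 2:1 compressor accepts rays up to ~30 deg
```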

  16. A compressed wide period-tunable grating working at low voltage

    Xiang, Liu; Tie, Li; Anjie, Ming; Yuelin, Wang


    A MEMS compressed period-tunable grating device with a wide tuning range has been designed, fabricated and characterized. To increase the tuning range, avoid instability during tuning and improve performance, we propose in this paper a period-tunable grating that is compressed by large-displacement comb actuators with tilted folded beams. The experimental results show that the designed grating device has a compression range of up to 144 μm at driving voltages up to 37 V. The period of the grating can be adjusted continuously from 16 to 14 μm, a tuning range of 12.5%. The maximum tuning range of the first-order diffraction angle is 0.34° at 632.8 nm, and the reflectivity of the grating is more than 92.6% in the mid-infrared region. The grating device can be fabricated by simple processes and finds applications in mid-infrared spectrometers.
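The quoted diffraction-angle tuning range follows from the grating equation. A quick sketch, assuming normal incidence (not stated in the record); this simple estimate lands close to, though slightly below, the quoted 0.34°:

```python
import math

def first_order_angle_deg(period_um, wavelength_um):
    """Grating equation at normal incidence, sin(theta_m) = m * lambda / d,
    evaluated for the first diffraction order (m = 1)."""
    return math.degrees(math.asin(wavelength_um / period_um))

lam = 0.6328  # 632.8 nm expressed in micrometres
tuning = first_order_angle_deg(14.0, lam) - first_order_angle_deg(16.0, lam)
print(f"{tuning:.2f} deg")  # 0.32 deg, near the quoted 0.34 deg
```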

  17. A sciatic nerve lesion secondary to compression by a heterotopic ossification in the hip and thigh region--an electrodiagnostic approach.

    Abayev, Boris; Ha, Edward; Cruise, Cathy


    A sciatic nerve lesion secondary to compression by a heterotopic ossification is rare. Operative release of the encased sciatic nerve may in some cases restore the function of the nerve partially or completely; in other cases the injury may be permanent. An electrophysiologic study is very useful for determining the location and severity of nerve damage, including axonal loss, demyelination, or both. An electrophysiologic study can also indicate which portion of the sciatic nerve is most involved (lateral versus medial, or peroneal versus tibial). In some cases an electrophysiologic study can suggest whether surgery should be postponed, if a recovery pattern from the nerve injury is evident. The prognostic value of follow-up studies is considerable. The authors reviewed the literature available to them since 1971 and found 6 cases, including their own. This is the first attempt to bring together all the information available in the literature about this condition.

  18. Maximum entropy analysis of EGRET data

    Pohl, M.; Strong, A.W.


    EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

  19. Maximum Likelihood Associative Memories

    Gripon, Vincent; Rabbat, Michael


    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. First, we derive minimum residual error rates when the stored data come from a uniform binary source. Second, we determine the minimum amo...

  20. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives, possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.

  1. Focus on Compression Stockings

    ... the stocking every other day with a mild soap. Do not use Woolite™ detergent. Use warm water ... compression clothing will lose its elasticity and its effectiveness. Compression stockings last for about 4-6 months ...

  2. A Compressive Superresolution Display

    Heide, Felix


    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  3. Vestige: Maximum likelihood phylogenetic footprinting

    Maxwell Peter


    Background: Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package, built on the PyEvolve toolkit, that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the definition of a phylogenetic footprint to be expanded to include variation in the distribution of any molecular evolutionary process. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results: Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process, Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified, illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion: Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously.
By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  4. Microbunching and RF Compression

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.


    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  5. Maximum Entropy Fundamentals

    F. Topsøe


    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategy in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss.
These results have tempted us to speculate over
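
    The Mean Energy Model mentioned above admits a compact computational illustration: under a single moment ("energy") constraint, the maximum-entropy distribution takes the Gibbs form p_i ∝ exp(−βE_i), with the Lagrange multiplier β found by root-finding. The sketch below is illustrative only; the energy values and the helper name `maxent_gibbs` are my own, not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def maxent_gibbs(energies, mean_energy):
        """Maximum-entropy distribution over discrete states under a
        mean-energy constraint: p_i ∝ exp(-beta * E_i), with beta chosen
        by root-finding so that sum_i p_i * E_i hits the target."""
        energies = np.asarray(energies, dtype=float)

        def gap(beta):
            # shift energies for numerical stability; p is unchanged
            w = np.exp(-beta * (energies - energies.min()))
            p = w / w.sum()
            return p @ energies - mean_energy

        beta = brentq(gap, -50.0, 50.0)   # solve for the multiplier
        w = np.exp(-beta * (energies - energies.min()))
        return w / w.sum()

    # three states with energies 1, 2, 3 and a prescribed mean energy
    p = maxent_gibbs([1.0, 2.0, 3.0], mean_energy=1.5)
    ```

    Because the target mean lies below the midpoint of the energy range, the resulting distribution puts more weight on the low-energy states, as a Gibbs distribution with positive β should.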

  6. Hyperspectral data compression

    Motta, Giovanni; Storer, James A


    Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.

  7. Compressed gas manifold

    Hildebrand, Richard J.; Wozniak, John J.


    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  8. Compressing Binary Decision Diagrams

    Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter


    The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD and compression will in many cases reduce the size of the BDD to 1...

  11. Regularized maximum correntropy machine

    Wang, Jim Jing-Yan


    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because the traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
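
    For readers unfamiliar with correntropy, the quantity being maximized is the mean of a Gaussian kernel applied to the prediction errors. The short sketch below (hypothetical data, not the authors' code) shows why it is robust to label noise: one grossly wrong sample explodes the mean-squared error but barely moves the correntropy.

    ```python
    import numpy as np

    def correntropy(a, b, sigma=1.0):
        """Empirical correntropy between two samples: the mean of a
        Gaussian kernel applied to their pointwise differences. It is
        bounded in (0, 1], so a single large outlier cannot dominate it
        the way it dominates MSE."""
        d = np.asarray(a, float) - np.asarray(b, float)
        return np.mean(np.exp(-d ** 2 / (2.0 * sigma ** 2)))

    y_true = np.array([1.0, -1.0, 1.0, -1.0, 1.0])
    y_good = np.array([0.9, -1.1, 1.0, -0.9, 1.1])   # small, honest errors
    y_out = y_good.copy()
    y_out[0] = 50.0                                  # one grossly wrong label

    mse_good = np.mean((y_true - y_good) ** 2)
    mse_out = np.mean((y_true - y_out) ** 2)
    c_good = correntropy(y_true, y_good)
    c_out = correntropy(y_true, y_out)
    ```

    The outlier multiplies the MSE by several orders of magnitude, while the correntropy drops only by roughly the weight of the one bad sample.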

  12. Foam behavior of solid glass spheres – Zn22Al2Cu composites under compression stresses

    Aragon-Lezama, J.A. (Departamento de Materiales, Universidad Autónoma Metropolitana-A, Avenida San Pablo 180, Colonia Reynosa Tamaulipas, 02200 México, D.F., México); Garcia-Borquez, A. (Ciencia de Materiales, ESFM – Instituto Politécnico Nacional, Edif. 9, Unid. Prof. A. Lopez Mateos, Colonia Lindavista, 07738 México, D.F., México); Torres-Villaseñor, G. (Departamento de Metálicos y Cerámicos, Instituto de Investigaciones en Materiales, Universidad Nacional Autónoma de México, Apdo. P 70-360, México, D.F., México)


    Solid glass sphere – Zn22Al2Cu composites, having different densities and microstructures, were fabricated and studied under compression. The fabrication process involves melting the alloy, submerging the spheres into the liquid alloy and, finally, air cooling. The resulting composites, with densities of 2.6884, 2.7936 and 3.1219 g/cm³, were studied both in the as-cast condition and with a thermally induced fine-grain matrix microstructure. Test samples of the composites were compressed at a 10⁻³ s⁻¹ strain rate, and their microstructure was characterized before and after compression using optical and scanning electron microscopes. Although they exhibit different compression behavior depending on their density and microstructure, all of them show an elastic region at low strains and reach their maximum stress (σmax) at hundreds of MPa before the stress falls or collapses to a lowest yield point (LYP), followed by substantial plastic deformation at nearly constant stress (σp); beyond this plateau, further deformation can be achieved only by a significant stress increase. This behavior under compression is similar to that reported for metal foams, with the fine-microstructure composites following this pattern most closely. Nevertheless, the relative values of the elastic modulus and of the maximum and plateau stresses do not follow the Ashby equations as the relative density changes. In general, the studied composites behave as foams under compression, except for the peculiar values of their parameters (σmax, LYP, and σp).

  13. The Effects of Wet Compression by the Electronic Expansion Valve Opening on the Performance of a Heat Pump System

    Kyoungjin Seong


    In this study, by controlling the electronic expansion valve opening, the influence of wet compression on a heat pump system was experimentally investigated under different heating conditions. The results demonstrate that the discharge temperature decreased and the mass flow rate increased as entrained liquid droplets lowered the suction vapor quality. It was also found that the heating capacity and power input under wet compression increased more than under dry compression with a superheat of 10 °C. The maximum COP (Coefficient of Performance) occurs at a specific quality of ca. 0.94 to 0.90, because below that quality the power input grows proportionally more than the heating capacity. When the Entering Water Temperature of the outdoor heat exchanger was 10 °C, 5 °C, and 0 °C, the COP increased by a maximum of ca. 12.4%, 10.6%, and 10.2%, respectively, in comparison to a superheat of 10 °C. In addition, the superheat at the discharge line is proposed as a suitable control parameter for adjusting the quality at the suction line by varying the opening of the expansion valve during wet compression.

  14. Equalized near maximum likelihood detector


    This paper presents a new detector for mitigating intersymbol interference introduced by bandlimited channels. The detector, named the equalized near-maximum-likelihood detector, combines a nonlinear equalizer with a near-maximum-likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer alone but worse than that of the near-maximum-likelihood detector.

  15. Generalized Maximum Entropy

    Cheeseman, Peter; Stutz, John


    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].

  16. Maximum floodflows in the conterminous United States

    Crippen, John R.; Bue, Conrad D.


    Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.

  17. Lossless Medical Image Compression

    Nagashree G


    Image compression has become an important process in today's world of information exchange. Image compression helps in effective utilization of high-speed network resources. Medical image compression is very important for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression are proposed. One uses the combination of 2D-DWT and the FELICS algorithm for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the Integer Wavelet Transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured, and a comparison of both approaches is shown. We observed increased compression ratios and higher PSNR values.

  18. Celiac Artery Compression Syndrome

    Mohammed Muqeetadnan


    Celiac artery compression syndrome is a rare disorder characterized by episodic abdominal pain and weight loss. It is the result of external compression of the celiac artery by the median arcuate ligament. We present a case of celiac artery compression syndrome in a 57-year-old male with severe postprandial abdominal pain and a 30-pound weight loss. The patient eventually responded well to laparoscopic surgical division of the median arcuate ligament.

  19. The Sherpa Maximum Likelihood Estimator

    Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.


    A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.

  20. Compressed sensing & sparse filtering

    Carmi, Avishy Y; Godsill, Simon J


    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity, and to some extent revolutionised signal processing, is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  1. Wavelet image compression

    Pearlman, William A


    This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone, uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. Then the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 S

  2. Stiffness of compression devices

    Giovanni Mosti


    This issue of Veins and Lymphatics collects papers from the International Compression Club (ICC) Meeting on Stiffness of Compression Devices, which took place in Vienna in May 2012. Several studies have demonstrated that the stiffness of compression products plays a major role in their hemodynamic efficacy. According to the European Committee for Standardization (CEN), stiffness is defined as the pressure increase produced by medical compression hosiery (MCH) per 1 cm of increase in leg circumference.1 In other words, stiffness can be defined as the ability of the bandage/stocking to oppose muscle expansion during contraction.

  3. Study for region-regenerating shape of the granular medium surface

    Yousheng Yu


    When an object rolls down the surface of a mountain, the surface structure may be destroyed, leaving behind a regenerated region. Experimental simulations show that the regenerated region consists of three zones: compression, thinning, and accumulation; the shape of the regenerated region, a quasi-parabola, is related to the size and initial velocity of the sphere as well as the slope of the surface. Our study suggests that the maximum length of the regenerated region is not associated with the initial velocity of the sphere; it is found that the length of the thinning zone increases with both the sphere size and the slope of the surface.

  4. Fundamental Interactions in Gasoline Compression Ignition Engines with Fuel Stratification

    Wolk, Benjamin Matthew

    ) a 98-species version including nitric oxide formation reactions. Development of reduced mechanisms is necessary because the detailed mechanism is computationally prohibitive in three-dimensional CFD and chemical kinetics simulations. Simulations of Partial Fuel Stratification (PFS), a GCI strategy, have been performed using CONVERGE with the 96-species reduced mechanism developed in this work for a 4-component gasoline surrogate. Comparison is made to experimental data from the Sandia HCCI/GCI engine at a compression ratio 14:1 at intake pressures of 1 bar and 2 bar. Analysis of the heat release and temperature in the different equivalence ratio regions reveals that sequential auto-ignition of the stratified charge occurs in order of increasing equivalence ratio for 1 bar intake pressure and in order of decreasing equivalence ratio for 2 bar intake pressure. Increased low- and intermediate-temperature heat release with increasing equivalence ratio at 2 bar intake pressure compensates for decreased temperatures in higher-equivalence ratio regions due to evaporative cooling from the liquid fuel spray and decreased compression heating from lower values of the ratio of specific heats. The presence of low- and intermediate-temperature heat release at 2 bar intake pressure alters the temperature distribution of the mixture stratification before hot-ignition, promoting the desired sequential auto-ignition. At 1 bar intake pressure, the sequential auto-ignition occurs in the reverse order compared to 2 bar intake pressure and too fast for useful reduction of the maximum pressure rise rate compared to HCCI. Additionally, the premixed portion of the charge auto-ignites before the highest-equivalence ratio regions. Conversely, at 2 bar intake pressure, the premixed portion of the charge auto-ignites last, after the higher-equivalence ratio regions. 
More importantly, the sequential auto-ignition occurs over a longer time period for 2 bar intake pressure than at 1 bar intake

  5. An Enhanced Static Data Compression Scheme Of Bengali Short Message

    Arif, Abu Shamim Mohammod; Islam, Rashedul


    This paper concerns a modified approach to compressing short Bengali text messages for small devices. The prime objective of this research is to establish a low-complexity compression scheme suitable for small devices having small memory and relatively low processing speed. The aim is not to compress text of arbitrary size to its maximum possible level without any constraint on space and time; rather, the target is to compress short messages to an optimal level that needs minimum space, consumes less time and places lower demands on the processor. We have implemented character masking, dictionary matching, the associative rule of data mining and a hyphenation algorithm for syllable-based compression, applied in hierarchical steps, to achieve low-complexity lossless compression of text messages for mobile devices. The digrams are chosen on the basis of an extensive statistical model, and the static Huffman coding is done within the same context.
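
    The final stage mentioned above, static Huffman coding, can be sketched compactly. The following is a generic Huffman implementation built from symbol frequencies with a heap; it is not the authors' Bengali-specific scheme, and the sample message is my own.

    ```python
    import heapq
    from collections import Counter

    def huffman_code(text):
        """Build a static Huffman table {symbol: bitstring} from the
        symbol frequencies of `text` (assumes >= 2 distinct symbols)."""
        heap = [(n, i, {s: ""})
                for i, (s, n) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        tiebreak = len(heap)
        while len(heap) > 1:
            n1, _, t1 = heapq.heappop(heap)   # two least frequent subtrees
            n2, _, t2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in t1.items()}
            merged.update({s: "1" + c for s, c in t2.items()})
            heapq.heappush(heap, (n1 + n2, tiebreak, merged))
            tiebreak += 1
        return heap[0][2]

    def encode(text, table):
        return "".join(table[s] for s in text)

    def decode(bits, table):
        """Greedy decoding works because Huffman codes are prefix-free."""
        rev = {c: s for s, c in table.items()}
        out, cur = [], ""
        for b in bits:
            cur += b
            if cur in rev:
                out.append(rev[cur])
                cur = ""
        return "".join(out)

    msg = "compressing short text messages"
    table = huffman_code(msg)
    bits = encode(msg, table)
    ```

    The integer tiebreak in each heap entry keeps tuple comparison away from the (unorderable) code-table dictionaries.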

  6. Performance Analysis of Multi Spectral Band Image Compression using Discrete Wavelet Transform

    S. S. Ramakrishnan


    Problem statement: Efficient and effective utilization of transmission bandwidth and storage capacity has been a core area of research for remote sensing images. Hence image compression is required for multi-band satellite imagery. In addition, image quality is also an important factor after compression and reconstruction. Approach: In this investigation, the discrete wavelet transform is used to compress the Landsat5 agriculture and forestry image using various wavelets, and the spectral signature graph is drawn. Results: The compressed image performance is analyzed using Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR). The compressed image using the dmey wavelet is selected based on its Digital Number Minimum (DNmin) and Digital Number Maximum (DNmax). It is then classified using maximum likelihood classification, and the accuracy is determined using an error matrix, kappa statistics and overall accuracy. Conclusion: The proposed compression technique is well suited to compressing the agriculture and forestry multi-band image.
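
    The two quality measures used above are straightforward to compute. A minimal sketch follows (generic textbook formulas, not the paper's evaluation code; the 8-bit peak value of 255 and the sample arrays are assumptions):

    ```python
    import numpy as np

    def compression_ratio(original_bytes, compressed_bytes):
        """CR = uncompressed size / compressed size (higher is better)."""
        return original_bytes / compressed_bytes

    def psnr(reference, reconstructed, max_val=255.0):
        """Peak Signal-to-Noise Ratio in dB; higher means the
        reconstruction is closer to the reference. max_val=255 assumes
        8-bit pixels."""
        ref = np.asarray(reference, dtype=float)
        rec = np.asarray(reconstructed, dtype=float)
        mse = np.mean((ref - rec) ** 2)
        if mse == 0.0:
            return float("inf")   # identical images
        return 10.0 * np.log10(max_val ** 2 / mse)

    ref = np.zeros((4, 4))
    rec = np.full((4, 4), 16.0)   # constant error of 16 gray levels
    ```

    With a constant error of 16 gray levels, the MSE is 256 and the PSNR works out to about 24 dB.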

  7. Compression Ratio Adjuster

    Akkerman, J. W.


    A new mechanism alters the compression ratio of an internal-combustion engine according to load so that the engine operates at top fuel efficiency. Ordinary gasoline, diesel and gas engines, with their fixed compression ratios, are inefficient at partial load and at low-speed full load. The mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.

  8. Equation-of-state model for shock compression of hot dense matter

    Pain, J C


    A quantum equation-of-state model is presented and applied to the calculation of high-pressure shock Hugoniot curves beyond the asymptotic fourfold density, close to the maximum compression where quantum effects play a role. An analytical estimate for the maximum attainable compression is proposed; it agrees well with the equation-of-state model.
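
    For context, the "asymptotic fourfold density" refers to the classical strong-shock limit of the Rankine-Hugoniot relations for an ideal gas (a standard textbook result, not the paper's quantum model), in which the compression ratio along the Hugoniot saturates:

    ```latex
    \frac{\rho}{\rho_0}
      = \frac{(\gamma+1)\,p + (\gamma-1)\,p_0}{(\gamma-1)\,p + (\gamma+1)\,p_0}
      \;\xrightarrow{\;p \gg p_0\;}\;
      \frac{\gamma+1}{\gamma-1}
    ```

    For a monatomic ideal gas with γ = 5/3 this limit equals 4, i.e. the fourfold density; the quantum effects discussed in the abstract permit compression beyond this classical bound.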

  9. Spectral Animation Compression

    Chao Wang; Yang Liu; Xiaohu Guo; Zichun Zhong; Binh Le; Zhigang Deng


    This paper presents a spectral approach to compressing dynamic animation consisting of a sequence of homeomorphic manifold meshes. Our new approach directly compresses the field of deformation gradients defined on the surface mesh, by decomposing it into rigid-body motion (rotation) and non-rigid-body deformation (stretching) through polar decomposition. It is known that the rotation group has the algebraic topology of a 3D ring, which is different from other operations like stretching. Thus we compress these two groups separately, using the Manifold Harmonics Transform to discard their high-frequency details. Our experimental results show that the proposed method achieves a good balance between reconstruction quality and compression ratio. We compare our results quantitatively with other existing approaches to animation compression, using standard measurement criteria.

  10. Defocus cue and saliency preserving video compression

    Khanna, Meera Thapar; Chaudhury, Santanu; Lall, Brejesh


    Monocular depth cues present in two-dimensional images and videos aid in depth perception. Our objective is to preserve the defocus depth cue present in videos, along with the salient regions, during compression. A method is provided for opportunistic bit allocation during video compression using visual saliency information comprising both image features, such as color and contrast, and the defocus-based depth cue. The method is divided into two steps: saliency computation followed by compression. A nonlinear method is used to combine the pure and defocus saliency maps to form the final saliency map. Then quantization values are assigned on the basis of these saliency values over a frame. The experimental results show that the proposed scheme yields good results over standard H.264 compression as well as the pure and defocus saliency methods.

  11. Peculiarities of fracture in submicrocrystalline Al-Mg-Mn alloy under impact compression

    Petrova, A. N.; Brodova, I. G.; Razorenov, S. V.


    The method of nondestructive X-ray computed tomography (CT) has been used to study the structure of A5083 (magnesium- and manganese-doped aluminum) alloy samples upon impact compression. The initial samples had an average grain size of 600 nm and submicrocrystalline (SMC) structure formed by dynamic equal-channel angular pressing. Three-dimensional CT images of local fracture regions were obtained and the degree of material damage was estimated by calculating the average and maximum size of discontinuities (pores and microcracks) in various cross sections. The techniques of transmission and scanning electron microscopy were used to trace evolution of the SMC structure of impact-compressed alloy and determine the morphological characteristics of spallation surfaces and other defects.

  12. Whole brain susceptibility mapping using compressed sensing.

    Wu, Bing; Li, Wei; Guidon, Arnaud; Liu, Chunlei


    The derivation of susceptibility from image phase is hampered by the ill-conditioned filter inversion in certain k-space regions. In this article, compressed sensing is used to compensate for the k-space regions where direct filter inversion is unstable. A significantly lower level of streaking artifacts is produced in the resulting susceptibility maps for both simulated and in vivo data sets compared to outcomes obtained using the direct threshold method. It is also demonstrated that the compressed sensing based method outperforms regularization based methods. The key difference between the regularized inversions and compressed sensing compensated inversions is that, in the former case, the entire k-space spectrum estimation is affected by the ill-conditioned filter inversion in certain k-space regions, whereas in the compressed sensing based method only the ill-conditioned k-space regions are estimated. In the susceptibility map calculated from the phase measurement obtained using a 3T scanner, not only are the iron-rich regions well depicted, but good contrast between white and gray matter interfaces that feature a low level of susceptibility variations are also obtained. The correlation between the iron content and the susceptibility levels in iron-rich deep nucleus regions is studied, and strong linear relationships are observed which agree with previous findings.
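
    The "direct threshold method" that serves as the baseline above is commonly implemented as thresholded k-space division (TKD): the dipole kernel is inverted only where it is safely away from zero and clamped elsewhere. A minimal numpy sketch of that baseline follows (illustrative parameter values and function names of my own; this is not the authors' compressed-sensing code):

    ```python
    import numpy as np

    def dipole_kernel(shape):
        """Unit magnetic dipole kernel D(k) = 1/3 - kz^2/|k|^2 in
        k-space. D vanishes on a conical surface, which is what makes
        direct inversion of phase to susceptibility ill-conditioned."""
        kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape],
                                 indexing="ij")
        k2 = kx ** 2 + ky ** 2 + kz ** 2
        with np.errstate(invalid="ignore"):
            d = 1.0 / 3.0 - kz ** 2 / k2
        d[0, 0, 0] = 0.0   # define the undetermined DC term
        return d

    def tkd_inversion(phase, threshold=0.1):
        """Thresholded k-space division: invert D only where
        |D| > threshold, clamping it elsewhere -- the clamped cone is
        the source of the streaking that CS reconstruction suppresses."""
        d = dipole_kernel(phase.shape)
        safe = np.abs(d) > threshold
        d_inv = np.where(safe, 1.0 / np.where(safe, d, 1.0),
                         np.sign(d) / threshold)
        return np.real(np.fft.ifftn(np.fft.fftn(phase) * d_inv))
    ```

    The kernel ranges from 1/3 (in the transverse plane) to −2/3 (along the field axis), so the conical zero surface always lies inside the sampled spectrum.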

  13. ROI-based DICOM image compression for telemedicine

    Vinayak K Bairagi; Ashok M Sapkal


    Many classes of images contain spatial regions which are more important than other regions, and compression methods capable of delivering higher reconstruction quality for the important parts are attractive in this situation. For medical images, only a small portion of the image might be diagnostically useful, but the cost of a wrong interpretation is high. Hence, the Region Based Coding (RBC) technique is significant for medical image compression and transmission. Lossless compression schemes with secure transmission play a key role in telemedicine applications that help in accurate diagnosis and research. In this paper, we propose lossless scalable RBC for Digital Imaging and Communications in Medicine (DICOM) images, based on the Integer Wavelet Transform (IWT), with a distortion-limiting compression technique for the other regions of the image. The main objective of this work is to reject the noisy background and reconstruct the important image portions losslessly. The compressed image can be accessed and sent over a telemedicine network using personal digital assistants (PDAs) such as mobile devices.
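The region-based idea above can be sketched in a few lines: code the diagnostically important region losslessly and quantize the rest. This is an illustrative toy only, under stated assumptions: zlib stands in for the IWT-based coder, and the image, mask, and quantization step are invented for the example.

```python
import zlib
import numpy as np

def rbc_encode(img, roi_mask, bg_levels=16):
    """Region-based coding sketch: lossless ROI, coarsely quantized background."""
    roi_vals = img[roi_mask]                        # diagnostically important pixels
    roi_stream = zlib.compress(roi_vals.tobytes())  # lossless (stand-in for IWT coding)
    step = 256 // bg_levels
    bg = (img // step * step).astype(np.uint8)      # distortion-limited background
    bg[roi_mask] = 0                                # ROI pixels carried separately
    bg_stream = zlib.compress(bg.tobytes())
    return roi_stream, bg_stream

def rbc_decode(roi_stream, bg_stream, roi_mask, shape):
    img = np.frombuffer(zlib.decompress(bg_stream), dtype=np.uint8).reshape(shape).copy()
    img[roi_mask] = np.frombuffer(zlib.decompress(roi_stream), dtype=np.uint8)
    return img

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                           # hypothetical diagnostic region
roi_s, bg_s = rbc_encode(img, mask)
rec = rbc_decode(roi_s, bg_s, mask, img.shape)
assert np.array_equal(rec[mask], img[mask])         # ROI is reconstructed losslessly
```

The background is only distortion-limited (quantized), while the masked region survives bit-exactly, which is the essence of the RBC split.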

  14. Vascular compression syndromes.

    Czihal, Michael; Banafsche, Ramin; Hoffmann, Ulrich; Koeppel, Thomas


    Dealing with vascular compression syndromes is one of the most challenging tasks in vascular medicine practice. This heterogeneous group of disorders is characterised by external compression of primarily healthy arteries and/or veins, as well as accompanying nerve structures, carrying the risk of subsequent structural vessel wall and nerve damage. Vascular compression syndromes may severely impair health-related quality of life in affected individuals, who are typically young and otherwise healthy. The diagnostic approach has not been standardised for any of the vascular compression syndromes. Moreover, some degree of positional external compression of blood vessels such as the subclavian and popliteal vessels or the celiac trunk can be found in a significant proportion of healthy individuals. This makes it difficult to differentiate physiological from pathological findings on clinical examination and on diagnostic imaging with provocative manoeuvres. The level of evidence on which treatment decisions regarding surgical decompression with or without revascularisation can be based is generally poor, mostly coming from retrospective single-centre studies. Proper patient selection is critical in order to avoid overtreatment in patients without a clear association between vascular compression and clinical symptoms. With a focus on thoracic outlet syndrome, median arcuate ligament syndrome and popliteal entrapment syndrome, the present article gives a selective literature review of compression syndromes from an interdisciplinary vascular point of view.

  15. Critical Data Compression

    Scoville, John


    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximum-likelihood noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces lower mean-squared error than a JPEG2000 file of equal length. Computer-generated images typically compress better using this method than through direct lossy coding, as do man...
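The signal/noise split described above can be sketched as follows. This is a toy under stated assumptions, not the paper's codec: zlib replaces the lossless coder, the data are synthetic, and the residual is modeled as uniform noise that is simply resampled at decode time.

```python
import zlib
import numpy as np

def encode(samples: np.ndarray, keep_bits: int = 4):
    """Code the significant bits exactly; the residual is left to a noise model."""
    shift = 8 - keep_bits
    msb = (samples >> shift).astype(np.uint8)        # redundant, compressible part
    return zlib.compress(msb.tobytes()), shift       # lossless code for the MSBs

def decode(code: bytes, shift: int, n: int, seed: int = 0) -> np.ndarray:
    msb = np.frombuffer(zlib.decompress(code), dtype=np.uint8)
    noise = np.random.default_rng(seed).integers(0, 1 << shift, n, dtype=np.uint8)
    return (msb << shift) | noise                    # exact MSBs + sampled residual

rng = np.random.default_rng(1)
signal = ((np.arange(4096) // 256) << 4).astype(np.uint8)   # slowly varying MSBs
data = signal | rng.integers(0, 16, 4096, dtype=np.uint8)   # plus 4 noisy LSBs
code, shift = encode(data)
rec = decode(code, shift, data.size)
assert np.array_equal(rec >> shift, data >> shift)   # significant bits survive exactly
assert len(code) < data.size // 10                   # the MSB plane compresses well
```

The reconstruction is statistically similar to the original, while the part that carries the structure is preserved bit-exactly.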

  16. Artificial Neural Network Model for Predicting Compressive

    Salim T. Yousif


    Compressive strength is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort of applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing of the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and that 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
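A minimal back-propagation sketch in the spirit of the study, on synthetic data; the feature columns, toy strength rule, network size, and learning rate are all assumptions for illustration, not the paper's data set or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the concrete-mix data (feature names are assumptions):
# columns = [cement, w/c ratio, max aggregate size, slump], all scaled to [0, 1];
# the toy target mimics a strength rule dominated by cement content and w/c.
X = rng.uniform(0.0, 1.0, (200, 4))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] + 0.05 * X[:, 3])[:, None]

# One-hidden-layer back-propagation network trained by plain gradient descent.
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    return h, h @ W2 + b2                     # linear output

_, p0 = forward(X)
loss0 = np.mean((p0 - y) ** 2)                # error before training

lr = 0.1
for _ in range(3000):
    h, p = forward(X)
    err = (p - y) / len(X)                    # dMSE/dp (factor 2 folded into lr)
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)        # back-propagate through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
loss = np.mean((p - y) ** 2)
assert loss < loss0 / 10                      # training reduces the error substantially
```

The same loop, scaled up and fed real mix-proportion data, is the core of the system-identification approach the abstract describes.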

  17. Reconnection dynamics with secondary tearing instability in compressible Hall plasmas

    Ma, Z. W.; Wang, L. C.; Li, L. J. [Institute for Fusion Theory and Simulation, Zhejiang University, Hangzhou 310027 (China)


    The dynamics of a secondary tearing instability is systematically investigated based on compressible Hall magnetohydrodynamics. It is found that in the early nonlinear phase of magnetic reconnection, before onset of the secondary tearing instability, the geometry of the magnetic field in the reconnection region tends to form a Y-type structure in a weak Hall regime, instead of an X-type structure in a strong Hall regime. A new scaling law is found: the maximum reconnection rate in the early nonlinear stage is proportional to the square of the ion inertial length (γ ∝ d_i²) in the weak Hall regime. In the late nonlinear phase, the thin elongated current sheet associated with the Y-type geometry of the magnetic field breaks up to form a magnetic island due to a secondary tearing instability. After the onset of the secondary tearing mode, the reconnection rate is substantially boosted by the formation of X-type geometries of the magnetic field in the reconnection regions. With a strong Hall effect, the maximum reconnection rate increases linearly with the ion inertial length (γ ∝ d_i).

  18. Virtually Lossless Compression of Astrophysical Images

    Alparone Luciano


    We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a considerable bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on users' requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Finally, the rationale of virtually lossless compression, that is, noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomical community.

  19. Wave energy devices with compressible volumes.

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John


    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  20. Compressed Adjacency Matrices: Untangling Gene Regulatory Networks.

    Dinkla, K; Westenberg, M A; van Wijk, J J


    We present a novel technique, Compressed Adjacency Matrices, for visualizing gene regulatory networks. These directed networks have strong structural characteristics: out-degrees with a scale-free distribution, in-degrees bound by a low maximum, and few and small cycles. Standard visualization techniques, such as node-link diagrams and adjacency matrices, are impeded by these network characteristics. The scale-free distribution of out-degrees causes a high number of intersecting edges in node-link diagrams. Adjacency matrices become space-inefficient due to the low in-degrees and the resulting sparse network. Compressed adjacency matrices, however, exploit these structural characteristics. By cutting open and rearranging an adjacency matrix, we achieve a compact and neatly arranged visualization. Compressed adjacency matrices allow for easy detection of subnetworks with a specific structure, so-called motifs, which provide important knowledge about gene regulatory networks to domain experts. We summarize motifs commonly referred to in the literature, and relate them to network analysis tasks common to the visualization domain. We show that a user can easily find the important motifs in compressed adjacency matrices, and that this is hard in standard adjacency matrix and node-link diagrams. We also demonstrate that interaction techniques for standard adjacency matrices can be used for our compressed variant. These techniques include rearrangement clustering, highlighting, and filtering.

  1. Nonrepetitive Colouring via Entropy Compression

    Dujmović, Vida; Wood, David R


    A vertex colouring of a graph is \emph{nonrepetitive} if there is no path whose first half receives the same sequence of colours as the second half. A graph is nonrepetitively $k$-choosable if given lists of at least $k$ colours at each vertex, there is a nonrepetitive colouring such that each vertex is coloured from its own list. It is known that every graph with maximum degree $\Delta$ is $c\Delta^2$-choosable, for some constant $c$. We prove this result with $c=4$. We then prove that every subdivision of a graph with sufficiently many division vertices per edge is nonrepetitively 6-choosable. The proofs of both these results are based on the Moser-Tardos entropy-compression method, and a recent extension by Grytczuk, Kozik and Micek for the nonrepetitive choosability of paths. Finally, we prove that every graph with pathwidth $k$ is nonrepetitively ($2k^2+6k+1$)-colourable.

  2. LDPC Codes for Compressed Sensing

    Dimakis, Alexandros G; Vontobel, Pascal O


    We present a mathematical connection between channel coding and compressed sensing. In particular, we link, on the one hand, channel coding linear programming decoding (CC-LPD), which is a well-known relaxation of maximum-likelihood channel decoding for binary linear codes, and, on the other hand, compressed sensing linear programming decoding (CS-LPD), also known as basis pursuit, which is a widely used linear programming relaxation for the problem of finding the sparsest solution of an under-determined system of linear equations. More specifically, we establish a tight connection between CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the binary linear channel code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows the translation of performance guarantees from one setup to the other. The main message of this paper is that parity-check matrices of "good" channel codes can be used as provably "good" measurement ...

  3. Prediction by Compression

    Ratsaby, Joel


    It is well known that text compression can be achieved by predicting the next symbol in the stream of text data based on the history seen up to the current symbol. The better the prediction, the more skewed the conditional probability distribution of the next symbol and the shorter the codeword that needs to be assigned to represent this next symbol. What about the opposite direction? Suppose we have a black box that can compress a text stream. Can it be used to predict the next symbol in the stream? We introduce a criterion based on the length of the compressed data and use it to predict the next symbol. We examine empirically the prediction error rate and its dependency on some compression parameters.
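The compression-to-prediction direction can be sketched directly: score each candidate symbol by how well the history plus that symbol compresses, and pick the best. Here zlib is an assumed stand-in for the paper's black-box compressor; the criterion is simply the compressed length.

```python
import zlib

def predict_next(history: str, alphabet: str) -> str:
    """Predict the next symbol as the one whose continuation compresses best.

    A good compressor assigns a shorter code to a more predictable
    continuation; ties go to the first symbol in the alphabet.
    """
    def score(c: str) -> int:
        return len(zlib.compress((history + c).encode()))
    return min(alphabet, key=score)

# A highly regular stream: continuing the run can compress no worse than
# introducing a new literal, so the run symbol wins.
print(predict_next("a" * 1000, "ab"))  # → a
```

The paper's criterion generalizes this idea and studies the resulting prediction error rate as a function of the compressor's parameters.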

  4. LZW Data Compression

    Dheemanth H N


    Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. LZW compression is one of the adaptive dictionary techniques: the dictionary is created while the data are being encoded, so encoding can be done on the fly. The dictionary need not be transmitted; it can be rebuilt at the receiving end on the fly. If the dictionary overflows, the dictionary is reinitialized and a bit is added to each of the code words. Choosing a large dictionary size avoids overflow but spoils compression. A codebook or dictionary containing the source symbols is constructed. For 8-bit monochrome images, the first 256 words of the dictionary are assigned to the gray levels 0-255, and the remaining part of the dictionary is filled with sequences of the gray levels. LZW compression works best when applied to monochrome images and text files that contain repetitive text/patterns.
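The on-the-fly dictionary build-up described above can be sketched in a few lines (a text variant; the 8-bit image case initializes the first 256 entries with gray levels instead of characters):

```python
def lzw_compress(data: str):
    """Dictionary grows while encoding; codes start after single-symbol entries."""
    dictionary = {chr(i): i for i in range(256)}
    next_code, w, out = 256, "", []
    for c in data:
        if w + c in dictionary:
            w += c                       # extend the current phrase
        else:
            out.append(dictionary[w])    # emit code for the longest known phrase
            dictionary[w + c] = next_code
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Rebuilds the same dictionary at the receiving end, one step behind."""
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        else:                            # corner case: code defined on this very step
            entry = w + w[0]
        out.append(entry)
        dictionary[next_code] = w + entry[0]
        next_code += 1
        w = entry
    return "".join(out)

text = "TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(text)
assert lzw_decompress(codes) == text     # lossless round trip
assert len(codes) < len(text)            # repetitive text yields fewer codes
```

Note that the decoder never receives the dictionary; it reconstructs each entry from the codes themselves, which is exactly the property the abstract highlights.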

  5. Reference Based Genome Compression

    Chern, Bobbie; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy


    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
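The reference-based scheme can be caricatured for the simplest case of equal-length sequences: store only the positions where the target differs from the reference, then entropy-code that edit list. This is a hedged toy, not the authors' algorithm: zlib stands in for their entropy coder, and real genomes require an indel-aware mapping.

```python
import zlib

def compress_against_reference(target: bytes, reference: bytes) -> bytes:
    """Store only (position, base) edits relative to the reference, then
    compress the edit list. Assumes equal-length sequences."""
    edits = [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]
    raw = b"".join(i.to_bytes(4, "big") + bytes([t]) for i, t in edits)
    return zlib.compress(raw)

def decompress_with_reference(blob: bytes, reference: bytes) -> bytes:
    out = bytearray(reference)
    raw = zlib.decompress(blob)
    for k in range(0, len(raw), 5):          # 4 position bytes + 1 base byte
        pos = int.from_bytes(raw[k:k + 4], "big")
        out[pos] = raw[k + 4]
    return bytes(out)

reference = b"ACGT" * 1000
target = bytearray(reference)
target[5] = ord("A"); target[3999] = ord("C")    # a couple of variants
target = bytes(target)
blob = compress_against_reference(target, reference)
assert decompress_with_reference(blob, reference) == target
```

Because two genomes of the same species differ in a tiny fraction of positions, the edit list is minute compared to the raw sequence, which is the intuition behind the 2991 MB to 6.99 MB figure.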

  6. Deep Blind Compressed Sensing

    Singh, Shikha; Singhal, Vanika; Majumdar, Angshul


    This work addresses the problem of extracting deeply learned features directly from compressive measurements, an area in which there has been no prior work. Existing deep learning tools only give good results when applied to the full signal, and usually only after preprocessing; these techniques require the signal to be reconstructed first. In this work we show that by learning directly in the compressed domain, considerably better results can be obtained. This work extends the recently proposed fram...

  7. Reference Based Genome Compression

    Chern, Bobbie; Ochoa, Idoia; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy


    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target gen...

  8. Alternative Compression Garments

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.


    Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.

  9. The maximum rotation of a galactic disc

    Bottema, R


    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...

  10. Maximum magnitude earthquakes induced by fluid injection

    McGarr, Arthur F.


    Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
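The stated bound, maximum seismic moment equals injected volume times the modulus of rigidity, admits a quick worked estimate. The rigidity value below is a typical crustal figure and the injection volume is assumed for illustration; neither comes from a specific case history.

```python
import math

def max_moment(volume_m3: float, rigidity_pa: float = 3.0e10) -> float:
    """Upper bound from the abstract: M0_max = G * dV (N*m).

    G = 3e10 Pa is a typical crustal rigidity, assumed here."""
    return rigidity_pa * volume_m3

def moment_magnitude(m0: float) -> float:
    """Standard moment-magnitude relation, Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Illustrative (assumed) disposal-well total of 1e6 m^3 of injected fluid:
m0 = max_moment(1.0e6)                  # 3e16 N*m
print(round(moment_magnitude(m0), 2))   # → 4.92
```

A million cubic metres of injected fluid thus bounds the induced event near magnitude 5, consistent with the abstract's observation that wastewater disposal sometimes exceeds magnitude 5 at the largest injected volumes.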

  12. Inelastic compression legging produces gradient compression and significantly higher skin surface pressures compared with an elastic compression stocking.

    Kline, Cassie N; Macias, Brandon R; Kraus, Emily; Neuschwander, Timothy B; Angle, Niren; Bergan, John; Hargens, Alan R


    The purposes of this study were to (1) investigate compression levels beneath an inelastic legging equipped with a new pressure-adjustment system, (2) compare the inelastic compression levels with those provided by a well-known elastic stocking, and (3) evaluate each support's gradient compression production. Eighteen subjects without venous reflux and 12 patients with previously documented venous reflux received elastic and inelastic compression supports sized for the individual. Skin surface pressures under the elastic (Sigvaris 500, 30-40 mm Hg range, Sigvaris, Inc., Peachtree City, GA) and inelastic (CircAid C3 with Built-in-Pressure System [BPS], CircAid Medical Products, San Diego, CA) supports were measured using a calibrated Tekscan I-Scan device (Tekscan, Inc., Boston, MA). The elastic stocking produced significantly lower skin surface pressures than the inelastic legging. Mean pressures (+/- standard error) beneath the elastic stocking were 26 +/- 2 and 23 +/- 1 mm Hg at the ankle and below-knee regions, respectively. Mean pressures (+/- standard error) beneath the inelastic legging with the BPS were 50 +/- 3 and 38 +/- 2 mm Hg at the ankle and below-knee regions, respectively. Importantly, our study indicates that only the inelastic legging with the BPS produces significant ankle to knee gradient compression (p = .001).

  13. The dynamics of surge in compression systems

    A N Vishwanatha Rao; O N Ramesh


    In air-compression systems, instabilities occur during operation close to their peak pressure-rise capability. However, the peak efficiency of a compression system lies close to this region of instability. A surge is a violent mode of instability where there is total breakdown of flow in the system and pressure-rise capability is lost drastically. Generally, all compression systems operate with a margin defined as the ‘surge margin’, and, consequently, system operational efficiency is lower. It is of interest to study compression-system surge to understand its dynamics in order to operate compression systems close to the instability for achieving high efficiency safely without encountering surge. Unsteady pressure data from a compression system, captured during surge oscillations, reveal many aspects of flow physics and are analysed to understand the surge dynamics of the system. A set of controlled experiments was conducted with a simple desktop experimental test set-up and essential aspects of surge dynamics have been characterised.

  14. Dry matter yield and nutritive quality of the grass Panicum maximum vc. Likoni on a fluvisol soil in the eastern region of Cuba

    Ramírez, J. L.


    In a randomized block design with four replicates, the influence of regrowth age (30 to 105 days) and climatic factors on the dry matter yield and nutritive quality of the grass Panicum maximum vc. Likoni was evaluated. The experiment was carried out on a fluvisol soil under rainfed conditions and without fertilization. DM yield increased significantly with age (P<0.001), and quadratic equations were fitted between yield and age for both periods, with the highest values beyond 90 days (7.23 t/ha/cut in the rainy season and 2.16 t/ha/cut in the dry season). The climatic variables showed high correlations (positive and negative) with yield and chemical composition, more pronounced in the dry season. Crude protein and the digestibility of DM and OM decreased with age (P<0.001), and quadratic regression equations were fitted between these variables and age; the highest percentages occurred at 30 days of age in both periods. NDF, ADF, lignin and cellulose increased with age (P<0.001), showing their highest values at 105 days of regrowth in both periods, and quadratic regression equations were fitted for these variables with respect to age. It is concluded that age and climatic conditions had a marked effect on the indicators evaluated, more pronounced in the rainy season as nutritive quality declined.
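The quadratic yield-versus-age regressions the study fits can be reproduced mechanically with a polynomial least-squares fit. The ages match the study's 30-105 day range, but the coefficients and yields below are invented placeholders, not the study's values.

```python
import numpy as np

# Hypothetical (age, yield) pairs standing in for one season's data; the study
# fits quadratic equations of DM yield against regrowth age (30-105 days).
age = np.array([30.0, 45.0, 60.0, 75.0, 90.0, 105.0])
true_coeffs = (-0.001, 0.2, -3.0)                    # assumed, for illustration only
yield_t_ha = np.polyval(true_coeffs, age)            # toy yields from the quadratic

fit = np.polyfit(age, yield_t_ha, deg=2)             # quadratic regression
assert np.allclose(fit, true_coeffs, atol=1e-6)      # exact data -> exact recovery
print(np.polyval(fit, 90.0))                         # predicted yield at 90 days
```

With noisy field data the recovered coefficients would of course only approximate the generating curve; the fit then serves to locate the age at which yield peaks.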

  15. Probability distribution model of regional agricultural drought degree based on the maximum entropy principle

    陈海涛; 黄鑫; 邱林; 王文川


    An evaluation index of drought degree that comprehensively considers the quantitative relationship between natural factors and the crop growth period is presented. The distribution density function of drought degree for the project area is established based on the maximum entropy principle, which avoids the arbitrariness of previously constructed probability distributions and achieves a quantitative evaluation of regional agricultural drought. First, a quantitative evaluation index of drought degree was established according to the crop yield reduction rate under deficit irrigation conditions. Second, a long series of rainfall data was generated by the Monte Carlo method, and the drought degree index was calculated for each year. Finally, the probability distribution density function of agricultural drought degree was constructed using the maximum entropy principle. As an example, results are presented for the Qucun irrigation area in Puyang City, Henan Province. The results show that the model provides a good evaluation method with a clear concept, a simple and practical approach, and reasonable outcomes.
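A minimal numerical sketch of the maximum-entropy step: on a finite support with a mean constraint, the maxent distribution has exponential form, and its Lagrange multiplier can be found by bisection. The support and target mean below are assumed for illustration; the paper constrains a continuous drought-degree density rather than a discrete pmf.

```python
import numpy as np

def maxent_discrete(support, target_mean, iters=200):
    """Maximum-entropy pmf on a finite support subject to a mean constraint.

    The solution has the exponential form p_i ∝ exp(lam * x_i); lam is found
    by bisection, since the implied mean is monotone increasing in lam."""
    x = np.asarray(support, dtype=float)
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        w = np.exp(lam * (x - x.mean()))      # shifted exponent, same pmf,
        p = w / w.sum()                       # better numerical stability
        if p @ x < target_mean:
            lo = lam
        else:
            hi = lam
    return p

# Illustrative drought-degree index on grades 0..4 with an assumed mean of 1.2:
p = maxent_discrete(range(5), 1.2)
assert abs(p @ np.arange(5.0) - 1.2) < 1e-6   # mean constraint satisfied
assert abs(p.sum() - 1.0) < 1e-9              # valid probability distribution
```

Among all distributions matching the constraint, this one maximizes entropy, which is precisely the "no extra assumptions" property the abstract invokes against arbitrarily chosen distributions.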

  16. Envera Variable Compression Ratio Engine

    Charles Mendler


    the compression ratio can be raised (to as much as 18:1) providing high engine efficiency. It is important to recognize that for a well designed VCR engine cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings and other load bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle' and pivoting the eccentric carrier 30 degrees adjusts compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at minimum and maximum compression ratio settings. The control shaft method of pivoting the eccentric carrier was selected due to its lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. The greatest efficiency gains are realized when the right combination of advanced and new
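The eccentric-carrier geometry can be illustrated with a toy calculation. Every number below (bore, displacement, eccentric offset, lift model) is an assumed placeholder fitted to reproduce the stated 9:1 to 18:1 range, not Envera's actual design data.

```python
import math

# Illustrative numbers (assumed): a 0.5 L cylinder whose clearance height is
# changed by pivoting an eccentric crankshaft carrier through 30 degrees.
BORE_CM = 8.6
A_CM2 = math.pi / 4 * BORE_CM ** 2        # piston area
VD_CC = 500.0                             # displacement per cylinder
VC0_CC = VD_CC / 8                        # clearance volume at CR 9:1
ECC_CM = 1.139                            # eccentric offset, fitted to reach 18:1

def compression_ratio(theta_deg: float) -> float:
    """CR = (Vd + Vc) / Vc, with carrier rotation raising the crank centreline
    by e*sin(theta) (toy model) and shrinking the clearance volume."""
    lift = ECC_CM * math.sin(math.radians(theta_deg))
    vc = VC0_CC - A_CM2 * lift
    return (VD_CC + vc) / vc

print(round(compression_ratio(0), 1))     # → 9.0  (base setting)
print(round(compression_ratio(30), 1))    # → 18.0 (fully pivoted)
```

The point of the sketch is only that a millimetre-scale shift of the crank centreline is enough to double the compression ratio, which is why a 30-degree pivot of the carrier suffices.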

  17. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    Qihui Yu


    A new camless compressed air engine is proposed, which allows the compressed air energy to be reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to inform the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can inform the design of the camless valve of the compressed air engine.

  18. Working characteristics of variable intake valve in compressed air engine.

    Yu, Qihui; Shi, Yan; Cai, Maolin


A new camless compressed air engine is proposed, which distributes the compressed air energy more rationally. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. Experiments were conducted to verify the accuracy of the mathematical model. Moreover, performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results agree well with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of a compressed air engine.

  19. Information preserving image compression for archiving NMR images.

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y


This paper presents a result on information-preserving compression of NMR images for archiving purposes. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, Lynch-Davisson coding with a block size of 64, applied to prediction error sequences in the Gray-code bit planes of each image, gave an average compression ratio of 2.3:1 for 14 test images. Predictive coding with a third-order linear predictor and Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, with a maximum compression ratio of 3.8:1. This result is a further step, albeit a small one, toward improving information-preserving image compression for medical applications.
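    The predict-then-entropy-code pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the third-order polynomial-extrapolation coefficients (3, -3, 1) are an assumed, illustrative choice, and the empirical entropy is used as a proxy for the rate a Huffman coder of the residuals could approach.

```python
import numpy as np

def predict_residuals(signal, coeffs=(3, -3, 1)):
    """Third-order linear prediction: x_hat[n] = 3x[n-1] - 3x[n-2] + x[n-3].
    The coefficients are illustrative, not those used in the paper."""
    x = np.asarray(signal, dtype=np.int64)
    pred = np.zeros_like(x)
    pred[3:] = coeffs[0] * x[2:-1] + coeffs[1] * x[1:-2] + coeffs[2] * x[:-3]
    return x - pred  # residuals; the first 3 samples pass through verbatim

def entropy_bits(residuals):
    """Empirical zeroth-order entropy (bits/sample), a lower bound on the
    rate achievable by Huffman-coding the residual symbols."""
    vals, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth hypothetical scanline: prediction removes nearly all of it.
row = np.array([100, 102, 105, 109, 114, 120, 127, 135])
res = predict_residuals(row)
ratio = 12 / max(entropy_bits(res), 1e-9)  # vs. 12 bits/pixel raw storage
```

    For the smooth row above the residuals after the first three samples are all zero, so the estimated compression ratio greatly exceeds 1; real NMR data is noisier, which is consistent with the ~3:1 ratios reported.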

  20. OECD Maximum Residue Limit Calculator

    With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.

  1. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq


In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain standard perceptual and diagnostic image quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods on the basis of compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
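    The LZW step the paper relies on can be illustrated with the textbook algorithm: a dictionary of byte strings grows as the input is scanned, and repeated substrings are replaced by integer codes. The watermark payload below is hypothetical, chosen only to show that repetitive ROI/key data compresses well.

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of byte strings, emit integer codes."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)  # assign next code to new string
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Inverse mapping; rebuilds the same dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        # code may reference the entry currently being built (KwKwK case)
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

watermark = b"ROI:liver;KEY:1f2e" * 8   # hypothetical watermark payload
codes = lzw_compress(watermark)
assert lzw_decompress(codes) == watermark
assert len(codes) < len(watermark)       # repetitive payloads compress
```

    Because the decompressed bytes are bit-identical to the input, the ROI and secret key survive embedding and extraction unchanged, which is the property the tamper-detection scheme depends on.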

  2. Transverse Compression of Tendons.

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B


    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon.


    Li Hongbo


In an inner-product space, an invertible vector generates a reflection with respect to a hyperplane, and the Clifford product of several invertible vectors, called a versor in Clifford algebra, generates the composition of the corresponding reflections, which is an orthogonal transformation. Given a versor in a Clifford algebra, finding another sequence of invertible vectors of strictly shorter length whose Clifford product still equals the input versor is called versor compression. Geometrically, versor compression is equivalent to decomposing an orthogonal transformation into a shorter sequence of reflections. This paper proposes a simple algorithm for compressing versors of symbolic form in Clifford algebra. The algorithm is based on computing the intersections of lines with planes in the corresponding Grassmann-Cayley algebra, and is complete in the case of a Euclidean or Minkowski inner-product space.

  4. Image compression for dermatology

    Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.


Color 35mm photographic slides are commonly used in dermatology for education and patient records. An electronic storage and retrieval system for digitized slide images may offer advantages such as preservation and random access. We have integrated a system based on a personal computer (PC) for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved. Methods to reduce storage requirements and access time through image compression are therefore of interest. This paper evaluates one such compression method, based on the Hadamard transform, implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.
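    A transform-coding step of the kind evaluated here can be sketched with the Hadamard transform. This is an illustrative reconstruction, not the paper's codec: the 8x8 block size, the keep-the-largest-coefficients rule, and the sample block are all assumptions.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction: H_{2m} = [[H, H], [H, -H]]; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_compress(block, keep=16):
    """Transform an 8x8 block, zero all but the `keep` largest-magnitude
    coefficients (a crude energy-compaction step), and invert."""
    H = hadamard(8)
    coeffs = H @ block @ H.T / 8.0          # H @ H.T == 8*I, so this inverts
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0   # discard small coefficients
    return H.T @ coeffs @ H / 8.0           # inverse transform

# Smooth ramp block: its energy compacts into a single coefficient column,
# so keeping 16 of 64 coefficients reconstructs it exactly.
block = np.outer(np.linspace(50, 200, 8), np.ones(8))
rec = hadamard_compress(block, keep=16)
err = float(np.abs(rec - block).mean())
```

    The Hadamard transform needs only additions and subtractions (all matrix entries are +/-1), which is why it suited the graphics processors of the time.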

  5. Maximum margin Bayesian network classifiers.

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian


    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  6. Maximum Entropy in Drug Discovery

    Chih-Yuan Tseng


    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  7. Compressive Shift Retrieval

    Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar


    The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
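    The contrast between the classical estimator and the single-coefficient idea can be sketched for circular shifts of noiseless signals. The single-coefficient estimator below is in the spirit of the paper's Fourier result, not its exact construction; it assumes the chosen coefficient X[k] is nonzero and the data is noise-free.

```python
import numpy as np

def shift_by_correlation(x, y):
    """Classical estimator: argmax of the circular cross-correlation,
    computed via the FFT."""
    corr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))
    return int(np.argmax(np.real(corr)))

def shift_from_one_coefficient(x, y, k=1):
    """Single-coefficient estimator: for y[n] = x[(n - s) mod N] one has
    Y[k] = X[k] * exp(-2*pi*i*k*s/N), so the phase of Y[k]*conj(X[k])
    reveals s (assuming X[k] != 0 and noiseless data)."""
    N = len(x)
    phase = np.angle(np.fft.fft(y)[k] * np.conj(np.fft.fft(x)[k]))
    return int(round(-phase * N / (2 * np.pi * k))) % N

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = np.roll(x, 5)   # y[n] = x[(n - 5) mod 64]
```

    Both estimators recover the shift of 5 here, but the second touches only one Fourier coefficient of each signal, which is the sense in which fewer samples and less computation suffice.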

  8. Graph Compression by BFS

    Alberto Apostolico


Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so it may be applied to more general graphs. Tests on several datasets achieve space savings of about 10% over existing methods.

  9. Image data compression investigation

    Myrie, Carlos


NASA's continuous communications systems growth has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression using two of these techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
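    The DPCM idea mentioned above is simply to code the difference between each sample and a prediction from its predecessor, since neighboring pixels are correlated. A minimal first-order sketch, with an assumed 4-bit residual budget and a made-up signal:

```python
def dpcm_encode(samples, nbits=4):
    """First-order DPCM: quantize (here, clamp) the difference between each
    sample and the decoder's reconstruction of its predecessor. Tracking the
    decoder state (closed loop) prevents error accumulation."""
    lo, hi = -(2 ** (nbits - 1)), 2 ** (nbits - 1) - 1
    prev, codes = 0, []
    for s in samples:
        d = max(lo, min(hi, s - prev))  # residual, clamped to nbits range
        codes.append(d)
        prev += d                        # mirror what the decoder will see
    return codes

def dpcm_decode(codes):
    out, prev = [], 0
    for d in codes:
        prev += d
        out.append(prev)
    return out

signal = [0, 3, 6, 8, 9, 9, 8, 6]       # slowly varying, like image rows
codes = dpcm_encode(signal)
assert dpcm_decode(codes) == signal      # lossless while residuals fit
```

    The residuals (0, 3, 3, 2, 1, 0, -1, -2) span a much smaller range than the raw samples, which is what makes DPCM cheaper to transmit than straight PCM.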

  10. Image compression in local helioseismology

    Löptien, Björn; Gizon, Laurent; Schou, Jesper


    Context. Several upcoming helioseismology space missions are very limited in telemetry and will have to perform extensive data compression. This requires the development of new methods of data compression. Aims. We give an overview of the influence of lossy data compression on local helioseismology. We investigate the effects of several lossy compression methods (quantization, JPEG compression, and smoothing and subsampling) on power spectra and time-distance measurements of supergranulation flows at disk center. Methods. We applied different compression methods to tracked and remapped Dopplergrams obtained by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We determined the signal-to-noise ratio of the travel times computed from the compressed data as a function of the compression efficiency. Results. The basic helioseismic measurements that we consider are very robust to lossy data compression. Even if only the sign of the velocity is used, time-distance helioseismology is still...

  11. Velocity-vorticity correlation structures in compressible turbulent boundary layer

    Chen, Jun; Li, Shi-Yao; She, Zhen-Su


A velocity-vorticity correlation structure (VVCS) analysis is applied to data from three-dimensional (3-D) direct numerical simulations (DNS) to investigate the quantitative properties of the most correlated vortex structures in a compressible turbulent boundary layer (CTBL) at Mach numbers Ma = 2.25 and 6.0. It is found that the geometry variation of the VVCS closely reflects the streamwise development of the CTBL. In the laminar region, the VVCS captures the instability wave number of the boundary layer. The transition region displays a distinct scaling change in the dimensions of the VVCS. The developed turbulence region is characterized by a constant spatial extension of the VVCS. For various Mach numbers, the maximum correlation coefficient of the VVCS presents a clear multi-layer structure with the same scaling laws as a recent symmetry analysis proposed for quantifying the sublayer, the log-layer, and the wake flow. A surprising discovery is that the wall friction coefficient, Cf, holds a "-1"-power law in the wall-normal distance of the VVCS, ys. This validates the speculation that the wall friction is determined by the near-wall coherent structure, which clarifies the correlation between statistical structures and the near-wall dynamics. Projects 11452002 and 11172006 supported by the National Natural Science Foundation of China.

  12. An Interval Maximum Entropy Method for Quadratic Programming Problem

    RUI Wen-juan; CAO De-xin; SONG Xie-wu


    With the idea of maximum entropy function and penalty function methods, we transform the quadratic programming problem into an unconstrained differentiable optimization problem, discuss the interval extension of the maximum entropy function, provide the region deletion test rules and design an interval maximum entropy algorithm for quadratic programming problem. The convergence of the method is proved and numerical results are presented. Both theoretical and numerical results show that the method is reliable and efficient.
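    The paper's interval algorithm is not reproduced here, but the maximum entropy function it builds on is easy to illustrate: the nonsmooth max of several constraint functions is replaced by a differentiable upper bound with a controllable gap, so standard unconstrained optimizers apply. A minimal sketch (the parameter p and the sample values are assumptions):

```python
import math

def max_entropy_fn(values, p=100.0):
    """Smooth approximation to max(values):
    F_p = (1/p) * ln(sum_i exp(p * g_i)),
    with max(g) <= F_p <= max(g) + ln(m)/p for m values."""
    m = max(values)  # subtract the max before exponentiating, for stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

g = [-1.0, 0.5, 0.3]            # hypothetical constraint values g_i(x)
approx = max_entropy_fn(g, p=100.0)
```

    Increasing p tightens the bound (the gap is at most ln(m)/p), which is why such penalty formulations converge to the true constrained optimum as p grows.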

  13. Chronic nerve root entrapment: compression and degeneration

    Vanhoestenberghe, A.


    Electrode mounts are being developed to improve electrical stimulation and recording. Some are tight-fitting, or even re-shape the nervous structure they interact with, for a more selective, fascicular, access. If these are to be successfully used chronically with human nerve roots, we need to know more about the possible damage caused by the long-term entrapment and possible compression of the roots following electrode implantation. As there are, to date, no such data published, this paper presents a review of the relevant literature on alternative causes of nerve root compression, and a discussion of the degeneration mechanisms observed. A chronic compression below 40 mmHg would not compromise the functionality of the root as far as electrical stimulation and recording applications are concerned. Additionally, any temporary increase in pressure, due for example to post-operative swelling, should be limited to 20 mmHg below the patient’s mean arterial pressure, with a maximum of 100 mmHg. Connective tissue growth may cause a slower, but sustained, pressure increase. Therefore, mounts large enough to accommodate the root initially without compressing it, or compliant, elastic, mounts, that may stretch to free a larger cross-sectional area in the weeks after implantation, are recommended.

  14. Negative linear compressibility in common materials

Miller, W.; Evans, K. E.; Marmier, A. (College of Engineering Mathematics and Physical Science, University of Exeter, Exeter EX4 4QF, United Kingdom)


Negative linear compressibility (NLC) is still considered an exotic property, only observed in a few obscure crystals. The vast majority of materials compress axially in all directions when loaded in hydrostatic compression. However, a few materials have been observed which expand in one or two directions under hydrostatic compression. At present, the list of materials demonstrating this unusual behaviour is confined to a small number of relatively rare crystal phases, biological materials, and designed structures, and the lack of widespread availability hinders promising technological applications. Using improved representations of elastic properties, this study revisits existing databases of elastic constants and identifies several crystals missed by previous reviews. More importantly, several common materials (drawn polymers, certain types of paper and wood, and carbon fibre laminates) are found to display NLC. We show that NLC in these materials originates from the misalignment of polymers/fibres. Using a beam model, we propose that maximum NLC is obtained for a misalignment of 26°. The existence of such widely available materials significantly increases the prospects for applications of NLC.

  15. Sandia computerized shock compression bibliographical database

    Wilbeck, J.S.; Anderson, C.E.; Hokanson, J.C.; Asay, J.R.; Grady, D.E.; Graham, R.A.; Kipp, M.E.


A searchable and updateable bibliographical database is being developed which will be designed, controlled, and evaluated by working technical experts in the field of shock-compression science. It will emphasize shock-compression properties in the stress region of a few tens of GPa and provide a broad and complete base of bibliographical information on the shock-compression behavior of materials. Through the participation of technical advisors, the database provides authoritative bibliographical and keyword data for use by both the inexperienced and the expert user. In its current form, it consists of: (1) a library of journal articles, reports, books, and symposia papers in the areas of shock physics and shock mechanics; and (2) a computerized database system containing complete bibliographical information, exhaustive keyword descriptions, and author abstracts for each of the documents in the database library.

  16. Finding maximum JPEG image block code size

    Lakhani, Gopal


We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC-coefficient-level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.

  17. Fingerprints in Compressed Strings

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li


    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries...
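    The Karp-Rabin fingerprint and the composition property that grammar-compressed string structures typically exploit can be sketched as follows. The modulus and base below are illustrative choices (in practice the base is drawn uniformly at random), not the paper's parameters.

```python
# Karp-Rabin fingerprint: phi(S) = sum_i S[i] * x^i  (mod p).
# Key property: the fingerprint of a concatenation is composable from the
# fingerprints of its parts, without access to the underlying strings.
P = (1 << 61) - 1   # a Mersenne prime modulus
X = 123456789       # illustrative; normally random in [1, P-1]

def fingerprint(s: bytes) -> int:
    h = 0
    for c in reversed(s):   # Horner's rule from the high-order end
        h = (h * X + c) % P
    return h

def concat(f_left: int, f_right: int, len_left: int) -> int:
    """phi(L . R) = phi(L) + x^|L| * phi(R)  (mod p)."""
    return (f_left + pow(X, len_left, P) * f_right) % P

a, b = b"gram", b"mar"
assert concat(fingerprint(a), fingerprint(b), len(a)) == fingerprint(a + b)
```

    For a grammar of size n, storing the fingerprint and length of each rule's expansion lets longer fingerprints be assembled rule by rule, which is the flavor of query such a data structure answers without decompressing S.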

  18. Multiple snapshot compressive beamforming

    Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.


    For sound fields observed on an array, compressive sensing (CS) reconstructs the multiple source signals at unknown directions-of-arrival (DOAs) using a sparsity constraint. The DOA estimation is posed as an underdetermined problem expressing the field at each sensor as a phase-lagged superposition...

  19. Compressive CFAR radar detection

    Anitori, L.; Otten, M.P.G.; Rossum, W.L. van; Maleki, A.; Baraniuk, R.


In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate Message Passing...

  20. Compressive CFAR Radar Processing

    Anitori, L.; Rossum, W.L. van; Otten, M.P.G.; Maleki, A.; Baraniuk, R.


In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Message Passing...

  1. Beamforming Using Compressive Sensing


dB to align the peak at 7.3°. Comparing peaks to valleys, compressive sensing provides a greater main-to-interference (and noise) ratio...elements. Acknowledgments: This research was supported by the Office of Naval Research. The authors would especially like to thank Roger Gauss and Joseph

  2. Magnetic Compression Experiment at General Fusion

    Dunlea, Carl; Howard, Stephen; Epp, Kelly; Zawalski, Wade; Kim, Charlson; Fusion Team, General


    The magnetic compression experiment at General Fusion was designed as a repetitive non-destructive test to study plasma physics applicable to Magnetic Target Fusion compression. A spheromak compact torus (CT) is formed with a co-axial gun into a containment region with an hour-glass shaped inner flux conserver, and an insulating outer wall. The experiment has external coils to keep the CT off the outer wall (levitation) and then rapidly compress it inwards. Experiments used a variety of levitation/compression field profiles. The optimal configuration was seen to improve levitated CT lifetime by around 50% over that with the original design field. Suppression of impurity influx to the plasma is thought to be a significant factor in the improvement, as supported by spectrometer data. Improved levitation field may reduce the amount of edge plasma and current that intersects the insulating outer wall during the formation process. Higher formation current and stuffing field, and correspondingly higher CT flux, was possible with the improved configuration. Significant field and density compression factors were routinely observed. The level of MHD activity was reduced, and lifetime was increased further by matching the decay rate of the levitation field to that of the CT fields. Details of experimental results and comparisons to equilibrium models and MHD simulations will be presented.

  3. The Maximum Density of Water.

    Greenslade, Thomas B., Jr.


    Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)

  4. Abolishing the maximum tension principle

    Dabrowski, Mariusz P


We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.

  5. Abolishing the maximum tension principle

    Mariusz P. Da̧browski


Full Text Available We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G, represented by the entropic force, can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.


    Jayroe, R. R.


    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available

  7. Compressive Deformation Induced Nanocrystallization of a Supercooled Zr-Based Bulk Metallic Glass

    GUO Xiao-Lin; SHAN De-Bin; MA Ming-Zhen; GUO Bin


The nanocrystallization behaviour of a bulk Zr-based metallic glass subjected to compressive stress is investigated in the supercooled liquid region. Compared with annealing treatments without compressive stress, compressive deformation promotes nucleation and suppresses the coarsening of nanocrystallites at high temperatures.

  8. Comparative compressibility of hydrous wadsleyite

    Chang, Y.; Jacobsen, S. D.; Thomas, S.; Bina, C. R.; Smyth, J. R.; Frost, D. J.; Hauri, E. H.; Meng, Y.; Dera, P. K.


Determining the effects of hydration on the density and elastic properties of wadsleyite, β-Mg2SiO4, is critical to constraining Earth's global geochemical water cycle. Whereas previous studies of the bulk modulus (KT) have examined either hydrous Mg-wadsleyite or anhydrous Fe-bearing wadsleyite, the combined effects of hydration and iron remain under investigation. Also, whereas KT from compressibility studies is relatively well constrained by equation-of-state fitting to P-V data, the pressure derivative of the bulk modulus (K') is usually not well constrained, because of poor data resolution, uncertainty in pressure calibrations, or the narrow pressure ranges of previous single-crystal studies. Here we report the comparative compressibility of dry versus hydrous wadsleyite of Fo90 composition containing 1.9(2) wt% H2O, nearly the maximum water storage capacity of this phase. The composition was characterized by EMPA and nanoSIMS. The experiments were carried out using high-pressure, single-crystal diffraction up to 30 GPa at HPCAT, Advanced Photon Source. By loading three crystals each of hydrous and anhydrous wadsleyite together in the same diamond-anvil cell, we achieve good hkl coverage and eliminate the pressure scale as a variable in comparing the relative value of K' between the dry and hydrous samples. We used MgO as an internal diffraction standard, in addition to recording ruby fluorescence pressures. By using neon as a pressure medium and approximately 1 GPa pressure steps up to 30 GPa, we obtain high-quality diffraction data for constraining the effect of hydration on the density and K' of hydrous wadsleyite. Due to hydration, the initial volume of hydrous Fo90 wadsleyite is larger than that of anhydrous Fo90 wadsleyite; however, the higher compressibility of hydrous wadsleyite leads to a volume crossover at 6 GPa. Hydration to 2 wt% H2O reduces the bulk modulus of Fo90 wadsleyite from 170(2) to 157(2) GPa, a reduction of about 7.6%. In contrast to previous

  9. Mortar constituent of concrete under cyclic compression

    Maher, A.; Darwin, D.


The behavior of the mortar constituent of concrete under cyclic compression was studied, and a simple analytic model was developed to represent its cyclic behavior. Experimental work consisted of monotonic and cyclic compressive loading of mortar. Two mixes were used, with proportions corresponding to concretes having water-cement ratios of 0.5 and 0.6. Forty-four groups of specimens were tested at ages ranging from 5 to 70 days. Complete monotonic and cyclic stress-strain envelopes were obtained. A number of loading regimes were investigated, including cycles to a constant maximum strain. Major emphasis was placed on tests using relatively high stress cycles. Degradation was shown to be a continuous process and a function of both total strain and load history. No stability or fatigue limit was apparent.

  10. Randomness Testing of Compressed Data

    Chang, Weiling; Yun, Xiaochun; Wang, Shupeng; Yu, Xiangzhan


Random number generators play a critical role in a number of important applications. In practice, statistical testing is employed to gather evidence that a generator indeed produces numbers that appear to be random. In this paper, we report on studies conducted on data compressed by eight compression algorithms. The test results suggest that the output of compression algorithms has poor randomness, and that compression algorithms are not suitable as random number generators. We also found that, for the same compression algorithm, there is a positive correlation between compression ratio and randomness: increasing the compression ratio increases the randomness of the compressed data. As time permits, additional randomness testing efforts will be conducted.
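    The kind of testing described can be illustrated by compressing a sample with zlib and applying the NIST SP 800-22 frequency (monobit) test; the paper's full battery of tests and eight specific compressors are not reproduced here, and the sample text is a made-up stand-in.

```python
import math
import zlib

def monobit_p_value(data: bytes) -> float:
    """NIST SP 800-22 frequency (monobit) test: in random data the
    proportion of 1-bits should be close to 1/2. Returns the p-value;
    very small values reject the randomness hypothesis."""
    bits = ''.join(f'{b:08b}' for b in data)
    n = len(bits)
    s = abs(bits.count('1') * 2 - n)        # |#ones - #zeros|
    return math.erfc(s / math.sqrt(2 * n))

text = b"the quick brown fox jumps over the lazy dog " * 200
compressed = zlib.compress(text, level=9)
p = monobit_p_value(compressed)
# DEFLATE output retains structure (headers, Huffman tables), so on many
# inputs such tests reject randomness, consistent with the paper's finding.
```

    A proper assessment would run the full NIST suite over many independent samples; a single monobit p-value is only the first and weakest of those tests.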

  11. TEM Video Compressive Sensing

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.


    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also recently been applied to electron tomography [6], and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental

  12. Image quality, compression and segmentation in medicine.

    Morgan, Pam; Frankish, Clive


    This review considers image quality in the context of the evolving technology of image compression, and the effects image compression has on perceived quality. The concepts of lossless, perceptually lossless, and diagnostically lossless but lossy compression are described, as well as the possibility of segmented images, combining lossy compression with perceptually lossless regions of interest. The different requirements for diagnostic and training images are also discussed. The lack of established methods for image quality evaluation is highlighted and available methods discussed in the light of the information that may be inferred from them. Confounding variables are also identified. Areas requiring further research are illustrated, including differences in perceptual quality requirements for different image modalities, image regions, diagnostic subtleties, and tasks. It is argued that existing tools for measuring image quality need to be refined and new methods developed. The ultimate aim should be the development of standards for image quality evaluation which take into consideration both the task requirements of the images and the acceptability of the images to the users.

  13. Maximum Work of Free-Piston Stirling Engine Generators

    Kojima, Shinji


    Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

  14. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;


    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  15. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.


    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  16. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.


    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  17. Reinterpreting Compression in Infinitary Rewriting

    Ketema, J.; Tiwari, Ashish


    Departing from a computational interpretation of compression in infinitary rewriting, we view compression as a degenerate case of standardisation. The change in perspective comes about via two observations: (a) no compression property can be recovered for non-left-linear systems and (b) some standar

  18. Lossless Compression of Broadcast Video

    Martins, Bo; Eriksen, N.; Faber, E.


    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...

  19. Maximum Genus of Strong Embeddings

    Er-ling Wei; Yan-pei Liu; Han Ren


    The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse, however, is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

  20. Modelling of pressure-strain correlation in compressible turbulent flow

    Siyuan Huang; Song Fu


    Previous studies carried out in the early 1990s conjectured that the main compressible effects could be associated with the dilatational effects of velocity fluctuation. Later, it was shown that the main compressibility effect came from the reduced pressure-strain term due to reduced pressure fluctuations. Although better understanding of compressible turbulence is generally achieved with the increased DNS and experimental research effort, there are still some discrepancies among these recent findings. Analysis of the DNS and experimental data suggests that some of the discrepancies are apparent if the compressible effect is related to the turbulent Mach number, Mt. From the comparison of two classes of compressible flow, homogeneous shear flow and inhomogeneous shear flow (mixing layer), we found that the effect of compressibility on both classes of shear flow can be characterized in three categories corresponding to three regions of turbulent Mach number: the low-Mt, the moderate-Mt, and the high-Mt regions. In these three regions the effect of compressibility on the growth rate of the turbulent mixing layer thickness is rather different. A simple approach to the reduced pressure-strain effect may not necessarily reduce the mixing-layer growth rate, and may even cause an increase in the growth rate. The present work develops a new second-moment model for compressible turbulence through the introduction of blending functions of Mt to account for the compressibility effects on the flow. The model has been successfully applied to compressible mixing layers.

  1. Oncologic image compression using both wavelet and masking techniques.

    Yin, F F; Gao, Q


    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown.

  2. D(Maximum)=P(Argmaximum)

    Remizov, Ivan D


    In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
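    In the setting the abstract describes (a metric compact $K$ and the functional $M(f) = \max_{x \in K} f(x)$ on $C(K)$), the identity in the title can be written as:

```latex
\partial M(f) \;=\; \Bigl\{\, \mu \in \mathcal{P}(K) \;:\; \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x \in K} f(x) \,\Bigr\}
```

    where $\mathcal{P}(K)$ denotes the Borel probability measures on $K$; the notation $\partial M$ and $\mathcal{P}(K)$ here is a conventional rendering, not the note's own.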

  3. The Testability of Maximum Magnitude

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.


    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.

  4. Alternative Multiview Maximum Entropy Discrimination.

    Chao, Guoqing; Sun, Shiliang


    Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, a multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; in the second step, we impose the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.

  5. Algorithm for Compressing Time-Series Data

    Hawkins, S. Edward, III; Darlington, Edward Hugo


    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
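    The block-wise Chebyshev fitting described above can be sketched with NumPy's `numpy.polynomial.chebyshev` module. This is a minimal illustration of the idea, not the flight algorithm: the block length, polynomial degree, and any quantization of the stored coefficients are choices the real system would make differently.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_block(block, degree):
    """Fit a Chebyshev series of the given degree to one block of samples.

    Only the (degree + 1) coefficients are kept instead of len(block) samples.
    """
    x = np.linspace(-1.0, 1.0, len(block))  # map the fitting interval to [-1, 1]
    return C.chebfit(x, block, degree)

def decompress_block(coeffs, n_samples):
    """Evaluate the stored Chebyshev series to approximate the original block."""
    x = np.linspace(-1.0, 1.0, n_samples)
    return C.chebval(x, coeffs)

# Smooth synthetic "instrument" signal: one block of 256 samples.
t = np.linspace(0.0, 1.0, 256)
signal = np.sin(2 * np.pi * t) + 0.1 * t

coeffs = compress_block(signal, degree=15)    # 16 numbers instead of 256
recon = decompress_block(coeffs, len(signal))

max_err = np.max(np.abs(recon - signal))      # near-uniform, min-max-like error
```

    Here 256 samples are replaced by 16 coefficients, a 16x reduction; because the signal is smooth over the fitting interval, the maximum deviation stays negligible, reflecting the equal-error and min-max properties the abstract cites.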

  6. Building indifferentiable compression functions from the PGV compression functions

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde


    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black...... cipher is ideal. We address the problem of building indifferentiable compression functions from the PGV compression functions. We consider a general form of 64 PGV compression functions and replace the linear feed-forward operation in this generic PGV compression function with an ideal block cipher...... independent of the one used in the generic PGV construction. This modified construction is called a generic modified PGV (MPGV). We analyse indifferentiability of the generic MPGV construction in the ideal cipher model and show that 12 out of 64 MPGV compression functions in this framework...

  7. Compressive Principal Component Pursuit

    Wright, John; Min, Kerui; Ma, Yi


    We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation invariant low-rank recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.

  8. Hamming Compressed Sensing

    Zhou, Tianyi


    Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce \textit{Hamming compressed sensing} (HCS), which directly recovers a k-bit quantized signal of dimension $n$ from its 1-bit measurements via invoking $n$ times a Kullback-Leibler divergence based nearest neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time and requires substantially fewer measurements ($\mathcal O(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and a "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.

  9. Compressive Spectral Renormalization Method

    Bayindir, Cihan


    In this paper a novel numerical scheme for finding the sparse self-localized states of a nonlinear system of equations with missing spectral data is introduced. As in Petviashvili's method and the spectral renormalization method, the governing equation is transformed into the Fourier domain, but the iterations are performed for a far smaller number of spectral components (M) than the classical versions of these methods use (N). After the convergence criterion is achieved for the M components, the N-component signal is reconstructed from the M components by using the l1 minimization technique of compressive sampling. This method can be named the compressive spectral renormalization (CSRM) method. The main advantage of the CSRM is that it is capable of finding the sparse self-localized states of the evolution equation(s) with many spectral data missing.

  10. Speech Compression and Synthesis


    phonological rules combined with diphone improved the algorithms used by the phonetic synthesis program for gain normalization and time... phonetic vocoder, spectral template. This report describes our work for the past two years on speech compression and synthesis. Since there was an...from Block 19: speech recognition, phoneme recognition. initial design for a phonetic recognition program. We also recorded and partially labeled a

  11. Compressed sensing electron tomography

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A. (Department of Materials Science and Metallurgy, University of Cambridge); Holland, Daniel J. (Department of Chemical Engineering and Biotechnology, University of Cambridge)


    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform.

  12. Ultraspectral sounder data compression review

    Bormin HUANG; Hunglung HUANG


    Ultraspectral sounders provide an enormous amount of measurements to advance our knowledge of weather and climate applications. The use of robust data compression techniques will be beneficial for ultraspectral data transfer and archiving. This paper reviews the progress in lossless compression of ultraspectral sounder data. Various transform-based, prediction-based, and clustering-based compression methods are covered. Also studied is a preprocessing scheme for data reordering to improve compression gains. All the coding experiments are performed on the ultraspectral compression benchmark dataset collected from the NASA Atmospheric Infrared Sounder (AIRS) observations.

  13. Engineering Relative Compression of Genomes

    Grabowski, Szymon


    Technology progress in DNA sequencing boosts genomic database growth at an ever faster rate. Compression, accompanied by random access capabilities, is the key to maintaining those huge amounts of data. In this paper we present an LZ77-style compression scheme for relative compression of multiple genomes of the same species. While the solution bears similarity to known algorithms, it offers significantly higher compression ratios at a compression speed over an order of magnitude greater. One of the new successful ideas is augmenting the reference sequence with phrases from the other sequences, making more LZ-matches available.

  14. Determination of Optimum Compression Ratio: A Tribological Aspect

    L. Yüksek


    Internal combustion engines are the primary energy conversion machines in both industry and transportation. Modern technologies are being implemented in engines to fulfill today's low fuel consumption demand. Friction energy consumed by the rubbing parts of the engines is becoming an important parameter for higher fuel efficiency. The rate of friction loss is primarily affected by sliding speed and the load acting upon rubbing surfaces. Compression ratio is the main parameter that increases the peak cylinder pressure and hence the normal load on components. The aim of this study is to investigate the effect of compression ratio on the total friction loss of a diesel engine. A variable compression ratio diesel engine was operated at four different compression ratios: 12.96, 15.59, 18.03, and 20.17. Brake power and speed were kept constant at predefined values while measuring the in-cylinder pressure. Friction mean effective pressure (FMEP) data were obtained from the in-cylinder pressure curves for each compression ratio. The ratio of friction power to indicated power of the engine increased from 22.83% to 37.06% as the compression ratio varied from 12.96 to 20.17. Considering the thermal efficiency, FMEP, and maximum in-cylinder pressure, the optimum compression ratio interval of the test engine was determined as 18.8–19.6.

  15. Estimativa das temperaturas máximas e mínimas do ar para a região do Circuito das Frutas, SP Estimation of maximum and minimum air temperatures for the "Circuito das Frutas" region (São Paulo State, Brazil

    Ludmila Bardin


    Air temperature estimation models based on geographic factors were developed to estimate mean monthly and annual maximum and minimum temperatures for the municipalities of the "Pólo Turístico do Circuito das Frutas" region of São Paulo State. Multiple regression equations were obtained as a function of altitude, latitude, and longitude, and simple regressions as a function of altitude alone, with coefficients of determination between 0.91 and 0.96 for the maximum temperatures and between 0.71 and 0.94 for the minimum temperatures. Maps showing the spatial variability of the mean monthly and annual maximum and minimum temperatures of the study region are also presented.


    Xu Xiaorong; Zhang Jianwu; Huang Aiping; Jiang Bin


    An Adaptive Measurement Scheme (AMS) is investigated with Compressed Sensing (CS) theory in a Cognitive Wireless Sensor Network (C-WSN). Local sensing information is collected via energy detection with an Analog-to-Information Converter (AIC) at massive cognitive sensors, and sparse representation is considered by exploring the spatial-temporal correlation structure of the detected signals. An adaptive measurement matrix is designed in AMS based on maximum energy subset selection. The energy subset is calculated with a sparse transformation of the sensing information, and the maximum energy subset is selected as the row vector of the adaptive measurement matrix. In addition, the measurement matrix is constructed by orthogonalization of those selected row vectors, which also satisfies the Restricted Isometry Property (RIP) in CS theory. An Orthogonal Matching Pursuit (OMP) reconstruction algorithm is implemented at the sink node to recover the original information. Simulation results are compared with a Random Measurement Scheme (RMS). It is revealed that signal reconstruction based on AMS is superior to conventional RMS Gaussian measurement. Moreover, AMS has better detection performance than RMS in the lower compression rate region, and it is suitable for large-scale C-WSN wideband spectrum sensing.
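    The OMP reconstruction step at the sink node can be sketched as follows. This is a generic OMP recovery demo using a random Gaussian measurement matrix (i.e., the RMS baseline), not the paper's adaptive maximum-energy-subset matrix; the sizes n, m, k and the chosen support are illustrative.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a `sparsity`-sparse x
    from measurements y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(sparsity):
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0          # do not re-pick chosen atoms
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                          # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m) # random Gaussian measurement matrix
x = np.zeros(n)
x[[5, 20, 41]] = [1.5, -2.0, 1.0]            # k-sparse ground truth
y = A @ x

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x)
```

    With m well above the sparsity level, OMP recovers the support and coefficients essentially exactly; the AMS idea in the abstract replaces the random rows of A with orthogonalized maximum-energy rows.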

  17. Prediction of 28-day Compressive Strength of Concrete from Early Strength and Accelerated Curing Parameters

    T.R. Neelakantan; S. Ramasundaram; Shanmugavel, R.; R. Vinoth


    Predicting the 28-day compressive strength of concrete has been an important research task for many years. In this study, concrete specimens were cured in two phases: initially at room temperature for a maximum of 30 h, and later at a higher temperature for accelerated curing for a maximum of 3 h. Using the early strength obtained after the two-phase curing and the curing parameters, regression equations were developed to predict the 28-day compressive strength. For the accelerated curing (higher temper...

  18. Cacti with maximum Kirchhoff index

    Wang, Wen-Rui; Pan, Xiang-Feng


    The concept of resistance distance was first proposed by Klein and Randi\'c. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
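    The Kirchhoff index itself is straightforward to compute via the standard identity $Kf(G) = n \sum_i 1/\mu_i$ over the nonzero Laplacian eigenvalues $\mu_i$ of a connected graph. A small sketch of that computation (the cactus-specific extremal characterization is the paper's contribution and is not reproduced here):

```python
import numpy as np

def kirchhoff_index(adj):
    """Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu
    of a connected graph G (standard spectral identity)."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    laplacian = np.diag(adj.sum(axis=1)) - adj
    mu = np.linalg.eigvalsh(laplacian)
    nonzero = mu[mu > 1e-9]          # drop the single (numerically) zero eigenvalue
    return n * np.sum(1.0 / nonzero)

# 4-cycle C4: known closed form Kf(C_n) = (n^3 - n)/12, so Kf(C4) = 5.
c4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(round(kirchhoff_index(c4), 6))  # → 5.0
```

    For a tree, the Kirchhoff index coincides with the Wiener index, which gives another easy sanity check.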

  19. Generic maximum likely scale selection

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo


    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....

  20. Revision of regional maximum flood (RMF) estimation in Namibia


    Nov 26, 2013 ... This paper revisits the Kovacs RMF flood model applicable to Namibia, and incorporates 30 ..... The Namibia Meteorological Office in Windhoek made avail- ... recorded flood peaks which were used to calculate K-values.

  1. Economics and Maximum Entropy Production

    Lorenz, R. D.


    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  2. Biomechanics of turtle shells: how whole shells fail in compression.

    Magwene, Paul M; Socha, John J


    Turtle shells are a form of armor that provides varying degrees of protection against predation. Although this function of the shell as armor is widely appreciated, the mechanical limits of protection and the modes of failure when subjected to breaking stresses have not been well explored. We studied the mechanical properties of whole shells and of isolated bony tissues and sutures in four species of turtles (Trachemys scripta, Malaclemys terrapin, Chrysemys picta, and Terrapene carolina) using a combination of structural and mechanical tests. Structural properties were evaluated by subjecting whole shells to compressive and point loads in order to quantify maximum load, work to failure, and relative shell deformations. The mechanical properties of bone and sutures from the plastral region of the shell were evaluated using three-point bending experiments. Analysis of whole shell structural properties suggests that small shells undergo relatively greater deformations before failure than do large shells and similar amounts of energy are required to induce failure under both point and compressive loads. Location of failures occurred far more often at sulci than at sutures (representing the margins of the epidermal scutes and the underlying bones, respectively), suggesting that the small grooves in the bone created by the sulci introduce zones of weakness in the shell. Values for bending strength, ultimate bending strain, Young's modulus, and energy absorption, calculated from the three-point bending data, indicate that sutures are relatively weaker than the surrounding bone, but are able to absorb similar amounts of energy due to higher ultimate strain values. Copyright © 2012 Wiley Periodicals, Inc.
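
    The three-point-bending analysis described above rests on the standard beam formulas for bending strength and flexural (Young's) modulus. As a sketch, they can be applied to a rectangular specimen; the dimensions and loads below are hypothetical, not data from the study:

```python
def bending_strength(F, L, b, d):
    """Flexural strength sigma = 3*F*L / (2*b*d^2) for a rectangular beam
    of span L, width b, thickness d, failing at load F."""
    return 3.0 * F * L / (2.0 * b * d**2)

def flexural_modulus(F, delta, L, b, d):
    """Young's modulus from the linear slope F/delta:
    E = L^3 * (F/delta) / (4*b*d^3)."""
    return L**3 * (F / delta) / (4.0 * b * d**3)

# Hypothetical plastron-bone specimen: 20 mm span, 5 mm wide, 2 mm thick,
# failing at 40 N with 0.5 mm mid-span deflection.
L_span, b, d = 0.020, 0.005, 0.002   # metres
F, delta = 40.0, 0.0005              # newtons, metres

sigma = bending_strength(F, L_span, b, d)     # Pa
E = flexural_modulus(F, delta, L_span, b, d)  # Pa
print(f"bending strength = {sigma / 1e6:.1f} MPa")
print(f"flexural modulus = {E / 1e9:.2f} GPa")
```

    Energy absorption would additionally require integrating the load-deflection curve, which a single (F, delta) pair cannot provide.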

  3. The compression of liquids

    Whalley, E.

    The compression of liquids can be measured either directly, by applying a pressure and noting the volume change, or indirectly, by measuring the magnitude of the fluctuations of the local volume. The methods used in Ottawa for the direct measurement of the compression are reviewed. The mean-square deviation of the volume from the mean at constant temperature can be measured by X-ray and neutron scattering at low angles, and the mean-square deviation at constant entropy can be measured by measuring the speed of sound. The speed of sound can be measured either acoustically, using an acoustic transducer, or by Brillouin spectroscopy. Brillouin spectroscopy can also be used to study the shear waves in liquids if the shear relaxation time is > ∼ 10 ps. The relaxation time of water is too short for the shear waves to be studied in this way, but they do occur in the low-frequency Raman and infrared spectra. The response of the structure of liquids to pressure can be studied by neutron scattering, and recently experiments have been done at Atomic Energy of Canada Ltd, Chalk River, on liquid D2O up to 15.6 kbar. They show that the near-neighbor intermolecular O-D and D-D distances are less spread out and at shorter distances at high pressure. Raman spectroscopy can also provide information on the structural response. It seems that the O-O distance in water decreases much less with pressure than it does in ice. Presumably, the bending of O-O-O angles tends to increase the O-O distance, largely compensating the compression due to the direct effect of pressure.
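
    The link between the speed of sound and the constant-entropy compressibility mentioned above is the Newton-Laplace relation kappa_S = 1/(rho*c^2). A minimal sketch; the density and sound speed below are textbook values for water near 20 °C, not measurements from this work:

```python
# Adiabatic compressibility from the measured speed of sound
# (e.g. obtained acoustically or by Brillouin spectroscopy).
rho = 998.2   # density of water at ~20 C, kg/m^3
c = 1482.0    # speed of sound in water at ~20 C, m/s

kappa_S = 1.0 / (rho * c**2)  # Pa^-1
print(f"kappa_S = {kappa_S:.3e} Pa^-1")  # ~4.6e-10 Pa^-1
```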

  4. Compressive Transient Imaging

    Sun, Qilin


    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capturing process of only a few seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble the whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns in order to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  5. Compressive Tectonics around Tibetan Plateau Edges

    Zhao Zhixin; Xu Jiren


    Various earthquake fault types, mechanism solutions, stress fields, and other geophysical data were analyzed to study crustal movement in the Tibetan plateau and its tectonic implications. The results show that numerous thrust-fault and strike-slip-fault earthquakes with strong compressive stress near the NNE-SSW direction occurred at the edges of the plateau, except along its eastern boundary. Some normal-faulting earthquakes concentrate in the central Tibetan plateau. The strikes of the fault planes of thrust and strike-slip earthquakes are almost E-W, based on analyses of Wulff stereonet diagrams of fault plane solutions. This implies that the dislocation slip vectors of the thrust and strike-slip events have large components in the N-S direction. Compressive motion probably dominates the active tectonic regime around the plateau edges. Compressive stress in the N-S or NE-SW directions governs earthquake occurrence in the thrust and strike-slip faulting regions around the plateau. The compressive motion around the Tibetan plateau edge is attributable to the northward motion of the Indian subcontinent plate. The northward-moving Tibetan plateau, shortened in the N-S direction, probably encounters strong obstructions at the western and northern margins.

  6. Image Compression using Space Adaptive Lifting Scheme

    Ramu Satyabama


    Problem statement: Digital images play an important role both in daily-life applications and in areas of research and technology. Due to the increasing traffic caused by multimedia information and the digitized representation of images, image compression has become a necessity. Approach: The wavelet transform has demonstrated excellent image compression performance. New algorithms based on lifting-style implementations of wavelet transforms are presented in this study. Adaptivity is introduced in lifting by choosing the prediction operator based on the local properties of the image. The prediction filters are chosen based on edge detection and the relative local variance. In regions where the image is locally smooth, we use higher-order predictors, and near edges we reduce the order and thus the length of the predictor. Results: We applied the adaptive prediction algorithms to test images. The original image is transformed using the adaptive lifting-based wavelet transform and compressed using the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, and the performance is compared with the popular 9/7 wavelet transform. The performance metric Peak Signal to Noise Ratio (PSNR) for the reconstructed image is computed. Conclusion: The proposed adaptive algorithms give better performance than the 9/7 wavelet, the most popular wavelet transform. Lifting allows us to incorporate adaptivity and nonlinear operators into the transform. The proposed methods efficiently represent edges and appear promising for image compression. They reduce edge artifacts and ringing and give improved PSNR for edge-dominated images.
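
    A minimal sketch of adaptive prediction in a single lifting step. It simplifies the paper's edge-detection rule to a threshold on the local gradient of the even samples; because the decision depends only on the unmodified even samples, the inverse transform can reproduce it exactly and the step stays invertible:

```python
import numpy as np

def forward(x, thresh=10.0):
    """One adaptive lifting level: split into even/odd, predict odd
    samples from even neighbours only, and keep the residual as detail."""
    even = x[0::2].astype(float).copy()
    odd = x[1::2].astype(float).copy()
    detail = np.empty_like(odd)
    for i in range(len(odd)):
        e0 = even[i]
        e1 = even[i + 1] if i + 1 < len(even) else even[i]
        # Adaptive choice driven by the (unmodified) even samples:
        # near an "edge" use the low-order nearest-neighbour predictor,
        # in smooth regions use the longer linear predictor.
        pred = e0 if abs(e1 - e0) > thresh else 0.5 * (e0 + e1)
        detail[i] = odd[i] - pred
    return even, detail

def inverse(even, detail, thresh=10.0):
    """Undo the lifting step by recomputing the same predictor choice."""
    odd = np.empty_like(detail)
    for i in range(len(detail)):
        e0 = even[i]
        e1 = even[i + 1] if i + 1 < len(even) else even[i]
        pred = e0 if abs(e1 - e0) > thresh else 0.5 * (e0 + e1)
        odd[i] = detail[i] + pred
    x = np.empty(len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x
```

    A full codec would recurse on the even band and feed the quantised details to SPIHT; this sketch only shows why the adaptive predictor does not break perfect reconstruction.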

  7. Statistical Mechanical Analysis of Compressed Sensing Utilizing Correlated Compression Matrix

    Takeda, Koujin


    We investigate a reconstruction limit of compressed sensing for a reconstruction scheme based on the L1-norm minimization utilizing a correlated compression matrix with a statistical mechanics method. We focus on the compression matrix modeled as the Kronecker-type random matrix studied in research on multi-input multi-output wireless communication systems. We found that strong one-dimensional correlations between expansion bases of original information slightly degrade reconstruction performance.
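
    The L1-norm minimization underlying the analysis can be sketched with iterative soft thresholding (ISTA) applied to a Kronecker-structured sensing matrix. The sizes, sparsity level, and the ISTA solver itself are illustrative choices, not the paper's replica-method analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(z, t):
    """Soft-thresholding operator, the proximal map of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam=1e-3, iters=2000):
    """Iterative soft thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# Kronecker-type compression matrix: the row/column factors introduce
# correlations, loosely mimicking the MIMO-style model in the abstract.
n, m, k = 64, 32, 5
B = rng.standard_normal((4, 8))
C = rng.standard_normal((8, 8))
A = np.kron(B, C) / np.sqrt(m)                  # shape (32, 64)

x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                                      # compressed measurements
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```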

  8. Osmotic compressibility of soft colloidal systems.

    Tan, Beng H; Tam, Kam C; Lam, Yee C; Tan, Chee B


    A turbidimetric analysis of particle interaction of model pH-responsive microgel systems consisting of methacrylic acid-ethyl acrylate cross-linked with diallyl phthalate in colloidal suspensions is described. The structure factor at zero scattering angle, S(0), can be determined with good precision for wavelengths greater than 500 nm, and it measures the dispersion's resistance to particle compression. The structure factor of microgels at various cross-link densities and ionic strengths falls onto a master curve when plotted against the effective volume fraction, phi(eff) = kc, which clearly suggests that particle interaction potential and osmotic compressibility are functions of effective volume fraction. In addition, the deviation of the structure factor, S(0), of our microgel systems from the structure factor of hard spheres, S(PY)(0), exhibits a maximum at phi(eff) approximately 0.2. Beyond this point, the osmotic de-swelling force exceeds the osmotic pressure inside the soft particles, resulting in particle shrinkage. Good agreement was obtained when the structural properties of our microgel systems obtained from turbidimetric analysis and rheology measurements were compared. Therefore, a simple turbidimetric analysis of these model pH-responsive microgel systems permits a quantitative evaluation of factors governing particle osmotic compressibility.

  9. Compressive full waveform lidar

    Yang, Weiyi; Ke, Jun


    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make the measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in the low-illumination condition.

  10. Metal Hydride Compression

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)


    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation & dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
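
    The heat-driven compression step can be illustrated with the van't Hoff relation that governs the metal hydride equilibrium pressure. The enthalpy and entropy values below are generic figures for an AB5-type alloy, chosen for illustration; they are not parameters from this project:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def p_eq(dH_des, dS_des, T):
    """van't Hoff equation for the desorption plateau pressure (in bar):
    ln(P / 1 bar) = -dH/(R*T) + dS/R."""
    return math.exp(-dH_des / (R * T) + dS_des / R)

# Hypothetical AB5-type alloy: dH = 30 kJ/mol H2, dS = 130 J/(mol*K)
dH, dS = 30_000.0, 130.0
p_cold = p_eq(dH, dS, 293.0)  # absorb H2 at 20 C (low pressure side)
p_hot = p_eq(dH, dS, 423.0)   # desorb H2 at 150 C (high pressure side)
print(f"absorption pressure  ~ {p_cold:.0f} bar")
print(f"desorption pressure  ~ {p_hot:.0f} bar")
print(f"single-stage compression ratio ~ {p_hot / p_cold:.1f}")
```

    This is why multistage designs are needed to reach ≥ 875 bar with modest temperature swings: each stage contributes one such ratio, and the stages multiply.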

  11. Beamforming using compressive sensing.

    Edelmann, Geoffrey F; Gaumond, Charles F


    Compressive sensing (CS) is compared with conventional beamforming using horizontal beamforming of at-sea, towed-array data. They are compared qualitatively using bearing-time records and quantitatively using signal-to-interference ratio. Qualitatively, CS exhibits lower levels of background interference than conventional beamforming. Furthermore, bearing-time records show increasing, but tolerable, levels of background interference when the number of elements is decreased. For the full array, CS generates a signal-to-interference ratio of 12 dB, while conventional beamforming yields only 8 dB. The superiority of CS over conventional beamforming is much more pronounced with undersampling.

  12. Objects of maximum electromagnetic chirality

    Fernandez-Corbaton, Ivan


    We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.

  13. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan


    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
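
    The quantity being maximized above, the mutual information between responses and labels, can be estimated for discrete labels directly from the joint histogram. A small sketch with toy labels (the paper itself uses a differentiable entropy estimate inside its objective, which this plug-in estimator does not capture):

```python
import numpy as np

def mutual_information(y_true, y_pred):
    """Plug-in estimate of I(Y; Yhat) in nats from the joint histogram."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mi = 0.0
    for a in np.unique(y_true):
        for b in np.unique(y_pred):
            p_ab = np.mean((y_true == a) & (y_pred == b))
            if p_ab > 0:
                p_a = np.mean(y_true == a)
                p_b = np.mean(y_pred == b)
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

y = np.array([0, 0, 1, 1, 1, 0])
print(mutual_information(y, y))                  # perfect response: = H(Y)
print(mutual_information(y, 1 - y))              # relabelled: still maximal
print(mutual_information(y, np.zeros(6, int)))   # constant response: 0
```

    Note that a consistently flipped response is as informative as a perfect one; mutual information rewards reduced label uncertainty, not raw accuracy, which is why it is used as a regularizer alongside the classification error rather than instead of it.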

  14. Adaptive Super-Spatial Prediction Approach For Lossless Image Compression

    Arpita C. Raut,


    Existing prediction-based lossless image compression schemes predict image data from the spatial neighborhood, a technique that cannot predict high-frequency image structure components, such as edges, patterns, and textures, very well, which limits compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed. The super-spatial prediction approach adaptively compresses high-frequency structure components in grayscale images. The motivation behind the proposed prediction approach is taken from motion prediction in video coding, which attempts to find an optimal prediction of structure components within the previously encoded image regions. This prediction approach is efficient for image regions with significant structure components, with respect to compression ratio and bit rate, as compared to CALIC (context-based adaptive lossless image coding).

  15. The strong maximum principle revisited

    Pucci, Patrizia; Serrin, James

    In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.

  16. Conjugate ground and multisatellite observations of compression-related EMIC Pc1 waves and associated proton precipitation

    Usanova, M. E.; Mann, I. R.; Kale, Z. C.; Rae, I. J.; Sydora, R. D.; Sandanger, M.; Søraas, F.; Glassmeier, K.-H.; Fornacon, K.-H.; Matsui, H.; Puhl-Quinn, P. A.; Masson, A.; Vallières, X.


    We present coordinated ground-satellite observations of solar wind compression-related dayside electromagnetic ion cyclotron (EMIC) waves from 25 September 2005. On the ground, dayside structured EMIC wave activity was observed by the CARISMA and STEP magnetometer arrays for several hours during the period of maximum compression. The EMIC waves were also registered by the Cluster satellites for half an hour, as they consecutively crossed the conjugate equatorial plasmasphere on their perigee passes at L ˜ 5. Simultaneously, conjugate to Cluster, NOAA 17 passed through field lines supporting EMIC wave activity and registered a localized enhancement of precipitating protons with energies >30 keV. Our observations suggest that generation of the EMIC waves and consequent loss of energetic protons may last for several hours while the magnetosphere remains compressed. The EMIC waves were confined to the outer plasmasphere region, just inside the plasmapause. Analysis of lower-frequency Pc5 waves observed both by the Cluster electron drift instrument (EDI) and fluxgate magnetometer (FGM) instruments and by the ground magnetometers shows that the repetitive structure of EMIC wave packets observed on the ground cannot be explained by the ultra-low-frequency (ULF) wave modulation theory. However, the EMIC wave repetition period on the ground was close to the estimated field-aligned Alfvénic travel time. For a short interval of time, there was some evidence that the EMIC wave packet repetition period in the source region was half of that on the ground, which further suggests bidirectional propagation of wave packets.

  17. Clinical study on the use of negative pressure drainage combined with regional elastic compression dressing in preventing postoperative salivary fistula after parotidectomy

    董希银; 张文忠; 朱学芬; 杨雯君


    Objective: To study the value of negative pressure drainage combined with regional elastic compression dressing in parotid gland surgery, and to explore the factors related to salivary fistula after parotidectomy. Methods: 200 patients with benign parotid tumors requiring surgical treatment were randomly divided into 2 groups: 100 cases were treated with negative pressure drainage combined with regional elastic compression dressing of the parotid region, and 100 cases were treated with the traditional pressure bandage. The incidence of postoperative salivary fistula, and the possible intra- and postoperative factors leading to salivary fistula, were analyzed. Statistical analysis was performed with the SPSS 16.0 software package. Results: The incidence of salivary fistula in the group using negative pressure drainage combined with regional elastic compression dressing (2%) was significantly lower than that in the group using the traditional bandage (12%), and salivary fistula could even be prevented in the former group. The difference between the two groups was significant (P<0.05). Conclusion: Negative pressure drainage combined with regional elastic compression dressing is clearly better than the traditional bandage in preventing salivary fistula after parotidectomy. Requiring less dressing time and giving a better appearance, the method is also more comfortable for patients: it does not affect eating, speaking or hearing, and causes no compressive pain in the head and face. Negative pressure drainage combined with regional elastic compression dressing therefore has good clinical application value and is worthy of popularization.

  18. Compressive sensing in medical imaging.

    Graff, Christian G; Sidky, Emil Y


    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  19. Speech Compression Using Multecirculerletet Transform

    Sulaiman Murtadha


    Compressing speech reduces data storage requirements, which shortens the time needed to transmit digitized speech over long-haul links like the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithm methods introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performances in terms of compression ratio (CR), mean square error (MSE) and peak signal to noise ratio (PSNR) are assessed. Computer simulation results indicate that the two-dimensional MCT offers a better compression ratio, MSE and PSNR than the DWT.
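
    The quality metrics compared in the study, MSE and PSNR, are straightforward to compute. A minimal sketch with made-up sample values (for 8-bit audio or imagery, peak = 255):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between an original and a reconstructed signal."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(peak**2 / m)

orig = np.array([100, 120, 130, 140], float)
recon = np.array([101, 119, 131, 139], float)  # every sample off by 1
print("MSE :", mse(orig, recon))               # 1.0
print("PSNR:", round(psnr(orig, recon), 2), "dB")
```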

  20. libpolycomp: Compression/decompression library

    Tomasi, Maurizio


    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
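
    The "polynomial compression" idea, fitting a low-order polynomial to each chunk of a smooth, noise-free timeline and storing only the coefficients, can be sketched as follows. The chunk size and degree are illustrative, not libpolycomp's actual defaults, and the real library additionally filters Fourier residuals:

```python
import numpy as np

def poly_compress(x, chunk=32, deg=3):
    """Fit a degree-`deg` polynomial to each chunk of `x` and keep only
    the coefficients: `chunk` samples become `deg + 1` numbers."""
    t = np.linspace(-1.0, 1.0, chunk)
    coeffs = [np.polyfit(t, x[i:i + chunk], deg)
              for i in range(0, len(x) - chunk + 1, chunk)]
    return np.array(coeffs)

def poly_decompress(coeffs, chunk=32):
    """Evaluate the stored polynomials to rebuild the timeline."""
    t = np.linspace(-1.0, 1.0, chunk)
    return np.concatenate([np.polyval(c, t) for c in coeffs])

# Smooth, noise-free timeline (ephemeris-like toy signal)
x = np.sin(np.linspace(0, 4 * np.pi, 1024)) * np.linspace(1, 2, 1024)
c = poly_compress(x)
x_hat = poly_decompress(c)
print("compression ratio:", x.size / c.size)   # 32 samples -> 4 coeffs: 8x
print("max abs error:", np.max(np.abs(x - x_hat)))
```

    On noisy data the chunk-wise fits degrade quickly, which is why the abstract stresses smoothness and lack of noise as preconditions for large ratios.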

  1. Image Compression using GSOM Algorithm



    Conventional techniques such as Huffman coding, the Shannon-Fano method, LZ methods, run-length coding, and LZ-77 are established methods for data compression. A traditional approach to reducing the large amount of data would be to discard some data redundancy and introduce some noise after reconstruction. We present a neural-network-based growing self-organizing map (GSOM) technique that may be a reliable and efficient way to achieve vector quantization. A typical application of such an algorithm is image compression. Moreover, Kohonen networks realize a mapping between an input and an output space that preserves topology. This feature can be used to build new compression schemes which obtain better compression rates than classical methods such as JPEG without reducing image quality. Experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG and TIFF files.
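
    Vector quantization, the core of the scheme above, can be sketched with plain k-means standing in for the growing self-organizing map (the toy image, block size, and codebook size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def vq_codebook(blocks, k=16, iters=20):
    """Plain k-means vector quantisation: learn `k` codewords and assign
    each block to its nearest codeword (a stand-in for the GSOM codebook)."""
    code = blocks[rng.choice(len(blocks), k, replace=False)].astype(float)
    idx = np.zeros(len(blocks), dtype=int)
    for _ in range(iters):
        # assign each block to the nearest codeword
        d = ((blocks[:, None, :] - code[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        # move each codeword to the centroid of its assigned blocks
        for j in range(k):
            if np.any(idx == j):
                code[j] = blocks[idx == j].mean(axis=0)
    return code, idx

# Toy "image": 64x64 pixels cut into 4x4 blocks, flattened to 16-dim vectors
img = rng.integers(0, 256, (64, 64)).astype(float)
blocks = img.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)
code, idx = vq_codebook(blocks, k=16)
# each 16-pixel block is now stored as a single 4-bit codebook index
print("blocks:", len(blocks), "codewords:", len(code))
```

    A GSOM differs in that it grows the codebook during training and preserves topology between neighbouring codewords; the storage model (indices plus a codebook) is the same.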

  2. Data compression on the sphere

    McEwen, J D; Eyers, D M; 10.1051/0004-6361/201015728


    Large data-sets defined on the sphere arise in many fields. In particular, recent and forthcoming observations of the anisotropies of the cosmic microwave background (CMB) made on the celestial sphere contain approximately three and fifty mega-pixels respectively. The compression of such data is therefore becoming increasingly important. We develop algorithms to compress data defined on the sphere. A Haar wavelet transform on the sphere is used as an energy compression stage to reduce the entropy of the data, followed by Huffman and run-length encoding stages. Lossless and lossy compression algorithms are developed. We evaluate compression performance on simulated CMB data, Earth topography data and environmental illumination maps used in computer graphics. The CMB data can be compressed to approximately 40% of its original size for essentially no loss to the cosmological information content of the data, and to approximately 20% if a small cosmological information loss is tolerated. For the topographic and il...
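
    The pipeline described, a Haar energy-compaction stage followed by entropy coding, can be sketched in one dimension. Run-length encoding stands in for the full Huffman + run-length stages, and a flat piecewise-constant toy signal replaces spherical data:

```python
import numpy as np

def haar_1d(x):
    """One level of an (unnormalised) Haar transform: pairwise averages
    (low band) and pairwise half-differences (detail band)."""
    avg = (x[0::2] + x[1::2]) / 2.0
    det = (x[0::2] - x[1::2]) / 2.0
    return avg, det

def rle(symbols):
    """Run-length encode a sequence as [value, run] pairs."""
    out = []
    for s in symbols:
        if out and out[-1][0] == s:
            out[-1][1] += 1
        else:
            out.append([s, 1])
    return out

# Energy compaction: after the Haar step, most detail coefficients of a
# smooth/piecewise-constant signal quantise to zero, which RLE packs well.
x = np.repeat(np.arange(8.0), 16)      # piecewise-constant toy "map", 128 samples
avg, det = haar_1d(x)
q = np.round(det).astype(int)          # crude quantisation stage (lossy)
print("nonzero quantised details:", np.count_nonzero(q))
print("RLE pairs for detail band:", len(rle(q)))
```

    On the sphere the Haar step is applied across HEALPix-style pixel hierarchies rather than adjacent samples, but the compaction-then-entropy-coding structure is the same.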

  3. Energy transfer in compressible turbulence

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre


    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well-known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we find that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy-containing ranges.

  4. Perceptually Lossless Wavelet Compression

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John


    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
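
    A small numeric illustration of the level-to-frequency relation, reading the abstract's expression as f = r·2^(-L): each additional DWT level halves the spatial frequency. The display resolution below is illustrative:

```python
# Spatial frequency of DWT level L at display resolution r (pixels/degree).
r = 32.0  # display visual resolution, pixels/degree (illustrative value)
for L in range(1, 6):
    f = r * 2 ** (-L)
    print(f"level {L}: {f:5.2f} cycles/degree")
```

    Since detection thresholds rise rapidly with spatial frequency, the fine-detail level-1 band tolerates the coarsest quantisation, which is what a perceptually lossless quantisation matrix exploits.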

  5. Compressive Sensing DNA Microarrays

    Richard G. Baraniuk


    Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints of CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed.

  6. Compressive light field sensing.

    Babacan, S Derin; Ansorge, Reto; Luessi, Martin; Matarán, Pablo Ruiz; Molina, Rafael; Katsaggelos, Aggelos K


    We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise-ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.

  7. Splines in Compressed Sensing

    S. Abhishek


    It is well understood that in any data acquisition system, reducing the amount of data reduces time and energy, but the major trade-off is the quality of the outcome: normally, the less data sensed, the lower the quality. Compressed Sensing (CS) offers a solution for sampling below the Nyquist rate. The challenging problem of increasing the reconstruction quality with fewer samples from an unprocessed data set is addressed here by the use of representative coordinates selected from different orders of splines. We have made a detailed comparison with 10 orthogonal and 6 biorthogonal wavelets on two sets of data from the MIT Arrhythmia database, and our results show that the spline coordinates work better than the wavelets. The generation of two new types of splines, exponential and double exponential, is also briefed here. We believe that this is one of the very first attempts at Compressed Sensing based ECG reconstruction using raw data.

  8. Compressibility effects on the non-linear receptivity of boundary layers to dielectric barrier discharges

    Denison, Marie F. C.

    The reduction of drag and aerodynamic heating caused by boundary layer transition is of central interest for the development of hypersonic vehicles. Receptivity to flow perturbation in the form of Tollmien-Schlichting (TS) wave growth often determines the first stage of the transition process, which can be delayed by depositing specific excitations into the boundary layer. Weakly ionized Dielectric Barrier Discharge (DBD) actuators are being investigated as possible sources of such excitations, but little is known today about their interaction with high-speed flows. In this framework, the first part of the thesis is dedicated to a receptivity study of laminar compressible boundary layers over a flat plate by linear stability analysis following an adjoint operator formulation, under DBD representative excitations assumed independent of flow conditions. The second part of the work concentrates on the development of a coupled plasma-Navier-Stokes solver targeted at the study of supersonic flow and compressibility effects on DBD forcing and non-parallel receptivity. The linear receptivity study of quasi-parallel compressible flows reveals several interesting features, such as a significant shift of the region of maximum receptivity deeper into the flow at high Mach number and strong wave amplitude reduction compared to incompressible flows. The response to DBD relevant excitation distributions and to variations of the base flow conditions and system length scales follows these trends. Observed absolute amplitude changes and relative sensitivity modifications between source types are related to the evolution of the offset between forcing peak profile and relevant adjoint mode maximum. The analysis highlights the crucial importance of designing and placing the actuator in a way that matches its force field to the position of maximum boundary layer receptivity for the specific flow conditions of interest. In order to address the broad time and length scale spectrum

  9. Maximum twin shear stress factor criterion for sliding mode fracture initiation

    黎振兹; 李慧剑; 黎晓峰; 周洪彬; 郝圣旺


    Previous researches on the mixed mode fracture initiation criteria were mostly focused on opening mode fracture. In this study, the authors proposed a new criterion for mixed mode sliding fracture initiation, which is the maximum twin shear stress factor criterion. The authors studied a finite width plate with central slant crack, subject to a far-field uniform uniaxial tensile or compressive stress.

  10. q-ary compressive sensing

    Mroueh, Youssef; Rosasco, Lorenzo


    We introduce q-ary compressive sensing, an extension of 1-bit compressive sensing. We propose a novel sensing mechanism and a corresponding recovery procedure. The recovery properties of the proposed approach are analyzed both theoretically and empirically. Results in 1-bit compressive sensing are recovered as a special case. Our theoretical results suggest a tradeoff between the quantization parameter q, and the number of measurements m in the control of the error of the resulting recovery a...

  11. Introduction to compressible fluid flow

    Oosthuizen, Patrick H


    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices


    Lyashenko P. A.


    The oedometric compression of sand with a constant rate of loading (CRL) or a constant rate of deformation (CRD) and continuous registration of the corresponding reaction allows one to identify the effect of stepwise changes of deformation (at CRL) and of the force reaction (at CRD). Physical modeling of compression on a sandy model showed the same effect. The physical model was made of fine sand with marks mimicking large inclusions. Compression of the soil at CRD was uneven and stepwise, and the strain rate of the upper boundary of the sandy model changed cyclically. The maximum amplitudes of the cycles passed through a maximum. Inside the sand model, the uneven strain resulted in mutual displacement of adjacent parts located at the same depth. As external pressure grew, the marks showed increasing or decreasing displacement, and even moved opposite to the direction of movement (settlement) of the upper boundary of the model, i.e. "floating" of marks. Marks at different depths underwent different, sometimes mutually contradictory, movements at the same time. The mark settlements grew suddenly at sufficiently large pressure. These increments in settlement, decreasing with depth, remained until the end of loading. They confirm the hypothesis of total destruction of the soil sample at the pressure of "structural strength". The hypothesis about the reason for "floating" is based on the obvious assumption that the marks move together with the surrounding sand. The explanation of the "floating" effect is supported by the fact that the greater the depth, the greater the value of "floating"

  13. Efficiency at Maximum Power of Interacting Molecular Machines

    Golubeva, Natalia; Imparato, Alberto


    We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.

  14. Compressive sensing of sparse tensors.

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan


    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.
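    The memory argument above can be illustrated with a small sketch (an illustration of mode-wise measurement of a 2-way tensor, not the GTCS algorithm itself): measuring a matrix with two small per-mode matrices is mathematically equivalent to applying the much larger Kronecker-product matrix to the vectorized signal.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5))      # signal as a 2-way tensor (a matrix)
A1 = rng.standard_normal((2, 4))     # measurement matrix for mode 1
A2 = rng.standard_normal((3, 5))     # measurement matrix for mode 2

# Mode-wise acquisition: only the small matrices are ever stored.
Y_modewise = A1 @ X @ A2.T           # shape (2, 3)

# Equivalent vectorized acquisition: needs the full (6 x 20) Kronecker matrix.
Y_kron = (np.kron(A1, A2) @ X.reshape(-1)).reshape(2, 3)

same = np.allclose(Y_modewise, Y_kron)
```

The gap widens quickly with dimension: for an n x n signal, the Kronecker matrix costs O(n^4) storage versus O(n^2) for the two mode matrices.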

  15. Uncommon upper extremity compression neuropathies.

    Knutsen, Elisa J; Calfee, Ryan P


    Hand surgeons routinely treat carpal and cubital tunnel syndromes, which are the most common upper extremity nerve compression syndromes. However, more infrequent nerve compression syndromes of the upper extremity may be encountered. Because they are unusual, the diagnosis of these nerve compression syndromes is often missed or delayed. This article reviews the causes, proposed treatments, and surgical outcomes for syndromes involving compression of the posterior interosseous nerve, the superficial branch of the radial nerve, the ulnar nerve at the wrist, and the median nerve proximal to the wrist. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Image Compression Algorithms Using Dct

    Er. Abhishek Kaushik


    Image compression is the application of data compression on digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components. It is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented using Matlab code, and modified to perform better when implemented in hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using DCT, and varying coefficients for compression were developed to show the resulting image and error image from the original images. Image compression is studied using the 2-D discrete cosine transform. The original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image. The inverse DCT is performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed image) is displayed. The error value for each image is calculated over various values of DCT coefficients as selected by the user and displayed at the end to assess the accuracy and compression of the resulting image; the resulting performance parameter is indicated in terms of MSE, i.e. Mean Square Error.
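    The block-DCT scheme described above can be sketched as follows (a generic 8x8 DCT-II illustration in Python rather than the paper's Matlab code; the test block is an arbitrary smooth gradient):

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix: C[u, x] = a(u) * cos((2x + 1) * u * pi / (2N))
u, x = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
C = np.sqrt(2.0 / N) * np.cos((2 * x + 1) * u * np.pi / (2 * N))
C[0, :] = np.sqrt(1.0 / N)  # DC row uses the smaller normalization

def compress_block(block, k):
    """Transform an 8x8 block, keep only the top-left k x k DCT
    coefficients, and inverse-transform to reconstruct."""
    coeffs = C @ block @ C.T          # forward 2-D DCT
    mask = np.zeros((N, N))
    mask[:k, :k] = 1.0                # retain low-frequency subset
    return C.T @ (coeffs * mask) @ C  # inverse 2-D DCT

block = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth test block
exact = compress_block(block, 8)     # all 64 coefficients: lossless
approx = compress_block(block, 4)    # 16 of 64 coefficients kept
mse = np.mean((block - approx) ** 2)
```

Varying k trades reconstruction error (MSE) against compression, exactly as the user-selected coefficient counts do in the study.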

  17. Maximum-entropy probability distributions under Lp-norm constraints

    Dolinar, S.


    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
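    For the unconstrained continuous case with finite p, the maximizing density is the generalized Gaussian; a standard statement of this result (a sketch in generic notation, not necessarily the paper's parameterization) is:

```latex
f(x) = \frac{p}{2\,a\,\Gamma(1/p)}\,
       \exp\!\left(-\left|\frac{x}{a}\right|^{p}\right),
\qquad
h(X) = \frac{1}{p} + \ln\frac{2\,a\,\Gamma(1/p)}{p}.
```

    Since E|X|^p = a^p / p for this density, the L_p norm equals a p^(-1/p), and h(X) is indeed a straight-line function of the logarithm of the L_p norm.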

  18. Squeezing the muscle: compression clothing and muscle metabolism during recovery from high intensity exercise.

    Billy Sperlich

    The purpose of this experiment was to investigate skeletal muscle blood flow and glucose uptake in m. biceps femoris (BF) and m. quadriceps femoris (QF) 1) during recovery from high intensity cycle exercise, and 2) while wearing a compression short applying ~37 mmHg to the thigh muscles. Blood flow and glucose uptake were measured in the compressed and non-compressed leg of 6 healthy men by using positron emission tomography. At baseline, blood flow in QF (P = 0.79) and BF (P = 0.90) did not differ between the compressed and the non-compressed leg. During recovery, muscle blood flow was higher compared to baseline in both compressed (P<0.01) and non-compressed QF (P<0.001) but not in compressed (P = 0.41) and non-compressed BF (P = 0.05; effect size = 2.74). During recovery, blood flow was lower in compressed QF (P<0.01) but not in BF (P = 0.26) compared to the non-compressed muscles. During baseline and recovery, no differences in blood flow were detected between the superficial and deep parts of QF in both the compressed (baseline P = 0.79; recovery P = 0.68) and non-compressed leg (baseline P = 0.64; recovery P = 0.06). During recovery, glucose uptake was higher in QF compared to BF in both conditions (P<0.01), with no difference between the compressed and non-compressed thigh. Glucose uptake was higher in the deep compared to the superficial parts of QF (compression leg P = 0.02). These results demonstrate that wearing compression shorts with ~37 mmHg of external pressure reduces blood flow both in the deep and superficial regions of muscle tissue during recovery from high intensity exercise but does not affect glucose uptake in BF and QF.

  19. Compression and texture in socks enhance football kicking performance.

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham


    The purpose of this study was to observe effects of wearing textured insoles and clinical compression socks on organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six completely novice football players (15.4±0.9years) performed 20 instep kicks with maximum velocity, in four randomly organised insoles and socks conditions, (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI) and (d), Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and increased initial ball velocity, among advanced learners compared to the use of non-textured and compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance, captured by these important measures. This improvement in kicking performance is likely to have occurred through enhanced somatosensory system feedback utilised for foot placement and movement organisation of the lower limbs. Data suggested that advanced learners were better at harnessing the augmented feedback information from compression and texture to regulate emerging movement patterns compared to novices. Copyright © 2016. Published by Elsevier B.V.

  20. Maximum entropy production in daisyworld

    Maunu, Haley A.; Knuth, Kevin H.


    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  1. Maximum Matchings via Glauber Dynamics

    Jindal, Anant; Pal, Manjish


    In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...

  2. Preliminary technical and economic viability for the implantation of fluvial transport of CNG (Compressed Natural Gas) for barges in Amazon Region; Avaliacao preliminar de viabilidade tecnico-economica para implantacao de transporte fluvial de GNC (Gas Natual Comprimido) por barcacas na Regiao Amazonica

    Araujo, Marcos C.C. de; Porto, Paulo L. Lemgruber [Interocean Engenharia e Ship Management, Rio de janeiro, RJ (Brazil); Cunha, Rafael H. da [Metro Rio, RJ (Brazil); Garcia, Rafael M. [Pic Brasil (Brazil); Almeida, Marco A.R. de [Universidade Gama Filho (UGF), Rio de Janeiro, RJ (Brazil)


    The isolated regions of the Amazon present difficulties for integration with the electrical system, which creates economic problems due to the costs of electric generation and the subsidies associated with the use of fossil fuels such as diesel and fuel oil. A viable option is the use of Natural Gas (NG), which is also available in the region. Its possible transport modes in the North Region are gas pipelines or barges. Notably, Compressed Natural Gas (CNG) transport by barges has not yet been tested operationally in Brazil. Hence, developing a Preliminary Technical-Economic Viability Study (SVTE) for the implementation of fluvial transport of CNG between the cities of Coari and Manaus is fundamental, as it creates a strategic alternative for electric generation in this region. The electric sector, the characteristics of NG, and transport in this region were analyzed to support the work. The gas pipeline and the fluvial transport of CNG by barges in this region are not conflicting, and they can act in a complementary way. The SVTE presented a Net Present Value and an Internal Rate of Return that are very attractive, justifying its implementation. (author)

  3. The role of elastic compressibility in dynamic subduction models

    Austmann, Walter; Govers, Rob; Burov, Evgenii


    Recent advances in geodynamic numerical models show a trend towards more realistic rheologies. The Earth is no longer modeled as a purely viscous fluid, but the effects of, for example, elasticity and plasticity are also included. However, when making such improvements, it is essential to include these more complex rheologies in a consistent way. Specifically, compressibility also needs to be included, an effect that is commonly neglected in numerical models. Recently, we showed that the effect of elastic compressibility is significant. This was done for a gravity driven cylinder in a homogeneous Maxwell fluid bounded by closed boundaries. For a fluid with a realistic compressibility (Poisson ratio equals 0.3), the settling velocity showed a discrepancy with the semi-analytical steady state incompressible solution of approximately 40%. The motion of the fluid was no longer restricted to a small region around the cylinder; the motion of the cylinder also compressed the fluid near the bottom boundary. This compression decreased the resistance on the cylinder and resulted in a larger settling velocity. Here, we examine the influence of elastic compressibility in an oceanic subduction setting. The slab is driven by slab pull and a far field prescribed plate motion. Preliminary results indicate that elastic compressibility has a significant effect on the fluid motion. Differences with respect to the nearly incompressible solution are most significant near material boundaries. In line with our earlier findings, the flow is increased in regions of confined flow, such as the mantle wedge or the subduction channel. As a consequence, increasing compressibility results in a larger slab velocity. We seek to identify surface observables, such as topography and plate motion, that allow us to distinguish the compressible and incompressible behavior.

  4. 76 FR 1504 - Pipeline Safety: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure...


    ...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...

  5. Axial Compressive Strength of Foamcrete with Different Profiles and Dimensions

    Othuman Mydin M.A.


    Lightweight foamcrete is a versatile material; it primarily consists of a cement-based mortar mixed with at least 20% volume of air. High flowability, lower self-weight, minimal requirement of aggregate, controlled low strength and good thermal insulation properties are a few characteristics of foamcrete. Its dry density is typically below 1600 kg/m3, with a maximum compressive strength of 15 MPa. The ASTM standard provision specifies a correction factor for concrete strengths of between 14 and 42 MPa to compensate for the reduced strength when the height-to-diameter aspect ratio of the specimen is less than 2.0, while the CEB-FIP provision specifically mentions the ratio of 150 x 300 mm cylinder strength to 150 mm cube strength. However, neither provision specifically clarifies the applicability and/or modification of the correction factors for the compressive strength of foamcrete. This proposed laboratory work is intended to study the effect of different dimensions and profiles on the axial compressive strength of concrete. Specimens of various dimensions and profiles are cast with square and circular cross-sections, i.e., cubes, prisms and cylinders, to investigate their behavior in compressive strength at 7 and 28 days. Hypothetically, compressive strength will decrease with increasing specimen dimension, and a cube specimen would yield compressive strength comparable to a cylinder (100 x 100 x 100 mm cube to 100 dia x 200 mm cylinder).

  6. Compressive sensing by learning a Gaussian mixture model from measurements.

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence


    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
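    The closed-form reconstruction the abstract builds on can be sketched for a single Gaussian component (the GMM estimator weights such per-component estimates by posterior component probabilities; the function name and test values here are illustrative, not from the paper):

```python
import numpy as np

def gaussian_posterior_mean(y, A, mu, Sigma, noise_var):
    """MMSE estimate of x ~ N(mu, Sigma) given y = A x + w, w ~ N(0, noise_var * I)."""
    S = A @ Sigma @ A.T + noise_var * np.eye(A.shape[0])  # covariance of y
    return mu + Sigma @ A.T @ np.linalg.solve(S, y - A @ mu)

# Sanity check: with near-zero noise and an invertible A, the estimate recovers x.
rng = np.random.default_rng(1)
A = np.eye(3)
x = rng.standard_normal(3)
x_hat = gaussian_posterior_mean(A @ x, A, np.zeros(3), np.eye(3), 1e-12)
```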

  7. Conceptual design of heavy ion beam compression using a wedge

    Jonathan C. Wong


    Heavy ion beams are a useful tool for conducting high energy density physics (HEDP) experiments. Target heating can be enhanced by beam compression, because a shorter pulse diminishes hydrodynamic expansion during irradiation. A conceptual design is introduced to compress ∼100 MeV/u to ∼GeV/u heavy ion beams using a wedge. By deflecting the beam with a time-varying field and placing a tailor-made wedge amid its path downstream, each transverse slice passes through matter of different thickness. The resulting energy loss creates a head-to-tail velocity gradient, and the wedge shape can be designed by using stopping power models to give maximum compression at the target. The compression ratio at the target was found to vary linearly with (head-to-tail centroid offset)/(spot radius) at the wedge. The latter should be approximately 10 to attain tenfold compression. The decline in beam quality due to projectile ionization, energy straggling, fragmentation, and scattering is shown to be acceptable for well-chosen wedge materials. A test experiment is proposed to verify the compression scheme and to study the beam-wedge interaction and its associated beam dynamics, which will facilitate further efforts towards a HEDP facility.


    张同文; 刘禹; 袁玉江; 魏文寿; 喻树龙; 陈峰


    for de-trending. After all the processes, we obtained three kinds of chronologies (STD, RES and ARS) of tree-ring width data and gray values. Based on the tree-ring data analysis, mean maximum temperature from May to August of the Gongnaisi region from 1777 to 2008 A.D. has been reconstructed from the tree-ring average gray values. For the calibration period (1958~2008 A.D.), the predictor variable accounts for 39% of the variance of the mean maximum temperature data. The mean maximum temperature reconstruction shows that there are 34 warm years and 38 cold years. The warm events (lasting for more than three years) were 1861~1864 A.D., 1873~1876 A.D. and 1917~1919 A.D.; the cold events were 1816~1818 A.D., 1948~1950 A.D. and 1957~1959 A.D. Furthermore, these years and events correspond well with historical documents. Applying an 11-year moving average to our reconstruction, the only period with above-average reconstructed mean maximum temperature (1777~2008 A.D.) comprises 1845~1925 A.D.; the two periods below average consist of 1788~1844 A.D. and 1926~2001 A.D. The reconstructed mean maximum temperature has increased since the 1990s and agrees well with instrumental measurements in North Western China over the recent 50 years. The power spectrum analysis shows that there are 154-, 77-, 2.7- and 2.3-year cycles in our reconstruction, which may be associated with solar activity and the quasi-biennial oscillation (QBO). The moving t-test indicates that significant abrupt changes were present in about 1842 A.D., 1880 A.D. and 1923 A.D. The significant correlations between our reconstruction and the gridded dataset of the Northern Hemisphere and three kinds of index (SOI, APOI, and AOI) may imply that the mean maximum temperature of the Gongnaisi region is possibly influenced not only by local, but also by multiple large-scale climate changes to some extent.

  9. An underwater acoustic data compression method based on compressed sensing

    郭晓乐; 杨坤德; 史阳; 段睿


    The use of underwater acoustic data has rapidly expanded with the application of multichannel, large-aperture underwater detection arrays. This study presents an underwater acoustic data compression method that is based on compressed sensing. Underwater acoustic signals are transformed into the sparse domain for data storage at a receiving terminal, and the improved orthogonal matching pursuit (IOMP) algorithm is used to reconstruct the original underwater acoustic signals at a data processing terminal. When an increase in sidelobe level occasionally causes a direction of arrival estimation error, the proposed compression method can achieve a 10 times stronger compression for narrowband signals and a 5 times stronger compression for wideband signals than the orthogonal matching pursuit (OMP) algorithm. The IOMP algorithm also reduces the computing time by about 20% more than the original OMP algorithm. The simulation and experimental results are discussed.
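    The baseline recovery loop referenced above can be sketched as follows (a generic textbook OMP implementation, not the authors' IOMP variant; the orthonormal test dictionary is an illustrative easy case):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A x."""
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Pick the dictionary column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Sanity check with an orthonormal dictionary: a 3-sparse signal is recovered exactly.
rng = np.random.default_rng(2)
A, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x_true = np.zeros(16)
x_true[[1, 5, 11]] = [2.0, -1.5, 0.7]
x_rec = omp(A, A @ x_true, 3)
```

The IOMP modification studied in the paper targets exactly the weak points of this loop (sidelobe-induced wrong atom picks and runtime), which is where its reported 20% speedup comes from.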

  10. TPC data compression

    Berger, Jens; Frankenfeld, Ulrich; Lindenstruth, Volker; Plamper, Patrick; Roehrich, Dieter; Schaefer, Erich; W. Schulz, Markus; M. Steinbeck, Timm; Stock, Reinhard; Sulimma, Kolja; Vestboe, Anders; Wiebalck, Arne


    In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several tens of thousands of charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.

  13. Waves and compressible flow

    Ockendon, Hilary


    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  14. Central cooling: compressive chillers

    Christian, J.E.


    Representative cost and performance data are provided in a concise, usable form for three types of compressive liquid packaged chillers: reciprocating, centrifugal, and screw. The data are represented in graphical form as well as in empirical equations. Reciprocating chillers are available from 2.5 to 240 tons with full-load COPs ranging from 2.85 to 3.87. Centrifugal chillers are available from 80 to 2,000 tons with full-load COPs ranging from 4.1 to 4.9. Field-assembled centrifugal chillers have been installed with capacities up to 10,000 tons. Screw-type chillers are available from 100 to 750 tons with full-load COPs ranging from 3.3 to 4.5.
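    Since COP is cooling output divided by electrical input, the quoted figures translate directly into electrical demand. A quick sketch, using the standard conversion 1 ton of refrigeration = 3.517 kW of cooling (the chiller sizes and COPs plugged in come from the abstract's quoted ranges):

```python
# Electrical input implied by a chiller's cooling capacity and COP.
TON_KW = 3.517  # 1 ton of refrigeration = 3.517 kW thermal cooling capacity

def electrical_input_kw(tons, cop):
    """COP = cooling output / electrical input, in consistent units."""
    return tons * TON_KW / cop

# A 240-ton reciprocating chiller at the quoted full-load COP of 3.87:
print(round(electrical_input_kw(240, 3.87), 1))  # about 218 kW of electrical input
```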

  15. Compression-based Similarity

    Vitanyi, Paul M B


    First we consider pair-wise distances for literal objects consisting of finite binary files. These files are taken to contain all of their meaning, like genomes or books. The distances are based on compression of the objects concerned, normalized, and can be viewed as similarity distances. Second, we consider pair-wise distances between names of objects, like "red" or "christianity." In this case the distances are based on searches of the Internet. Such a search can be performed by any search engine that returns aggregate page counts. We can extract a code length from the numbers returned, use the same formula as before, and derive a similarity or relative semantics between names for objects. The theory is based on Kolmogorov complexity. We test both similarities extensively experimentally.
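    In its normalized form, the compression-based distance described here is NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is a compressed length. A minimal sketch with zlib standing in for the (uncomputable) Kolmogorov compressor; the sample strings are invented for illustration:

```python
import zlib

def c(data):
    """Compressed length in bytes (zlib at maximum effort)."""
    return len(zlib.compress(data, 9))

def ncd(x, y):
    """Normalized compression distance:
    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b_ = b"the quick brown fox leaps over the sleepy cat " * 20
r = bytes(range(256)) * 4  # unrelated, structureless data

print(ncd(a, a))   # small: an object is similar to itself (zlib overhead keeps it above 0)
print(ncd(a, b_))  # moderate: heavily shared vocabulary
print(ncd(a, r))   # near 1: little shared structure
```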

  16. Adaptively Compressed Exchange Operator

    Lin, Lin


    The Fock exchange operator plays a central role in modern quantum chemistry. The large computational cost associated with the Fock exchange operator hinders Hartree-Fock calculations and Kohn-Sham density functional theory calculations with hybrid exchange-correlation functionals, even for systems consisting of hundreds of atoms. We develop the adaptively compressed exchange operator (ACE) formulation, which greatly reduces the computational cost associated with the Fock exchange operator without loss of accuracy. The ACE formulation does not depend on the size of the band gap, and thus can be applied to insulating, semiconducting as well as metallic systems. In an iterative framework for solving Hartree-Fock-like systems, the ACE formulation only requires moderate modification of the code, and can be potentially beneficial for all electronic structure software packages involving exchange calculations. Numerical results indicate that the ACE formulation can become advantageous even for small systems with tens...

  17. Compressed sensing performance bounds under Poisson noise

    Raginsky, Maxim; Marcia, Roummel F; Willett, Rebecca M


    This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is non-additive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical $\ell_2$-$\ell_1$ minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective functi...

  18. Adaptive compressive sensing camera

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold


    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charge comes from Einstein's photoelectric effect. Following manufacturing design practice, we alter each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The data storage savings are immense, and the order of magnitude of the saving is inversely proportional to the target's angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as a dual photon detector (PD) analog circuit for change detection that decides whether to skip a frame or go forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket pixel by biasing the charge-transport voltage toward neighboring buckets or not; if not, the charge goes to ground drainage. Since a snapshot image is not video, the usual MPEG video compression and Huffman entropy codecs, as well as a powerful WaveNet wrapper, cannot be applied at the sensor level. We compare (i) pre-processing: FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery, done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), selection of new frames by the SAH circuitry requires the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data a la [Φ]M,N, with M(t) = K(t) log N(t).

  19. Maximum and minimum discharges for Alto Rio Grande region basins, Minas Gerais state, Brazil

    Carlos Rogério de Mello


    Maximum discharges are hydrological quantities applied to the design of hydraulic works, while minimum discharges are used to assess water availability in watersheds and the behavior of groundwater flow. This study aimed to construct statistical confidence intervals for annual daily maximum and minimum discharges and to relate them to the physiographic characteristics of the six largest watersheds of the Alto Rio Grande region, State of Minas Gerais, upstream of the UHE-Camargos/CEMIG reservoir. The Gumbel and Gamma probability distributions were fitted, respectively, to the historical series of maximum and minimum discharges using maximum likelihood estimators. The confidence intervals constitute an important tool for better understanding and estimating the discharges, and are influenced by the geological characteristics of the basins. Based on them, the Alto Rio Grande region was found to contain two distinct areas: the first, comprising the Aiuruoca, Carvalhos, and Bom Jardim basins, showed the largest maximum and minimum discharges, implying potential for more significant floods and greater water availability; the second, associated with the F. Laranjeiras, Madre de Deus, and Andrelândia basins, showed the smallest water availability.
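    Fitting a Gumbel distribution to an annual-maximum series and reading off a return level can be sketched in numpy. Note the hedge: the paper uses maximum-likelihood estimators, while this sketch uses the simpler method-of-moments closed form to stay self-contained, and the discharge series below is synthetic, not the Alto Rio Grande data:

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_moments_fit(x):
    """Method-of-moments estimates of Gumbel location/scale.
    (Simpler than the maximum-likelihood fit used in the paper.)"""
    scale = np.std(x, ddof=1) * np.sqrt(6.0) / np.pi
    loc = np.mean(x) - EULER_GAMMA * scale
    return loc, scale

def gumbel_return_level(loc, scale, T):
    """Discharge exceeded on average once every T years."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

# Synthetic annual-maximum discharge series (hypothetical values, m^3/s)
rng = np.random.default_rng(1)
series = rng.gumbel(120.0, 25.0, size=60)

loc, scale = gumbel_moments_fit(series)
print(loc, scale)                            # close to the generating 120, 25
print(gumbel_return_level(loc, scale, 100))  # 100-year design flood
```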

  20. Parameter estimation in X-ray astronomy using maximum likelihood

    Wachter, K.; Leach, R.; Kellogg, E.


    Methods of estimation of parameter values and confidence regions by maximum likelihood and Fisher efficient scores starting from Poisson probabilities are developed for the nonlinear spectral functions commonly encountered in X-ray astronomy. It is argued that these methods offer significant advantages over the commonly used alternatives called minimum chi-squared because they rely on less pervasive statistical approximations and so may be expected to remain valid for data of poorer quality. Extensive numerical simulations of the maximum likelihood method are reported which verify that the best-fit parameter value and confidence region calculations are correct over a wide range of input spectra.
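    For a single amplitude parameter scaling a known spectral shape, the Poisson maximum-likelihood estimate has a closed form. A numpy sketch (toy shape and counts, not the paper's X-ray spectra) checks the closed form against a brute-force scan of the negative log-likelihood:

```python
import numpy as np

def neg_log_like(a, n, f):
    """Poisson negative log-likelihood (dropping the a-independent log n! term)
    for model means m_i = a * f_i."""
    m = a * f
    return np.sum(m - n * np.log(m))

# Toy spectral shape and Poisson counts
rng = np.random.default_rng(2)
f = np.exp(-np.linspace(0, 3, 40))   # known spectral shape
a_true = 50.0
n = rng.poisson(a_true * f)

# For a pure amplitude parameter the MLE is closed-form:
# d/da sum(a f_i - n_i log(a f_i)) = sum(f_i) - sum(n_i)/a = 0
a_mle = n.sum() / f.sum()

# Cross-check against a brute-force scan of the likelihood surface
grid = np.linspace(30, 70, 4001)
a_grid = grid[np.argmin([neg_log_like(a, n, f) for a in grid])]

print(a_mle, a_grid)  # agree to grid resolution
```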

  1. Application specific compression : final report.

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.


    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
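    The zero-the-small-coefficients idea described above can be illustrated with a single-level Haar wavelet transform in numpy. The signal, noise level, and threshold are invented for illustration and are not the report's data:

```python
import numpy as np

def haar_1d(x):
    """One level of the orthonormal Haar wavelet transform:
    low-pass averages (approximation) and high-pass differences (detail)."""
    x = x.reshape(-1, 2)
    s2 = np.sqrt(2.0)
    return (x[:, 0] + x[:, 1]) / s2, (x[:, 0] - x[:, 1]) / s2

def ihaar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    s2 = np.sqrt(2.0)
    out = np.empty(2 * approx.size)
    out[0::2] = (approx + detail) / s2
    out[1::2] = (approx - detail) / s2
    return out

# Smooth "target signature" plus low-amplitude, high-frequency noise
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 512)
signal = np.exp(-((t - 0.5) ** 2) / 0.01) + 0.02 * rng.standard_normal(512)

approx, detail = haar_1d(signal)
zeroed = np.abs(detail) < 0.05          # zero the small (noise-dominated) details
recon = ihaar_1d(approx, np.where(zeroed, 0.0, detail))

print(zeroed.mean())                    # fraction of detail coefficients zeroed
print(np.max(np.abs(recon - signal)))   # small reconstruction error
```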

  2. Streaming Compression of Hexahedral Meshes

    Isenburg, M; Courbet, C


    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. In practice this means that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file growing only about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  3. Data Compression with Linear Algebra

    Etler, David


    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
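    The DCT-thresholding pipeline the presentation covers can be sketched with an explicit orthonormal DCT-II matrix; the 8x8 block and the threshold below are illustrative, not taken from the presentation:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, the transform behind JPEG-style compression."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    D[0] /= np.sqrt(2.0)
    return D

n = 8
D = dct_matrix(n)
# A smooth 8x8 "image" block: outer product of two gentle ramps
block = np.outer(np.linspace(50, 200, n), np.linspace(1.0, 0.8, n))

coeffs = D @ block @ D.T              # 2-D DCT: energy concentrates in few coefficients
kept = np.abs(coeffs) > 1.0           # thresholding: discard small coefficients
recon = D.T @ np.where(kept, coeffs, 0.0) @ D   # inverse 2-D DCT

print(int(kept.sum()), "of", n * n, "coefficients kept")
print(np.max(np.abs(recon - block)))  # small error despite discarding most coefficients
```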

  4. Compressed sensing for body MRI.

    Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh


    The introduction of compressed sensing for increasing imaging speed in magnetic resonance imaging (MRI) has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This article presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and nonlinear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the article discusses current challenges and future opportunities. J. Magn. Reson. Imaging 2017;45:966-987. © 2016 International Society for Magnetic Resonance in Medicine.
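    The sparsity-plus-incoherence reconstruction idea can be caricatured in a one-dimensional numpy toy: random Fourier (k-space) undersampling followed by iterative soft thresholding. Real body MRI uses wavelet or finite-difference sparsity and multicoil data; here the "image" is assumed sparse directly, and all sizes, thresholds, and iteration counts are illustrative:

```python
import numpy as np

def soft(x, t):
    """Complex soft-thresholding, the proximal step enforcing sparsity."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

rng = np.random.default_rng(4)
n = 256
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = rng.uniform(1, 2, 10)  # sparse "image"

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, 96, replace=False)] = True      # keep 96 of 256 k-space samples
y = np.fft.fft(x_true, norm="ortho")[mask]         # undersampled measurements

# Alternate between k-space data consistency and sparsity thresholding
x = np.zeros(n, dtype=complex)
for _ in range(200):
    k = np.fft.fft(x, norm="ortho")
    k[mask] = y                        # replace with the measured k-space samples
    x = soft(np.fft.ifft(k, norm="ortho"), 0.05)

rel_err = np.linalg.norm(np.abs(x) - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small relative error despite 2.7x undersampling
```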

  5. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    Zhang, Qiong; Maldague, Xavier


    Image fusion, currently a hot research topic in infrared computer vision, has been developed using a wide variety of methods. Traditional image fusion algorithms tend to introduce problems such as heavy data storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling without prior knowledge and still reconstructs the image well, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained separately. For fusing the low-frequency coefficients of the infrared and visible images, an adaptive regional energy weighting rule is used; thus only the high-frequency coefficients need to be compressively measured. Here we use sparse representation and random projection to obtain the required measurements of the high-frequency coefficients; afterwards, the coefficients of each image block can be fused via the absolute-maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of our experiments indicate that the presented approach achieves much higher image fusion quality, accelerates the calculations, enhances various targets, and extracts more useful information.
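    The absolute-maximum selection rule mentioned for the high-frequency coefficients is simple to state in numpy; the 2x2 arrays below stand in for NSCT subband coefficients and are purely illustrative:

```python
import numpy as np

def fuse_abs_max(c1, c2):
    """Absolute-maximum selection rule: at each position keep whichever
    coefficient has the larger magnitude (here applied to plain arrays
    rather than actual NSCT high-frequency subbands)."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

ir = np.array([[0.9, -0.1], [0.0, 0.4]])    # strong thermal edge responses
vis = np.array([[0.2, -0.7], [0.5, 0.1]])   # strong visible texture responses

print(fuse_abs_max(ir, vis))  # keeps 0.9, -0.7, 0.5, 0.4
```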

  6. Compression Maps and Stable Relations

    Price, Kenneth L


    Balanced relations were defined by G. Abrams to extend the convolution product used in the construction of incidence rings. We define stable relations, which form a class between balanced relations and preorders. We also define a compression map to be a surjective function between two sets which preserves order, preserves off-diagonal relations, and has the additional property that every transitive triple is the image of a transitive triple. We show that a compression map preserves the balanced and stable properties, but the compression of a preorder may be stable and not transitive. We also give an example of a stable relation which is not the compression of a preorder. In our main theorem we provide necessary and sufficient conditions for a finite stable relation to be the compression of a preorder.

  7. The Diagonal Compression Field Method using Circular Fans

    Hansen, Thomas


    In a concrete beam with transverse stirrups the shear forces are carried by inclined compression in the concrete. Along the tensile zone and the compression zone of the beam the transverse components of the inclined compressions are transferred to the stirrups, which are thus subjected to tension. ... Since the eighties the diagonal compression field method has been used to design transverse shear reinforcement in concrete beams. The method is based on the lower-bound theorem of the theory of plasticity, and it has been adopted in Eurocode 2. The paper presents a new design method, which ... with low shear stresses. The larger the inclination (the smaller the -value) of the uniaxial concrete stress, the more transverse shear reinforcement is needed; hence it would be optimal if the -value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased...

  8. Vascular compression syndrome of sciatic nerve caused by gluteal varicosities.

    Hu, Ming-Hsiao; Wu, Kuan-Wen; Jian, Yu-Ming; Wang, Chen-Ti; Wu, I-Hui; Yang, Shu-Hua


    Sciatica is defined as pain or discomfort along the regions innervated by the sciatic nerve. Compression or irritation of the lumbar spinal roots, most commonly because of lumbar disc herniation or spinal stenosis, causes sciatica in the vast majority of cases. Although rather uncommon, many pathologies have been reported to cause nondiscogenic sciatica. A 70-year-old woman presented with intractable sciatic pain that was not elicited by posture change or cough. Sitting on the affected side provoked more pain than standing or walking. Magnetic resonance imaging revealed both spondylolisthesis with lumbar stenosis and compression of the gluteal portion of the sciatic nerve by varicotic gluteal veins. Given the atypical presentation of spinal root compression, gluteal vascular compressive neuropathy was suspected. Ligation and resection of the varicotic vein relieved the patient's pain. To our knowledge, reported cases of varicosity-caused sciatica are rare in the literature.

  9. Compressive Sensing for Quantum Imaging

    Howland, Gregory A.

    This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. 
The technique gives a theoretical speedup of N^2/log N for N-dimensional entanglement over the standard raster-scanning technique.


    Takamoto, Makoto [Max-Planck-Institut für Kernphysik, Heidelberg (Germany); Inoue, Tsuyoshi [Division of Theoretical Astronomy, National Astronomical Observatory of Japan (Japan); Lazarian, Alexandre [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States)


    We report on the effects of turbulence on magnetic reconnection in relativistic plasmas using three-dimensional relativistic resistive magnetohydrodynamics simulations. We found that the reconnection rate becomes independent of the plasma resistivity due to turbulence effects, similarly to non-relativistic cases. We also found that compressible turbulence effects modify the turbulent reconnection rate predicted for non-relativistic incompressible plasmas: the reconnection rate saturates, and even decays, as the injected velocity approaches the Alfvén velocity. Our results indicate that compressibility cannot be neglected once the compressible component reaches about half of the incompressible mode, which occurs when the Alfvén Mach number reaches about 0.3. The obtained maximum reconnection rate is around 0.05-0.1, which can reach around 0.1-0.2 if the injection scale is comparable to the sheet length.

  11. Prediction of Concrete Compressive Strength by Evolutionary Artificial Neural Networks

    Mehdi Nikoo


    Compressive strength of concrete has been predicted using evolutionary artificial neural networks (EANNs), a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GAs). To construct the models, samples of cylindrical concrete specimens with different characteristics were used, comprising 173 experimental data patterns. Water-cement ratio, maximum sand size, amount of gravel, cement, 3/4 sand, 3/8 sand, and the coefficient of soft sand were considered as inputs, and the ANN models calculate the compressive strength of concrete. Moreover, using the GA, the number of layers, the number of nodes, and the weights of the ANN models are optimized. To evaluate the accuracy of the model, the optimized ANN model is compared with a multiple linear regression (MLR) model. The simulation results verify that the recommended ANN model offers more flexibility, capability, and accuracy in predicting the compressive strength of concrete.

  12. Test Method for Compression Resilience Evaluation of Textiles

    Shui-yuan Hong


    A test method was proposed and a measurement system was developed to characterize the compression resilience properties of textiles, based on a mechanical device with microelectronics, sensors, and a control system. From the typical pressure-displacement curve and the test data, four indices were defined to characterize the compression performance of textiles. The test principle and the evaluation method for compression resilience of textiles are introduced. Twelve types of textile fabrics with different structural features, made from different textile materials, were tested. A one-way ANOVA was carried out to identify the significance of the differences in the evaluation indices among the fabrics. The results show that each index differs significantly among fabrics. The denim has the maximum compressional resilience and the polar fleece the minimum.
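    A work-based resilience index can be computed from a pressure-displacement curve by trapezoidal integration. The curves and the index definition below are illustrative assumptions: the paper's own four indices are not reproduced in the abstract, so this is one common definition, not the authors' method:

```python
import numpy as np

def _area(y, x):
    """Trapezoidal area under a sampled curve."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def compression_resilience(disp, p_load, p_unload):
    """Work recovered during unloading as a percentage of the work done
    during loading (areas under the pressure-displacement branches).
    An illustrative index, not one of the paper's four."""
    return 100.0 * _area(p_unload, disp) / _area(p_load, disp)

disp = np.linspace(0.0, 2.0, 50)   # compression displacement (mm, hypothetical)
p_load = 3.0 * disp ** 1.5         # loading branch (kPa, hypothetical)
p_unload = 0.6 * p_load            # unloading branch: hysteresis loses 40% of the work

print(compression_resilience(disp, p_load, p_unload))  # ~60; a perfectly elastic fabric gives 100
```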

  13. Advances in compressible turbulent mixing

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. (eds.)


    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  14. A new compression design that increases proximal locking screw bending resistance in femur compression nails.

    Karaarslan, Ahmet Adnan; Karakaşli, Ahmet; Karci, Tolga; Aycan, Hakan; Yildirim, Serhat; Sesli, Erhan


    The aim is to present our new method of compression, a compression tube instead of the conventional compression screw, and to investigate the difference in proximal locking screw bending resistance between compression screw application (6 mm wide contact) and compression tube application (two contact points with a 13 mm gap). We formed six groups, each consisting of 10 proximal locking screws. On a metal cylinder representing the lesser trochanter level, we performed 3-point bending tests with the compression screw and with the compression tube. We determined the yield points of the screws in 3-point bending tests using an axial compression testing machine. The yield point of the 5 mm screws was 1963±53 N (mean±SD) with the compression screw, and 2929±140 N with the compression tube. We found 51% more locking screw bending resistance with the compression tube than with the compression screw (p = 0.000). Compression tubes should therefore be preferred over compression screws in femoral compression nails.

  15. Compressed Submanifold Multifactor Analysis.

    Luu, Khoa; Savvides, Marios; Bui, Tien; Suen, Ching


    Although widely used, Multilinear PCA (MPCA), one of the leading multilinear analysis methods, still suffers from four major drawbacks. First, it is very sensitive to outliers and noise. Second, it is unable to cope with missing values. Third, it is computationally expensive since MPCA deals with large multi-dimensional datasets. Finally, it is unable to maintain the local geometrical structures due to the averaging process. This paper proposes a novel approach named Compressed Submanifold Multifactor Analysis (CSMA) to solve the four problems mentioned above. Our approach can deal with the problem of missing values and outliers via SVD-L1. The Random Projection method is used to obtain the fast low-rank approximation of a given multifactor dataset. In addition, it is able to preserve the geometry of the original data. Our CSMA method can be used efficiently for multiple purposes, e.g. noise and outlier removal, estimation of missing values, biometric applications. We show that CSMA method can achieve good results and is very efficient in the inpainting problem as compared to [1], [2]. Our method also achieves higher face recognition rates compared to LRTC, SPMA, MPCA and some other methods, i.e. PCA, LDA and LPP, on three challenging face databases, i.e. CMU-MPIE, CMU-PIE and Extended YALE-B.

  16. Compressibility effects on the flow past a rotating cylinder

    Teymourtash, A. R.; Salimipour, S. E.


    In this paper, laminar flow past a rotating circular cylinder placed in a compressible uniform stream is investigated via a two-dimensional numerical simulation, and the compressibility effects due to the combination of the free stream and cylinder rotation on the flow pattern, such as the formation, shedding, and removal of vortices, and also on the lift and drag coefficients, are studied. The numerical simulation of the flow is based on the discretization of the convective fluxes of the unsteady Navier-Stokes equations by second-order Roe's scheme and an explicit finite volume method. Because of the importance of the time-dependent parameters in the solution, second-order time accuracy is achieved by a dual time stepping approach. In order to validate the computer program, some results are compared with previous experimental and numerical data. The results of this study show that effects due to flow compressibility, such as a normal shock wave, cause interesting variations in the flow around the cylinder even at a free stream with a low Mach number. In incompressible flow around the rotating cylinder, increasing the speed ratio α (ratio of the surface speed to free-stream velocity) causes an ongoing increase in the lift coefficient, but in compressible flow, for each free-stream Mach number, increasing the speed ratio yields a limited lift coefficient (a maximum mean lift coefficient). In addition, results for compressible flow indicate that by increasing the free-stream Mach number, the maximum mean lift coefficient is decreased, while the mean drag coefficient is increased. It is also found that by increasing the Reynolds number at low Mach numbers, the maximum mean lift coefficient and critical speed ratio are decreased and the mean drag coefficient and Strouhal number are increased. However, at higher Mach numbers, these parameters become independent of the Reynolds number.

  17. The OMV Data Compression System Science Data Compression Workshop

    Lewis, Garton H., Jr.


    The Video Compression Unit (VCU), Video Reconstruction Unit (VRU), theory and algorithms for implementation of Orbital Maneuvering Vehicle (OMV) source coding, docking mode, channel coding, error containment, and video tape preprocessed space imagery are presented in viewgraph format.

  18. Propane spectral resolution enhancement by the maximum entropy method

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.


    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.

  19. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    Xie Xiang


    Full Text Available In order to decrease the communication bandwidth and save the transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. In particular, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying ROI parameters. The VLSI architecture of this compression algorithm is also presented. Its hardware design has been implemented in a 0.18 μm CMOS process.

  20. A Near-Lossless Image Compression Algorithm Suitable for Hardware Design in Wireless Endoscopy System

    ZhiHua Wang


    Full Text Available In order to decrease the communication bandwidth and save the transmitting power in the wireless endoscopy capsule, this paper presents a new near-lossless image compression algorithm based on the Bayer format image suitable for hardware design. This algorithm can provide a low average compression rate (2.12 bits/pixel) with high image quality (larger than 53.11 dB) for endoscopic images. In particular, it has low hardware overhead (only two line buffers) and supports real-time compression. In addition, the algorithm can provide lossless compression for the region of interest (ROI) and high-quality compression for other regions. The ROI can be selected arbitrarily by varying ROI parameters. The VLSI architecture of this compression algorithm is also presented. Its hardware design has been implemented in a 0.18 μm CMOS process.

  1. Wearable EEG via lossless compression.

    Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo


    This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on a previously reported algorithm by the authors, exploits the temporal correlation between samples at different sampling times, and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.

  2. Context-Aware Image Compression.

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  3. Compressive sensing for urban radar

    Amin, Moeness


    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and toward effectively addressing logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki...

  4. Designing experiments through compressed sensing.

    Young, Joseph G.; Ridzal, Denis


    In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.

  5. Spinal meningioma: relationship between degree of cord compression and outcome.

    Davies, Simon; Gregson, Barbara; Mitchell, Patrick


    The aim of this study was to find the relationships between the degree of cord compression as seen on MRI, persisting cord atrophy after decompression, and patient outcomes in spinal meningiomas. We undertook a retrospective analysis of 31 patients' pre- and postoperative MRIs, preoperative functional status and their outcomes at follow-up. The following metrics were analysed: percentage cord area at maximum compression, percentage tumour occupancy and percentage cord occupancy. These were then compared with outcome as measured by the Nurick scale. Of the 31 patients, 27 (87%) had thoracic meningiomas, 3 (10%) cervical and 1 (3%) cervicothoracic. The meningiomas were pathologically classified as grade 1 (29) or grade 2 (2) according to the WHO classification. The average remaining cord cross-sectional area was 61% of the estimated original value. The average tumour occupancy of the canal was 72%. The average cord occupancy of the spinal canal at maximum compression was 20%. No correlation between cord cross-sectional area and Nurick scale was seen. On the postoperative scan, the average cord area had increased to 84%. No correlation was seen between this value and outcome. We found that cross-sectional area measurements on MRI scans have no obvious relationship with function before or after surgery. This provides a basis for future research into the mechanism of cord recovery and other compressive cord conditions.

  6. 40 CFR 35.145 - Maximum federal share.


    ... STATE AND LOCAL ASSISTANCE Environmental Program Grants Air Pollution Control (section 105) § 35.145 Maximum federal share. (a) The Regional Administrator may provide air pollution control agencies, as... programs for the prevention and control of air pollution or implementing national primary and...

  7. Compressive phase-only filtering at extreme compression rates

    Pastor-Calle, David; Pastuszczak, Anna; Mikołajczyk, Michał; Kotyński, Rafał


    We introduce an efficient method for the reconstruction of the correlation between a compressively measured image and a phase-only filter. The proposed method is based on two properties of phase-only filtering: such filtering is a unitary circulant transform, and the correlation plane it produces is usually sparse. Thanks to these properties, phase-only filters are perfectly compatible with the framework of compressive sensing. Moreover, the lasso-based recovery algorithm is very fast when phase-only filtering is used as the compression matrix. The proposed method can be seen as a generalization of the correlation-based pattern recognition technique, which is hereby applied directly to non-adaptively acquired compressed data. At the time of measurement, no prior knowledge of the target object for which the data will be scanned is required. We show that images measured at extremely high compression rates may still contain sufficient information for target classification and localization, even when the compression rate is so high that visual recognition of the target in the reconstructed image is no longer possible. The method has been applied by us to highly undersampled measurements obtained from a single-pixel camera, with sampling based on randomly chosen Walsh-Hadamard patterns.
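    The two properties the abstract relies on are easy to verify in a small numerical sketch: a phase-only filter has unit modulus everywhere (so it implements a unitary circulant transform), and applying it to a shifted copy of the template yields a sparse, peaked correlation plane. The template, its size, and the offset below are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.standard_normal((16, 16))
image = np.roll(template, shift=(3, 5), axis=(0, 1))  # target at a known offset

# Phase-only filter: keep only the phase of the template spectrum.
F = np.fft.fft2(template)
H = np.conj(F) / np.maximum(np.abs(F), 1e-12)  # |H| = 1 everywhere -> unitary circulant transform

# Correlation plane: sharply peaked at the target's (3, 5) offset.
corr = np.real(np.fft.ifft2(np.fft.fft2(image) * H))
peak = np.unravel_index(np.argmax(corr), corr.shape)
```

The sparsity of `corr` (one dominant peak) is what makes lasso-style recovery from compressed measurements tractable in the paper's setting.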

  8. Effects of graduated compression stockings on skin temperature after running.

    Priego Quesada, J I; Lucas-Cuevas, A G; Gil-Calvo, M; Giménez, J V; Aparicio, I; Cibrián Ortiz de Anda, R M; Salvador Palmer, R; Llana-Belloch, S; Pérez-Soriano, P


    High skin temperatures reduce the thermal gradient between the core and the skin, and they can lead to a reduction in performance and an increased risk of injury. Graduated compression stockings have become popular among runners in recent years, and their use may influence the athlete's thermoregulation. The aim of this study was to investigate the effects of graduated compression stockings on skin temperature during running in a moderate indoor environment. Forty-four runners performed two running tests lasting 30 min (10 min of warm-up and 20 min at 75% of their maximal aerobic speed), with and without graduated compression stockings. Skin temperature was measured in 12 regions of interest on the lower limb by infrared thermography before and after running. Heart rate and perception of fatigue were assessed during the last minute of the running test. Compression stockings resulted in a greater increase of temperature (p=0.002 and ES=2.2, 95% CI [0.11-0.45°C]), not only in the body regions in contact with the garment (tibialis anterior, ankle anterior and gastrocnemius) but also in the body regions that were not in contact with it (vastus lateralis, abductor and semitendinosus). No differences were observed between conditions in heart rate or perception of fatigue (p>0.05). In conclusion, running with graduated compression stockings produces a greater increase of skin temperature without modifying the athlete's heart rate or perception of fatigue. Copyright © 2015 Elsevier Ltd. All rights reserved.


    DONG Zhi-yong; SU Pei-lan


    This paper presents an experimental investigation and a theoretical analysis of cavitation control by aeration and its compressible characteristics at flow velocities V = 20 m/s-50 m/s. Pressure waveforms with and without aeration in the cavitation region were measured. The variation of compression ratio with air concentration was described, and the relation between the least air concentration needed to prevent cavitation erosion and the flow velocity was proposed based on our experimental study. The experimental results show that aeration remarkably increases the pressure in the cavitation region, and the corresponding pressure wave exhibits a compression wave/shock wave. The pressure increase in the cavitation region of high-velocity flow with aeration is due to the compression/shock waves that form after the flow is aerated. The compression ratio increases as the air concentration rises. The relation between flow velocity and the least air concentration to prevent cavitation erosion follows a semi-cubical parabola. The speed of sound and Mach number of high-velocity aerated flow were also analyzed.

  10. Conjugate ground and multisatellite observations of compression-related EMIC Pc1 waves and associated proton precipitation

    M. E. Usanova; I. R. Mann; Z. C. Kale; I. J. Rae; R. D. Sydora; M. Sandanger; F. Søraas; K.-H. Glassmeier; K.-H. Fornacon; H. Matsui; P. A. Puhl-Quinn; A. Masson; X. Vallières


    ...) waves from 25 September 2005. On the ground, dayside structured EMIC wave activity was observed by the CARISMA and STEP magnetometer arrays for several hours during the period of maximum compression...

  11. Strategies for high-performance resource-efficient compression of neural spike recordings.

    Thorbergsson, Palmi Thor; Garwicz, Martin; Schouenborg, Jens; Johansson, Anders J


    Brain-machine interfaces (BMIs) based on extracellular recordings with microelectrodes provide means of observing the activities of neurons that orchestrate fundamental brain function, and are therefore powerful tools for exploring the function of the brain. Due to physical restrictions and risks for post-surgical complications, wired BMIs are not suitable for long-term studies in freely behaving animals. Wireless BMIs ideally solve these problems, but they call for low-complexity techniques for data compression that ensure maximum utilization of the wireless link and energy resources, as well as minimum heat dissipation in the surrounding tissues. In this paper, we analyze the performances of various system architectures that involve spike detection, spike alignment and spike compression. Performance is analyzed in terms of spike reconstruction and spike sorting performance after wireless transmission of the compressed spike waveforms. Compression is performed with transform coding, using five different compression bases, one of which we pay special attention to. That basis is a fixed basis derived, by singular value decomposition, from a large assembly of experimentally obtained spike waveforms, and therefore represents a generic basis specially suitable for compressing spike waveforms. Our results show that a compression factor of 99.8%, compared to transmitting the raw acquired data, can be achieved using the fixed generic compression basis without compromising performance in spike reconstruction and spike sorting. Besides illustrating the relative performances of various system architectures and compression bases, our findings show that compression of spikes with a fixed generic compression basis derived from spike data provides better performance than compression with downsampling or the Haar basis, given that no optimization procedures are implemented for compression coefficients, and the performance is similar to that obtained when the optimal SVD based

  12. Strategies for high-performance resource-efficient compression of neural spike recordings.

    Palmi Thor Thorbergsson

    Full Text Available Brain-machine interfaces (BMIs based on extracellular recordings with microelectrodes provide means of observing the activities of neurons that orchestrate fundamental brain function, and are therefore powerful tools for exploring the function of the brain. Due to physical restrictions and risks for post-surgical complications, wired BMIs are not suitable for long-term studies in freely behaving animals. Wireless BMIs ideally solve these problems, but they call for low-complexity techniques for data compression that ensure maximum utilization of the wireless link and energy resources, as well as minimum heat dissipation in the surrounding tissues. In this paper, we analyze the performances of various system architectures that involve spike detection, spike alignment and spike compression. Performance is analyzed in terms of spike reconstruction and spike sorting performance after wireless transmission of the compressed spike waveforms. Compression is performed with transform coding, using five different compression bases, one of which we pay special attention to. That basis is a fixed basis derived, by singular value decomposition, from a large assembly of experimentally obtained spike waveforms, and therefore represents a generic basis specially suitable for compressing spike waveforms. Our results show that a compression factor of 99.8%, compared to transmitting the raw acquired data, can be achieved using the fixed generic compression basis without compromising performance in spike reconstruction and spike sorting. Besides illustrating the relative performances of various system architectures and compression bases, our findings show that compression of spikes with a fixed generic compression basis derived from spike data provides better performance than compression with downsampling or the Haar basis, given that no optimization procedures are implemented for compression coefficients, and the performance is similar to that obtained when the

  13. Characteristics of recent tectonic stress field in Jiashi, Xinjiang and adjacent regions

    CUI Xiao-feng


    In this paper, we analyze the general directional features of regional tectonic stress field in Jiashi, Xinjiang and adjacent regions from the data of focal mechanism solutions, borehole breakouts and fault slip. The direction of maximum horizontal principal stress given by these three sorts of stress data differs slightly, which indicates there is a NS-trending horizontal compression in the tectonic stress field in the region of interest. We also invert and analyze the temporal and spatial changes of recent tectonic stress field in the research region by using 137 focal mechanism solutions. The inverted results show that the maximum principal stress σ1 in Jiashi and adjacent regions is NNW-SSE with an azimuth of 162°. In the period from 1997 to 2003 before the occurrence of Jiashi-Bachu earthquake, the directions of the maximum principal stress σ1 and the minimum principal stress σ3 in Jiashi seismic source zone changed clockwise with respect to the tectonic stress field in the regions around. The maximum principal stress σ1 adjusted to the direction of NNE-SSW with an azimuth of 25°. Under the control of this tectonic stress field, a series of earthquakes happened, including the Jiashi strong earthquake swarm in 1997.Then, the tectonic stress field in the Jiashi seismic source zone might adjust again. And the tectonic stress field controlling the Jiashi-Bachu earthquake in 2003 was in accordance with the regions around.

  14. Receiver function estimated by maximum entropy deconvolution

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生


    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
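    As a rough illustration of the recursion described above, the sketch below implements one common textbook form of the Burg/Levinson lattice update for the error-predicting filter, in which the reflection coefficient is bounded by 1 in magnitude by construction (the stability property the abstract notes). The sign conventions and the synthetic AR(2) test signal are assumptions for illustration, not the authors' code.

```python
import numpy as np

def burg_ar(x, order):
    """Burg (maximum entropy) estimate of prediction-error filter coefficients.

    Returns (a, e) with A(z) = 1 + a[0] z^-1 + ... + a[order-1] z^-order
    and e the final prediction error power.
    """
    x = np.asarray(x, dtype=float)
    ef = x.copy()                       # forward prediction errors
    eb = x.copy()                       # backward prediction errors
    a = np.zeros(0)
    e = np.dot(x, x) / len(x)           # zeroth-order error power
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        # Reflection coefficient; |k| <= 1 by construction (AM-GM),
        # which keeps the recursion/extrapolation stable.
        k = -2.0 * np.dot(efp, ebp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        a = np.concatenate([a + k * a[::-1], [k]])   # Levinson-style update
        e *= 1.0 - k * k
        ef, eb = efp + k * ebp, ebp + k * efp
    return a, e

# Illustrative check on a synthetic AR(2) process x[n] = 1.5 x[n-1] - 0.9 x[n-2] + w[n]
rng = np.random.default_rng(1)
w = rng.standard_normal(6000)
x = np.zeros(6000)
for n in range(2, 6000):
    x[n] = 1.5 * x[n - 1] - 0.9 * x[n - 2] + w[n]
a_hat, e_hat = burg_ar(x[500:], 2)      # expect a_hat close to [-1.5, 0.9]
```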

  15. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Xiangwei Li


    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.

  16. Efficient lossy compression for compressive sensing acquisition of images in compressive sensing imaging systems.

    Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning


    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.
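    The quantization step can be pictured with a plain uniform scalar quantizer applied blindly to the CS measurements, i.e., using no prior on the captured image. This is a deliberate simplification of the universal quantization described in the paper; the step size and the Gaussian stand-in for the measurement vector are illustrative assumptions.

```python
import numpy as np

def quantize(y, step):
    """Uniform scalar quantization of CS measurements (no image prior assumed)."""
    return np.rint(y / step).astype(np.int64)

def dequantize(q, step):
    return q.astype(float) * step

rng = np.random.default_rng(0)
y = rng.standard_normal(1000)           # stand-in for a vector of CS measurements
step = 0.05
y_hat = dequantize(quantize(y, step), step)
max_err = np.max(np.abs(y - y_hat))     # bounded by step / 2
```

Halving `step` roughly halves the distortion while costing about one extra bit per measurement, which is the rate-distortion trade-off the abstract reports in dB.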

  17. When 'exact recovery' is exact recovery in compressed sensing simulation

    Sturm, Bob L.


    In a simulation of compressed sensing (CS), one must test whether the recovered solution \(\vax\) is the true solution \(\vx\), i.e., ``exact recovery.'' Most CS simulations employ one of two criteria: 1) the recovered support is the true support; or 2) the normalized squared error is less than ... for a given distribution of \(\vx\)? We show that, in a best case scenario, \(\epsilon^2\) sets a maximum allowed missed detection rate in a majority sense.
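    The two criteria can be stated concretely as a checker; the support threshold and the \(\epsilon^2\) default below are placeholders, not values from the paper.

```python
import numpy as np

def exact_recovery(x_true, x_hat, eps2=1e-4, tol=1e-8):
    """Evaluate the two common 'exact recovery' criteria used in CS simulations."""
    # Criterion 1: the recovered support equals the true support.
    support_ok = np.array_equal(np.abs(x_true) > tol, np.abs(x_hat) > tol)
    # Criterion 2: the normalized squared error is below eps^2.
    nse = np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)
    return support_ok, nse < eps2

# A sparse test vector and a slightly perturbed recovery of it.
x = np.zeros(100)
x[[3, 17, 42]] = [1.0, -2.0, 0.5]
ok_support, ok_error = exact_recovery(x, x * 1.000001)
```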

  18. Buckling localization in a cylindrical panel under axial compression

    Tvergaard, Viggo; Needleman, A.


    Localization of an initially periodic buckling pattern is investigated for an axially compressed elastic-plastic cylindrical panel of the type occurring between axial stiffeners on cylindrical shells. The phenomenon of buckling localization and its analogy with plastic flow localization in tensile test specimens is discussed in general. For the cylindrical panel, it is shown that buckling localization develops shortly after a maximum load has been attained, and this occurs for a purely elastic panel as well as for elastic-plastic panels. In a case where localization occurs after a load maximum...

  19. Maximum Power from a Solar Panel

    Michael Miller


    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
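    The procedure described, maximizing P = VI and locating the point where the derivative vanishes, can be sketched numerically with a single-diode I-V model; all panel parameters below are made-up illustrative values, not measurements from the project.

```python
import numpy as np

# Assumed single-diode model: I(V) = I_sc - I_0 * (exp(V / (n * V_T)) - 1)
I_SC, I_0, N_IDEAL, V_T = 3.0, 1e-9, 1.5, 0.0257   # illustrative parameters

def current(V):
    return I_SC - I_0 * (np.exp(V / (N_IDEAL * V_T)) - 1.0)

V = np.linspace(0.0, 0.8, 20001)
P = V * current(V)                     # power curve P = V * I(V)
i_mp = int(np.argmax(P))
V_mp, I_mp, P_mp = V[i_mp], current(V[i_mp]), P[i_mp]

# At the maximum, dP/dV = I + V * dI/dV = 0: the condition obtained by differentiation.
dPdV = np.gradient(P, V)
```

Repeating this for irradiance/temperature conditions at each time of day yields the curves of voltage, current, and power at maximum power described in the abstract.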


    Fulkerson, E S


    In recent years the Lawrence Livermore National Laboratory (LLNL) has been conducting experiments that require pulsed high currents to be delivered into inductive loads. The loads fall into two categories: (1) pulsed high-field magnets and (2) the input stage of Magnetic Flux Compression Generators (MFCG). Three capacitor banks of increasing energy storage and controls sophistication have been designed and constructed to drive these loads. One bank was developed for the magnet-driving application (20 kV, ≈30 kJ maximum stored energy). Two banks were constructed as MFCG seed banks (12 kV, ≈43 kJ and 26 kV, ≈450 kJ). This paper will describe the design of each bank, including switching, controls, circuit protection and safety.

  1. Transfer induced compressive strain in graphene

    Larsen, Martin Benjamin Barbour Spanget; Mackenzie, David; Caridad, Jose


    We have used spatially resolved micro-Raman spectroscopy to map the full width at half maximum (FWHM) of the graphene G-band and the 2D and G peak positions, for as-grown graphene on copper catalyst layers, for transferred CVD graphene, and for micromechanically exfoliated graphene, in order to characterize the effects of a transfer process on graphene properties. Here we use the FWHM(G) as an indicator of the doping level of graphene, and the ratio of the shifts in the 2D and G bands as an indicator of strain. We find that the transfer process introduces an isotropic, spatially uniform, compressive strain in graphene, and increases the carrier concentration.

  2. Full-frame compression of discrete wavelet and cosine transforms

    Lo, Shih-Chung B.; Li, Huai; Krasner, Brian; Freedman, Matthew T.; Mun, Seong K.


    At the foreground of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on full-frame discrete wavelet (FFDWT) and full-frame discrete cosine transforms (FFDCT) for medical image compression. Prior to the coding, it is important to evaluate the global entropy in the decomposed space, because it is at minimum entropy that maximum compression efficiency can be achieved. In this study, each image was split into the top three most significant bit (3MSB) and the remaining remapped least significant bit (RLSB) images. The 3MSB image was compressed by an error-free contour coding and received an average of 0.1 bit/pixel. The RLSB image was transformed to either a multi-channel wavelet or the cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and were used for the study. Our results indicated that the coding scheme in the FFDCT domain performed better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that decomposition efficiency in the DCT domain for relatively smooth images is higher than that in the DWT. However, both schemes worked just as well for low-resolution digital images. We also found that the image characteristics of the `Lena' image commonly used in the compression literature are very different from those of radiological images. The compression outcome of radiological images cannot be extrapolated from compression results based on `Lena.'

  3. Compressive Acquisition of Dynamic Scenes

    Sankaranarayanan, Aswin C; Chellappa, Rama; Baraniuk, Richard G


    Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the...

  4. Normalized Compression Distance of Multiples

    Cohen, Andrew R


    Normalized compression distance (NCD) is a parameter-free similarity measure based on compression. The NCD between pairs of objects is not sufficient for all applications. We propose an NCD of finite multisets (multiples) of objects that is metric and is better for many applications. Previously, attempts to obtain such an NCD failed. We use the theoretical notion of Kolmogorov complexity, which for practical purposes is approximated from above by the length of the compressed version of the file involved, using a real-world compression program. We applied the new NCD for multiples to retinal progenitor cell questions that were earlier treated with the pairwise NCD. Here we get significantly better results. We also applied the NCD for multiples to synthetic time sequence data. The preliminary results are as good as those of the nearest neighbor Euclidean classifier.
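    The practical approximation described above, replacing Kolmogorov complexity by the length of a real compressor's output, gives the pairwise NCD in a few lines; zlib here is merely a convenient stand-in for "a real-world compression program", and the byte strings are illustrative.

```python
import random
import zlib

def C(b: bytes) -> int:
    """Compressed length: the practical upper approximation of Kolmogorov complexity."""
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Pairwise normalized compression distance."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

random.seed(0)
a = b"ACGT" * 300                                       # highly regular data
b = bytes(random.randrange(256) for _ in range(1200))   # incompressible data
low, high = ncd(a, a), ncd(a, b)                        # similar pairs score near 0, unrelated near 1
```

The multiset extension the abstract proposes generalizes this by compressing all members of a multiple together rather than pairwise.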

  5. Compression fractures of the back

    Taking steps to prevent and treat osteoporosis is the most effective way to prevent compression or insufficiency fractures. Getting regular load-bearing exercise (such as walking) can help you avoid bone loss.

  6. Compressed sensing for distributed systems

    Coluccia, Giulio; Magli, Enrico


    This book presents a survey of the state of the art in the exciting and timely topic of compressed sensing for distributed systems. While compressed sensing itself has been studied for some time, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting the latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  7. Preprocessing of compressed digital video

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.


    Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image and the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.

  8. Compressed gas fuel storage system

    Wozniak, John J. (Columbia, MD); Tiller, Dale B. (Lincoln, NE); Wienhold, Paul D. (Baltimore, MD); Hildebrand, Richard J. (Edgemere, MD)


    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  9. Shock compression of polyvinyl chloride

    Neogi, Anupam; Mitra, Nilanjan


    This study presents shock compression simulations of atactic polyvinyl chloride (PVC) using ab-initio and classical molecular dynamics, and identifies the limits of applicability of classical molecular dynamics for shock compression simulation of PVC. The mechanism of bond dissociation under shock loading and its progression are demonstrated using density functional theory based molecular dynamics simulations, and the rates of dissociation of different bonds at different shock velocities are presented.

  10. Bridgman's concern (shock compression science)

    Graham, R. A.


    In 1956 P. W. Bridgman published a letter to the editor in the Journal of Applied Physics reporting results of electrical resistance measurements on iron under static high pressure. The work was undertaken to verify the existence of a polymorphic phase transition at 130 kbar (13 GPa) reported in the same journal and year by the Los Alamos authors, Bancroft, Peterson, and Minshall for high pressure, shock-compression loading. In his letter, Bridgman reported that he failed to find any evidence for the transition. Further, he raised some fundamental concerns as to the state of knowledge of shock-compression processes in solids. Later it was determined that Bridgman's static pressure scale was in error, and the shock observations became the basis for calibration of pressure values in static high pressure apparatuses. In spite of the error in pressure scales, Bridgman's concerns on descriptions of shock-compression processes were perceptive and have provided the basis for subsequent fundamental studies of shock-compressed solids. The present paper, written in response to receipt of the 1993 American Physical Society Shock-Compression Science Award, provides a brief contemporary assessment of those shock-compression issues which were the basis of Bridgman's 1956 concerns.

  11. Hidden force opposing ice compression

    Sun, Chang Q; Zheng, Weitao


    Coulomb repulsion between the unevenly bound bonding and nonbonding electron pairs in the O:H-O hydrogen bond is shown to be the origin of the anomalies of ice under compression. Consistency between experimental observations, density functional theory, and molecular dynamics calculations confirmed that the resultant force of the compression, the repulsion, and the recovery of electron-pair dislocations differentiates ice from other materials in its response to pressure. Compression shortens and strengthens the longer-and-softer intermolecular O:H lone-pair virtual bond; the repulsion pushes the bonding electron pair away from the H+/p and hence lengthens and weakens the intramolecular H-O real bond. The virtual-bond compression and the real-bond elongation symmetrize the O:H-O as observed at ~60 GPa and result in the abnormally low compressibility of ice. The virtual-bond stretching phonons (~3000 cm-1) softened upon compression. The cohesive energy of the real bond dominates and its loss lowers the critical temperat...

  12. Aspects of forward scattering from the compression paddle in the dosimetry of mammography.

    Toroi, Paula; Könönen, Niina; Timonen, Marjut; Kortesniemi, Mika


    The best compression paddle position during air kerma measurements in mammography dosimetry was studied. The amount of forward scattering as a function of the compression paddle distance was measured with different X-ray spectra and different types of paddles and dose meters. The contribution of forward scattering to the air kerma did not depend significantly on the beam quality or the compression paddle type. The tested dose meter types detected different amounts of forward scattering due to their different internal collimation. When the paddle was adjusted to its maximum clinical distance, the proportion of detected forward scattering was only 1 % for all dose meter types. The most consistent way of performing air kerma measurements is to position the compression paddle at the maximum distance from the dose meter and use a constant forward scattering factor for all dose meters. Thus, the dosimetric uncertainty due to forward scatter can be minimised.

  13. Quantitative vertebral compression fracture evaluation using a height compass

    Yao, Jianhua; Burns, Joseph E.; Wiese, Tatjana; Summers, Ronald M.


    Vertebral compression fractures can be caused by even minor trauma in patients with pathological conditions such as osteoporosis, varying greatly in vertebral body location and compression geometry. The location and morphology of the compression injury can guide decision making for treatment modality (vertebroplasty versus surgical fixation), and can be important for pre-surgical planning. We propose a height compass to evaluate the axial-plane spatial distribution of compression injury (anterior, posterior, lateral, and central) and distinguish it from physiologic height variations of normal vertebrae. The method includes four steps: spine segmentation and partition, endplate detection, height compass computation, and compression fracture evaluation. A height compass is computed for each vertebra, where the vertebral body is partitioned in the axial plane into 17 cells oriented about concentric rings. In the compass structure, three concentric rings produce a crown-like geometry; the rings are divided into 8 equal-length arcs by rays subtending 8 common central angles. The radius of each ring increases multiplicatively, yielding a central node and two concentric surrounding bands of cells, each divided into octants. The height value for each octant is calculated and plotted against the octants of neighboring vertebrae. The height compass gives an intuitive display of the height distribution and can be used to easily identify fracture regions. Our technique was evaluated on 8 thoraco-abdominal CT scans of patients with reported compression fractures and showed statistically significant differences in height value at the sites of the fractures.
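
    The 17-cell axial partition can be sketched geometrically; a toy mapping from an in-plane offset to a compass cell, with illustrative radii rather than the paper's values:

```python
import numpy as np

def compass_cell(x: float, y: float, r1: float = 1.0, ratio: float = 2.0) -> int:
    """Map an axial-plane offset to one of 17 cells: a central node (0) plus
    two concentric rings of 8 octants each (cells 1-16). The radii r1 and
    r1*ratio are assumed values for illustration."""
    r = np.hypot(x, y)
    if r < r1:
        return 0                               # central node
    ring = 1 if r < r1 * ratio else 2          # ring radius grows multiplicatively
    octant = int(((np.degrees(np.arctan2(y, x)) + 360.0) % 360.0) // 45)
    return 1 + (ring - 1) * 8 + octant

cells = [compass_cell(0.2, 0.1), compass_cell(1.5, 0.0), compass_cell(0.0, 3.0)]
```

    Each vertebral-body pixel would be binned this way, and the cell-wise height statistics compared across neighboring vertebrae.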

  14. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy

    Matsuoka, R.


    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of the target center by the intensity-weighted centroid method. Six images of a white sheet with 30 by 20 black filled circles were utilized in the experiment. The images were acquired by a digital camera, a Canon EOS 20D. The image data were compressed using two compression parameter sets, each comprising a downsampling ratio, a quantization table, and a Huffman code table utilized in the EOS 20D. The experiment results clearly indicate that lossy JPEG compression of an image with chromatic aberrations can produce a significant effect on the measurement accuracy of the target center by the intensity-weighted centroid method. The maximum displacements of the red, green, and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed and compressed image data. In conclusion, since the author considers that displacements caused by lossy JPEG compression cannot be corrected, the author recommends that lossy JPEG compression before recording an image in a digital camera should not be executed in cases of highly precise image measurement using color images acquired by a non-metric digital camera.
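
    The intensity-weighted centroid at the core of the experiment is a simple image-moment computation; a sketch on a synthetic dark-target-on-white patch (the camera and JPEG pipeline are, of course, not reproduced):

```python
import numpy as np

def weighted_centroid(img: np.ndarray) -> tuple:
    """Intensity-weighted centroid of a target patch. The dark target on a
    white sheet is inverted first so the target pixels carry the weight."""
    w = img.max() - img.astype(float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = w.sum()
    return (w * xs).sum() / total, (w * ys).sum() / total

patch = np.full((9, 9), 255.0)   # white background
patch[3:6, 3:6] = 0.0            # dark 3x3 target centred at (4, 4)
cx, cy = weighted_centroid(patch)
```

    Channel-dependent displacements appear when this computation is repeated per color component on JPEG-compressed data, which is what the paper quantifies.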

  15. Comparing image compression methods in biomedical applications

    Libor Hargas


    Full Text Available This article describes compression methods suitable for image processing in biomedical applications. Compression is often realized by reducing irrelevance or redundancy. Lossless and lossy compression methods that can be used to compress images in biomedical applications are described, and these methods are compared on the basis of fidelity criteria.

  16. 29 CFR 1917.154 - Compressed air.


    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...

  17. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Adam B. Sefkow


    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. 
A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been
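
    The imposed velocity tilt mentioned above has a simple ballistic idealization: an ion starting at longitudinal offset z with velocity v(z) = v0 (L - z)/L reaches the focal plane z = L at the common time L/v0. A toy check with assumed numbers (not NDCX parameters):

```python
import numpy as np

v0, L = 1.0, 10.0                   # drift velocity scale and focal distance (assumed)
z = np.linspace(-0.5, 0.5, 101)     # initial bunch extent
v = v0 * (L - z) / L                # ideal linear velocity tilt
t_arrival = (L - z) / v             # ballistic arrival time at z = L
spread = float(np.ptp(t_arrival))   # zero spread: perfect longitudinal compression
```

    Deviations from this ideal tilt are exactly what produce the current pedestal around the compressed peak described in the abstract.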

  18. A maximum in the strength of nanocrystalline copper

    Schiøtz, Jakob; Jacobsen, Karsten Wedel


    We used molecular dynamics simulations with system sizes up to 100 million atoms to simulate plastic deformation of nanocrystalline copper. By varying the grain size between 5 and 50 nanometers, we show that the flow stress, and thus the strength, exhibits a maximum at a grain size of 10 to 15 nanometers. This maximum is due to a shift in the microscopic deformation mechanism from dislocation-mediated plasticity in the coarse-grained material to grain boundary sliding in the nanocrystalline region. The simulations allow us to observe the mechanisms behind the grain-size dependence...

  19. The inverse maximum dynamic flow problem

    BAGHERIAN; Mehri


    We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. An efficient algorithm which uses two maximum dynamic flow algorithms is then proposed to solve the problem.
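
    As background for the kind of subroutine such algorithms build on, the static maximum-flow primitive can be illustrated with Edmonds-Karp; the dynamic networks and the inverse machinery of the paper are not reproduced here:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on an adjacency-matrix capacity graph."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n                    # BFS for a shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total                     # no augmenting path left
        bottleneck, v = float("inf"), t      # capacity of the found path
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                        # push flow, record residuals
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

capacities = [[0, 3, 2, 0],
              [0, 0, 1, 2],
              [0, 0, 0, 3],
              [0, 0, 0, 0]]
flow_value = max_flow(capacities, 0, 3)
```

    The max-flow/min-cut duality used here is the static analogue of the dynamic cut the IMDF problem is converted to.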

  20. Correlated image set compression system based on new fast efficient algorithm of Karhunen-Loeve transform

    Musatenko, Yurij S.; Kurashov, Vitalij N.


    The paper presents an improved version of our method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for such a purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution to every image, and this contribution decreases most quickly among all possible bases. We therefore lossy-compress every KL basis function by Embedded Zerotree Wavelet (EZW) coding, with losses that differ substantially depending on the function's contribution to the images. The paper presents a new fast, low-memory algorithm of KL basis construction for compression of correlated image ensembles that enables our OICKL system to work on common hardware. We also present a procedure for determining the optimal losses of the KL basis functions caused by compression. It uses a modified EZW coder which produces the whole PSNR (bitrate) curve during a single compression pass.
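
    The KL (principal component) stage of such a system can be sketched with an SVD on a vectorised image set; the EZW coding of the basis functions, which is the heart of OICKL, is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random(64)                      # shared structure: a correlated set
images = np.stack([base + 0.01 * rng.random(64) for _ in range(10)])

mean = images.mean(axis=0)
centered = images - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt: KL basis
k = 3                                      # keep only the top-k basis functions
coeffs = centered @ vt[:k].T               # low-dimensional representation
recon = mean + coeffs @ vt[:k]             # reconstruction from k coefficients
max_err = float(np.abs(recon - images).max())
```

    Because the images are highly correlated, a handful of basis functions reconstructs the whole set to within the noise level.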

  1. Lossless compression of hyperspectral images based on the prediction error block

    Li, Yongjun; Li, Yunsong; Song, Juan; Liu, Weijia; Li, Jiaojiao


    A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed, which is used to compress spaceborne hyperspectral data effectively. In order to make full use of the intra-frame and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, which makes it suitable for on-board compression of hyperspectral images.
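
    The bitrate argument is easy to illustrate with a fixed-width model (illustrative only, not the paper's coset coder): if a block is coded with the bit width dictated by its largest prediction error, splitting into smaller blocks isolates rare large errors.

```python
import math

def bits_for_block(errors):
    """Bits to code a block when every sample gets the width needed for the
    block's largest magnitude error (sign-magnitude, illustrative)."""
    m = max(abs(e) for e in errors)
    width = max(1, math.ceil(math.log2(2 * m + 1))) if m else 1
    return len(errors) * width

errors = [1, -2, 0, 1, 60, 2, -1, 0]          # one rare large prediction error
whole = bits_for_block(errors)                # one block: every sample pays for 60
split = sum(bits_for_block(errors[i:i + 4]) for i in (0, 4))
```

    Here the single large error forces 7 bits on all eight samples of the whole block, while the split confines that cost to one sub-block.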

  2. Watermarking of ultrasound medical images in teleradiology using compressed watermark.

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohamad; Ali, Mushtaq


    The open accessibility of Internet-based medical images in teleradiology faces security threats due to nonsecured communication media. This paper discusses spatial-domain watermarking of ultrasound medical images for content authentication, tamper detection, and lossless recovery. For this purpose, the image is divided into two main parts, the region of interest (ROI) and the region of noninterest (RONI). The defined ROI and its hash value are combined as a watermark, losslessly compressed, and embedded into the RONI part of the image at the pixels' least significant bits (LSBs). The lossless compression of the watermark and its embedding at the pixels' LSBs preserve the image's diagnostic and perceptual qualities. Different lossless compression techniques including Lempel-Ziv-Welch (LZW) were tested for watermark compression, and their performance was compared on the basis of bit reduction and compression ratio. LZW was found better than the others and was used in developing the tamper detection and recovery watermarking of medical images (TDARWMI) scheme for ROI authentication, tamper detection, localization, and lossless recovery. TDARWMI's performance was compared and found to be better than that of other watermarking schemes.
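
    A minimal sketch of the ROI/RONI idea, with zlib standing in for LZW and SHA-256 as the hash; the helper names and the 4-byte length header are illustrative choices, not the paper's:

```python
import hashlib
import zlib

import numpy as np

def to_bits(data: bytes) -> np.ndarray:
    return np.unpackbits(np.frombuffer(data, dtype=np.uint8))

def embed(roni: np.ndarray, roi_bytes: bytes) -> np.ndarray:
    """Compress ROI + hash (zlib stands in for LZW) and hide it in RONI LSBs."""
    payload = zlib.compress(roi_bytes + hashlib.sha256(roi_bytes).digest())
    bits = to_bits(len(payload).to_bytes(4, "big") + payload)
    assert bits.size <= roni.size, "RONI too small for the watermark"
    out = roni.copy().ravel()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits   # overwrite LSBs only
    return out.reshape(roni.shape)

def extract(roni: np.ndarray) -> bytes:
    bits = (roni.ravel() & 1).astype(np.uint8)
    n = int.from_bytes(np.packbits(bits[:32]).tobytes(), "big")
    data = zlib.decompress(np.packbits(bits[32:32 + 8 * n]).tobytes())
    roi, digest = data[:-32], data[-32:]
    assert hashlib.sha256(roi).digest() == digest, "ROI tamper detected"
    return roi

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stands in for RONI
roi = b"ROI pixel data to authenticate"
marked = embed(cover, roi)
recovered = extract(marked)
```

    Because only LSBs change, the perceptual impact on the RONI is at most one gray level per pixel, which is the property the paper relies on to preserve diagnostic quality.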

  3. Maximum permissible voltage of YBCO coated conductors

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)


    Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the total length of CC used in the design of an SFCL can be determined.
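
    The design use mentioned in the last sentence is a direct division; for example, using the reported 0.52 V/cm for the 12 mm AMSC tape at 100 ms (the target voltage below is an assumed figure, not from the study):

```python
# Conductor length needed so no element exceeds the maximum permissible voltage
v_max_per_cm = 0.52        # V/cm, 12 mm AMSC CC at 100 ms quench (from the study)
system_voltage = 1000.0    # V, assumed voltage across the SFCL element
required_cm = system_voltage / v_max_per_cm
```

    Any shorter conductor would exceed 0.52 V/cm somewhere along its length during a limiting event and risk I_c degradation or burnout.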

  4. Compressibility Effects in Turbulent Boundary Layers

    CAO Yu-Hui; PEI Jie; CHEN Jun; SHE Zhen-Su


    The local cascade (LC) scheme and space-time correlations are used to study turbulent structures and their convection behaviour in the near-wall region of compressible boundary layers at Ma = 0.8 and 1.3. The convection velocities of the fluctuating velocity components u (streamwise) and v (vertical) are investigated by statistically analysing scale-dependent ensembles of LC structures. The results suggest that u is convected with entropy perturbations while v with an isentropic process. An abnormal thin layer distinct from the conventional viscous sub-layer is discovered in the immediate vicinity of the wall (y+ ≤ 1) in supersonic flows. While streamwise streaks dominate the velocity, density and temperature fluctuations in the region 1 < y+ < 30, the abnormal thin layer is dominated by spanwise streaks in the vertical velocity and density fluctuations, where pressure and density fluctuations are strongly correlated. The LC scheme is proven to be effective in studying the nature of supersonic flows and compressibility effects on wall-bounded motions.

  5. Compressibility, turbulence and high speed flow

    Gatski, Thomas B


    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided and

  6. 30 CFR 75.1730 - Compressed air; general; compressed air systems.


    ... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... pressure has been relieved from that part of the system to be repaired. (d) At no time shall compressed air... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems...

  7. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul


    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demand a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution in this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and the combination of Run Length and Huffman coding, to increase compression ratio. The experimental results achieved show that the proposed method is able to improve the compression ratio by 400 % as compared to that of traditional methods.
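
    The run-length stage of the proposed pipeline collapses the homogeneous runs typical inside an extracted ROI before entropy coding; a minimal sketch (the Huffman stage and the shape-based ROI extraction are omitted):

```python
def rle_encode(data):
    """Collapse runs of identical values into (value, count) pairs."""
    out = []
    for x in data:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)
        else:
            out.append((x, 1))
    return out

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [x for x, n in pairs for _ in range(n)]

data = [0] * 5 + [255] * 3 + [0] * 4     # homogeneous runs, as in a smooth ROI
pairs = rle_encode(data)
restored = rle_decode(pairs)
```

    A Huffman coder would then assign short codes to the most frequent (value, count) pairs, which is where the combined scheme gains its ratio.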

  8. Imperfection analysis of flexible pipe armor wires in compression and bending

    Østergaard, Niels Højen; Lyckegaard, Anders; Andreasen, Jens H.


    The work presented in this paper is motivated by a specific failure mode known as lateral wire buckling occurring in the tensile armor layers of flexible pipes. The tensile armor is usually constituted by two layers of initially helically wound steel wires with opposite lay directions. During pipe laying in ultra deep waters, a flexible pipe experiences repeated bending cycles and longitudinal compression. These loading conditions are known to impose a danger to the structural integrity of the armoring layers, if the compressive load on the pipe exceeds the total maximum compressive load carrying...

  9. Word-Based Text Compression

    Platos, Jan


    Today there are many universal compression algorithms, but in most cases specific data are better compressed by a specific algorithm - JPEG for images, MPEG for movies, etc. For textual documents there are special methods based on the PPM algorithm, or methods with non-character access, e.g. word-based compression. In the past, several papers describing variants of word-based compression using Huffman encoding or the LZW method were published. The subject of this paper is the description of a word-based compression variant based on the LZ77 algorithm. The LZ77 algorithm and its modifications are described in this paper. Moreover, various ways of implementing the sliding window and various possibilities of output encoding are described as well. This paper also includes the implementation of an experimental application, testing of its efficiency, and finding the best combination of all parts of the LZ77 coder in order to achieve the best compression ratio. In conclusion there is a comparison of this implemented application wi...
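
    A toy word-based LZ77 can be sketched in a few lines: tokens (words and whitespace) rather than characters populate the sliding window, and matches become (offset, length) back-references. This is an illustrative variant, not the paper's coder:

```python
import re

def tokenize(text: str):
    """Split into word and whitespace tokens so decoding restores text exactly."""
    return re.findall(r"\S+|\s+", text)

def lz77_words(tokens, window: int = 64):
    """Toy word-based LZ77: emit literal tokens or (offset, length) references."""
    out, i = [], 0
    while i < len(tokens):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # scan the sliding window
            length = 0
            while (i + length < len(tokens)
                   and j + length < i            # no overlap, for simplicity
                   and tokens[j + length] == tokens[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len > 1:
            out.append((best_off, best_len))     # back-reference
            i += best_len
        else:
            out.append(tokens[i])                # literal token
            i += 1
    return out

def decode(codes):
    toks = []
    for c in codes:
        if isinstance(c, tuple):
            off, length = c
            for _ in range(length):
                toks.append(toks[-off])          # copy from the window
        else:
            toks.append(c)
    return "".join(toks)

text = "to be or not to be, that is the question: to be or not"
codes = lz77_words(tokenize(text))
```

    Repeated word sequences ("to be or not") collapse into single references, which is exactly the redundancy character-level LZ77 only captures less directly.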

  10. Superfast maximum-likelihood reconstruction for quantum tomography

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon


    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
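
    The workhorse inside such a projected-gradient scheme is the projection onto the quantum state space (Hermitian, positive semidefinite, unit trace); a generic sketch via eigenvalue simplex projection, not the authors' exact implementation:

```python
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of a real vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.nonzero(u > css / np.arange(1, v.size + 1))[0][-1]
    theta = css[idx] / (idx + 1.0)
    return np.maximum(v - theta, 0.0)

def project_density(h: np.ndarray) -> np.ndarray:
    """Project a matrix onto the density matrices (Hermitian, PSD, trace 1)."""
    h = 0.5 * (h + h.conj().T)          # Hermitian part
    w, v = np.linalg.eigh(h)
    w = project_simplex(w)              # project the spectrum onto the simplex
    return (v * w) @ v.conj().T         # rebuild with the projected spectrum

rho = project_density(np.array([[1.5, 0.2], [0.2, -0.3]]))
```

    Each gradient step on the log-likelihood is followed by this projection, so iterates always remain valid quantum states.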

  11. Generalised maximum entropy and heterogeneous technologies

    Oude Lansink, A.G.J.M.


    Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam

  12. 20 CFR 229.48 - Family maximum.


    ... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...

  13. The maximum rotation of a galactic disc

    Bottema, R


    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously

  14. Duality of Maximum Entropy and Minimum Divergence

    Shinto Eguchi


    Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
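
    For the Boltzmann-Gibbs-Shannon case, the duality admits a compact standard statement (sketched here; the paper's version is for general generator functions such as Tsallis entropy):

```latex
\max_{p}\; H(p) = -\int p(x)\log p(x)\,dx
\quad\text{s.t.}\quad \mathbb{E}_p[t(X)] = \tau,\ \int p(x)\,dx = 1
\;\Longrightarrow\;
p_{\theta}(x) = \exp\!\bigl(\theta^{\top}t(x) - \psi(\theta)\bigr).
```

    Dually, minimizing the Kullback-Leibler divergence from the empirical distribution to this exponential family over θ is exactly maximum-likelihood estimation, which is the relationship the paper extends to general divergence measures.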

  15. Morphological Transform for Image Compression

    Luis Pastor Sanchez Fernandez


    Full Text Available A new method for image compression based on morphological associative memories (MAMs) is presented. We used the MAM to implement a new image transform and applied it at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered a subclass of morphological neural networks. The morphological transform (MT) presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using some transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is the processing speed, whereas the compression rate and the signal-to-noise ratio are competitive with those of conventional transforms.

  16. Compressive Sensing in Communication Systems

    Fyhn, Karsten


    Wireless communication is omnipresent today, but this development has led to the frequency spectrum becoming a limited resource. Furthermore, wireless devices become more and more energy-limited, due to the demand for continual wireless communication of higher and higher amounts of information. The need for cheaper, smarter and more energy-efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...

  17. Compressive Sensing for MIMO Radar

    Yu, Yao; Poor, H Vincent


    Multiple-input multiple-output (MIMO) radar systems have been shown to achieve superior resolution as compared to traditional radar systems with the same number of transmit and receive antennas. This paper considers a distributed MIMO radar scenario, in which each transmit element is a node in a wireless network, and investigates the use of compressive sampling for direction-of-arrival (DOA) estimation. According to the theory of compressive sampling, a signal that is sparse in some domain can be recovered based on far fewer samples than required by the Nyquist sampling theorem. The DOAs of targets form a sparse vector in the angle space, and therefore compressive sampling can be applied for DOA estimation. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than other approaches. This is particularly useful in a distributed scenario, in which the results at each receive node need to be transmitted to a fusion center for further processing.
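
    The angle-grid sparsity argument can be sketched with a generic greedy recovery: orthogonal matching pursuit on a random sensing matrix. This is an illustration of sparse recovery in an angle grid, not the paper's measurement scheme; the dimensions and target positions are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_angles, k = 40, 100, 2
A = rng.standard_normal((n_meas, n_angles)) / np.sqrt(n_meas)  # sensing matrix
x = np.zeros(n_angles)
x[[17, 63]] = [1.0, -0.8]            # two targets on the angle grid (assumed)
y = A @ x                            # compressive measurements: 40 << 100 bins

support, r = [], y.copy()
for _ in range(k):                   # orthogonal matching pursuit
    support.append(int(np.argmax(np.abs(A.T @ r))))      # best-matching atom
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ sol      # residual after refitting the support
residual = float(np.linalg.norm(r))
```

    With these dimensions the true support {17, 63} is typically recovered exactly, illustrating why far fewer samples than angle bins suffice when only a few targets are present.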

  18. Compressive Sensing with Optical Chaos

    Rontani, D.; Choi, D.; Chang, C.-Y.; Locquet, A.; Citrin, D. S.


    Compressive sensing (CS) is a technique to sample a sparse signal below the Nyquist-Shannon limit, yet still enabling its reconstruction. As such, CS permits an extremely parsimonious way to store and transmit large and important classes of signals and images that would be far more data intensive should they be sampled following the prescription of the Nyquist-Shannon theorem. CS has found applications as diverse as seismology and biomedical imaging. In this work, we use actual optical signals generated from temporal intensity chaos from external-cavity semiconductor lasers (ECSL) to construct the sensing matrix that is employed to compress a sparse signal. Since the chaotic time series produced have their relevant dynamics on the 100 ps timescale, our results open the way to ultrahigh-speed compression of sparse signals.

  19. Compressive behavior of fine sand.

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)


    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic compression, and smaller still after dynamic axial loading.

  20. Instability of ties in compression

    Buch-Hansen, Thomas Cornelius


    Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall in a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of them are iterative, since … a design method for wall connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in case of pure compression, which gives an optimal load-bearing capacity. The model is illustrated with examples from practice.

  1. Enhanced pulse compression induced by the interaction between the third-order dispersion and the cross-phase modulation in birefringent fibres

    徐文成; 陈伟成; 张书敏; 罗爱平; 刘颂豪


    In this paper, we report on the enhanced pulse compression due to the interaction between the positive third-order dispersion (TOD) and the nonlinear effect (cross-phase modulation) in birefringent fibres. Polarization soliton compression along the slow axis can be enhanced in a birefringent fibre with positive third-order dispersion, while the polarization soliton compression along the fast axis can be enhanced in a fibre with negative third-order dispersion. Moreover, there is an optimal third-order dispersion parameter for obtaining the optimal pulse compression. Red-shifted initial chirp is helpful to the pulse compression, while blue-shifted chirp is detrimental to it. There is also an optimal chirp parameter that yields maximum pulse compression. The optimal pulse compression for TOD parameters under different N-order solitons is also found.

  2. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Gupta, Rajarshi


    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
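    As a rough illustration of the PCA step described above (not the authors' implementation; the beat data, matrix dimensions and component count here are synthetic assumptions), projecting mean-centred beats onto a few eigenvectors and reconstructing gives a compression whose quality can be checked with a PRDN-style error measure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for aligned single-lead ECG beats (hypothetical sizes):
# 200 beats x 360 samples, each beat a QRS-like bump with varying amplitude.
t = np.linspace(0.0, 1.0, 360)
template = np.exp(-((t - 0.5) ** 2) / 0.002)
amps = 1.0 + 0.3 * rng.standard_normal(200)
beats = amps[:, None] * template + 0.005 * rng.standard_normal((200, 360))

# PCA via SVD of the mean-centred beat matrix.
mean = beats.mean(axis=0)
_, _, Vt = np.linalg.svd(beats - mean, full_matrices=False)

k = 5                                    # principal components retained
coeffs = (beats - mean) @ Vt[:k].T       # compressed representation
recon = coeffs @ Vt[:k] + mean           # reconstruction

# PRDN: percentage root-mean-squared difference, normalized by the
# mean-removed signal energy (one of the quality measures in the paper).
prdn = 100.0 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - mean)

# Far fewer numbers are stored than in the raw beat matrix.
assert coeffs.size + Vt[:k].size + mean.size < beats.size
```

In the paper's error-control mode, `k` and the quantization level would be chosen adaptively until the PRDN and maximum absolute error fall below the configured thresholds.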

  3. Performance analysis of exhaust heat recovery using organic Rankine cycle in a passenger car with a compression ignition engine

    Ghilvacs, M.; Prisecaru, T.; Pop, H.; Apostol, V.; Prisecaru, M.; Pop, E.; Popescu, Gh; Ciobanu, C.; Mohanad, A.; Alexandru, A.


    Compression ignition engines transform approximately 40% of the fuel energy into power available at the crankshaft, while the rest of the fuel energy is lost to the coolant, the exhaust gases and other waste heat. An organic Rankine cycle (ORC) can be used to recover this waste heat. In this paper, the characteristics of a system combining a compression ignition engine with an ORC that recovers the waste heat from the exhaust gases are analyzed. The performance map of the diesel engine is measured on an engine test bench and the heat quantities wasted by the exhaust gases are calculated over the engine's entire operating region. Based on these data, the working parameters of the ORC are defined, and the performance of the combined engine-ORC system is evaluated across this entire region. The results show that the net power of the ORC is 6.304 kW at the rated power point and that a maximum reduction of 10% in brake-specific fuel consumption can be achieved.

  4. Negative compressibility observed in graphene containing resonant impurities

    Chen, X. L.; Wang, L.; Li, W.; Wang, Y.; He, Y. H.; Wu, Z. F.; Han, Y.; Zhang, M. W.; Xiong, W.; Wang, N. [Department of Physics and The William Mong Institute of Nano Science and Technology, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong (China)]


    We observed negative compressibility in monolayer graphene containing resonant impurities under different magnetic fields. Hydrogenous impurities were introduced into graphene by electron beam (e-beam) irradiation. Resonant states located in the energy region of ±0.04 eV around the charge neutrality point were probed in e-beam-irradiated graphene capacitors. Theoretical results based on tight-binding and Lifshitz models agreed well with experimental observations of graphene containing a low concentration of resonant impurities. The interaction between resonant states and Landau levels was detected by varying the applied magnetic field. The interaction mechanisms and enhancement of the negative compressibility in disordered graphene are discussed.

  5. Compression-tracking photoacoustic perfusion and microvascular pressure measurements

    Choi, Min; Zemp, Roger


    We propose a method to measure the blood pressure of small vessels non-invasively and in vivo by combining photoacoustic (PA) imaging with compression ultrasound (US). Using this method, we have demonstrated pressure-lumen-area tracking, as well as estimation of the internal pressure of a vessel located 2 mm deep in tissue. Additionally, reperfusion can be tracked by measuring the total PA signal within a region of interest (ROI) after the compression has been released. The ROI is updated using cross-correlation-based displacement tracking [1]. The change in subcutaneous perfusion rate can be seen when the temperature of the hand of a human subject drops below normal.

  6. Survey of compressed domain audio features and their expressiveness

    Pfeiffer, Silvia; Vincent, Thomas


    We give an overview of existing audio analysis approaches in the compressed domain and incorporate them into a coherent formal structure. After examining the kinds of information accessible in an MPEG-1 compressed audio stream, we describe a coherent approach to determine features from them and report on a number of applications they enable. Most of them aim at creating an index to the audio stream by segmenting the stream into temporally coherent regions, which may be classified into pre-specified types of sounds such as music, speech, speakers, animal sounds, sound effects, or silence. Other applications centre around sound recognition such as gender, beat or speech recognition.

  7. Screening genes that change expression during compression wood formation in Chamaecyparis obtusa.

    Yamashita, Saori; Yoshida, Masato; Yamamoto, Hiroyuki; Okuyama, Takashi


    We screened cDNA fragments that change their expression during compression wood formation by fluorescent differential display (FDD) in five adult trees (Chamaecyparis obtusa (Siebold & Zucc.) Endl.) growing naturally at an angle to the vertical, and in two saplings, one vertical, the other inclined. We conducted anatomical observations and measurements of the released strain of growth stress on the five adult trees to confirm that they formed compression wood on the lower side of the inclined trunks. Based on sequencing results from selected cDNA fragments, we conducted homology searches of the GenBank database and designed specific primers for the 67 screened fragments. Using these primers and different saplings from those used for the FDD screening, we tested the expression levels of each fragment in normal, compression and opposite wood regions of saplings by semiquantitative reverse-transcription polymerase chain reaction. Twenty-four fragments showed reproducible expression patterns, indicating that these fragments changed their expression during compression wood formation. Some fragments showed differential expression between the apical and basal regions of the lower side of the inclined stem in the region of compression wood formation. Anatomical observations indicated more intense compression wood formation in the basal region than in the apical region of the stem, demonstrating a relationship between compression wood development and gene expression.

  8. Definition of the Existence Region of the Solution of the Problem of an Arbitrary Gas-dynamic Discontinuity Breakdown at Interaction of Flat Supersonic Jets with Formation of Two Outgoing Compression Shocks

    Pavel Viktorovich Bulat


    We consider the modern theory of the breakdown of an arbitrary gas-dynamic discontinuity for a space-time dimension equal to two. The regions of existence of solutions for the one-dimensional non-stationary case and the two-dimensional stationary case are compared. The Riemann problem of the breakdown of an arbitrary discontinuity of parameters at the angular collision of two flat flows is considered. The problem is solved in an exact setting. The parameter regions in which the outgoing waves appear as two compression shocks are specified. Solutions with two depression (rarefaction) waves are not covered. The special Mach numbers of the interacting flows dividing the parameter plane into regions with different outgoing discontinuities are given.

  9. Fast, efficient lossless data compression

    Ross, Douglas


    This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.

  10. [Vascular compression of the duodenum].

    Acosta, B; Guachalla, G; Martínez, C; Felce, S; Ledezma, G


    Acute vascular compression of the duodenum is a well-recognized clinical entity, characterized by recurrent vomiting, abdominal distention, weight loss and postprandial distress. The compression is considered to be an effect of the angle formed by the superior mesenteric vessels (or sometimes one of their first two branches) and the vertebrae and paravertebral muscles; the syndrome can be observed when the angle between the superior mesenteric vessels and the aorta is lower than 18 degrees. Duodenojejunostomy is the best treatment, as it was in our patient.

  11. GPU-accelerated compressive holography.

    Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi


    In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ1 and total variation (TV) regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that GPU-based implementation is about 20 times faster than CPU-based implementation.
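    The fast iterative shrinkage-thresholding algorithm (FISTA) named above can be sketched on a toy ℓ1-regularized problem. This is a generic CPU sketch in NumPy of the iteration the paper parallelizes on a GPU; the random measurement matrix, sizes and regularization weight are illustrative assumptions, not the paper's holographic measurement model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy compressive measurement: a 5-sparse signal of length 200 observed
# through 60 random linear measurements.
m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

lam = 0.01                             # l1 regularization weight (illustrative)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

def soft(v, thr):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

# FISTA: gradient step + shrinkage + Nesterov momentum.
x, y, t = np.zeros(n), np.zeros(n), 1.0
for _ in range(500):
    x_new = soft(y - (A.T @ (A @ y - b)) / L, lam / L)
    t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)
    x, t = x_new, t_new
```

Every operation in the loop is a matrix-vector product or an element-wise map, which is why the algorithm maps so naturally onto data-parallel GPU kernels.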

  12. Compressing the Inert Doublet Model

    Blinov, Nikita; Morrissey, David E; de la Puente, Alejandro


    The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. We derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.

  13. Management-oriented analysis of sediment yield time compression

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed


    The understanding of the inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield is often highly dependent on the occurrence of a few large events which produce the majority of the sediment, such as in the Mediterranean. This phenomenon is referred to as time compression, and the relevance of its consideration grows with the increase in magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, which provide high resolution but demand analysis in terms of data amount, required data precision and methods. In order to provide an alternative simplified approach, the monthly and yearly time compressions were evaluated in eight Mediterranean catchments (of the R-OSMed network), representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg•km-2•a-1, and the monthly sediment yield between 0 and ~11600 Mg•km-2•month-1. The catchments' sediment yield was unequally distributed at the inter- and intra-annual scale, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff or sediment yield; (ii) low time compression of rainfall and runoff, but high compression of sediment yield; (iii) low compression of rainfall and high compression of runoff and sediment yield; and (iv) low, medium and high compression of rainfall, runoff and sediment yield, respectively. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the two latter were present. This implies that high sediment yields occurred in

  14. Size dependence of efficiency at maximum power of heat engine

    Izumida, Y.


    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size lies between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle. © EDP Sciences, Società Italiana di Fisica, Springer-Verlag 2013.

  15. Metal Liner Implosions for Cylindrical Convergent Isentropic Compression of Deuterium and its Application to MAGLIF

    Weinwurm, Marcus; Appelbe, Brian; Skidmore, Jonathan; Bland, Simon; Chittenden, Jeremy


    Isentropic Compression Experiments on pulsed power machines in the field of High Energy Density Physics have gained interest in recent years. We describe a method of isentropically compressing cryogenic Deuterium inside a metal liner. Pulse shaping was performed by solving Kidder's homogeneous isentropic compression for cylindrical geometry and extending it to an arbitrary Equation of State. The obtained pulse shape enables us to simulate a cylindrically convergent ramp wave, which quasi-isentropically compresses the Deuterium fill to densities much higher than achievable by using a standard pulse. The effect of Rayleigh-Taylor instabilities upon the peak density achieved is evaluated using the resistive magneto-hydrodynamics code Gorgon for a maximum current of 25 MA. Therefore, isentropic liner implosions are a promising technique for recreating the conditions present in the interiors of gas giants. We applied this technique to the High-Gain Magnetized Liner Inertial Fusion (MAGLIF) scheme [1]. There a metal liner is filled with DT gas surrounded by a layer of DT ice. We show how the current pulse can be shaped in order to isentropically compress the DT ice layer. By doing so, we keep the fuel at low temperature. This maximises the compression of the DT ice layer, and increases rho-r at stagnation. Burn wave propagation in the isentropically compressed fuel is compared to propagation in fuel compressed by a standard current pulse. [4pt] [1] S.A. Slutz and R. A. Vesey, Phys. Rev. Lett. 108, 025003 (2012)

  16. Evaluation of adhesive and compressive strength of glass ionomer cements.

    Ramashanker; Singh, Raghuwar D; Chand, Pooran; Jurel, Sunit Km; Tripathi, Shuchi


    The aim of the study was to assess, compare and evaluate the adhesive strength and compressive strength of different brands of glass ionomer cements bonded to a ceramometal alloy. (A) Glass ionomer cements: GC Fuji II (GC Corporation, Tokyo), Chem Flex (Dentsply DeTrey, Germany), Glass ionomer FX (Shofu-11, Japan), MR dental (MR dental suppliers Pvt Ltd, England). (B) Ceramometal alloy (Ni-Cr: Wiron 99; Bego, Bremen, Germany). (C) Cold-cure acrylic resin. (E) Temperature-cum-humidity control chamber. (F) Instron Universal Testing Machine. Four different types of glass ionomer cements were used in the study. For each type of glass ionomer cement, 15 specimens each were made to evaluate the compressive strength and adhesive strength, respectively. The 15 specimens were further divided into three subgroups of five specimens. For compressive strength, specimens were tested at 2, 4 and 12 h using the Instron Universal Testing Machine. To evaluate the adhesive strength, specimens were surface treated with a diamond bur, silicone carbide bur or sandblasting and tested on the Instron Universal Testing Machine. It was concluded from the study that both the compressive strength and the adhesive bond strength to the ceramometal alloy were highest for the MR dental glass ionomer cement compared with the other glass ionomer cements. Sandblasting surface treatment of the ceramometal alloy was found to be comparatively more effective for the adhesive bond strength between alloy and glass ionomer cement.

  17. Sulcus formation in a compressed elastic half space

    Biggins, John; Mahadevan, L.


    When a block of rubber, biological tissue or other soft material is subject to substantial compression, its surfaces undergo a folding instability. Rather than having a smooth profile, these folds contain cusps and hence have been called creases or sulci rather than wrinkles. The stability of a compressed surface was first investigated by Biot (1965), assuming the strains associated with the instability were small. However, the compression threshold predicted with this approach is substantially too high. I will introduce a family of analytic area-preserving maps that contain cusps (and hence points of infinite strain) that save energy before the linear stability threshold is reached, even at vanishing amplitude. This establishes that there is a region before the linear stability threshold where the system is unstable to infinitesimal perturbations, but that this instability is quintessentially non-linear and cannot be found with linear strain elasticity.

  18. Compression and Progressive Retrieval of Multi-Dimensional Sensor Data

    Lorkowski, P.; Brinkhoff, T.


    Since the emergence of sensor data streams, increasing amounts of observations have to be transmitted, stored and retrieved. Performing these tasks at the granularity of single points would mean an inappropriate waste of resources. Thus, we propose a concept that performs a partitioning of observations by spatial, temporal or other criteria (or a combination of them) into data segments. We exploit the resulting proximity (according to the partitioning dimension(s)) within each data segment for compression and efficient data retrieval. While in principle allowing lossless compression, it can also be used for progressive transmission with increasing accuracy wherever incremental data transfer is reasonable. In a first feasibility study, we apply the proposed method to a dataset of ARGO drifting buoys covering large spatio-temporal regions of the world's oceans and compare the achieved compression ratio to other formats.

  19. Cascade of kinetic energy in three-dimensional compressible turbulence.

    Wang, Jianchun; Yang, Yantao; Shi, Yipeng; Xiao, Zuoli; He, X T; Chen, Shiyi


    The conservative cascade of kinetic energy is established using both Fourier analysis and a new exact physical-space flux relation in a simulated compressible turbulence. The subgrid scale (SGS) kinetic energy flux of the compressive mode is found to be significantly larger than that of the solenoidal mode in the inertial range, which is the main physical origin for the occurrence of Kolmogorov's -5/3 scaling of the energy spectrum in compressible turbulence. The perfect antiparallel alignment between the large-scale strain and the SGS stress leads to highly efficient kinetic energy transfer in shock regions, which is a distinctive feature of shock structures in comparison with vortex structures. The rescaled probability distribution functions of SGS kinetic energy flux collapse in the inertial range, indicating a statistical self-similarity of kinetic energy cascades.

  20. Worst-case Compressibility of Discrete and Finite Distributions

    Agnihotri, Samar


    In the worst-case distributed source coding (DSC) problem of [1], the smaller cardinality of the support-set describing the correlation in informant data may neither imply that fewer informant bits are required nor that fewer informants need to be queried to finish the data-gathering at the sink. It is important to formally address these observations for two reasons: first, to develop good worst-case information measures and second, to perform meaningful worst-case information-theoretic analysis of various distributed data-gathering problems. Towards this goal, we introduce the notions of bit-compressibility and informant-compressibility of support-sets. We consider DSC and distributed function computation problems and provide results on computing the bit- and informant-compressibility regions of the support-sets as a function of their cardinality.

  1. Wavelet and wavelet packet compression of electrocardiograms.

    Hilton, M L


    Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECG's are clinically useful.

  2. Lithological Uncertainty Expressed by Normalized Compression Distance

    Jatnieks, J.; Saks, T.; Delina, A.; Popovs, K.


    Lithological composition and structure of the Quaternary deposits is highly complex and heterogeneous in nature, especially as described in borehole log data. This work aims to develop a universal solution for quantifying uncertainty based on mutual information shared between the borehole logs. This approach presents tangible information directly useful in generalization of the geometry and lithology of the Quaternary sediments for use in regional groundwater flow models, as a qualitative estimate of lithological uncertainty involving thousands of borehole logs would be humanly impossible due to the amount of raw data involved. Our aim is to improve parametrization of recharge in the Quaternary strata. This research however holds appeal for other areas of reservoir modelling, as demonstrated in the 2011 paper by Wellmann & Regenauer-Lieb. For our experiments we used extracts of the Quaternary strata from the general-purpose geological borehole log database maintained by the Latvian Environment, Geology and Meteorology Centre, spanning the territory of Latvia. Lithological codes were generalised into 2 aggregation levels consisting of 5 and 20 rock types respectively. Our calculation of borehole log similarity relies on the concept of information distance proposed by Bennett et al. in 1998. This was developed into a practical data mining application by Cilibrasi in his 2007 dissertation. The resulting implementation, the CompLearn utilities, provides a calculation of the Normalized Compression Distance (NCD) metric. It relies on universal data compression algorithms for estimating mutual information content in the data. This approach has proven to be universally successful for parameter-free data mining in disciplines from molecular biology to network intrusion monitoring. To improve this approach for use in geology it is beneficial to apply several transformations as pre-processing steps to the borehole log data. Efficiency of text stream compressors, such as
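    The NCD metric the abstract describes can be computed with any off-the-shelf compressor. A minimal sketch using Python's zlib (the lithology strings below are hypothetical stand-ins for generalized borehole log codes, not data from the Latvian database):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    using zlib at maximum compression level as the reference compressor."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Hypothetical borehole logs encoded as strings of generalized lithology codes.
log_a = b"sand sand clay clay till sand gravel " * 20
log_b = b"sand sand clay clay till sand gravel " * 19 + b"peat clay till sand"
log_c = b"dolomite limestone marl dolomite limestone " * 25

# Logs sharing structure compress well together, giving a smaller distance.
assert ncd(log_a, log_b) < ncd(log_a, log_c)
```

Because the compressor only estimates Kolmogorov complexity, NCD values can slightly exceed the ideal [0, 1] range; in practice only the relative ordering of distances matters for clustering logs.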

  3. Maxwell's Demon and Data Compression

    Hosoya, Akio; Shikano, Yutaka


    In an asymmetric Szilard engine model of Maxwell's demon, we show the equivalence between the information-theoretical and thermodynamic entropies when the demon erases information optimally. The work gained by the engine can be exactly canceled out by the work necessary to reset the demon's memory after optimal data compression à la Shannon before the erasure.
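    The "optimal data compression à la Shannon" invoked above bounds the demon's memory record at the source entropy in bits per symbol. A minimal sketch (the biased memory tape is an illustrative example, not the paper's model):

```python
import math
from collections import Counter

def shannon_entropy(symbols) -> float:
    """Average bits per symbol achievable by an optimal (Shannon) code."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A biased binary memory tape (p = 0.9) compresses below 1 bit per symbol,
# which is what reduces the erasure cost in the asymmetric Szilard engine.
tape = "0" * 900 + "1" * 100
h = shannon_entropy(tape)
assert 0.0 < h < 1.0    # H(0.9) is about 0.469 bits/symbol
```

Landauer's principle then prices the erasure at kT·ln2 per bit, so compressing the record before erasure is exactly what balances the engine's work gain.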

  4. Grid-free compressive beamforming

    Xenaki, Angeliki; Gerstoft, Peter


    The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high...

  5. LIDAR data compression using wavelets

    Pradhan, B.; Mansor, Shattri; Ramli, Abdul Rahman; Mohamed Sharif, Abdul Rashid B.; Sandeep, K.


    The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LIDAR data compression. A newly developed data compression approach to approximate the LIDAR surface with a series of non-overlapping triangles is presented. Triangulated Irregular Networks (TINs) are the most common form of digital surface model, consisting of elevation values with x, y coordinates that make up triangles. But over the years the TIN data representation has become a case in point for many researchers due to its large data size. Compression of TINs is needed for efficient management of large data and good surface visualization. This approach covers the following steps: First, using a Delaunay triangulation, an efficient algorithm is developed to generate the TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for TINs is applied in two steps, namely splitting and elevation. In the splitting step, a triangle is divided into several sub-triangles, and the elevation step is used to 'modify' the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second-generation wavelets. The quality of geographical surface representation after using the proposed technique is compared with the original LIDAR data. The results show that this method can be used for significant reduction of data set size.
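    The split-predict-update structure underlying the lifting scheme can be illustrated in one dimension. This is a generic linear-predictor lifting step on a periodic signal (an illustrative sketch, not the paper's TIN interpolation filter):

```python
import numpy as np

def lifting_forward(x):
    """One level of a lifting-scheme wavelet transform with a linear predictor.
    Split into even/odd samples, predict odds from even neighbours, update evens."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    # Predict: each odd sample from the average of its even neighbours (periodic).
    pred = (even + np.roll(even, -1)) / 2.0
    detail = odd - pred                               # small where the signal is smooth
    # Update: adjust evens so the coarse signal preserves the running average.
    approx = even + (detail + np.roll(detail, 1)) / 4.0
    return approx, detail

def lifting_inverse(approx, detail):
    """Exactly undo the lifting steps in reverse order."""
    even = approx - (detail + np.roll(detail, 1)) / 4.0
    odd = detail + (even + np.roll(even, -1)) / 2.0
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

# A smooth periodic signal: details are tiny, so they quantize/compress well,
# while reconstruction is exact — lifting steps are invertible by construction.
x = np.sin(2.0 * np.pi * np.arange(64) / 64)
a, d = lifting_forward(x)
assert np.allclose(lifting_inverse(a, d), x)
```

The TIN version in the paper follows the same pattern, with the "split" acting on triangles (subdividing into sub-triangles) and the prediction interpolating elevations from neighbouring vertices.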

  6. Compressed Blind De-convolution

    Saligrama, V


    Suppose the signal x is realized by driving a k-sparse signal u through an arbitrary unknown stable discrete linear time-invariant system H. These types of processes arise naturally in reflection seismology. In this paper we are interested in several problems: (a) Blind deconvolution: can we recover both the filter H and the sparse signal u from noisy measurements? (b) Compressive sensing: is x compressible in the conventional sense of compressed sensing? Namely, can x, u and H be reconstructed from a sparse set of measurements? We develop novel L1 minimization methods to solve both cases and establish sufficient conditions for exact recovery for the case when the unknown system H is auto-regressive (i.e., all-pole) of a known order. In the compressed sensing/sampling setting it turns out that both H and x can be reconstructed from O(k log(n)) measurements under certain technical conditions on the support structure of u. Our main idea is to pass x through a linear time invariant system G and collect O(k lo...

  7. Compressing spatio-temporal trajectories

    Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian


    A trajectory is a sequence of locations, each associated with a timestamp, describing the movement of a point. Trajectory data is becoming increasingly available and the size of recorded trajectories is getting larger. In this paper we study the problem of compressing planar trajectories such tha...
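A common baseline for this problem is Douglas-Peucker simplification driven by the synchronized (time-aware) Euclidean distance; the sketch below is a minimal Python version under that assumption, not necessarily the algorithm studied in the paper.

```python
def sed(p, a, b):
    """Synchronized Euclidean distance of point p=(t,x,y) from segment a-b."""
    (t, x, y), (ta, xa, ya), (tb, xb, yb) = p, a, b
    r = (t - ta) / (tb - ta) if tb != ta else 0.0
    xs, ys = xa + r * (xb - xa), ya + r * (yb - ya)  # position expected at time t
    return ((x - xs) ** 2 + (y - ys) ** 2) ** 0.5

def compress(traj, eps):
    """Douglas-Peucker-style simplification keeping the SED error within eps."""
    if len(traj) < 3:
        return list(traj)
    # find the point deviating most from the straight-line, constant-speed model
    i, d = max(((i, sed(traj[i], traj[0], traj[-1])) for i in range(1, len(traj) - 1)),
               key=lambda t: t[1])
    if d <= eps:
        return [traj[0], traj[-1]]
    return compress(traj[:i + 1], eps)[:-1] + compress(traj[i:], eps)
```

A constant-speed straight-line trajectory collapses to its two endpoints, while any detour larger than eps is preserved.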

  8. Range Compressed Holographic Aperture Ladar


    digital holography, laser, active imaging, remote sensing, laser imaging. ...slow speed tunable lasers, while relaxing the need to precisely track the transceiver or target motion. In the following section we describe a scenario... contrast targets. As shown in Figure 28, augmenting holographic ladar with range compression relaxes the dependence of image reconstruction on...

  9. Compressive passive millimeter wave imager

    Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C


    A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
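A toy Python sketch of the idea: measure a flattened scene against rows of a Sylvester Hadamard matrix, keep only a subset of the acquisitions, and reconstruct with the (scaled) transpose. Choosing the subset by coefficient magnitude requires knowing the full measurement set, so it is purely illustrative; a real imager fixes its mask subset a priori.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)
n = 64
scene = rng.random(n)                      # flattened toy "image"
H = hadamard(n)                            # rows play the role of Hadamard masks
y = H @ scene                              # full set of acquisitions
keep = np.argsort(-np.abs(y))[: n // 4]    # illustrative: keep the 25% strongest
y_sub = np.zeros(n)
y_sub[keep] = y[keep]
recon = H.T @ y_sub / n                    # H @ H.T == n * I, so this inverts
```

Dropping measurements leaves a bounded reconstruction error equal to the energy of the discarded coefficients.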

  10. An adaptive fusion approach for infrared and visible images based on NSCT and compressed sensing

    Zhang, Qiong; Maldague, Xavier


    A novel nonsubsampled contourlet transform (NSCT) based image fusion approach, implementing an adaptive-Gaussian (AG) fuzzy membership method, a compressed sensing (CS) technique, and a total variation (TV) based gradient descent reconstruction algorithm, is proposed for the fusion computation of infrared and visible images. Compared with wavelet, contourlet, or any other multi-resolution analysis method, NSCT has many evident advantages, such as multi-scale, multi-direction, and translation invariance. As is known, a fuzzy set is characterized by its membership function (MF), while the commonly known Gaussian fuzzy membership degree can be introduced to establish an adaptive control of the fusion processing. The compressed sensing technique can sparsely sample the image information at a given sampling rate, and the sparse signal can be recovered by solving a convex problem employing gradient descent based iterative algorithm(s). In the proposed fusion process, the pre-enhanced infrared image and the visible image are decomposed into low-frequency subbands and high-frequency subbands, respectively, via the NSCT method as a first step. The low-frequency coefficients are fused using the adaptive regional average energy rule; the highest-frequency coefficients are fused using the maximum absolute selection rule; the other high-frequency coefficients are sparsely sampled, fused using the adaptive-Gaussian regional standard deviation rule, and then recovered by employing the total variation based gradient descent recovery algorithm. Experimental results and human visual perception illustrate the effectiveness and advantages of the proposed fusion approach. The efficiency and robustness are also analyzed and discussed through different evaluation methods, such as the standard deviation, Shannon entropy, root-mean-square error, mutual information and edge-based similarity index.
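Two of the fusion rules named above can be sketched directly in Python; the regional-energy rule below is simplified to whole-block energy (the paper uses sliding windows), so treat both as illustrative stand-ins.

```python
import numpy as np

def fuse_max_abs(c1, c2):
    # maximum-absolute-selection rule (used for the highest-frequency subband)
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

def fuse_regional_energy(c1, c2):
    # simplified average-energy rule: keep the subband whose (here: whole-block)
    # mean energy is larger; a real implementation uses a sliding window
    return c1 if np.mean(c1 ** 2) >= np.mean(c2 ** 2) else c2
```

The max-abs rule operates per coefficient, while the energy rule makes a regional decision.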

  11. Perceptually tuned JPEG coder for echocardiac image compression.

    Al-Fahoum, Amjed S; Reza, Ali M


    In this work, we propose an efficient framework for compressing and displaying medical images. Image compression for medical applications, due to applicable Digital Imaging and Communications in Medicine (DICOM) requirements, is limited to the standard discrete cosine transform-based Joint Photographic Experts Group (JPEG) format. The objective of this work is to develop a set of quantization tables (Q tables) for compression of a specific class of medical image sequences, namely echocardiac. The main issue of concern is to achieve a Q table that matches the specific application and can linearly change the compression rate by adjusting the gain factor. This goal is achieved by considering the region of interest, optimum bit allocation, human visual system constraints, and optimum coding technique. These parameters are jointly optimized to design a Q table that works robustly for a category of medical images. Application of this approach to echocardiac images shows high subjective and quantitative performance. The proposed approach exhibits objectively a 2.16-dB improvement in the peak signal-to-noise ratio and subjectively a 25% improvement over the most widely used compression techniques.
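The gain-factor mechanism can be illustrated with a flat, hypothetical Q table (not the tuned echocardiac table from the paper): scaling the table by the gain changes how many DCT coefficients quantize to zero, and bounds the reconstruction error by half the effective step.

```python
import numpy as np

# hypothetical flat 8x8 quantization table; real JPEG tables vary per frequency
Q = np.full((8, 8), 16.0)

def quantize(dct_block, Q, gain):
    return np.round(dct_block / (Q * gain))

def dequantize(symbols, Q, gain):
    return symbols * Q * gain

rng = np.random.default_rng(1)
block = rng.normal(0.0, 40.0, (8, 8))   # stand-in for a DCT coefficient block
coarse = quantize(block, Q, gain=4.0)   # higher gain -> more zeros, lower rate
fine = quantize(block, Q, gain=0.5)     # lower gain -> fewer zeros, higher quality
```

Any coefficient surviving the coarse step also survives the fine step, so the zero count (and hence the rate) moves monotonically with the gain.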

  12. Lossless data compression for infrared hyperspectral sounders: an update

    Huang, Bormin; Huang, Hung-Lung A.; Ahuja, Alok; Schmit, Timothy J.; Heymann, Roger W.


    The compression of hyperspectral sounder data is beneficial for more efficient archiving and transfer given its large 3-D volume. Moreover, since physical retrieval of geophysical parameters from hyperspectral sounder data is a mathematically ill-posed problem that is sensitive to the error of the data, lossless or near-lossless compression is desired. This paper provides an update on applications of state-of-the-art 2D and 3D lossless compression algorithms such as 3D EZW, 3D SPIHT, 2D JPEG2000, 2D JPEG-LS and 2D CALIC for hyperspectral sounder data. In addition, in order to better explore the correlations between the remote spectral regions affected by the same type of atmospheric absorbing constituents or clouds, the Bias-Adjusted Reordering (BAR) scheme is presented, which reorders the data such that the bias-adjusted distance between any two neighboring vectors is minimized. This scheme coupled with any of the state-of-the-art compression algorithms produces significant compression gains.
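A minimal greedy sketch of the Bias-Adjusted Reordering idea in Python: starting from the first vector, repeatedly append the remaining vector whose difference from the current one, after removing its mean (the bias), is smallest. The exact cost function and search strategy of the paper's BAR scheme may differ.

```python
import numpy as np

def bar_reorder(vectors):
    """Greedy sketch of Bias-Adjusted Reordering: chain vectors so each
    successor minimizes the residual distance after subtracting the mean
    (bias) of the difference."""
    order = [0]
    remaining = list(range(1, len(vectors)))
    while remaining:
        cur = vectors[order[-1]]
        def bias_adjusted_dist(j):
            d = vectors[j] - cur
            return float(np.abs(d - d.mean()).sum())  # remove constant bias first
        nxt = min(remaining, key=bias_adjusted_dist)
        order.append(nxt)
        remaining.remove(nxt)
    return order
```

A vector that differs from its predecessor only by a constant offset has zero bias-adjusted distance and is placed adjacent to it, which is exactly what helps the downstream predictor.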

  13. The Compressed Baryonic Matter Experiment at FAIR

    Heuser, Johann M.


    The Compressed Baryonic Matter (CBM) experiment will explore the phase diagram of strongly interacting matter in the region of high net baryon densities. The experiment is being laid out for nuclear collision rates from 0.1 to 10 MHz to access a unique wide spectrum of probes, including rarest particles like hadrons containing charm quarks, or multi-strange hyperons. The physics programme will be performed with ion beams of energies up to 45 GeV/nucleon. Those will be delivered by the SIS-300 synchrotron at the completed FAIR accelerator complex. Parts of the research programme can already be addressed with the SIS-100 synchrotron at the start of FAIR operation in 2018. The initial energy range of up to 11 GeV/nucleon for heavy nuclei, 14 GeV/nucleon for light nuclei, and 29 GeV for protons, allows addressing the equation of state of compressed nuclear matter, the properties of hadrons in a dense medium, the production and propagation of charm near the production threshold, and exploring the third, strange dimension of the nuclide chart. In this article we summarize the CBM physics programme, the preparation of the detector, and give an outline of the recently begun construction of the Facility for Antiproton and Ion Research.

  14. Shock compression of [001] single crystal silicon

    Zhao, S.; Hahn, E. N.; Kad, B.; Remington, B. A.; Bringa, E. M.; Meyers, M. A.


    Silicon is ubiquitous in our advanced technological society, yet our current understanding of changes to its mechanical response at extreme pressures and strain-rates is far from complete. This is due to its brittleness, making recovery experiments difficult. High-power, short-duration, laser-driven, shock compression and recovery experiments on [001] silicon (using impedance-matched momentum traps) unveiled remarkable structural changes observed by transmission electron microscopy. As laser energy increases, corresponding to an increase in peak shock pressure, the following plastic responses are observed: surface cleavage along {111} planes, dislocations and stacking faults; bands of amorphized material initially forming on crystallographic orientations consistent with dislocation slip; and coarse regions of amorphized material. Molecular dynamics simulations approach equivalent length and time scales to laser experiments and reveal the evolution of shock-induced partial dislocations and their crucial role in the preliminary stages of amorphization. Application of coupled hydrostatic and shear stresses produces amorphization below the hydrostatically determined critical melting pressure under dynamic shock compression.

  15. A dual method for maximum entropy restoration

    Smith, C. B.


    A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.

  16. Maximum Throughput in Multiple-Antenna Systems

    Zamani, Mahdi


    The point-to-point multiple-antenna channel is investigated in an uncorrelated block-fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
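The equal-power result can be checked numerically. The Monte-Carlo sketch below (a simple illustration, not the paper's derivation) estimates the ergodic rate of an n_tx-by-1 Rayleigh channel with equal power and uncorrelated signals; with total power fixed, more antennas concentrate the effective gain and raise the rate, by Jensen's inequality on the concave log.

```python
import numpy as np

def ergodic_throughput(n_tx, snr, n_trials=100000, seed=0):
    """Monte-Carlo ergodic rate of an n_tx x 1 Rayleigh block-fading channel
    with no CSI at the transmitter: equal power split, uncorrelated signals."""
    rng = np.random.default_rng(seed)
    h = (rng.standard_normal((n_trials, n_tx))
         + 1j * rng.standard_normal((n_trials, n_tx))) / np.sqrt(2)
    gain = (snr / n_tx) * np.sum(np.abs(h) ** 2, axis=1)  # effective SNR per block
    return float(np.mean(np.log2(1 + gain)))

r1 = ergodic_throughput(1, snr=10.0)   # single antenna
r4 = ergodic_throughput(4, snr=10.0)   # equal power over four antennas
```

Both rates sit below the no-fading bound log2(1 + SNR), and the four-antenna rate is strictly higher at the same total power.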

  17. Photoemission spectromicroscopy with MAXIMUM at Wisconsin

    Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))


    We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.

  18. Maximum-likelihood method in quantum estimation

    Paris, M G A; Sacchi, M F


    The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.

  19. The maximum entropy technique. System's statistical description

    Belashev, B Z


    The maximum entropy technique (MENT) is applied for searching the distribution functions of physical values. MENT naturally takes into consideration the demand of maximum entropy, the characteristics of the system, and the connection conditions. This makes it possible to apply MENT to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.

  20. 19 CFR 114.23 - Maximum period.


    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...

  1. Maximum-Likelihood Detection Of Noncoherent CPM

    Divsalar, Dariush; Simon, Marvin K.


    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. Structures of receivers derived from a particular interpretation of maximum-likelihood metrics. Receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following the front ends have structures, the complexity of which would depend on N.


    Pandya A M


    Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was considered as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
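The demarking-point calculation can be sketched as follows, assuming the common convention that a value beyond the opposite sex's mean ± 3 SD is classified with certainty; the abstract does not give the paper's exact formula, and the sample lengths below are hypothetical, not the study's raw data.

```python
import numpy as np

def demarking_points(male, female):
    # assumed convention: beyond mean +/- 3*SD of the opposite sex -> certain
    dp_male = float(np.mean(female) + 3 * np.std(female, ddof=1))  # longer: definitely male
    dp_female = float(np.mean(male) - 3 * np.std(male, ddof=1))    # shorter: definitely female
    return dp_male, dp_female

# hypothetical femoral lengths (mm) for illustration only
male = np.array([450.0, 460.0, 455.0, 445.0])
female = np.array([415.0, 420.0, 410.0, 425.0])
dp_m, dp_f = demarking_points(male, female)
```

Only the tails beyond these cut-offs are sexed with certainty, which is why the method identifies only a small percentage of bones.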

  3. Short-pulse, compressed ion beams at the Neutralized Drift Compression Experiment

    Seidl, P. A.; Barnard, J. J.; Davidson, R. C.; Friedman, A.; Gilson, E. P.; Grote, D.; Ji, Q.; Kaganovich, I. D.; Persaud, A.; Waldron, W. L.; Schenkel, T.


    We have commenced experiments with intense short pulses of ion beams on the Neutralized Drift Compression Experiment (NDCX-II) at Lawrence Berkeley National Laboratory, with 1-mm beam spot size within 2.5 ns full-width at half maximum. The ion kinetic energy is 1.2 MeV. To enable the short pulse duration and mm-scale focal spot radius, the beam is neutralized in a 1.5-meter-long drift compression section following the last accelerator cell. A short-focal-length solenoid focuses the beam in the presence of the volumetric plasma that is near the target. In the accelerator, the line-charge density increases due to the velocity ramp imparted on the beam bunch. The scientific topics to be explored are warm dense matter, the dynamics of radiation damage in materials, and intense beam and beam-plasma physics including select topics of relevance to the development of heavy-ion drivers for inertial fusion energy. Below the transition to melting, the short beam pulses offer an opportunity to study the multi-scale dynamics of radiation-induced damage in materials with pump-probe experiments, and to stabilize novel metastable phases of materials when short-pulse heating is followed by rapid quenching. First experiments used a lithium ion source; a new plasma-based helium ion source shows much greater charge delivered to the target.

  4. Short-Pulse, Compressed Ion Beams at the Neutralized Drift Compression Experiment

    Seidl, Peter A; Davidson, Ronald C; Friedman, Alex; Gilson, Erik P; Grote, David; Ji, Qing; Kaganovich, I D; Persaud, Arun; Waldron, William L; Schenkel, Thomas


    We have commenced experiments with intense short pulses of ion beams on the Neutralized Drift Compression Experiment (NDCX-II) at Lawrence Berkeley National Laboratory, with 1-mm beam spot size within 2.5 ns full-width at half maximum. The ion kinetic energy is 1.2 MeV. To enable the short pulse duration and mm-scale focal spot radius, the beam is neutralized in a 1.5-meter-long drift compression section following the last accelerator cell. A short-focal-length solenoid focuses the beam in the presence of the volumetric plasma that is near the target. In the accelerator, the line-charge density increases due to the velocity ramp imparted on the beam bunch. The scientific topics to be explored are warm dense matter, the dynamics of radiation damage in materials, and intense beam and beam-plasma physics including select topics of relevance to the development of heavy-ion drivers for inertial fusion energy. Below the transition to melting, the short beam pulses offer an opportunity to study the multi-scale dynam...

  5. Semantic Source Coding for Flexible Lossy Image Compression

    Phoha, Shashi; Schmiedekamp, Mendel


    Semantic Source Coding for Lossy Video Compression investigates mission-oriented lossy image compression, developing methods that use different compression levels for different portions...

  6. Regionalism, Regionalization and Regional Development

    Liviu C. Andrei


    Sustained development is a concept that associates other concepts in its turn in EU practice, e.g. regionalism, regionalization and the afferent policies, including structural policies. The text below, dedicated to integration concepts, will be limited to regionalization, an aspect typical of Europe and the EU. Two aspects come up to strengthen this field of ideas: the regionalism-regionalization-regional development triplet has both its own history and a precise individual outline of terms.

  7. Dynamic compressive properties of bovine knee layered tissue

    Nishida, Masahiro; Hino, Yuki; Todo, Mitsugu


    In Japan, the most common articular disease is knee osteoarthritis. Among many treatment methodologies, tissue engineering and regenerative medicine have recently received a lot of attention. In this field, cells and scaffolds are important, both ex vivo and in vivo. From the viewpoint of effective treatment, in addition to histological features, the compatibility of mechanical properties is also important. In this study, the dynamic and static compressive properties of bovine articular cartilage-cancellous bone layered tissue were measured using a universal testing machine and a split Hopkinson pressure bar method. The compressive behaviors of bovine articular cartilage-cancellous bone layered tissue were examined. The effects of strain rate on the maximum stress and the slope of stress-strain curves of the bovine articular cartilage-cancellous bone layered tissue were discussed.

  8. An Image Coder for Lossless and Near Lossless Compression

    MEN Chaoguang; LI Xiukun; ZHAO Debin; YANG Xiaozong


    In this paper, we propose a new image coder (DACLIC) for lossless and near-lossless image compression. The redundancy removal in DACLIC (Direction and context-based lossless/near-lossless image coder) is achieved by block direction prediction and context-based error modeling. A quadtree coder and a post-processing technique in DACLIC are also described. Experiments show that DACLIC has higher compression efficiency than the ISO standard LOCO-I (Low complexity lossless compression for images). For example, DACLIC is superior to LOCO-I by 0.12 bpp, 0.13 bpp and 0.21 bpp when the maximum absolute tolerant error n = 0, 5 and 10 for the 512 × 512 image "Lena". In terms of computational complexity, DACLIC has marginally higher encoding complexity than LOCO-I but is comparable to LOCO-I in decoding complexity.
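Near-lossless coding with a maximum absolute tolerant error n is typically realized by uniform quantization of prediction residuals with step 2n+1; for integer residuals the reconstruction error is then at most n, and n = 0 degenerates to lossless. A minimal Python sketch of that mechanism (not DACLIC itself):

```python
import numpy as np

def near_lossless(residuals, n):
    """Quantize integer residuals with step 2n+1; max abs error <= n."""
    step = 2 * n + 1
    symbols = np.round(residuals / step).astype(int)  # these get entropy-coded
    reconstruction = symbols * step
    return symbols, reconstruction
```

The odd step size guarantees no rounding ties occur on integer inputs, so the error bound is exact.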

  9. Compressed Sensing Based Fingerprint Identification for Wireless Transmitters

    Caidan Zhao


    Most existing fingerprint identification techniques are unable to distinguish different wireless transmitters whose emitted signals are highly attenuated, propagate over long distances, and have strongly similar transient waveforms. Therefore, this paper proposes a new method to identify different wireless transmitters based on compressed sensing. A data acquisition system is designed to capture the wireless transmitter signals. A complex analytical wavelet transform is used to obtain the envelope of the transient signal, and the corresponding features are extracted by using compressed sensing theory. Feature selection utilizing minimum redundancy maximum relevance (mRMR) is employed to obtain the optimal feature subsets for identification. The results show that the proposed method is more efficient for the identification of wireless transmitters with similar transient waveforms.
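The mRMR selection step can be sketched for discrete features: greedily pick the feature with the highest mutual information to the target minus its average mutual information to the already-selected set. The plug-in MI estimator below is the simplest possible choice, used here only for illustration.

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in estimate of mutual information (bits) between discrete sequences."""
    mi = 0.0
    for a in np.unique(x):
        px = np.mean(x == a)
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * np.mean(y == b)))
    return mi

def mrmr(features, target, k):
    """Greedy minimum-redundancy-maximum-relevance feature selection."""
    selected = []
    while len(selected) < k:
        def score(j):
            relevance = mutual_info(features[j], target)
            redundancy = (np.mean([mutual_info(features[j], features[s])
                                   for s in selected]) if selected else 0.0)
            return relevance - redundancy
        candidates = [j for j in range(len(features)) if j not in selected]
        selected.append(max(candidates, key=score))
    return selected

rng = np.random.default_rng(0)
target = rng.integers(0, 2, 200)
noise = rng.integers(0, 2, 200)   # irrelevant feature
```

A feature identical to the target has near-maximal relevance, while an independent one scores near zero, so the greedy pass ranks them correctly.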

  10. Anomalous compression behavior of germanium during phase transformation

    Yan, Xiaozhi (Institute of Atomic and Molecular Physics, Sichuan University, Chengdu 610065, China; Center for High Pressure Science and Technology Advanced Research (HPSTAR), Shanghai 201203, China); Tan, Dayong (HPSTAR, Shanghai 201203, China; Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640, China); Ren, Xiangting (HPSTAR, Shanghai 201203, China); Yang, Wenge (HPSTAR, Shanghai 201203, China; High Pressure Synergetic Consortium (HPSynC), Geophysical Laboratory, Carnegie Institution of Washington, Argonne, Illinois 60439, United States); He, Duanwei (Institute of Atomic and Molecular Physics, Sichuan University, Chengdu 610065, China; Institute of Fluid Physics and National Key Laboratory of Shockwave and Detonation Physics, China Academy of Engineering Physics, Mianyang 621900, China); Mao, Ho-Kwang (HPSTAR, Shanghai 201203, China; HPSynC, Geophysical Laboratory, Carnegie Institution of Washington, Argonne, Illinois 60439, United States; Geophysical Laboratory, Carnegie Institution of Washington, Washington, DC 20015, United States)


    In this article, we present the abnormal compression and plastic behavior of germanium during the pressure-induced cubic diamond to β-tin structure transition. Between 8.6 GPa and 13.8 GPa, the pressure range in which both phases coexist, softening followed by hardening was observed for both phases via synchrotron x-ray diffraction and Raman spectroscopy. These unusual behaviors can be interpreted in terms of the volume misfit between the different phases. Following Eshelby, the strain energy density reaches its maximum in the middle of the transition zone, where the switch from softening to hardening happens. Insight into these mechanical properties during phase transformation is relevant for understanding the plasticity and compressibility of crystalline materials when different phases coexist during a phase transition.

  11. Infraspinatus muscle atrophy from suprascapular nerve compression.

    Cordova, Christopher B; Owens, Brett D


    Muscle weakness without pain may signal a nerve compression injury. Because these injuries should be identified and treated early to prevent permanent muscle weakness and atrophy, providers should consider suprascapular nerve compression in patients with shoulder muscle weakness.

  12. Spatio-temporal observations of tertiary ozone maximum

    V. F. Sofieva


    We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd hydrogen cause the subsequent decrease in odd-oxygen losses – models had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed, for the first time, obtaining spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to a long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with those obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.


    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler


    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which causes reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50% with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are: cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment with the best performance in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  14. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui


    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: first stage AFD executes efficient lossy compression with high fidelity; second stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from MIT-BIH arrhythmia benchmark database, the proposed method achieves averaged compression ratio (CR) of 17.6-44.5 and percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate of ECG compression method for pervasive e-health applications.
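The two figures of merit quoted above are easy to state precisely; the Python sketch below computes the compression ratio and the percentage root-mean-square difference (PRD) as they are usually defined.

```python
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    # CR: how many times smaller the compressed stream is
    return original_bits / compressed_bits

def prd(x, x_rec):
    # percentage root-mean-square difference of a reconstruction
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
```

A uniform 1% amplitude error, for instance, yields a PRD of exactly 1.0%.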

  15. A combined application of lossless and lossy compression in ECG processing and transmission via GSM-based SMS.

    Mukhopadhyay, S K; Mitra, S; Mitra, M


    This paper presents a software-based scheme for reliable and robust Electrocardiogram (ECG) data compression and its efficient transmission using Second Generation (2G) Global System for Mobile Communication (GSM) based Short Message Service (SMS). To achieve firm lossless compression in QRS complex regions with high standard deviation and an acceptable lossy compression in the rest of the signal, two different algorithms have been used. The combined compression module is such that it outputs only American Standard Code for Information Interchange (ASCII) characters and, hence, the SMS service is found to be most suitable for transmitting the compressed signal. At the receiving end, the ECG signal is reconstructed using just the reverse algorithm. The module has been tested on all 12 leads of different types of ECG signals (healthy and abnormal) collected from the PTB Diagnostic ECG Database. The compression algorithm achieves an average compression ratio of ∼22.51, without any major alteration of clinical morphology.
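The constraint that the payload contain only ASCII characters can be illustrated with a stand-in coder: delta-encode 16-bit samples (a crude lossless stage, not the paper's algorithm) and wrap the bytes in base64 so the result is SMS-safe plain text.

```python
import base64

import numpy as np

def to_sms_payload(samples):
    """Delta-encode int16 samples, then base64 the bytes: ASCII-only payload."""
    deltas = np.diff(samples, prepend=0).astype(np.int16)  # first delta = first sample
    return base64.b64encode(deltas.tobytes()).decode("ascii")

def from_sms_payload(text):
    # invert: decode base64, then undo the delta coding with a running sum
    deltas = np.frombuffer(base64.b64decode(text), dtype=np.int16)
    return np.cumsum(deltas)
```

Delta coding shrinks the symbol range of slowly varying ECG samples, and base64 guarantees the transport constraint at the cost of a fixed 4/3 expansion.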

  16. Considerations and Algorithms for Compression of Sets

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
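One classical baseline for compressing a set of fixed-length bitstrings, without any statistical modeling: sort the values and Elias-gamma-code the gaps. The Python sketch below is that baseline, not the paper's statistics-aware method.

```python
def elias_gamma(n):
    """Elias gamma code for an integer n >= 1."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b   # zero prefix encodes the bit length

def compress_set(bitstrings):
    """Sort the set's values and gamma-code the gaps (first value offset by 1
    so every coded number is >= 1; gaps are >= 1 since elements are distinct)."""
    values = sorted(int(s, 2) for s in bitstrings)
    gaps = [values[0] + 1] + [b - a for a, b in zip(values, values[1:])]
    return "".join(elias_gamma(g) for g in gaps)

def decompress_set(bits, width):
    gaps, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i] == "0":       # count the gamma code's zero prefix
            z += 1
            i += 1
        gaps.append(int(bits[i:i + z + 1], 2))
        i += z + 1
    values = [gaps[0] - 1]
    for g in gaps[1:]:
        values.append(values[-1] + g)
    return {format(v, "0{}b".format(width)) for v in values}
```

For dense sets the gaps are small, so the gamma codes are short and the stream beats listing the raw fixed-length strings.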

  17. Cascaded quadratic soliton compression at 800 nm

    Bache, Morten; Bang, Ole; Moses, Jeffrey;


    We study soliton compression in quadratic nonlinear materials at 800 nm, where group-velocity mismatch dominates. We develop a nonlocal theory showing that efficient compression depends strongly on characteristic nonlocal time scales related to pulse dispersion.

  18. Still image and video compression with MATLAB

    Thyagarajan, K


    This book describes the principles of image and video compression techniques and introduces current and popular compression standards, such as the MPEG series. Derivations of relevant compression algorithms are developed in an easy-to-follow fashion. Numerous examples are provided in each chapter to illustrate the concepts. The book includes complementary software written in MATLAB SIMULINK to give readers hands-on experience in using and applying various video compression methods. Readers can enhance the software by including their own algorithms.

  19. Simultaneous denoising and compression of multispectral images

    Hagag, Ahmed; Amin, Mohamed; Abd El-Samie, Fathi E.


    A new technique for denoising and compression of multispectral satellite images to remove the effect of noise on the compression process is presented. One type of multispectral images has been considered: Landsat Enhanced Thematic Mapper Plus. The discrete wavelet transform (DWT), the dual-tree DWT, and a simple Huffman coder are used in the compression process. Simulation results show that the proposed technique is more effective than other traditional compression-only techniques.
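    A minimal Huffman coder of the kind used as the entropy-coding stage here can be sketched in a few lines (a generic frequency-based coder; the paper's exact configuration may differ):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    # Build a Huffman code table from symbol frequencies.
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1
    return heap[0][2]

data = "aaaabbc"
codes = huffman_codes(data)
encoded = "".join(codes[s] for s in data)
assert len(codes["a"]) == 1     # most frequent symbol gets the shortest code
assert len(encoded) == 10       # optimal for frequencies 4, 2, 1
```

    In the paper's pipeline such a coder would be applied to the quantized wavelet coefficients rather than to raw characters.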

  20. Image quality (IQ) guided multispectral image compression

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik


    Image compression is necessary for data transportation, which saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, performed on thermal (long-wave infrared) grayscale images, showed very promising results.
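    The numerical IQ metrics named above have standard definitions; the following sketch (with hypothetical pixel values) shows RMSE, PSNR, and an IQ-guided selection between two candidate reconstructions:

```python
import math

def rmse(a, b):
    # Root mean square error between two equal-length pixel sequences.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    # PSNR in dB for 8-bit imagery; infinite for a perfect reconstruction.
    e = rmse(a, b)
    return float("inf") if e == 0 else 20.0 * math.log10(peak / e)

orig = [100.0, 120.0, 140.0, 160.0]    # hypothetical pixel values
cand_a = [101.0, 119.0, 141.0, 159.0]  # e.g. decoded output of codec A
cand_b = [104.0, 116.0, 144.0, 156.0]  # e.g. decoded output of codec B

# IQ-guided choice: keep the candidate with the highest PSNR.
best = max([cand_a, cand_b], key=lambda c: psnr(orig, c))
```

    A regression model as described in the abstract would replace the exhaustive comparison by predicting the codec parameter that hits a target PSNR or SSIM directly.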

  1. Brain image Compression, a brief survey

    Saleha Masood


    Full Text Available Brain image compression is a subfield of image compression that allows deep analysis and measurement of brain images in different modes. Brain images are compressed so that they can be analyzed and diagnosed effectively while reducing the image storage space. This survey describes the existing brain image compression techniques, groups them into categories, and discusses each category.

  2. Position index preserving compression of text data

    Akhtar, Nasim; Rashid, Mamunur; Islam, Shafiqul; Kashem, Mohammod Abul; Kolybanov, Cyrll Y.


    Data compression offers an attractive approach to reducing communication cost by using available bandwidth effectively. It also secures data during transmission owing to its encoded form. In this paper, an index-based, position-oriented lossless text compression called PIPC (Position Index Preserving Compression) is developed. In PIPC the position of the input word is denoted by its ASCII code. The basic philosophy of the secure compression is to preprocess the text and transform it into some intermedia...
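    The abstract is truncated, so the exact PIPC mapping is not specified here; the toy encoder below merely illustrates the general idea of replacing repeated words with ASCII position indices (the function names and the `#index` token format are hypothetical):

```python
def pipc_like_encode(text):
    # Toy positional-dictionary encoder (illustration only, not the PIPC
    # spec): each distinct word is emitted once; later repeats become a
    # decimal ASCII back-reference to the word's first position.
    # Assumes input words do not themselves start with '#'.
    words, seen, out = text.split(), {}, []
    for i, w in enumerate(words):
        if w in seen:
            out.append("#%d" % seen[w])
        else:
            seen[w] = i
            out.append(w)
    return " ".join(out)

def pipc_like_decode(encoded):
    # Back-references always point to earlier positions, so one pass suffices.
    words = encoded.split()
    for i, w in enumerate(words):
        if w.startswith("#"):
            words[i] = words[int(w[1:])]
    return " ".join(words)

sample = "the cat and the dog and the cat"
assert pipc_like_decode(pipc_like_encode(sample)) == sample
```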

  3. MAXAD distortion minimization for wavelet compression of remote sensing data

    Alecu, Alin; Munteanu, Adrian; Schelkens, Peter; Cornelis, Jan P.; Dewitte, Steven


    In the context of compression of high resolution multi-spectral satellite image data consisting of radiances and top-of-the-atmosphere fluxes, it is vital that image calibration characteristics (luminance, radiance) be preserved within certain limits in lossy image compression. Though existing compression schemes (SPIHT, JPEG2000, SQP) give good results as far as minimization of the global PSNR error is concerned, they fail to guarantee a maximum local error. With respect to this, we introduce a new image compression scheme which guarantees a MAXAD distortion, defined as the maximum absolute difference between original pixel values and reconstructed pixel values. In terms of the Lagrangian optimization problem, this translates into minimizing the rate subject to a given MAXAD distortion. Our approach thus uses the l-infinite distortion measure, which is applied to the lifting-scheme implementation of the 9-7 floating point Cohen-Daubechies-Feauveau (CDF) filter. Scalar quantizers, optimal in the D-R sense, are derived for every subband by solving a global optimization problem that guarantees a user-defined MAXAD. The optimization problem has been defined and solved for the case of the 9-7 filter, and we show that our approach is valid and may be applied to any finite wavelet filters synthesized via lifting. The experimental assessment of our codec shows that our technique provides excellent results in applications such as remote sensing, in which reconstruction of image calibration characteristics within a tolerable local error (MAXAD) is perceived as being of crucial importance compared to obtaining an acceptable global error (PSNR), as is the case with existing quantizer design techniques.
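    The MAXAD (l-infinite) criterion can be contrasted with a global error measure in a few lines; the example below shows two reconstructions with identical summed squared error but different MAXAD, which is exactly the situation a PSNR-only codec cannot distinguish (pixel values are made up):

```python
def maxad(original, reconstructed):
    # MAXAD: maximum absolute difference between original and
    # reconstructed pixel values (the l-infinite distortion measure).
    return max(abs(o - r) for o, r in zip(original, reconstructed))

def sse(original, reconstructed):
    # Summed squared error: the global measure underlying MSE/PSNR.
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed))

orig = [10] * 8                          # hypothetical pixel row
rec_a = [11, 9, 11, 9, 11, 9, 11, 9]     # every pixel off by 1
rec_b = [10, 10, 12, 10, 10, 8, 10, 10]  # two pixels off by 2

# Identical global error, very different guaranteed local error:
assert sse(orig, rec_a) == sse(orig, rec_b) == 8
assert maxad(orig, rec_a) == 1 and maxad(orig, rec_b) == 2
```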

  4. Acoustic metric of the compressible draining bathtub

    Cherubini, C.; Filippi, S.


    The draining bathtub flow, a cornerstone in the theory of acoustic black holes, is here extended to the case of exact solutions for compressible nonviscous flows characterized by a polytropic equation of state. Investigating the analytical configurations obtained for selected values of the polytropic index, it is found that each of them becomes nonphysical at the so called limiting circle. By studying the null geodesics structure of the corresponding acoustic line elements, it is shown that such a geometrical locus coincides with the acoustic event horizon. This region is characterized also by an infinite value of space-time curvature, so the acoustic analogy breaks down there. Possible applications for artificial and natural vortices are finally discussed.

  5. Compressed Air Energy Storage in Denmark

    Salgi, Georges Garabeth; Lund, Henrik


    Compressed air energy storage (CAES) is a technology which can be used for integrating more fluctuating renewable energy sources into the electricity supply system. On a utility scale, CAES has a high feasibility potential compared to other storage technologies. Here, the technology is analysed with regard to the Danish energy system. In Denmark, wind power supplies 20% of the electricity demand and 50% is produced by combined heat and power (CHP). The operation of CAES requires high electricity price volatility; however, in the Nordic region, large hydro capacities have so far kept the prices from fluctuating to the extent that CAES investments have not been considered feasible. This report studies the effect of technological development and possible future price development on investments in CAES plants of various capacities. It is found that advanced high-efficiency CAES plants...

  6. A Compressed Sensing Perspective of Hippocampal Function

    Panagiotis ePetrantonakis


    Full Text Available The hippocampus is one of the most important information processing units in the brain. Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned out back to the cortex. Here, we review evidence for the hypothesis that information flow and processing in the hippocampus complies with the principles of Compressed Sensing (CS). The CS theory comprises a mathematical framework that describes how, and under which conditions, restricted sampling of information (a data set) can lead to condensed, yet concise, forms of the initial, subsampled information entity (i.e., of the original data set). In this work, hippocampus-related regions and their respective circuitry are presented as a CS-based system whose different components collaborate to realize efficient memory encoding and decoding processes. This proposition introduces a unifying mathematical framework for hippocampal function and opens new avenues for exploring coding and decoding strategies in the brain.
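    The CS principle invoked here, recovering a sparse signal from fewer samples than its length, can be shown in a deliberately tiny form; the brute-force column scan below stands in for proper CS solvers such as OMP or l1 minimization (all sizes and values are illustrative):

```python
import random

random.seed(1)
n, m = 8, 3                          # signal length 8, only 3 measurements
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

x = [0.0] * n
x[5] = 2.0                           # a 1-sparse "information entity"
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # sampling

def recover_1sparse(A, y):
    # Brute-force sparse recovery: find the single column of A that best
    # explains the measurements (a stand-in for OMP / l1 solvers).
    best = None
    for j in range(n):
        col = [A[i][j] for i in range(m)]
        c = sum(ci * yi for ci, yi in zip(col, y)) / sum(ci * ci for ci in col)
        resid = sum((yi - c * ci) ** 2 for ci, yi in zip(col, y))
        if best is None or resid < best[0]:
            best = (resid, j, c)
    return best[1], best[2]

j, c = recover_1sparse(A, y)         # recovers position 5, amplitude 2.0
```

    Three random projections suffice to identify the single active position among eight, which is the compressive flavor of sampling the review attributes to hippocampal circuitry.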

  7. H.264/AVC Video Compression on Smartphones

    Sharabayko, M. P.; Markov, N. G.


    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  8. BPCS steganography using EZW lossy compressed images

    Spaulding, Jeremiah; Noda, Hideki; Shirazi, Mahdad N.; Kawaguchi, Eiji


    This paper presents a steganography method based on an embedded zerotree wavelet (EZW) compression scheme and bit-plane complexity segmentation (BPCS) steganography. The proposed steganography enables us to use lossy compressed images as dummy files in bit-plane-based steganographic algorithms. Large embedding rates of around 25% of the compressed image size were achieved with little noticeable degradation in image quality.
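    The bit-plane complexity measure at the heart of BPCS can be sketched as the fraction of adjacent bit transitions within a block; noise-like (high-complexity) planes are the ones that can carry embedded data with little visible degradation (the exact threshold and segmentation the authors use may differ):

```python
def bitplane(block, k):
    # Extract bit-plane k (0 = least significant) of an 8-bit grayscale block.
    return [[(px >> k) & 1 for px in row] for row in block]

def complexity(plane):
    # BPCS border complexity: fraction of horizontally/vertically adjacent
    # bit pairs that differ, out of the maximum possible number of changes.
    h, w = len(plane), len(plane[0])
    changes = sum(plane[r][c] != plane[r][c + 1]
                  for r in range(h) for c in range(w - 1))
    changes += sum(plane[r][c] != plane[r + 1][c]
                   for r in range(h - 1) for c in range(w))
    return changes / (2 * h * w - h - w)

flat = [[0] * 4 for _ in range(4)]                           # uniform block
noise = [[(r + c) % 2 for c in range(4)] for r in range(4)]  # checkerboard
assert complexity(flat) == 0.0    # simple: keep as image data
assert complexity(noise) == 1.0   # complex: replaceable by secret data
```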

  9. Multiscale Compression Entropy of Microvascular Blood FlowSignals: Comparison of Results from Laser Speckle Contrastand Laser Doppler Flowmetry Data in Healthy Subjects

    Humeau-Heurtier, Anne; Baumert, Mathias; Mahé, Guillaume; Abraham, Pierre


    .... This is performed through the computation of their multiscale compression entropy. The results obtained with LSCI time series computed from different regions of interest (ROI) sizes are examined...

  10. Maximum permissible voltage of YBCO coated conductors

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.


    Superconducting fault current limiters (SFCL) can reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC used in the design of an SFCL can be determined.

  11. Computing Rooted and Unrooted Maximum Consistent Supertrees

    van Iersel, Leo


    A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.

  12. Compression or tension? The stress distribution in the proximal femur

    Meakin JR


    Full Text Available Abstract Background Questions regarding the distribution of stress in the proximal human femur have never been adequately resolved. Traditionally, by considering the femur in isolation, it has been believed that the effect of body weight on the projecting neck and head places the superior aspect of the neck in tension. A minority view has proposed that this region is in compression because of muscular forces pulling the femur into the pelvis. Little has been done to study stress distributions in the proximal femur. We hypothesise that under physiological loading the majority of the proximal femur is in compression and that the internal trabecular structure functions as an arch, transferring compressive stresses to the femoral shaft. Methods To demonstrate the principle, we have developed a 2D finite element model of the femur in which body weight, a representation of the pelvis, and ligamentous forces were included. The regions of higher trabecular bone density in the proximal femur (the principal trabecular systems were assigned a higher modulus than the surrounding trabecular bone. Two-legged and one-legged stances, the latter including an abductor force, were investigated. Results The inclusion of ligamentous forces in two-legged stance generated compressive stresses in the proximal femur. The increased modulus in areas of greater structural density focuses the stresses through the arch-like internal structure. Including an abductor muscle force in simulated one-legged stance also produced compression, but with a different distribution. Conclusion This 2D model shows, in principle, that including ligamentous and muscular forces has the effect of generating compressive stresses across most of the proximal femur. The arch-like trabecular structure transmits the compressive loads to the shaft. The greater strength of bone in compression than in tension is then used to advantage. These results support the hypothesis presented. If correct, a

  13. Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) for Medical Images

    Zain, Jasni Mohamad


    This paper proposes a strict authentication watermarking for medical images. In this scheme, we define the region of interest (ROI) by taking the smallest rectangle around the image. The watermark is generated by hashing the area of interest. The embedding region is chosen outside the region of interest so as to preserve that area from the distortion that results from watermarking. The strict authentication watermarking is robust to some degree of JPEG compression (SAW-JPEG). JPEG compression is reviewed. To embed a watermark in the spatial domain, we have to make sure that the embedded watermark will survive the JPEG quantization process. The watermarking scheme, including the data embedding, extraction and verification procedures, is presented. Experimental results showed that such a scheme can embed and extract the watermark at a high compression rate. The watermark is robust to a high compression rate of up to 90.6%. The JPEG image quality threshold is 60 for least-significant-bit embedding. The image quality ...

  14. Object-based wavelet compression using coefficient selection

    Zhao, Lifeng; Kassim, Ashraf A.


    In this paper, we present a novel approach to code image regions of arbitrary shapes. The proposed algorithm combines a coefficient selection scheme with traditional wavelet compression for coding arbitrary regions and uses a shape adaptive embedded zerotree wavelet coding (SA-EZW) to quantize the selected coefficients. Since the shape information is implicitly encoded by the SA-EZW, our decoder can reconstruct the arbitrary region without separate shape coding. This makes the algorithm simple to implement and avoids the problem of contour coding. Our algorithm also provides a sufficient framework to address content-based scalability and improved coding efficiency as described by MPEG-4.

  15. Stability of compressible boundary layers

    Nayfeh, Ali H.


    The stability of compressible 2-D and 3-D boundary layers is reviewed. The stability of 2-D compressible flows differs from that of incompressible flows in two important features: there is more than one mode of instability contributing to the growth of disturbances in supersonic laminar boundary layers, and the most unstable first-mode wave is 3-D. Whereas viscosity has a destabilizing effect on incompressible flows, it is stabilizing at high supersonic Mach numbers. Whereas cooling stabilizes first-mode waves, it destabilizes second-mode waves. However, second-mode waves can be stabilized by suction and favorable pressure gradients. The influence of nonparallelism on the spatial growth rate of disturbances is evaluated. The growth rate depends on the flow variable as well as the distance from the body. Floquet theory is used to investigate the subharmonic secondary instability.

  16. Conservative regularization of compressible flow

    Krishnaswami, Govind S; Thyagaraja, Anantanarayanan


    Ideal Eulerian flow may develop singularities in vorticity w. Navier-Stokes viscosity provides a dissipative regularization. We find a local, conservative regularization - lambda^2 w times curl(w) of compressible flow and compressible MHD: a three dimensional analogue of the KdV regularization of the one dimensional kinematic wave equation. The regulator lambda is a field subject to the constitutive relation lambda^2 rho = constant. Lambda is like a position-dependent mean-free path. Our regularization preserves Galilean, parity and time-reversal symmetries. We identify locally conserved energy, helicity, linear and angular momenta and boundary conditions ensuring their global conservation. Enstrophy is shown to remain bounded. A swirl velocity field is identified, which transports w/rho and B/rho generalizing the Kelvin-Helmholtz and Alfven theorems. A Hamiltonian and Poisson bracket formulation is given. The regularized equations are used to model a rotating vortex, channel flow, plane flow, a plane vortex ...

  17. Maximum Multiflow in Wireless Network Coding

    Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao


    In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.

  18. Compressing DNA sequence databases with coil

    Hendy Michael D


    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
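    coil's edit-tree coding is not reproduced here, but the abstract's premise, that generic Lempel-Ziv is a weak fit for DNA, is easy to demonstrate: even naive 2-bit packing attains a 4:1 ratio that zlib cannot reach on non-repetitive sequence data (the sequence below is synthetic, for illustration only):

```python
import random
import zlib

def pack_2bit(seq):
    # Pack A/C/G/T at 2 bits per base: a naive DNA-specific baseline.
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    out, acc, nbits = bytearray(), 0, 0
    for base in seq:
        acc = (acc << 2) | code[base]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                       # flush a final partial byte
        out.append(acc << (8 - nbits))
    return bytes(out)

random.seed(7)
seq = "".join(random.choice("ACGT") for _ in range(4000))  # synthetic "EST"

packed = pack_2bit(seq)                      # exactly 1000 bytes (4:1)
gzipped = zlib.compress(seq.encode("ascii"), 9)
assert len(packed) == 1000
assert len(packed) < len(gzipped)  # generic LZ cannot beat 2 bits/base here
```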

  19. Antiproton compression and radial measurements

    Andresen, G. B.; Bertsche, W.; Bowe, P. D.; Bray, C. C.; Butler, E.; Cesar, C. L.; Chapman, S.; Charlton, M.; Fajans, J.; Fujiwara, M. C.; Funakoshi, R.; Gill, D. R.; Hangst, J. S.; Hardy, W. N.; Hayano, R. S.; Hayden, M. E.; Humphries, A. J.; Hydomako, R.; Jenkins, M. J.; Jørgensen, L. V.; Kurchaninov, L.; Lambo, R.; Madsen, N.; Nolan, P.; Olchanski, K.; Olin, A.; Page, R. D.; Povilus, A.; Pusa, P.; Robicheaux, F.; Sarid, E.; El Nasr, S. Seif; Silveira, D. M.; Storey, J. W.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Yamazaki, Y.


    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  20. The origin of the compressibility anomaly in amorphous silica: a molecular dynamics study

    Walker, Andrew M; Sullivan, Lucy A; Trachenko, Kostya; Bruin, Richard P; White, Toby O H; Dove, Martin T [Department of Earth Sciences, University of Cambridge, Downing Street, Cambridge CB2 3EQ (United Kingdom)]; Tyer, Richard P; Todorov, Ilian T [CCLRC Daresbury Laboratory, Daresbury, Warrington, Cheshire WA4 4AD (United Kingdom)]; Wells, Stephen A [Center for Biological Physics, Arizona State University, Bateman Physical Sciences Building, Tempe, AZ 85287-1504 (United States)]


    We propose an explanation for the anomalous compressibility maximum in amorphous silica based on rigidity arguments. The model considers the fact that a network structure will be rigidly compressed in the high-pressure limit, and rigidly taut in the negative pressure limit, but flexible and hence softer at intermediate pressures. We validate the plausibility of this explanation by the analysis of molecular dynamics simulations. In fact this model is quite general, and will apply to any network solid, crystalline or amorphous; there are experimental indications that support this prediction. In contrast to other ideas concerning the compressibility maximum in amorphous silica, the model presented here does not invoke the existence of polyamorphic phase transitions in the glass phase.

  1. Compressibility effects on turbulent mixing

    Panickacheril John, John; Donzis, Diego


    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence, with a focus on the fundamental mechanisms that are responsible for such effects, using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6 and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, as in shear layers, mixing is reduced as Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  2. Laser Compression of Nanocrystalline Metals

    Meyers, M. A.; Jarmakani, H. N.; Bringa, E. M.; Earhart, P.; Remington, B. A.; Vo, N. Q.; Wang, Y. M.


    Shock compression in nanocrystalline nickel is simulated over a range of pressures (10-80 GPa) and compared with experimental results. Laser compression carried out at Omega and Janus yields new information on the deformation mechanisms of nanocrystalline Ni. Although conventional deformation does not produce hardening, the extreme regime imparted by laser compression generates an increase in hardness, attributed to the residual dislocations observed in the structure by TEM. An analytical model is applied to predict the critical pressure for the onset of twinning in nanocrystalline nickel. The slip-twinning transition pressure is shifted from 20 GPa, for polycrystalline Ni, to 80 GPa, for Ni with a grain size of 10 nm. Contributions to the net strain from the different mechanisms of plastic deformation (partials, perfect dislocations, twinning, and grain boundary shear) were quantified in the nanocrystalline samples through MD calculations. The effect of release, a phenomenon often neglected in MD simulations, on dislocation behavior was established. A large fraction of the dislocations generated at the front are annihilated.

  3. Analysis of fracture process zone in brittle rock subjected to shear-compressive loading

    ZHOU De-quan; CHEN Feng; CAO Ping; MA Chun-de


    An analytical expression for the prediction of the shear-compressive fracture process zone (SCFPZ) is derived by using a proposed local strain energy density criterion, in which the strain energy density is separated into dilatational and distortional components; only the former is considered to contribute to the brittle fracture of rock under different loading cases. The theoretical prediction by this criterion shows that the SCFPZ is shaped like an asymmetric mulberry leaf, which forms a shear-compression fracture kern. The dilatational strain energy density reaches its maximum value along the boundary of the SCFPZ. The dimension of the SCFPZ is governed by the ratio of K_II to K_I. The analytical results are then compared with those from the literature and with tests conducted on a double edge cracked Brazilian disk subjected to diametrical compression. The obtained results are useful for the prediction of crack extension and for nonlinear analysis of shear-compressive fracture of brittle rock.

  4. An Electron Bunch Compression Scheme for a Superconducting Radio Frequency Linear Accelerator Driven Light Source

    C. Tennant, S.V. Benson, D. Douglas, P. Evtushenko, R.A. Legg


    We describe an electron bunch compression scheme suitable for use in a light source driven by a superconducting radio frequency (SRF) linac. The key feature is the use of a recirculating linac to perform the initial bunch compression. Phasing of the second pass beam through the linac is chosen to de-chirp the electron bunch prior to acceleration to the final energy in an SRF linac ('afterburner'). The final bunch compression is then done at maximum energy. This scheme has the potential to circumvent some of the most technically challenging aspects of current longitudinal matches; namely transporting a fully compressed, high peak current electron bunch through an extended SRF environment, the need for a RF harmonic linearizer and the need for a laser heater. Additional benefits include a substantial savings in capital and operational costs by efficiently using the available SRF gradient.

  5. The Wiener maximum quadratic assignment problem

    Cela, Eranda; Woeginger, Gerhard J


    We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
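    The Wiener index mentioned in the closing problem is the sum of shortest-path distances over all vertex pairs; a small BFS-based sketch checks the classic extremes among trees on four vertices (the path maximizes the index, the star minimizes it):

```python
from collections import deque

def wiener_index(adj):
    # Sum of shortest-path distances over all unordered vertex pairs,
    # via BFS from every vertex (unit edge lengths).
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2   # each pair was counted twice

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path 0-1-2-3
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}   # star with center 0

assert wiener_index(path4) == 10   # maximum among trees on 4 vertices
assert wiener_index(star4) == 9    # minimum among trees on 4 vertices
```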

  6. Maximum confidence measurements via probabilistic quantum cloning

    Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu


    Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.

  7. Revealing the Maximum Strength in Nanotwinned Copper

    Lu, L.; Chen, X.; Huang, Xiaoxu


    The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced…

  8. The Maximum Resource Bin Packing Problem

    Boyar, J.; Epstein, L.; Favrholdt, L.M.


    Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find…
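The First-Fit-Increasing heuristic named above can be sketched as follows (a toy illustration of the rule, not the paper's analysis; unit capacity and the floating-point tolerance are assumptions):

```python
def first_fit_increasing(items, capacity=1.0):
    """First-Fit-Increasing: sort items by size (ascending), then place each
    item into the first open bin with room, opening a new bin only when no
    open bin fits.  In the maximum resource variant the goal is to use as
    MANY bins as possible, so processing small items first is advantageous:
    they fill bins with small leftovers before large items arrive."""
    bins = []      # remaining capacity of each open bin
    packing = []   # items placed in each bin
    for size in sorted(items):
        for i, free in enumerate(bins):
            if size <= free + 1e-12:
                bins[i] -= size
                packing[i].append(size)
                break
        else:
            bins.append(capacity - size)
            packing.append([size])
    return packing

# Increasing order opens 3 bins here; decreasing order would need only 2.
print(first_fit_increasing([0.6, 0.6, 0.4, 0.4]))
```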

  10. Maximum phytoplankton concentrations in the sea

    Jackson, G.A.; Kiørboe, Thomas


    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions…

  11. Image Compression Using Discrete Wavelet Transform

    Mohammad Mozammel Hoque Chowdhury


    Image compression is a key technology in transmission and storage of digital images because of the vast data associated with them. This research suggests a new image compression scheme with a pruning proposal based on the discrete wavelet transform (DWT). The effectiveness of the algorithm has been justified over some real images, and the performance of the algorithm has been compared with other common compression standards. The algorithm has been implemented using Visual C++ and tested on a Pentium Core 2 Duo 2.1 GHz PC with 1 GB RAM. Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared to other compression techniques.
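DWT-based compression of the kind described rests on transforming the image and pruning small coefficients. A minimal one-level 2-D Haar transform in NumPy (the pruning threshold and test image here are illustrative, not the paper's scheme):

```python
import numpy as np

def haar2d(x):
    """One level of an orthonormal 2-D Haar wavelet transform."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row-wise averages
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row-wise details
    x = np.hstack([lo, hi])
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # column-wise averages
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # column-wise details
    return np.vstack([lo, hi])

def ihaar2d(c):
    """Inverse of haar2d: undo the column step, then the row step."""
    n, m = c.shape[0] // 2, c.shape[1] // 2
    x = np.empty_like(c)
    x[0::2, :] = (c[:n, :] + c[n:, :]) / np.sqrt(2)
    x[1::2, :] = (c[:n, :] - c[n:, :]) / np.sqrt(2)
    out = np.empty_like(x)
    out[:, 0::2] = (x[:, :m] + x[:, m:]) / np.sqrt(2)
    out[:, 1::2] = (x[:, :m] - x[:, m:]) / np.sqrt(2)
    return out

img = np.outer(np.linspace(0, 255, 8), np.ones(8))  # smooth 8x8 test "image"
coeffs = haar2d(img)
kept = np.where(np.abs(coeffs) > 1.0, coeffs, 0.0)  # prune small coefficients
rec = ihaar2d(kept)
print(np.count_nonzero(kept), "of", kept.size, "coefficients kept")
```

Because the transform is orthonormal, dropping small coefficients discards little energy, which is the essence of wavelet compression; real codecs add multiple decomposition levels plus quantization and entropy coding.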

  12. Compression Waves and Phase Plots: Simulations

    Orlikowski, Daniel


    Compression wave analysis started nearly 50 years ago with Fowles.[1] Coperthwaite and Williams [2] gave a method that helps identify simple and steady waves. We have been developing a method that describes the non-isentropic character of compression waves in general.[3] One result of that work is a simple analysis tool. Our method helps clearly identify when a compression wave is a simple wave, a steady wave (shock), and when the compression wave is in transition. This affects the analysis of compression wave experiments and the resulting extraction of the high-pressure equation of state.

  13. Mathematical theory of compressible fluid flow

    Von Mises, Richard


    Mathematical Theory of Compressible Fluid Flow covers the conceptual and mathematical aspects of theory of compressible fluid flow. This five-chapter book specifically tackles the role of thermodynamics in the mechanics of compressible fluids. This text begins with a discussion on the general theory of characteristics of compressible fluid with its application. This topic is followed by a presentation of equations delineating the role of thermodynamics in compressible fluid mechanics. The discussion then shifts to the theory of shocks as asymptotic phenomena, which is set within the context of

  14. Video compressive sensing using Gaussian mixture models.

    Yang, Jianbo; Yuan, Xin; Liao, Xuejun; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence


    A Gaussian mixture model (GMM)-based algorithm is proposed for video reconstruction from temporally compressed video measurements. The GMM is used to model spatio-temporal video patches, and the reconstruction can be efficiently computed based on analytic expressions. The GMM-based inversion method benefits from online adaptive learning and parallel computation. We demonstrate the efficacy of the proposed inversion method with videos reconstructed from simulated compressive video measurements, and from a real compressive video camera. We also use the GMM as a tool to investigate adaptive video compressive sensing, i.e., adaptive rate of temporal compression.
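The "analytic expressions" referred to are the closed-form Gaussian posteriors that a GMM prior admits under linear measurements. A toy sketch of the per-patch MMSE estimate (dimensions, parameters, and the sensing matrix are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def gmm_reconstruct(y, Phi, weights, means, covs, sigma2):
    """MMSE estimate of x from y = Phi @ x + noise under a GMM prior on x,
    combining per-component Gaussian posterior means weighted by the
    component responsibilities (toy sketch, not the paper's code)."""
    m = len(y)
    post_means, log_evid = [], []
    for pi_k, mu, Sig in zip(weights, means, covs):
        S = Phi @ Sig @ Phi.T + sigma2 * np.eye(m)    # marginal covariance of y
        r = y - Phi @ mu
        Sinv_r = np.linalg.solve(S, r)
        post_means.append(mu + Sig @ Phi.T @ Sinv_r)  # component posterior mean
        _, logdet = np.linalg.slogdet(S)
        log_evid.append(np.log(pi_k) - 0.5 * (logdet + r @ Sinv_r))
    le = np.array(log_evid)
    w = np.exp(le - le.max())                         # responsibilities
    w /= w.sum()
    return sum(wk * mk for wk, mk in zip(w, post_means))

# toy example: an 8-pixel "patch" sensed with 3 random projections
n, m = 8, 3
Phi = rng.standard_normal((m, n))
means = [np.zeros(n), np.ones(n)]
covs = [np.eye(n), 0.5 * np.eye(n)]
x_true = rng.multivariate_normal(means[1], covs[1])
y = Phi @ x_true
x_hat = gmm_reconstruct(y, Phi, [0.5, 0.5], means, covs, sigma2=1e-3)
print(x_hat.shape)  # (8,)
```

Every step is a linear solve or a Gaussian density evaluation, which is why the inversion is fast and parallelizes over patches.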

  15. Reattachment heating upstream of short compression ramps in hypersonic flow

    Estruch-Samper, David


    Hypersonic shock-wave/boundary-layer interactions with separation induce unsteady thermal loads of particularly high intensity in flow reattachment regions. Building on earlier semi-empirical correlations, the maximum heat transfer rates upstream of short compression ramp obstacles of angles 15° ⩽ θ ⩽ 135° are here discretised based on time-dependent experimental measurements to develop insight into their transient nature (M_e = 8.2–12.3, Re_h = 0.17×10^5–0.47×10^5). Interactions with an incoming laminar boundary layer experience transition at separation, with heat transfer oscillating between laminar and turbulent levels exceeding slightly those in fully turbulent interactions. Peak heat transfer rates are strongly influenced by the stagnation of the flow upon reattachment close ahead of obstacles and increase with ramp angle all the way up to θ = 135°, whereby rates well over two orders of magnitude above the undisturbed laminar levels are intermittently measured (q'_max > 10^2 q_{u,L}). Bearing in mind the varying degrees of strength in the competing effect between the inviscid and viscous terms, namely the square of the hypersonic similarity parameter (Mθ)^2 for strong interactions and the viscous interaction parameter χ̄ (primarily a function of Re and M), the two physical factors that appear to most globally encompass the effects of peak heating for blunt ramps (θ ⩾ 45°) are deflection angle and stagnation heat transfer, so that this may be fundamentally expressed as q'_max ∝ q_{o,2D} θ^2, with further parameters in turn influencing the interaction to a lesser extent. The dominant effect of deflection angle is restricted to short obstacle heights, where the rapid expansion at the top edge of the obstacle influences the relaxation region just downstream of reattachment and leads to an upstream displacement of the separation front. The extreme heating rates result from the strengthening of the reattaching shear layer with the increase in

  16. Spinal cord compression in two related Ursus arctos horribilis.

    Thomovsky, Stephanie A; Chen, Annie V; Roberts, Greg R; Schmidt, Carrie E; Layton, Arthur W


    Two 15-yr-old grizzly bear littermates were evaluated within 9 mo of each other for acute-onset progressive paraparesis and proprioceptive ataxia. The most significant clinical examination finding was pelvic limb paresis in both bears. Magnetic resonance examinations of both bears showed cranial thoracic spinal cord compression. The first bear had left-sided extradural, dorsolateral spinal cord compression at T3-T4. Vertebral canal stenosis was also observed at T2-T3. Images of the second bear showed lateral spinal cord compression from T2-T3 to T4-T5. Intervertebral disk disease and associated spinal cord compression was also observed at T2-T3 and T3-T4. One grizzly bear continued to deteriorate despite reduced exercise, steroid, and antibiotic therapy. The bear was euthanized, and a necropsy was performed. The postmortem showed a spinal ganglion cyst that caused spinal cord compression at the level of T3-T4. Wallerian-like degeneration was observed from C3-T6. The second bear was prescribed treatment that consisted of a combination of reduced exercise and steroid therapy. He continued to deteriorate with these medical therapies and was euthanized 4 mo after diagnosis. A necropsy showed hypertrophy and protrusion of the dorsal longitudinal ligament at T2-T3 and T3-T4, with resulting spinal cord compression in this region. Wallerian-like degeneration was observed from C2-L1. This is one of few case reports that describes paresis in bears. It is the only case report, to the authors' knowledge, that describes spinal magnetic resonance imaging findings in a grizzly bear and also the only report that describes a cranial thoracic myelopathy in two related grizzly bears with neurologic signs.

  17. The timing of the maximum extent of the Rhone Glacier at Wangen a.d. Aare

    Ivy-Ochs, S.; Schluechter, C. [Bern Univ. (Switzerland); Kubik, P.W. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Beer, J. [EAWAG, Duebendorf (Switzerland)


    Erratic blocks found in the region of Wangen a.d. Aare delineate the maximum position of the Solothurn lobe of the Rhone Glacier. ^10Be and ^26Al exposure ages of three of these blocks show that the glacier withdrew from its maximum position at or slightly before 20,000 ± 1800 years ago. (author) 1 fig., 5 refs.
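Exposure ages of this kind are commonly derived from the standard zero-erosion balance between cosmogenic production and radioactive decay; a sketch with purely illustrative numbers (not the study's data):

```python
import math

def exposure_age(N, P, half_life_yr):
    """Zero-erosion surface exposure age from a cosmogenic nuclide inventory.
    With decay constant lam = ln(2)/t_half, the inventory grows as
        N = (P / lam) * (1 - exp(-lam * t)),
    which inverts to
        t = -ln(1 - N * lam / P) / lam.
    N in atoms/g, P in atoms/g/yr; all values below are hypothetical."""
    lam = math.log(2) / half_life_yr
    return -math.log(1.0 - N * lam / P) / lam

# hypothetical 10Be inventory and local production rate
t = exposure_age(N=1.0e5, P=5.0, half_life_yr=1.387e6)
print(round(t), "yr")
```

For ages far shorter than the half-life the result is close to the simple ratio N/P, with the decay term a small correction.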

  18. Rapid intermittent compression increases skin circulation in chronically ischemic legs with infra-popliteal arterial obstruction.

    van Bemmelen, P S; Weiss-Olmanni, J; Ricotta, J J


    Intermittent pneumatic compression (IPC) has been shown, by duplex, to increase popliteal artery flow in normal legs and in legs with superficial femoral artery occlusion. The objective of this study was to see if IPC improves distal circulation in legs with severe infra-popliteal disease. Sixteen chronically ischemic legs with arteriographically demonstrated crural or pedal disease were studied during compression with an ArtAssist compression device. This device delivers rapid compression of the foot and calf. Cutaneous laser-Doppler flux was measured continuously at the dorsal aspect of the distal forefoot. The findings were compared to those in thirteen normal controls of similar age. In ischemic legs, the spontaneous changes in skin-flux are minimal: mean resting flux in sitting position was 0.87 +/- 0.46 AU (Arbitrary Units). Upon activation of the compression device the maximum flux increased to 4.55 +/- 1.35 AU. The difference was statistically significant (p < 0.001). This response was similar to that in normal controls. Arterial flow augmentation upon compression is associated with increased skin-flux. This response remains present in severe disease of the crural outflow arteries. Further investigation to define the role of intermittent compression for management of chronic arterial disease is warranted.

  19. The Lateral Compressive Buckling Performance of Aluminum Honeycomb Panels for Long-Span Hollow Core Roofs

    Caiqi Zhao


    To solve the problem of critical buckling in the structural analysis and design of the new long-span hollow core roof architecture proposed in this paper (referred to as a “honeycomb panel structural system” (HSSS)), lateral compression tests and finite element analyses were employed in this study to examine the lateral compressive buckling performance of this new type of honeycomb panel with different length-to-thickness ratios. The results led to two main conclusions: (1) Under the experimental conditions that were used, honeycomb panels with the same planar dimensions but different thicknesses had the same compressive stiffness immediately before buckling, while the lateral compressive buckling load-bearing capacity initially increased rapidly with an increasing honeycomb core thickness and then approached the same limiting value; (2) The compressive stiffnesses of test pieces with the same thickness but different lengths were different, while the maximum lateral compressive buckling loads were very similar. Overall instability failure is prone to occur in long and flexible honeycomb panels. In addition, the errors between the lateral compressive buckling loads from the experiment and the finite element simulations are within 6%, which demonstrates the effectiveness of the nonlinear finite element analysis and provides a theoretical basis for future analysis and design for this new type of spatial structure.
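The overall-instability trend noted for long, flexible panels follows the classical Euler scaling, P_cr = π²EI/(KL)², under which the critical load drops with the square of the length. A hedged numeric illustration (material and cross-section values hypothetical, not the paper's test pieces):

```python
import math

def euler_critical_load(E, I, L, K=1.0):
    """Euler critical buckling load of an ideal pin-ended column/panel strip:
        P_cr = pi^2 * E * I / (K * L)^2
    E: Young's modulus [Pa], I: second moment of area [m^4],
    L: length [m], K: effective-length factor (1.0 = pinned-pinned)."""
    return math.pi ** 2 * E * I / (K * L) ** 2

# hypothetical aluminum strip: E = 70 GPa, 100 mm x 10 mm cross-section
E = 70e9
I = 0.1 * 0.01 ** 3 / 12            # b*h^3/12, in m^4
for L in (0.5, 1.0, 2.0):           # doubling length quarters the buckling load
    print(f"L = {L} m -> P_cr = {euler_critical_load(E, I, L):.0f} N")
```

Real honeycomb panels buckle through a more complex interaction of face and core, which is why the study relies on tests and nonlinear finite element analysis; the Euler formula only conveys the length sensitivity.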

  20. Maximum Likelihood Sequence Detection Receivers for Nonlinear Optical Channels

    Gabriel N. Maggio


    The space-time whitened matched filter (ST-WMF) maximum likelihood sequence detection (MLSD) architecture has been recently proposed (Maggio et al., 2014). Its objective is reducing implementation complexity in transmissions over nonlinear dispersive channels. The ST-WMF-MLSD receiver (i) drastically reduces the number of states of the Viterbi decoder (VD) and (ii) offers a smooth trade-off between performance and complexity. In this work the ST-WMF-MLSD receiver is investigated in detail. We show that the space compression of the nonlinear channel is an instrumental property of the ST-WMF-MLSD which results in a major reduction of the implementation complexity in intensity modulation and direct detection (IM/DD) fiber optic systems. Moreover, we assess the performance of ST-WMF-MLSD in IM/DD optical systems with chromatic dispersion (CD) and polarization mode dispersion (PMD). Numerical results for a 10 Gb/s, 700 km, IM/DD fiber-optic link with 50 ps differential group delay (DGD) show that the number of states of the VD in ST-WMF-MLSD can be reduced ~4 times compared to an oversampled MLSD. Finally, we analyze the impact of imperfect channel estimation on the performance of the ST-WMF-MLSD. Our results show that the performance degradation caused by channel estimation inaccuracies is low and similar to that achieved by existing MLSD schemes (~0.2 dB).
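The Viterbi decoder whose state count ST-WMF-MLSD reduces performs a trellis search over channel memory states. A toy sketch for a binary (±1) sequence through a hypothetical 2-tap intersymbol-interference channel (not the optical channel model of the paper):

```python
def viterbi_mlsd(y, h):
    """MLSD via the Viterbi algorithm for +/-1 symbols through a 2-tap ISI
    channel h = [h0, h1].  State = previous symbol (2 states); the branch
    metric is the squared error between the observation and the prediction."""
    symbols = (-1.0, 1.0)
    # seed with the first observation (no preceding symbol assumed)
    metrics = {s: (y[0] - h[0] * symbols[s]) ** 2 for s in (0, 1)}
    paths = {0: [symbols[0]], 1: [symbols[1]]}
    for yk in y[1:]:
        new_m, new_p = {}, {}
        for s in (0, 1):                 # candidate new symbol
            best, best_prev = None, None
            for p in (0, 1):             # previous state (survivor selection)
                pred = h[0] * symbols[s] + h[1] * symbols[p]
                m = metrics[p] + (yk - pred) ** 2
                if best is None or m < best:
                    best, best_prev = m, p
            new_m[s] = best
            new_p[s] = paths[best_prev] + [symbols[s]]
        metrics, paths = new_m, new_p
    return paths[min(metrics, key=metrics.get)]

h = [1.0, 0.5]
x = [1, -1, -1, 1, 1]
y = [h[0] * x[k] + (h[1] * x[k - 1] if k else 0.0) for k in range(len(x))]
print(viterbi_mlsd(y, h))  # recovers x: [1.0, -1.0, -1.0, 1.0, 1.0]
```

With channel memory L the trellis has 2^L states; the receiver's whitening/compression step shrinks the effective memory, which is exactly the ~4x state reduction the paper reports.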