WorldWideScience

Sample records for integer-based post-compression rate-distortion

  1. A CU-Level Rate and Distortion Estimation Scheme for RDO of Hardware-Friendly HEVC Encoders Using Low-Complexity Integer DCTs.

    Science.gov (United States)

    Lee, Bumshik; Kim, Munchurl

    2016-08-01

    In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a large number of multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementations of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal matrices of order 4×4 and 8×8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain, without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed that uses a pseudo-entropy code to obtain accurate total rate estimates. The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of…
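    The core trick above — a multiplication-free transform plus rate and distortion estimates read directly from the quantized coefficients — can be sketched as follows. This is an illustrative toy (a 4-point WHT and a uniform quantizer), not the authors' 4×4/8×8 design; the function names are hypothetical:

```python
import numpy as np

def wht4(x):
    """4-point Walsh-Hadamard transform via a butterfly,
    using only additions and subtractions (no multiplications)."""
    a, b, c, d = x
    t0, t1 = a + b, c + d        # first butterfly stage
    t2, t3 = a - b, c - d
    return np.array([t0 + t1, t0 - t1, t2 + t3, t2 - t3])

def estimate_rate_distortion(residual, qstep):
    """Illustrative CU-level estimate: texture rate proxied by the number
    of non-zero quantized coefficients, distortion measured in the
    transform domain (no de-quantization / inverse transform needed)."""
    coeffs = wht4(residual.astype(np.int64))
    q = np.round(coeffs / qstep).astype(np.int64)   # quantize
    rate_proxy = np.count_nonzero(q)                # non-zero coefficient count
    distortion = np.sum((coeffs - q * qstep) ** 2)  # transform-domain SSE
    return rate_proxy, distortion
```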

  2. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.

    2014-04-01

    In this paper, compressed sensing techniques are proposed to linearize commercial power amplifiers driven by orthogonal frequency division multiplexing signals. The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach, while the second incorporates a priori information about the distortions to enhance the estimation. Finally, the third technique involves an iterative data-aided algorithm that does not require any pilot carriers and hence allows the system to work at maximum bandwidth efficiency. The performances of all the proposed techniques are evaluated on a commercial power amplifier and compared. The error vector magnitude and symbol error rate results show the ability of compressed sensing to compensate for the amplifier's nonlinear distortions. © 2013 Elsevier B.V.

  3. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
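    The block-level model above can be sketched directly: fit PSNR as a quadratic in bit-depth on training measurements, then zero the first derivative. The helper names are illustrative, not from the paper:

```python
import numpy as np

def fit_psnr_model(bit_depths, psnrs):
    # Fit PSNR(b) ≈ a*b^2 + c*b + d to training measurements
    # (least squares; returns [a, c, d], highest degree first).
    return np.polyfit(bit_depths, psnrs, 2)

def optimal_bit_depth(model):
    a, c, _ = model
    # d/db (a*b^2 + c*b + d) = 2*a*b + c = 0  ->  b* = -c / (2a)
    # (a maximum only when a < 0, which holds for a concave PSNR model).
    return -c / (2.0 * a)
```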

  4. Information theory and rate distortion theory for communications and compression

    CERN Document Server

    Gibson, Jerry

    2013-01-01

    This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover…

  5. Compressed sensing techniques for receiver based post-compensation of transmitter's nonlinear distortions in OFDM systems

    KAUST Repository

    Owodunni, Damilola S.; Ali, Anum Z.; Quadeer, Ahmed Abdul; Al-Safadi, Ebrahim B.; Hammi, Oualid; Al-Naffouri, Tareq Y.

    2014-01-01

    The nonlinear distortion is considered as a sparse phenomenon in the time-domain, and three compressed sensing based algorithms are presented to estimate and compensate for these distortions at the receiver using a few and, at times, even no frequency-domain free carriers (i.e. pilot carriers). The first technique is a conventional compressed sensing approach…

  6. EP-based wavelet coefficient quantization for linear distortion ECG data compression.

    Science.gov (United States)

    Hung, King-Chu; Wu, Tsung-Ching; Lee, Hsieh-Wei; Liu, Tung-Kuan

    2014-07-01

    Reconstruction quality maintenance is of the essence for ECG data compression due to the desire for diagnostic use. Quantization schemes with non-linear distortion characteristics usually result in time-consuming quality control that blocks real-time application. In this paper, a new wavelet coefficient quantization scheme based on an evolution program (EP) is proposed for wavelet-based ECG data compression. The EP search can create a stationary relationship among the quantization scales of the multi-resolution levels. The stationary property implies that the multi-level quantization scales can be controlled with a single variable. This hypothesis can lead to a simple design of linear distortion control with 3-D curve fitting technology. In addition, a competitive strategy is applied to alleviate the data dependency effect. Using the ECG signals stored in the MIT and PTB databases, many experiments were undertaken to evaluate compression performance, quality control efficiency, and data dependency influence. The experimental results show that the new EP-based quantization scheme can obtain high compression performance and maintain linear distortion behavior efficiently. This characteristic guarantees fast quality control even when the prediction model mismatches the practical distortion curve. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.
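    The single-variable control idea can be illustrated as below; the fixed ratio between successive level scales is an assumption standing in for the EP-derived stationary relationship, and the names are hypothetical:

```python
import numpy as np

def level_scales(q, num_levels, ratio=0.5):
    """Illustrative 'stationary' relation: one control variable q sets
    the quantization scale of every resolution level via a fixed ratio
    (the ratio is an assumption for illustration, not the paper's fit)."""
    return [q * ratio**k for k in range(num_levels)]

def quantize_subbands(subbands, q):
    """Uniformly quantize each wavelet subband with its level's scale."""
    scales = level_scales(q, len(subbands))
    return [np.round(sb / s) * s for sb, s in zip(subbands, scales)]
```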

  7. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... Figure 2. Cross-sectional view of medical image (statistical representation). ... The Integer Wavelet Transform (IWT) is used to have lossless processing.

  8. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, explore the effects of permuting the enumeration order based on element probabilities…
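    A minimal sketch of a recursive subset-size decomposition in the spirit described (the entropy coding of the emitted counts, and the probability-based permutation of the enumeration order, are omitted; the function name is illustrative):

```python
def subset_size_decomposition(elements, lo, hi):
    """Recursively emit, for each interval split, how many set elements
    fall in the left half — the backbone of a recursive subset-size code.
    The counts would then be entropy coded using element statistics."""
    out = []
    def rec(elems, lo, hi):
        if hi - lo <= 1 or not elems:
            return
        mid = (lo + hi) // 2
        left = [e for e in elems if e < mid]
        right = [e for e in elems if e >= mid]
        out.append(len(left))       # size of the left-half subset
        rec(left, lo, mid)
        rec(right, mid, hi)
    rec(sorted(elements), lo, hi)
    return out
```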

  9. Guessing and compression subject to distortion

    OpenAIRE

    Hanawal, Manjesh Kumar; Sundaresan, Rajesh

    2010-01-01

    The problem of guessing a random string is revisited. The relationship between guessing without distortion and compression is extended to the case when the source alphabet size is countably infinite. Further, a similar relationship is established for the case when distortion is allowed, by establishing a tight relationship between rate distortion codes and guessing strategies.

  10. Spectral Distortion in Lossy Compression of Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Bruno Aiazzi

    2012-01-01

    Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may either be set constant with wavelength, or be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM in the case of reflectance spectra obtained from compressed radiance data, when compared with constant distortion allocation at the same compression ratio.
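    The SAM criterion used above is straightforward to compute per pixel spectrum; a minimal sketch:

```python
import numpy as np

def spectral_angle(u, v):
    """Spectral angle mapper (SAM) between an original and a
    decompressed pixel spectrum, in radians (0 = identical direction)."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding
```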

  11. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.

    2013-12-01

    Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power-efficient operation of an orthogonal frequency division multiple access cognitive radio system and realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Overdriven amplifiers result in in-band distortions and out-of-band interference. Out-of-band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  12. Compressed sensing based joint-compensation of power amplifier's distortions in OFDMA cognitive radio systems

    KAUST Repository

    Ali, Anum Z.; Hammi, Oualid; Al-Naffouri, Tareq Y.

    2013-01-01

    Linearization of user equipment power amplifiers driven by orthogonal frequency division multiplexing signals is addressed in this paper. Particular attention is paid to the power-efficient operation of an orthogonal frequency division multiple access cognitive radio system and realization of such a system using compressed sensing. Specifically, precompensated overdriven amplifiers are employed at the mobile terminal. Overdriven amplifiers result in in-band distortions and out-of-band interference. Out-of-band interference mostly occupies the spectrum of inactive users, whereas the in-band distortions are mitigated using compressed sensing at the receiver. It is also shown that the performance of the proposed scheme can be further enhanced using multiple measurements of the distortion signal in single-input multi-output systems. Numerical results verify the ability of the proposed setup to improve error vector magnitude, bit error rate, outage capacity and mean squared error. © 2011 IEEE.

  13. Quality Evaluation and Nonuniform Compression of Geometrically Distorted Images Using the Quadtree Distortion Map

    Directory of Open Access Journals (Sweden)

    Cristina Costa

    2004-09-01

    The paper presents an analysis of the effects of lossy compression algorithms applied to images affected by geometrical distortion. It is shown that the encoding-decoding process results in nonhomogeneous image degradation in the geometrically corrected image, due to the different amount of information associated with each pixel. A distortion measure named the quadtree distortion map (QDM), able to quantify this aspect, is proposed. Furthermore, the QDM is exploited to achieve adaptive compression of geometrically distorted pictures, in order to ensure uniform quality in the final image. Tests are performed using the JPEG and JPEG2000 coding standards in order to quantitatively and qualitatively assess the performance of the proposed method.

  14. Complexity Control of Fast Motion Estimation in H.264/MPEG-4 AVC with Rate-Distortion-Complexity optimization

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren; Aghito, Shankar Manuel

    2007-01-01

    A complexity control algorithm for H.264 advanced video coding is proposed. The algorithm can control the complexity of integer inter motion estimation for a given target complexity. The Rate-Distortion-Complexity performance is improved by a complexity prediction model, simple analysis of the past statistics, and a control scheme. The algorithm also works well under scene change conditions. Test results for coding interlaced video (720x576 PAL) are reported.

  15. Analysis of tractable distortion metrics for EEG compression applications

    International Nuclear Information System (INIS)

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Cárdenas-Barrera, Julián

    2012-01-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not, at no cost to the compression ratio.
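    The two criteria compared in the paper can be sketched side by side; note the PRD is relative and dimensionless, while the RMSE keeps the signal's physical units:

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference: relative, dimensionless."""
    orig = np.asarray(original, dtype=float)
    err = orig - np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(np.sum(err**2) / np.sum(orig**2))

def rmse(original, reconstructed):
    """Root-mean-square error: same units as the signal (e.g. microvolts),
    so it can be checked directly against clinical noise limits."""
    err = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    return float(np.sqrt(np.mean(err**2)))
```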

  16. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for the rate, distortion, and slope of the R-D curve for inter and intra frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  17. Comparison on Integer Wavelet Transforms in Spherical Wavelet Based Image Based Relighting

    Institute of Scientific and Technical Information of China (English)

    WANG Ze; LEE Yin; LEUNG Chi-sing; WONG Tien-tsin; ZHU Yisheng

    2003-01-01

    To provide good-quality rendering in an Image based relighting (IBL) system, a tremendous number of reference images under various illumination conditions is needed. Therefore, data compression is essential to enable interactive operation, and rendering speed is another crucial consideration for real applications. Based on the Spherical wavelet transform (SWT), this paper presents a quick representation method with the Integer wavelet transform (IWT) for the IBL system. It focuses on a comparison of different IWTs with the Embedded zerotree wavelet (EZW) used in the IBL system. The whole compression procedure contains two major compression steps. Firstly, SWT is applied to exploit the correlation among different reference images. Secondly, the SW-transformed images are compressed with an IWT-based image compression approach. Two IWTs are used and good results are shown in the simulations.

  18. Distortion Estimation in Compressed Music Using Only Audio Fingerprints

    NARCIS (Netherlands)

    Doets, P.J.O.; Lagendijk, R.L.

    2008-01-01

    An audio fingerprint is a compact yet very robust representation of the perceptually relevant parts of an audio signal. It can be used for content-based audio identification, even when the audio is severely distorted. Audio compression changes the fingerprint slightly. We show that these small…

  19. Energy minimization of mobile video devices with a hardware H.264/AVC encoder based on energy-rate-distortion optimization

    Science.gov (United States)

    Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min

    2014-09-01

    In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolong its lifetime. Previous Energy-rate-distortion (E-R-D) optimization methods based on a software codec are not suitable for practical mobile camera systems, because the energy consumption is too large and the encoding rate is too low. In this paper, we propose an E-R-D model for the hardware codec based on a gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy-minimizing algorithm for mobile video camera sensors has been developed with the GOP (Group of Pictures) size and QP (Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% energy consumption saving while satisfying the rate and distortion constraints.
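    With an E-R-D model in hand, the run-time control reduces to a constrained search over (GOP size, QP); a minimal sketch, with the model values assumed given (e.g. from an E-R-D model such as the paper's gate-level one) and the function name illustrative:

```python
def select_gop_qp(candidates, rate_limit, distortion_limit):
    """Pick the (gop, qp) pair with the lowest modelled energy that
    still satisfies the rate and distortion constraints.
    `candidates` maps (gop, qp) -> (energy, rate, distortion)."""
    feasible = {k: v for k, v in candidates.items()
                if v[1] <= rate_limit and v[2] <= distortion_limit}
    if not feasible:
        return None   # no operating point meets both constraints
    return min(feasible, key=lambda k: feasible[k][0])
```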

  20. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, the underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available and the content features of the input signal are not considered. In this paper, an efficient content-adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm achieves more satisfactory video quality with negligible additional computational complexity.

  1. Rate-distortion analysis of steganography for conveying stereovision disparity maps

    Science.gov (United States)

    Umeda, Toshiyuki; Batolomeu, Ana B. D. T.; Francob, Filipe A. L.; Delannay, Damien; Macq, Benoit M. M.

    2004-06-01

    3-D image transmission in a way which is compliant with traditional 2-D representations can be done through the embedding of disparity maps within the 2-D signal. This approach enables the transmission of stereoscopic video sequences or images on traditional analogue TV channels (PAL or NTSC) or printed photographic images. The aim of this work is to study the achievable performance of such a technique. The embedding of disparity maps has to be seen as a global rate-distortion problem. The embedding capacity through steganography is determined by the transmission channel noise and by the bearable distortion on the watermarked image. The distortion of the 3-D image displayed as two stereo views depends on the rate allocated to the complementary information required to build those two views from one reference 2-D image. Results from work on the scalar Costa scheme are used to optimize the embedding of the disparity map compressed bit stream into the reference image. A method for computing the optimal trade-off between the disparity map distortion and embedding distortion as a function of the channel impairments is proposed. The goal is to get a similar distortion on the left (the reference image) and the right (the disparity compensated image) images. We show that in typical situations the embedding of 2 bits/pixel in the left image, while the disparity map is compressed at 1 bit per pixel, leads to a good trade-off. The disparity map is encoded with a strong error correcting code, including synchronisation bits.

  2. A clinical distortion index for compressed echocardiogram evaluation: recommendations for Xvid codec

    International Nuclear Information System (INIS)

    Alesanco, A; Hernández, C; García, J; Portolés, A; Aured, C; García, M; Ramos, L; Serrano, P

    2009-01-01

    This paper introduces a new clinical distortion index able to measure the decrease in diagnostic content in compressed echocardiograms. It is calculated using cardiologists' answers to a clinical testbed composed of two types of tests: one blind and the other semi-blind. This index may be used to compare clinical performance among video codecs from a clinical perspective. It can also be used to classify compression rates into useful and useless ranges, thus providing recommendations for echocardiogram compression. A study carried out in order to illustrate its use with the Xvid video codec is also presented. The results obtained showed that, for 2D and M modes, the transmission rate should be at least 768 kbit s⁻¹, and for color Doppler mode and pulsed/continuous Doppler, 256 kbit s⁻¹.

  3. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object, so three composite-technique-based color image compression schemes are implemented to achieve high compression with no loss in the original image, better performance, and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the least values of bits per pixel (bpp), time (T), and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  4. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    Science.gov (United States)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, the harmonic distortion power rate index, based on IEEE Std 1459-2010, is proposed for harmonic source location. A method based only on harmonic distortion power is not suitable when the background harmonic is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results of typical harmonic loads verified the effectiveness of the proposed method.
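    The decision rule described above is simple to state in code; the threshold itself is assumed to come from prior information, and the function name is hypothetical:

```python
def dominant_harmonic_source(distortion_power, threshold):
    """Locate the main harmonic source at the point of common coupling:
    if the measured harmonic distortion power exceeds the prior-information
    threshold, attribute it to the customer side, else the utility side."""
    return "customer" if distortion_power > threshold else "utility"
```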

  5. Rate-distortion analysis of directional wavelets.

    Science.gov (United States)

    Maleki, Arian; Rajaei, Boshra; Pourreza, Hamid Reza

    2012-02-01

    The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not "sharp," the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis. © 2011 IEEE

  6. Distortion-Based Link Adaptation for Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Andrew Nix

    2008-06-01

    Wireless local area networks (WLANs) such as IEEE 802.11a/g utilise numerous transmission modes, each providing different throughputs and reliability levels. Most link adaptation algorithms proposed in the literature (i) maximise the error-free data throughput, (ii) do not take into account the content of the data stream, and (iii) rely strongly on the use of ARQ. Low-latency applications, such as real-time video transmission, do not permit large numbers of retransmissions. In this paper, a novel link adaptation scheme is presented that improves the quality of service (QoS) for video transmission. Rather than maximising the error-free throughput, our scheme minimises the video distortion of the received sequence. With the use of simple and local rate-distortion measures and end-to-end distortion models at the video encoder, the proposed scheme estimates the received video distortion at the current transmission rate, as well as at the adjacent lower and higher rates. This allows the system to select the link speed which offers the lowest distortion and to adapt to the channel conditions. Simulation results are presented using the H.264/MPEG-4 AVC video compression standard over IEEE 802.11g. The results show that the proposed system closely follows the optimum theoretic solution.
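    The mode selection step can be sketched as follows, with `estimate_distortion` standing in for the encoder-side end-to-end distortion model (names are illustrative):

```python
def select_link_speed(modes, estimate_distortion, current_idx):
    """Evaluate the estimated received video distortion at the current
    rate and at the adjacent lower/higher rates, and pick the best.
    Returns the index of the selected mode."""
    lo = max(current_idx - 1, 0)
    hi = min(current_idx + 1, len(modes) - 1)
    candidates = range(lo, hi + 1)   # current mode and its neighbours
    return min(candidates, key=lambda i: estimate_distortion(modes[i]))
```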

  7. Influence of chest compression artefact on capnogram-based ventilation detection during out-of-hospital cardiopulmonary resuscitation.

    Science.gov (United States)

    Leturiondo, Mikel; Ruiz de Gauna, Sofía; Ruiz, Jesus M; Julio Gutiérrez, J; Leturiondo, Luis A; González-Otero, Digna M; Russell, James K; Zive, Dana; Daya, Mohamud

    2018-03-01

    Capnography has been proposed as a method for monitoring the ventilation rate during cardiopulmonary resuscitation (CPR). A high incidence (above 70%) of capnograms distorted by chest compression induced oscillations has been previously reported in out-of-hospital (OOH) CPR. The aim of the study was to better characterize the chest compression artefact and to evaluate its influence on the performance of a capnogram-based ventilation detector during OOH CPR. Data from the MRx monitor-defibrillator were extracted from OOH cardiac arrest episodes. For each episode, presence of chest compression artefact was annotated in the capnogram. Concurrent compression depth and transthoracic impedance signals were used to identify chest compressions and to annotate ventilations, respectively. We designed a capnogram-based ventilation detection algorithm and tested its performance with clean and distorted episodes. Data were collected from 232 episodes comprising 52 654 ventilations, with a mean (±SD) of 227 (±118) per episode. Overall, 42% of the capnograms were distorted. Presence of chest compression artefact degraded algorithm performance in terms of ventilation detection, estimation of ventilation rate, and the ability to detect hyperventilation. Capnogram-based ventilation detection during CPR using our algorithm was compromised by the presence of chest compression artefact. In particular, artefact spanning from the plateau to the baseline strongly degraded ventilation detection, and caused a high number of false hyperventilation alarms. Further research is needed to reduce the impact of chest compression artefact on capnographic ventilation monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A new DWT/MC/DPCM video compression framework based on EBCOT

    Science.gov (United States)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This extends the application of EBCOT from still images to video. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that, under the specified conditions, the proposed coder outperforms the benchmarks in terms of rate versus distortion.

  9. Spectral Behavior of Weakly Compressible Aero-Optical Distortions

    Science.gov (United States)

    Mathews, Edwin; Wang, Kan; Wang, Meng; Jumper, Eric

    2016-11-01

    In classical theories of optical distortions by atmospheric turbulence, an appropriate and key assumption is that index-of-refraction variations are dominated by fluctuations in temperature and that the effects of turbulent pressure fluctuations are negligible. This assumption is, however, not generally valid for aero-optical distortions caused by turbulent flow over an optical aperture, where both temperature and pressure fluctuations may contribute significantly to the index-of-refraction fluctuations. A general expression for weak fluctuations in refractive index is derived using the ideal gas law and the Gladstone-Dale relation, and applied to describe the spectral behavior of aero-optical distortions. Large-eddy simulations of weakly compressible, temporally evolving shear layers are then used to verify the theoretical results. Computational results support the theoretical findings and confirm that if the log slope of the 1-D density spectrum in the inertial range is -mρ, the optical phase distortion spectral slope is given by -(mρ + 1). The value of mρ is then shown to depend on the ratio of shear-layer free-stream densities and to be bounded by the spectral slopes of the temperature and pressure fluctuations. Supported by HEL-JTO through AFOSR Grant FA9550-13-1-0001 and the Blue Waters Graduate Fellowship Program.
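The scaling claimed in the abstract can be stated compactly; the notation below is assumed for illustration and is not taken verbatim from the paper. Weak refractive-index fluctuations follow density through the Gladstone-Dale relation, the optical path difference (OPD) is a line-of-sight integral, and that integration steepens the spectrum by one power of wavenumber:

```latex
n'(\mathbf{x},t) = K_{GD}\,\rho'(\mathbf{x},t), \qquad
\mathrm{OPD}(x,y,t) = \int n'(x,y,z,t)\,\mathrm{d}z,
```

```latex
\Phi_\rho(k_1) \propto k_1^{-m_\rho}
\;\Longrightarrow\;
\Phi_{\mathrm{OPD}}(k_1) \propto k_1^{-(m_\rho + 1)},
```

which is the slope relation -(mρ + 1) quoted in the abstract.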

  10. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced so that the proposed method achieves low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
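The core DSC idea behind abstracts like this one can be sketched in a few lines: the encoder transmits only a coset index (here, a residue modulo M) and the decoder resolves the ambiguity using side information, which is why a good predictor (the multilinear regression model above) matters. This is a minimal illustration of the coset principle, not the paper's algorithm; M and the function names are assumptions.

```python
def dsc_encode(x, M):
    """Send only the coset index of the integer sample x: log2(M) bits
    instead of the full sample range."""
    return x % M

def dsc_decode(coset, y, M):
    """Pick the member of the coset closest to the side information y.
    Decoding is correct whenever |x - y| < M/2, i.e. whenever the
    predictor supplying y is good enough for the chosen step M."""
    return coset + M * round((y - coset) / M)
```

For example, with M = 16 the encoder sends 4 bits per sample; any side information within ±8 of the true value recovers it exactly, which mirrors the "restriction of correct DSC decoding" that fixes the quantization step.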

  11. Rate-distortion theory and human perception.

    Science.gov (United States)

    Sims, Chris R

    2016-07-01

    The fundamental goal of perception is to aid in the achievement of behavioral objectives. This requires extracting and communicating useful information from noisy and uncertain sensory signals. At the same time, given the complexity of sensory information and the limitations of biological information processing, it is necessary that some information must be lost or discarded in the act of perception. Under these circumstances, what constitutes an 'optimal' perceptual system? This paper describes the mathematical framework of rate-distortion theory as the optimal solution to the problem of minimizing the costs of perceptual error subject to strong constraints on the ability to communicate or transmit information. Rate-distortion theory offers a general and principled theoretical framework for developing computational-level models of human perception (Marr, 1982). Models developed in this framework are capable of producing quantitatively precise explanations for human perceptual performance, while yielding new insights regarding the nature and goals of perception. This paper demonstrates the application of rate-distortion theory to two benchmark domains where capacity limits are especially salient in human perception: discrete categorization of stimuli (also known as absolute identification) and visual working memory. A software package written for the R statistical programming language is described that aids in the development of models based on rate-distortion theory. Copyright © 2016 The Author. Published by Elsevier B.V. All rights reserved.
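The rate-distortion trade-off discussed above is classically computed with the Blahut-Arimoto algorithm, which the abstract's framework builds on. The sketch below computes one point on the R(D) curve for a discrete source; it is a textbook illustration, not the R package the abstract describes, and the parameter names are assumptions.

```python
import math

def blahut_arimoto(p_x, dist, beta, iters=500):
    """Blahut-Arimoto iteration: one point on the rate-distortion curve.

    p_x:  source distribution (list of probabilities).
    dist: dist[i][j] = distortion between source symbol i and reproduction j.
    beta: Lagrange slope; larger beta -> lower distortion, higher rate.
    Returns (rate_in_bits, expected_distortion).
    """
    n, m = len(p_x), len(dist[0])
    q = [1.0 / m] * m                       # reproduction marginal q(xhat)
    for _ in range(iters):
        # conditional q(xhat|x) proportional to q(xhat) * exp(-beta * d)
        cond = []
        for i in range(n):
            row = [q[j] * math.exp(-beta * dist[i][j]) for j in range(m)]
            s = sum(row)
            cond.append([r / s for r in row])
        # re-estimate the reproduction marginal
        q = [sum(p_x[i] * cond[i][j] for i in range(n)) for j in range(m)]
    rate = sum(p_x[i] * cond[i][j] * math.log2(cond[i][j] / q[j])
               for i in range(n) for j in range(m) if cond[i][j] > 0)
    D = sum(p_x[i] * cond[i][j] * dist[i][j]
            for i in range(n) for j in range(m))
    return rate, D
```

For a uniform binary source with Hamming distortion the output lands on the known curve R(D) = 1 - H(D), a useful sanity check when fitting perceptual models of this kind.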

  12. LOW COMPLEXITY HYBRID LOSSY TO LOSSLESS IMAGE CODER WITH COMBINED ORTHOGONAL POLYNOMIALS TRANSFORM AND INTEGER WAVELET TRANSFORM

    Directory of Open Access Journals (Sweden)

    R. Krishnamoorthy

    2012-05-01

    Full Text Available In this paper, a new lossy-to-lossless image coding scheme combining an Orthogonal Polynomials Transform with an Integer Wavelet Transform is proposed. The Lifting Scheme based Integer Wavelet Transform (LS-IWT) is first applied to the image in order to reduce blocking artifacts and memory demand. The Embedded Zerotree Wavelet (EZW) subband coding algorithm is used for progressive image coding, which achieves efficient bit-rate reduction. The computational complexity of the lower-subband coding of the EZW algorithm is reduced with a new integer-based Orthogonal Polynomials transform coding. Normalization and mapping are performed on the subbands of the image to exploit subjective redundancy, and the zerotree structure is obtained for EZW coding, so the computational complexity is greatly reduced. Experimental results also show that efficient bit-rate reduction is achieved for both lossy and lossless compression when compared with existing techniques.

  13. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group-velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  14. Perceptual distortion analysis of color image VQ-based coding

    Science.gov (United States)

    Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine

    1997-04-01

    It is generally accepted that an RGB color image can be easily encoded by applying a gray-scale compression technique to each of the three color planes. Such an approach, however, fails to take into account the correlations existing between color planes and perceptual factors. We evaluated several linear and nonlinear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured the contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.

  15. Investigating Students’ Development of Learning Integer Concept and Integer Addition

    Directory of Open Access Journals (Sweden)

    Nenden Octavarulia Shanty

    2016-09-01

    Full Text Available This research aimed at investigating students' development of learning the integer concept and integer addition. The investigation was based on analyzing students' work in solving the given mathematical problems in each instructional activity, designed according to Realistic Mathematics Education (RME) levels. Design research was chosen to contribute to developing a local instruction theory for the teaching and learning of the integer concept and integer addition. In design research, the Hypothetical Learning Trajectory (HLT) plays an important role as a design and research instrument. It was designed in the preliminary design phase and tested with three grade-six students of OASIS International School, Ankara, Turkey. The results of the experiments showed that temperature in the thermometer context could stimulate students' informal knowledge of the integer concept. Furthermore, the strategies and tools used by the students in comparing and relating two temperatures were gradually developed into more formal mathematics. The representation of the line inside the thermometer, which was then called the number line, could bring the students to the last activity level, namely rules for adding integers, and became the model for more formal reasoning. Based on these findings, it can be concluded that students' learning of the integer concept and integer addition developed through the RME levels.
    Keywords: integer concept, integer addition, Realistic Mathematics Education
    DOI: http://dx.doi.org/10.22342/jme.7.2.3538.57-72

  16. Orbit and optics distortion in fixed field alternating gradient muon accelerators

    Directory of Open Access Journals (Sweden)

    Shinji Machida

    2007-11-01

    Full Text Available In a linear nonscaling fixed field alternating gradient (FFAG) accelerator, betatron tunes vary over a wide range and a beam has to cross integer and half-integer tunes several times. Although it is plausible to say that integer and half-integer resonances are not harmful if the crossing speed is fast, no quantitative argument exists. With tracking simulation, we studied orbit and optics distortion due to alignment and magnet errors. It was found that the concept of integer and half-integer resonance crossing is irrelevant to explaining beam behavior in a nonscaling FFAG when acceleration is fast and betatron tunes change quickly. In a muon FFAG accelerator, for example, acceleration takes 17 turns and the betatron tunes change by more than 10. Instead, the orbit and optics distortion is excited by random dipole and quadrupole kicks. The latter causes beam size growth, because the beam starts tumbling in phase space, but not necessarily emittance growth.

  17. Adaptive Binary Arithmetic Coder-Based Image Feature and Segmentation in the Compressed Domain

    Directory of Open Access Journals (Sweden)

    Hsi-Chin Hsin

    2012-01-01

    Full Text Available Image compression is necessary in various applications, especially for efficient transmission over a band-limited channel. It is thus desirable to be able to segment an image in the compressed domain directly, so that the burden of decompression can be avoided. Motivated by the adaptive binary arithmetic coder (MQ coder) of JPEG2000, we propose an efficient scheme to segment the feature vectors that are extracted from the code stream of an image. We modify the Compression-based Texture Merging (CTM) algorithm to alleviate the overmerging problem by making use of the rate-distortion information. Experimental results show that the MQ coder-based image segmentation is preferable in terms of the boundary displacement error (BDE) measure. It has the advantage of saving computational cost, as the segmentation results are satisfactory even at low bit rates (bits per pixel, bpp).

  18. New technique for real-time distortion-invariant multiobject recognition and classification

    Science.gov (United States)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant OPR system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. The wavelet transform technique was used for digital preprocessing of the input scene, to suppress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation-signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing capability and multithread programming technology were used to perform high-speed parallel multitask processing and to speed up the post-processing rate for ROIs. The reference filter library was constructed for the distorted versions of the 3D object model images based on distortion-parameter tolerance measurements of rotation, azimuth, and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, using the preprocessing, the post-processing, the nonlinear algorithm of optimum filtering, the RFL construction technique, and multithread programming, a high probability of recognition and a high recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. The recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.

  19. Multiparametric programming based algorithms for pure integer and mixed-integer bilevel programming problems

    KAUST Repository

    Domínguez, Luis F.

    2010-12-01

    This work introduces two algorithms for the solution of pure integer and mixed-integer bilevel programming problems by multiparametric programming techniques. The first algorithm addresses the integer case of the bilevel programming problem where integer variables of the outer optimization problem appear in linear or polynomial form in the inner problem. The algorithm employs global optimization techniques to convexify nonlinear terms generated by a reformulation linearization technique (RLT). A continuous multiparametric programming algorithm is then used to solve the reformulated convex inner problem. The second algorithm addresses the mixed-integer case of the bilevel programming problem where integer and continuous variables of the outer problem appear in linear or polynomial forms in the inner problem. The algorithm relies on the use of global multiparametric mixed-integer programming techniques at the inner optimization level. In both algorithms, the multiparametric solutions obtained are embedded in the outer problem to form a set of single-level (M)(I)(N)LP problems - which are then solved to global optimality using standard fixed-point (global) optimization methods. Numerical examples drawn from the open literature are presented to illustrate the proposed algorithms. © 2010 Elsevier Ltd.
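For intuition about what the multiparametric machinery above is solving, a pure-integer bilevel problem on a toy scale can be solved by brute-force enumeration: for each outer (leader) decision, solve the inner (follower) problem, then score the leader's objective at the follower's response. This reference sketch uses the optimistic convention for ties and is only a sanity-check baseline, not the paper's algorithm; all names are assumptions.

```python
def solve_integer_bilevel(outer_obj, inner_obj, x_range, y_range, feasible):
    """Brute-force reference solver for a tiny pure-integer bilevel problem:
        max_x outer_obj(x, y*)  subject to  y* in argmin_y inner_obj(x, y).
    Optimistic formulation: if the inner argmin is not unique, the follower
    picks the y that is best for the leader. Only viable for toy instances."""
    best = None
    for x in x_range:
        ys = [y for y in y_range if feasible(x, y)]
        if not ys:
            continue
        m = min(inner_obj(x, y) for y in ys)
        argmin = [y for y in ys if inner_obj(x, y) == m]
        y_star = max(argmin, key=lambda y: outer_obj(x, y))   # optimistic tie-break
        cand = (outer_obj(x, y_star), x, y_star)
        if best is None or cand[0] > best[0]:
            best = cand
    return best   # (outer objective value, x, y)
```

Enumeration is exponential in the number of variables, which is exactly why the reformulation and multiparametric techniques in the abstract are needed for anything nontrivial.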

  20. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The coders selected to code any given image region are chosen through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method that allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
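The threshold-driven coder selection at the heart of MBC can be sketched with toy stand-ins for the DCT coders: for each block, try the coders from cheapest to most expensive and keep the first whose distortion falls under the threshold. The three "coders" below (block mean, half-block means, exact copy) are illustrative assumptions, not the paper's DCT coders.

```python
def mbc_encode(block, threshold):
    """Mixture-block-coding sketch: pick the cheapest of three toy coders
    whose squared error on this 4-sample block is within the threshold."""
    coders = [
        ("mean",   1, lambda b: [sum(b) / len(b)] * len(b)),
        ("halves", 2, lambda b: [sum(b[:2]) / 2] * 2 + [sum(b[2:]) / 2] * 2),
        ("exact",  4, lambda b: list(b)),           # always meets the threshold
    ]
    for name, rate, code in coders:
        approx = code(block)
        err = sum((a - b) ** 2 for a, b in zip(approx, block))
        if err <= threshold:
            return name, rate, approx
```

Summing the per-block rates gives the variable overall rate; smooth regions land on the cheap coders and busy regions on the expensive ones, which is the mechanism behind the ~0.5 bit/pel figure.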

  1. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
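The transform-then-quantize pipeline described above can be made concrete on a 1-D block: take a DCT-II, divide the coefficients by a step size and round, and reconstruct from the dequantized coefficients. This is a generic illustration of the mechanism (many coefficients of smooth blocks quantize to zero, and the reconstruction error is bounded by the step size), not the JPEG or Integer Cosine Transform definition.

```python
import math

def dct(x):
    """Unscaled DCT-II of a length-N block."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N) for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse (DCT-III based) of dct() above."""
    N = len(X)
    return [X[0] / N + (2.0 / N) * sum(X[k] * math.cos(math.pi * k * (n + 0.5) / N)
                                       for k in range(1, N))
            for n in range(N)]

def quantize_block(x, q):
    """Quantize DCT coefficients with step q; return (levels, reconstruction).
    The integer levels are what an entropy coder would encode."""
    levels = [round(c / q) for c in dct(x)]
    rec = idct([lv * q for lv in levels])
    return levels, rec
```

On a smooth ramp most high-frequency levels are zero, which is the entropy reduction the abstract refers to.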

  2. Real-time distortion correction for visual inspection systems based on FPGA

    Science.gov (United States)

    Liang, Danhua; Zhang, Zhaoxia; Chen, Xiaodong; Yu, Daoyin

    2008-03-01

    Visual inspection is a new technology based on research in computer vision, which focuses on the measurement of an object's geometry and location. It can be widely used in online measurement and other real-time measurement processes. Because of the defects of traditional visual inspection, a new visual detection mode, all-digital intelligent acquisition and transmission, is presented. The image processing, including filtering, image compression, binarization, edge detection, and distortion correction, can be completed in programmable devices (FPGAs). As a wide-field-angle lens is adopted in the system, the output images suffer serious distortion. Limited by the computing speed of computers, software can only correct the distortion of static images, not of dynamic images. To meet the real-time requirement, we designed a distortion correction system based on an FPGA. In this hardware distortion correction method, the spatial correction data are first calculated in software, then converted into hardware storage addresses and stored in a hardware lookup table, through which data can be read out to correct the gray levels. The major benefit of using an FPGA is that the same circuit can be used for other circularly symmetric wide-angle lenses without modification.
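The software-precomputed lookup table described above can be sketched as follows: a one-parameter radial model maps each output pixel to the source pixel it should sample, and the table is then applied with one memory read per pixel, which is what makes the scheme hardware friendly. The radial model, normalization, and rounding policy here are illustrative assumptions, not the paper's calibration.

```python
def build_undistort_lut(w, h, k1):
    """Precompute, per output pixel, the flat index of the source pixel to
    sample, using a simple radial model r_d = r * (1 + k1 * r^2) with r
    normalized to the half-diagonal. Done once, in software."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    norm = (cx * cx + cy * cy) ** 0.5
    lut = []
    for y in range(h):
        for x in range(w):
            dx, dy = (x - cx) / norm, (y - cy) / norm
            scale = 1.0 + k1 * (dx * dx + dy * dy)
            sx = int(round(cx + dx * scale * norm))
            sy = int(round(cy + dy * scale * norm))
            if 0 <= sx < w and 0 <= sy < h:
                lut.append(sy * w + sx)
            else:
                lut.append(y * w + x)       # source outside frame: keep pixel
    return lut

def undistort(pixels, lut):
    """Apply the LUT: one table read per output pixel (the FPGA's job)."""
    return [pixels[i] for i in lut]
```

Swapping the lens only means regenerating the table; the per-pixel datapath is unchanged, mirroring the reuse argument in the abstract.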

  3. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
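The fixed-rate idea can be illustrated with a much simpler scheme than the paper's transform coder: store a block as one shared exponent plus fixed-point mantissas truncated to a fixed bit budget, so every block occupies exactly the same number of bits and random access is trivial. This is a toy in the same spirit, not the published compressor; the block size and budget are assumptions.

```python
import math

def compress_block(vals, bits_per_value):
    """Toy fixed-rate block-floating-point codec: one shared exponent for
    the block plus a signed fixed-point mantissa per value."""
    m = max(abs(v) for v in vals)
    e = math.frexp(m)[1] if m != 0 else 0            # shared exponent
    scale = 2.0 ** (bits_per_value - 1 - e)
    half = 1 << (bits_per_value - 1)
    q = [max(-half, min(half - 1, round(v * scale))) for v in vals]
    return e, q

def decompress_block(e, q, bits_per_value):
    scale = 2.0 ** (bits_per_value - 1 - e)
    return [qi / scale for qi in q]
```

Each value costs exactly `bits_per_value` bits (plus a small per-block exponent), and the absolute error is bounded by roughly one quantization step, i.e. the scheme is near lossless at a guaranteed rate.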

  4. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  5. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression as a primary applicable tool for medical applications was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  6. Multiparametric programming based algorithms for pure integer and mixed-integer bilevel programming problems

    KAUST Repository

    Domínguez, Luis F.; Pistikopoulos, Efstratios N.

    2010-01-01

    continuous multiparametric programming algorithm is then used to solve the reformulated convex inner problem. The second algorithm addresses the mixed-integer case of the bilevel programming problem where integer and continuous variables of the outer problem

  7. Distortion-Rate Bounds for Distributed Estimation Using Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Nihar Jindal

    2008-03-01

    Full Text Available We deal with centralized and distributed rate-constrained estimation of random signal vectors performed using a network of wireless sensors (encoders) communicating with a fusion center (decoder). For this context, we determine lower and upper bounds on the corresponding distortion-rate (D-R) function. The nonachievable lower bound is obtained by considering centralized estimation with a single sensor which has all observation data available, and by determining the associated D-R function in closed form. Interestingly, this D-R function can be achieved using an estimate-first-compress-afterwards (EC) approach, where the sensor (i) forms the minimum mean-square error (MMSE) estimate for the signal of interest; and (ii) optimally (in the MSE sense) compresses and transmits it to the FC, which reconstructs it. We further derive a novel alternating scheme to numerically determine an achievable upper bound of the D-R function for general distributed estimation using multiple sensors. The proposed algorithm tackles an analytically intractable minimization problem while accounting for sensor data correlations. The obtained upper bound is tighter than the one determined by having each sensor perform MSE-optimal encoding independently of the others. Numerical examples indicate that the algorithm performs well and yields D-R upper bounds that are relatively tight with respect to analytical alternatives obtained without taking into account the cross-correlations among sensor data.

  8. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  9. Construction of Fixed Rate Non-Binary WOM Codes Based on Integer Programming

    Science.gov (United States)

    Fujino, Yoju; Wadayama, Tadashi

    In this paper, we propose a construction of non-binary WOM (Write-Once-Memory) codes for WOM storages such as flash memories. The WOM codes discussed in this paper are fixed-rate WOM codes, where messages in a fixed alphabet of size $M$ can be sequentially written to the WOM storage at least $t^*$ times. In this paper, a WOM storage is modeled by a state transition graph. The proposed construction has the following two features. First, it includes a systematic method to determine the encoding regions in the state transition graph. Second, it includes a labeling method for states based on integer programming. Several novel WOM codes for $q$-level flash memories with 2 cells are constructed by the proposed construction. They achieve worst-case numbers of writes $t^*$ that meet the known upper bound in many cases. In addition, we constructed fixed-rate non-binary WOM codes with the capability to reduce ICI (inter-cell interference) of flash cells. One of the advantages of the proposed construction is its flexibility: it can be applied to various storage devices, to various dimensions (i.e., numbers of cells), and to various kinds of additional constraints.
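For readers new to WOM codes, the mechanism is easiest to see on the classic binary Rivest-Shamir code: two writes of a 2-bit message into three write-once cells (bits may only flip 0 to 1). This is a standard textbook example, not the non-binary construction of the paper; the first write uses a weight-at-most-1 codeword and a second, different message uses its bitwise complement.

```python
E1 = {0b00: 0b000, 0b01: 0b001, 0b10: 0b010, 0b11: 0b100}  # first-generation codewords
D1 = {v: k for k, v in E1.items()}

def wom_decode(state):
    """Weight <= 1: first-generation codeword; weight >= 2: its complement."""
    if bin(state).count("1") <= 1:
        return D1[state]
    return D1[0b111 ^ state]

def wom_write(message, state):
    """Write a 2-bit message; cells can only be set, never cleared."""
    if wom_decode(state) == message:
        return state                              # already stored, set nothing
    if bin(state).count("1") <= 1:                # generation 1 still available
        target = E1[message] if state == 0 else 0b111 ^ E1[message]
        assert state | target == target           # monotone: only 0 -> 1 flips
        return target
    raise ValueError("both write generations used")
```

The complement trick works because the second-generation codeword for message m has a zero only where the first-generation codeword of m has a one, so a conflict can only occur when rewriting the same message, in which case no write is needed. The paper generalizes this idea to $q$-level cells via integer programming.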

  10. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from those of traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.

  11. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  12. Binaural model-based dynamic-range compression.

    Science.gov (United States)

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. 30 and 12 hearing impaired (HI) listeners were aided individually with the algorithms for both experimental parts, respectively. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural-synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures, however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.

  13. Integers in number systems with positive and negative quadratic Pisot base

    OpenAIRE

    Masáková, Zuzana; Vávra, Tomáš

    2013-01-01

    We consider numeration systems with base $\beta$ and $-\beta$, for quadratic Pisot numbers $\beta$, and focus on comparing the combinatorial structure of the sets $\Z_\beta$ and $\Z_{-\beta}$ of numbers with integer expansion in base $\beta$, resp. $-\beta$. Our main result is the comparison of languages of infinite words $u_\beta$ and $u_{-\beta}$ coding the ordering of distances between consecutive $\beta$- and $(-\beta)$-integers. It turns out that for a class of roots $\beta$ of $x^2-mx-m$...

  14. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording are tested and results based on metrics used in the compression literature and morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to that of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus entailing a less costly low-rate sampling device that does not waste precious computational resources.
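
The core FRI recovery step can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a T-periodic stream of K Diracs is represented by its 2K+1 lowest Fourier coefficients, and the Dirac locations are recovered as roots of the annihilating filter polynomial (Prony's method); the function names are ours.

```python
import numpy as np

def fri_decompose(t_locs, amps, K, T=1.0):
    # Forward model: 2K+1 Fourier coefficients of a T-periodic Dirac stream
    m = np.arange(2 * K + 1)
    u = np.exp(-2j * np.pi * np.asarray(t_locs) / T)
    return (np.asarray(amps) * u[None, :] ** m[:, None]).sum(axis=1)

def fri_recover(x_hat, K, T=1.0):
    # Annihilating-filter (Prony) recovery of Dirac locations and amplitudes:
    # solve sum_{l=0..K} h[l] * x_hat[m-l] = 0 with h[0] = 1, then the roots
    # of h give u_k = exp(-2j*pi*t_k/T)
    A = np.array([[x_hat[m - 1 - l] for l in range(K)]
                  for m in range(K, 2 * K + 1)])
    b = -np.array([x_hat[m] for m in range(K, 2 * K + 1)])
    h = np.linalg.lstsq(A, b, rcond=None)[0]
    roots = np.roots(np.concatenate(([1.0], h)))
    t_locs = np.sort((-np.angle(roots) * T / (2 * np.pi)) % T)
    # Amplitudes from a Vandermonde least-squares fit
    u = np.exp(-2j * np.pi * t_locs / T)
    V = u[None, :] ** np.arange(2 * K + 1)[:, None]
    amps = np.linalg.lstsq(V, x_hat, rcond=None)[0].real
    return t_locs, amps
```

The innovation rate here is 2K parameters per period, so 2K+1 coefficients suffice regardless of how finely the waveform would have to be sampled at the Nyquist rate.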

  15. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
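
The three-stage pipeline (decorrelation, quantization, entropy coding) can be illustrated with a one-level subband split. This sketch uses a Haar filter pair and a uniform scalar quantizer as stand-ins; it is our simplification, not the article's filter bank, and the arithmetic-coding stage is omitted.

```python
import numpy as np

def haar_analysis(x):
    # One-level subband split: low-pass (averages) and high-pass (differences)
    x = np.asarray(x, float)
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def haar_synthesis(lo, hi):
    # Perfect-reconstruction inverse of haar_analysis
    x = np.empty(2 * len(lo))
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

def quantize(coeffs, step):
    # Uniform scalar quantizer: the lossy stage that trades rate for distortion
    return np.round(np.asarray(coeffs) / step).astype(int)

def dequantize(q, step):
    return q * step
```

Without quantization the split is lossless; increasing `step` (especially in the high-pass band, where seismic noise concentrates) raises compression ratio at the cost of controlled distortion.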

  16. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates <80/min, 80-120/min and >120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions, 2% were <80/min and 36% were >120/min. In 77 out of 133 (58%) patients a statistically significantly lower depth was observed for rates >120/min compared to rates 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates; rates >145/min were predicted to result in still shallower compressions. The mean compression depth for rates 80-120/min was 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, statistically significant). Higher compression rates were thus associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  17. A content-based digital image watermarking scheme resistant to local geometric distortions

    International Nuclear Information System (INIS)

    Yang, Hong-ying; Chen, Li-li; Wang, Xiang-yang

    2011-01-01

    Geometric distortion is known as one of the most difficult attacks to resist, as it can desynchronize the location of the watermark and hence cause incorrect watermark detection. Geometric distortion can be decomposed into two classes: global affine transforms and local geometric distortions. Most countermeasures proposed in the literature only address the problem of global affine transforms; it is a challenging problem to design a robust image watermarking scheme against local geometric distortions. In this paper, we propose a new content-based digital image watermarking scheme with good visual quality and reasonable resistance against local geometric distortions. Firstly, the robust feature points, which can survive various common image processing operations and global affine transforms, are extracted by using a multi-scale SIFT (scale invariant feature transform) detector. Then, the affine covariant local feature regions (LFRs) are constructed adaptively according to the feature scale and local invariant centroid. Finally, the digital watermark is embedded into the affine covariant LFRs by modulating the magnitudes of discrete Fourier transform (DFT) coefficients. By binding the watermark with the affine covariant LFRs, watermark detection can be done without synchronization error. Experimental results show that the proposed image watermarking is not only invisible and robust against common image processing operations such as sharpening, noise addition, and JPEG compression, but also robust against global affine transforms and local geometric distortions.
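
The embedding step, modulating DFT magnitudes inside a local region, can be sketched with a simple quantization-index-modulation rule. This is an illustrative stand-in for the paper's embedding function, not a reproduction of it; the coefficient index (u, v) and the quantization step are arbitrary choices of ours.

```python
import numpy as np

def embed_bit(region, bit, u=3, v=2, step=40.0):
    # Quantization-index modulation of one DFT magnitude; the
    # conjugate-symmetric partner is updated too so the region stays real
    F = np.fft.fft2(region.astype(float))
    mag, ph = np.abs(F[u, v]), np.angle(F[u, v])
    q = np.floor(mag / step)
    if int(q) % 2 != bit:          # force the quantizer index parity = bit
        q += 1
    F[u, v] = (q + 0.5) * step * np.exp(1j * ph)   # keep phase, set magnitude
    F[-u, -v] = np.conj(F[u, v])
    return np.real(np.fft.ifft2(F))

def detect_bit(region, u=3, v=2, step=40.0):
    # Blind detection: read the parity of the quantized magnitude
    mag = np.abs(np.fft.fft2(region)[u, v])
    return int(np.floor(mag / step)) % 2
```

Because only magnitudes are modulated, detection needs no phase reference; in the paper's scheme this operation is applied inside each affine covariant LFR, which is what removes the synchronization error.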

  18. DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS

    Directory of Open Access Journals (Sweden)

    J. Augustin Jacob

    2012-11-01

    Full Text Available This paper proposes a new three dimensional discrete cosine transform (3D-DCT based video compression algorithm that will select the optimal cube size based on the motion content of the video sequence. It is determined by finding normalized pixel difference (NPD values, and by categorizing the cubes as “low” or “high” motion, a suitable cube size of dimension either [16×16×8] or [8×8×8] is chosen instead of a fixed cube algorithm. To evaluate the performance of the proposed algorithm, test sequences with different motion levels are chosen. By doing rate vs. distortion analysis, the level of compression that can be achieved and the quality of the reconstructed video sequence are determined and compared against the fixed cube size algorithm. Peak signal to noise ratio (PSNR is taken to measure the video quality. Experimental results show that varying the cube size with reference to the motion content of video frames gives better performance in terms of compression ratio and video quality.
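
The cube-size decision can be sketched as a simple motion test. The NPD formula and the 0.05 threshold below are our illustrative assumptions; the paper derives its own NPD values and low/high-motion categorization.

```python
import numpy as np

def normalized_pixel_difference(frames):
    # Mean absolute difference between consecutive frames,
    # normalized to [0, 1] by the 8-bit pixel range
    f = np.asarray(frames, float)
    return np.abs(np.diff(f, axis=0)).mean() / 255.0

def choose_cube(frames, threshold=0.05):
    # High motion -> smaller [8x8x8] cube (finer temporal adaptation);
    # low motion -> larger [16x16x8] cube (better energy compaction)
    npd = normalized_pixel_difference(frames)
    return (8, 8, 8) if npd > threshold else (16, 16, 8)
```

Each selected cube would then be transformed with a separable 3D DCT and quantized, exactly as in the fixed-cube scheme, so only the partitioning logic changes.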

  19. TPC track distortions IV: post tenebras lux

    CERN Document Server

    Ammosov, V; Boyko, I; Chelkov, G; Dedovitch, D; Dydak, F; Elagin, A; Gostkin, M; Guskov, A; Koreshev, V; Krumshtein, Z; Nefedov, Y; Nikolaev, K; Wotschack, J; Zhemchugov, A

    2007-01-01

    We present a comprehensive discussion and summary of static and dynamic track distortions in the HARP TPC in terms of physical origin, mathematical modelling and correction algorithms. `Static' distortions are constant with time, while `dynamic' distortions are distortions that occur only during the 400 ms long accelerator spill. The measurement of dynamic distortions, their mathematical modelling and the correction algorithms build on our understanding of static distortions. In the course of corroborating the validity of our static distortion corrections, their reliability and precision was further improved. Dynamic TPC distortions originate dominantly from the `stalactite' effect: a column of positive-ion charge starts growing at the beginning of the accelerator spill, and continues growing with nearly constant velocity out from the sense-wire plane into the active TPC volume. However, the `stalactite' effect is not able to describe the distortions that are present already at the start of the spill and which ha...

  20. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    Full Text Available This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate the segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT algorithm (Taubman, 2000 to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at bit rates as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
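
The rank-test segmentation idea can be sketched as follows. This is a simplified Mann-Whitney sketch (normal approximation, no tie correction, and a stopping rule of our own), not BASICA's actual algorithm: the brightest candidate pixels of a spot window are tested against a background sample drawn from the dimmest pixels, and an intensity threshold is returned once the test rejects.

```python
import math
import numpy as np

def mann_whitney_p(x, y):
    # One-sided p-value (x stochastically greater than y), normal
    # approximation without tie correction -- adequate for a sketch
    nx, ny = len(x), len(y)
    allv = np.concatenate([x, y])
    ranks = np.empty(nx + ny)
    ranks[np.argsort(allv)] = np.arange(1, nx + ny + 1)
    u = ranks[:nx].sum() - nx * (nx + 1) / 2
    mean, var = nx * ny / 2, nx * ny * (nx + ny + 1) / 12
    z = (u - mean) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2))

def segment_spot(window, n_bg=16, alpha=0.01):
    # Shrink the bright candidate set until it is significantly brighter
    # than the background sample; return the separating intensity
    px = np.sort(np.asarray(window, float).ravel())
    background = px[:n_bg]
    for cut in range(len(px) - n_bg, 0, -1):
        if mann_whitney_p(px[-cut:], background) < alpha:
            return px[-cut]
    return None
```

Pixels at or above the returned threshold would be labelled foreground (the spot); the rest feed the background-adjustment stage.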

  1. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001) and compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This was at the cost of a significant reduction in compression depth (39.5(10) mm at 80 min(-1) vs. 34.5(11) mm at 160 min(-1), P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40 s at 160 min(-1), P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  2. Analyzing Reaction Rates with the Distortion/Interaction-Activation Strain Model

    NARCIS (Netherlands)

    Bickelhaupt, F. Matthias; Houk, Kendall N.

    2017-01-01

    The activation strain or distortion/interaction model is a tool to analyze activation barriers that determine reaction rates. For bimolecular reactions, the activation energies are the sum of the energies to distort the reactants into geometries they have in transition states plus the interaction

  3. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co...

  4. Lossless Geometry Compression Through Changing 3D Coordinates into 1D

    Directory of Open Access Journals (Sweden)

    Yongkui Liu

    2013-08-01

    Full Text Available A method of lossless geometry compression of the vertex coordinates of grid models is presented. First, the 3D coordinates are pre-processed into a specific form. Then these 3D coordinates are changed into 1D data by representing the three coordinates of a vertex with a single position number, which is a large integer. To minimize the integers, they are sorted and the differences between adjacent vertexes are stored in a vertex table. In addition to the technique of geometry compression on coordinates, an improved method for storing the compressed topological data in a facet table is proposed to make the method more complete and efficient. The experimental results show that the proposed method has a better compression rate than the latest method of lossless geometry compression, the Isenburg-Lindstrom-Snoeyink method. The theoretical analysis and the experimental results also show that the decompression time of the new method, which matters in practice, is short. Though the new method is explained in the case of a triangular grid, it can also be used in other forms of grid model.
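
The 3D-to-1D packing plus delta coding can be sketched directly. This is our minimal illustration under the assumption of non-negative integer coordinates below a fixed bound B; the original vertex order and connectivity would be recovered from the separately stored facet table, which this sketch omits.

```python
import itertools

B = 1 << 20  # coordinates assumed to be non-negative integers < B

def pack(v):
    # Change a 3D integer coordinate into a single "position number"
    x, y, z = v
    return (x * B + y) * B + z

def unpack(p):
    p, z = divmod(p, B)
    x, y = divmod(p, B)
    return (x, y, z)

def compress(vertices):
    # Sort the position numbers and store the first plus successive deltas;
    # sorted deltas are small, hence cheap to entropy-code losslessly
    nums = sorted(pack(v) for v in vertices)
    return [nums[0]] + [b - a for a, b in zip(nums, nums[1:])]

def decompress(deltas):
    # Prefix sums restore the position numbers, which unpack exactly
    return [unpack(n) for n in itertools.accumulate(deltas)]
```

The round trip is exact (lossless), and the deltas rather than the large position numbers are what would be written to the compressed vertex table.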

  5. Quantum Integers

    International Nuclear Information System (INIS)

    Khrennikov, Andrei; Klein, Moshe; Mor, Tal

    2010-01-01

    In number theory, a partition of a positive integer n is a way of writing n as a sum of positive integers. The number of partitions of n is given by the partition function p(n). Inspired by quantum information processing, we extend the concept of partitions in number theory as follows: for an integer n, we treat each partition as a basis state of a quantum system representing that number n, so that the Hilbert space that corresponds to that integer n is of dimension p(n); the 'classical integer' n can thus be generalized into a (pure) quantum state |ψ(n)⟩ which is a superposition of the partitions of n, in the same way that a quantum bit (qubit) is a generalization of a classical bit. More generally, ρ(n) is a density matrix in that same Hilbert space (a probability distribution over pure states). Inspired by the notion of quantum numbers in quantum theory (such as in Bohr's model of the atom), we then try to go beyond the partitions, by defining (via recursion) the notion of 'sub-partitions' in number theory. Combining the two notions mentioned above, sub-partitions and quantum integers, we finally provide an alternative definition of the quantum integers [the pure state |ψ'(n)⟩ and the mixed state ρ'(n)], this time using the sub-partitions as the basis states instead of the partitions, for describing the quantum number that corresponds to the integer n.
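
The partition basis is easy to enumerate. The sketch below builds the p(n)-dimensional basis and one concrete quantum integer, the equal-amplitude superposition; the paper allows arbitrary superpositions, so this particular state is just one illustrative choice.

```python
import math

def partitions(n, max_part=None):
    # Enumerate the partitions of n as non-increasing tuples
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    result = []
    for k in range(min(n, max_part), 0, -1):
        result += [(k,) + rest for rest in partitions(n - k, k)]
    return result

def quantum_integer(n):
    # |psi(n)>: an equal-amplitude superposition over the p(n)
    # partition basis states, returned as {basis state: amplitude}
    basis = partitions(n)
    amp = 1.0 / math.sqrt(len(basis))
    return {b: amp for b in basis}
```

For n = 5 the basis has p(5) = 7 states, (5), (4,1), (3,2), (3,1,1), (2,2,1), (2,1,1,1), (1,1,1,1,1), and the state is normalized by construction.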

  6. Influence of video compression on the measurement error of the television system

    Science.gov (United States)

    Sotnik, A. V.; Yarishev, S. N.; Korotaev, V. V.

    2015-05-01

    The possible reduction of the digital stream is considered; the discrete cosine transformation is the most widely used among the possible orthogonal transformations. Errors of television measuring systems and data compression protocols are analyzed in this paper. The main characteristics of measuring systems are given and the sources of their error are identified. The most effective methods of video compression are determined, and the influence of video compression error on television measuring systems is investigated. The obtained results can increase the accuracy of such measuring systems. In a television measuring system, image quality is reduced both by distortions identical to those in analog systems and by specific distortions resulting from encoding/decoding of the digital video signal and from errors in the transmission channel. The distortions associated with encoding/decoding include quantization noise, reduced resolution, the mosaic effect, the 'mosquito' effect, edging on sharp brightness transitions, colour blur, false patterns, the 'dirty window' effect and other defects. The video compression algorithms used in television measuring systems are based on image encoding with intra- and inter-prediction of individual fragments. The encoding/decoding process is non-linear in space and in time, because the playback quality at the receiver depends on the random pre- and post-history, i.e. on the preceding and succeeding frames, which can lead to inadequate distortion of a sub-picture and of the corresponding measuring signal.

  7. Correction of distortions in distressed mothers' ratings of their preschool children's psychopathology.

    Science.gov (United States)

    Müller, Jörg M; Furniss, Tilman

    2013-11-30

    The often-reported low informant agreement about child psychopathology between multiple informants has led to various suggestions about how to address discrepant ratings. Among the factors discussed as potentially lowering agreement are informant credibility, reliability, and psychopathology; the last is of interest in this paper. We tested three different models, namely, the accuracy, the distortion, and an integrated so-called combined model, that conceptualize parental ratings to assess child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). Based on this statistical background, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing the ratings of mothers to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help to reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  8. Loss less real-time data compression based on LZO for steady-state Tokamak DAS

    International Nuclear Information System (INIS)

    Pujara, H.D.; Sharma, Manika

    2008-01-01

    The evolution of data acquisition systems (DAS) for steady-state operation of Tokamaks has been technology driven. A steady-state Tokamak demands a data acquisition system capable of acquiring data losslessly from diagnostics. The need for lossless continuous acquisition has a significant effect on data storage, which takes up a large portion of any data acquisition system, and steady-state operation also demands online viewing of data, which loads the LAN significantly. There is therefore a strong case for controlling the growth of both by employing a compression technique in real time. This paper presents a data acquisition system employing a real-time data compression technique based on LZO, a data compression library suitable for compression and decompression in real time, whose algorithm favours speed over compression ratio. The system has been rigged up on the PXI bus, and a dual-buffer-mode architecture is implemented for lossless acquisition. The acquired buffer is compressed in real time and streamed to the network and to hard disk for storage. The observed performance on various data types, such as binary, integer and float, and on different waveform types, as well as the compression timing overheads, is presented in the paper. Various software modules for real-time acquisition and online viewing of data on network nodes have been developed in LabWindows/CVI based on a client-server architecture.
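
The acquire-compress-stream pattern can be sketched with a worker thread fed by a queue. This is our illustration, not the paper's LabWindows/CVI code: Python's stdlib has no LZO bindings, so zlib at level 1 stands in for LZO's speed-over-ratio trade-off.

```python
import zlib
from queue import Queue
from threading import Thread

def start_compressor(in_q, out_q, level=1):
    # Consume raw acquisition buffers from in_q and emit compressed blocks
    # on out_q; a None buffer signals end of acquisition. Level 1 favours
    # speed over ratio, mirroring LZO's design goal.
    def run():
        while (buf := in_q.get()) is not None:
            out_q.put(zlib.compress(buf, level))
        out_q.put(None)
    t = Thread(target=run, daemon=True)
    t.start()
    return t
```

In the dual-buffer pattern the DAQ thread fills one buffer while the compressor drains the other through the queue, so acquisition is never blocked by compression.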

  9. "Stayin' alive": a novel mental metronome to maintain compression rates in simulated cardiac arrests.

    Science.gov (United States)

    Hafner, John W; Sturgell, Jeremy L; Matlock, David L; Bockewitz, Elizabeth G; Barker, Lisa T

    2012-11-01

    A novel and yet untested memory aid has anecdotally been proposed for aiding practitioners in complying with American Heart Association (AHA) cardiopulmonary resuscitation (CPR) compression rate guidelines (at least 100 compressions per minute). This study investigates how subjects using this memory aid adhered to current CPR guidelines in the short and long term. A prospective observational study was conducted with medical providers certified in 2005 AHA guideline CPR. Subjects were randomly paired and alternated administering CPR compressions on a mannequin during a standardized cardiac arrest scenario. While performing compressions, subjects listened to a digital recording of the Bee Gees song "Stayin' Alive," and were asked to time compressions to the musical beat. After at least 5 weeks, the participants were retested without directly listening to the recorded music. Attitudinal views were gathered using a post-session questionnaire. Fifteen subjects (mean age 29.3 years, 66.7% resident physicians and 80% male) were enrolled. The mean compression rate during the primary assessment (with music) was 109.1, and during the secondary assessment (without music) the rate was 113.2. Mean CPR compression rates did not vary by training level, CPR experience, or time to secondary assessment. Subjects felt that utilizing the music improved their ability to provide CPR and they felt more confident in performing CPR. Medical providers trained to use a novel musical memory aid effectively maintained AHA guideline CPR compression rates initially and in long-term follow-up. Subjects felt that the aid improved their technical abilities and confidence in providing CPR. Copyright © 2012. Published by Elsevier Inc.

  10. Supportive Distortions: An Analysis of Posts on a Pedophile Internet Message Board

    Science.gov (United States)

    Malesky, L. Alvin, Jr.; Ennis, Liam

    2004-01-01

    A covert observation of posts on a pro-pedophile Internet message board investigated evidence of distorted cognitions that were supportive of sexually abusive behavior. Implications for the treatment and supervision of members of online communities that support pedophilic interests and behaviors are discussed. The purpose of the present study was…

  11. Rate-distortion functions of non-stationary Markoff chains and their block-independent approximations

    OpenAIRE

    Agarwal, Mukul

    2018-01-01

    It is proved that the limit of the normalized rate-distortion functions of block independent approximations of an irreducible, aperiodic Markoff chain is independent of the initial distribution of the Markoff chain and thus, is also equal to the rate-distortion function of the Markoff chain.
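
In rate-distortion notation, the claim can be stated as follows; the symbols are our shorthand ($R_X(D)$ for the rate-distortion function of the chain $X$, $R^{(k)}(D)$ for that of its length-$k$ block-independent approximation at per-letter distortion $D$), not necessarily the paper's:

```latex
\lim_{k \to \infty} \frac{1}{k}\, R^{(k)}(D) \;=\; R_X(D)
\quad \text{for every distortion level } D,
```

with the limit independent of the initial distribution of the irreducible, aperiodic chain $X$.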

  12. Post-Buckling Strength of Uniformly Compressed Plates

    NARCIS (Netherlands)

    Bakker, M.C.M.; Rosmanit, M.; Hofmeyer, H.; Camotim, D; Silvestre, N; Dinis, P.B.

    2006-01-01

    In this paper it is discussed how existing analytical and semi-analytical formulas for describing the elastic-post-buckling behavior of uniformly compressed square plates with initial imperfections, for loads up to three times the buckling load can be simplified and improved. For loads larger than

  13. Fast Rate Estimation for RDO Mode Decision in HEVC

    Directory of Open Access Journals (Sweden)

    Maxim P. Sharabayko

    2014-12-01

    Full Text Available The recent H.265/HEVC video compression standard is able to provide twice the compression efficiency of the current industry standard, H.264/AVC. However, coding complexity has also increased. The main bottleneck of the compression process is the rate-distortion optimization (RDO) stage, as it involves numerous sequential syntax-based binary arithmetic coding (SBAC) loops. In this paper, we present an entropy-based RDO estimation technique for H.265/HEVC compression, instead of the common approach based on the SBAC. Our RDO implementation reduces RDO complexity at the cost of an average bit rate overhead of 1.54%. At the same time, elimination of the SBAC from the RDO estimation reduces block interdependencies, thus providing an opportunity for the development of a compression system with parallel processing of multiple blocks of a video frame.
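
The idea of replacing the SBAC loop with an entropy estimate can be sketched in a few lines. The zeroth-order estimator below is our simplification of the paper's technique: it counts the empirical entropy of the quantized coefficients and plugs it into the usual Lagrangian RDO cost J = D + λR.

```python
import numpy as np

def entropy_rate_estimate(qcoeffs):
    # Zeroth-order entropy of the quantized coefficients, in bits:
    # a fast stand-in for a full SBAC pass when estimating rate in RDO
    vals, counts = np.unique(np.asarray(qcoeffs), return_counts=True)
    p = counts / counts.sum()
    return float(counts.sum() * -(p * np.log2(p)).sum())

def rd_cost(distortion, qcoeffs, lam):
    # Lagrangian cost J = D + lambda * R used to rank candidate modes
    return distortion + lam * entropy_rate_estimate(qcoeffs)
```

Because the estimate needs no arithmetic-coder state, candidate modes (and even neighbouring blocks) can be evaluated independently, which is exactly what enables the parallelism mentioned in the abstract.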

  14. A higher chest compression rate may be necessary for metronome-guided cardiopulmonary resuscitation.

    Science.gov (United States)

    Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Cho, Young Soon; Chung, Sung Phil; Park, Incheol

    2012-01-01

    Metronome guidance is a simple and economical feedback system for guiding cardiopulmonary resuscitation (CPR). However, a recent study showed that metronome guidance reduced the depth of chest compression. The results of previous studies suggest that a higher chest compression rate is associated with a better CPR outcome than a lower chest compression rate, irrespective of metronome use. Based on this finding, we hypothesised that a lower chest compression rate, rather than metronome use itself, promoted the reduction in chest compression depth observed in the recent study. One minute of chest compression-only CPR was performed following a metronome sound played at 1 of 4 different rates: 80, 100, 120, and 140 ticks/min. Average compression depths (ACDs) and duty cycles were compared using repeated measures analysis of variance, and the values in the absence and presence of metronome guidance were compared. Both the ACD and duty cycle increased when the metronome rate increased (P = .017); the values for metronome rates of 80 and 100 ticks/min were significantly lower than those for the procedures without metronome guidance. The ACD and duty cycle for chest compression increase as the metronome rate increases during metronome-guided CPR. A higher rate of chest compression is necessary for metronome-guided CPR to prevent suboptimal quality of chest compression. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. A Novel Error Resilient Scheme for Wavelet-based Image Coding Over Packet Networks

    OpenAIRE

    WenZhu Sun; HongYu Wang; DaXing Qian

    2012-01-01

    This paper presents a robust transmission strategy for wavelet-based scalable bit streams over packet erasure channels. By taking advantage of bit plane coding and multiple description coding, the proposed strategy adopts layered multiple description coding (LMDC) for embedded wavelet coders to improve the error resistance of the important bit planes in the sense of the D(R) function. Then, the post-compression rate-distortion (PCRD) optimization process is used to impro...

  16. A quadratic approximation-based algorithm for the solution of multiparametric mixed-integer nonlinear programming problems

    KAUST Repository

    Domínguez, Luis F.

    2012-06-25

    An algorithm for the solution of convex multiparametric mixed-integer nonlinear programming problems arising in process engineering problems under uncertainty is introduced. The proposed algorithm iterates between a multiparametric nonlinear programming subproblem and a mixed-integer nonlinear programming subproblem to provide a series of parametric upper and lower bounds. The primal subproblem is formulated by fixing the integer variables and solved through a series of multiparametric quadratic programming (mp-QP) problems based on quadratic approximations of the objective function, while the deterministic master subproblem is formulated so as to provide feasible integer solutions for the next primal subproblem. To reduce the computational effort when infeasibilities are encountered at the vertices of the critical regions (CRs) generated by the primal subproblem, a simplicial approximation approach is used to obtain CRs that are feasible at each of their vertices. The algorithm terminates when there does not exist an integer solution that is better than the one previously used by the primal problem. Through a series of examples, the proposed algorithm is compared with a multiparametric mixed-integer outer approximation (mp-MIOA) algorithm to demonstrate its computational advantages. © 2012 American Institute of Chemical Engineers (AIChE).

  17. Video Coding Technique using MPEG Compression Standards

    African Journals Online (AJOL)

    Akorede

    The two dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression ... solution for the optimum trade-off by applying rate-distortion theory has been ... Int. J. the computer, the internet and management.

  18. TRACKING SIMULATIONS NEAR HALF-INTEGER RESONANCE AT PEP-II

    International Nuclear Information System (INIS)

    Nosochkov, Yuri

    2003-01-01

    Beam-beam simulations predict that PEP-II luminosity can be increased by operating the horizontal betatron tune near and above a half-integer resonance. However, effects of the resonance and its synchrotron sidebands significantly enhance betatron and chromatic perturbations which tend to reduce dynamic aperture. In the study, chromatic variation of horizontal tune near the resonance was minimized by optimizing local sextupoles in the Interaction Region. Dynamic aperture was calculated using tracking simulations in LEGO code. Dependence of dynamic aperture on the residual orbit, dispersion and β distortion after correction was investigated

  19. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

The quality of chest compressions, along with defibrillation, is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters, including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to the participants for self-training in their own time. The participants were tested for two-person CPR in pairs. Quantitative and qualitative data on the quality of CPR were collected from a standardised checklist and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students (116 men and 206 women) participated in the CPR contest. The mean depth and rate of chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compressions were noted at rates over 120/min than at any other rate (47.0±7.4, 48.8±8.4, 52.3±6.7 mm; p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206), and the quality of chest compression, including compression depth and chest recoil, differed by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete chest wall recoil while maintaining an adequate chest compression depth.

  20. Relationship between chest compression rates and outcomes from cardiac arrest.

    Science.gov (United States)

    Idris, Ahamed H; Guffey, Danielle; Aufderheide, Tom P; Brown, Siobhan; Morrison, Laurie J; Nichols, Patrick; Powell, Judy; Daya, Mohamud; Bigham, Blair L; Atkins, Dianne L; Berg, Robert; Davis, Dan; Stiell, Ian; Sopko, George; Nichol, Graham

    2012-06-19

    Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions per minute. Animal and human studies have reported that blood flow is greatest with chest compression rates near 120/min, but few have reported rates used during out-of-hospital (OOH) cardiopulmonary resuscitation or the relationship between rate and outcome. The purpose of this study was to describe chest compression rates used by emergency medical services providers to resuscitate patients with OOH cardiac arrest and to determine the relationship between chest compression rate and outcome. Included were patients aged ≥ 20 years with OOH cardiac arrest treated by emergency medical services providers participating in the Resuscitation Outcomes Consortium. Data were abstracted from monitor-defibrillator recordings during cardiopulmonary resuscitation. Multiple logistic regression analysis assessed the association between chest compression rate and outcome. From December 2005 to May 2007, 3098 patients with OOH cardiac arrest were included in this study. Mean age was 67 ± 16 years, and 8.6% survived to hospital discharge. Mean compression rate was 112 ± 19/min. A curvilinear association between chest compression rate and return of spontaneous circulation was found in cubic spline models after multivariable adjustment (P=0.012). Return of spontaneous circulation rates peaked at a compression rate of ≈ 125/min and then declined. Chest compression rate was not significantly associated with survival to hospital discharge in multivariable categorical or cubic spline models. Chest compression rate was associated with return of spontaneous circulation but not with survival to hospital discharge in OOH cardiac arrest.

  1. Traveling magnetopause distortion related to a large-scale magnetosheath plasma jet: THEMIS and ground-based observations

    Science.gov (United States)

    Dmitriev, A. V.; Suvorova, A. V.

    2012-08-01

Here, we present a case study of THEMIS and ground-based observations of the perturbed dayside magnetopause and the geomagnetic field in relation to the interaction of an interplanetary directional discontinuity (DD) with the magnetosphere on 16 June 2007. The interaction resulted in a large-scale local magnetopause distortion of an "expansion - compression - expansion" (ECE) sequence that lasted for ~15 min. The compression was caused by a very dense, cold, and fast high-β magnetosheath plasma flow, a so-called plasma jet, whose kinetic energy was approximately three times higher than the energy of the incident solar wind. The plasma jet resulted in the effective penetration of magnetosheath plasma inside the magnetosphere. A strong distortion of the Chapman-Ferraro current in the ECE sequence generated a tripolar magnetic pulse "decrease - peak - decrease" (DPD) that was observed at low and middle latitudes by some ground-based magnetometers of the INTERMAGNET network. The characteristics of the ECE sequence and the spatial-temporal dynamics of the DPD pulse were found to be very different from any reported patterns of DD interactions with the magnetosphere. The observed features only partially resembled structures such as flux transfer events (FTEs), hot flow anomalies, and transient density events. Thus, it is difficult to explain them in the context of existing models.

  2. On the Delone property of (−β)-integers

    Directory of Open Access Journals (Sweden)

    Wolfgang Steiner

    2011-08-01

The (−β)-integers are natural generalisations of the β-integers, and thus of the integers, to negative real bases. They can be described by infinite words which are fixed points of anti-morphisms. We show that they are not necessarily uniformly discrete and relatively dense in the real numbers.

  3. Instantaneous and controllable integer ambiguity resolution: review and an alternative approach

    Science.gov (United States)

    Zhang, Jingyu; Wu, Meiping; Li, Tao; Zhang, Kaidong

    2015-11-01

In the high-precision application of the Global Navigation Satellite System (GNSS), integer ambiguity resolution is the key step to realizing precise positioning and attitude determination. As a necessary part of quality control, integer aperture (IA) ambiguity resolution provides the theoretical and practical foundation for ambiguity validation. It is mainly realized by acceptance testing. Due to the correlation between ambiguities, it is impossible to control the failure rate according to an analytical formula. Hence, the fixed failure rate approach is implemented by Monte Carlo sampling. However, due to the characteristics of Monte Carlo sampling and look-up tables, creating a look-up table is extremely time-consuming if sufficient GNSS scenarios are to be included. This restricts the fixed failure rate approach to post-processing when a look-up table is not available. Furthermore, if not enough GNSS scenarios are considered, the table may only be valid for a specific scenario or application. Moreover, a method of creating the look-up table or look-up function still needs to be designed for each specific acceptance test. To overcome these problems in the determination of critical values, this contribution proposes an instantaneous and CONtrollable (iCON) IA ambiguity resolution approach for the first time. The iCON approach has the following advantages: (a) the critical value of the acceptance test is determined independently, based on the required failure rate and the GNSS model, without resorting to external information such as a look-up table; (b) it can be realized instantaneously for most IA estimators that have analytical probability formulas — the stronger the GNSS model, the less time is consumed; (c) it provides a new viewpoint for improving research on IA estimation. To verify these conclusions, multi-frequency and multi-GNSS simulation experiments were implemented. Those results show that IA
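The fixed failure rate approach above rests on Monte Carlo estimation of the probability that the float ambiguity is fixed to the wrong integer. A minimal one-dimensional sketch, assuming a Gaussian float solution and plain rounding (real GNSS float ambiguities are correlated and multivariate, and IA estimators are more elaborate than rounding):

```python
import random

def failure_rate(sigma, trials=100_000, seed=42):
    """Monte Carlo estimate of the probability that rounding a float
    ambiguity (assumed ~ N(true_int, sigma)) picks the wrong integer."""
    rng = random.Random(seed)
    fails = sum(round(rng.gauss(0.0, sigma)) != 0 for _ in range(trials))
    return fails / trials

print(failure_rate(0.1))  # rounding nearly always succeeds
print(failure_rate(0.5))  # ~32% failure: P(|e| > 0.5) for sigma = 0.5
```

The stronger the model (smaller sigma), the lower the failure rate — the same trade-off that the critical value of an acceptance test is tuned against.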

  4. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
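As a point of reference for fixed-rate scalar quantization, a uniform scalar quantizer on a uniform source attains the classic Δ²/12 distortion. A quick empirical check of that law (illustrative only — this is not the SVQ scheme itself):

```python
import random

def quantize(x, bits):
    """Fixed-rate uniform scalar quantizer on [0, 1): every sample costs
    exactly `bits` bits, with midpoint reconstruction."""
    levels = 2 ** bits
    idx = min(int(x * levels), levels - 1)
    return (idx + 0.5) / levels

rng = random.Random(0)
xs = [rng.random() for _ in range(50_000)]
for bits in (2, 4, 6):
    mse = sum((x - quantize(x, bits)) ** 2 for x in xs) / len(xs)
    delta = 2.0 ** -bits
    # Empirical distortion tracks the Delta**2 / 12 law: each extra bit
    # of rate cuts the MSE by a factor of four (~6 dB).
    print(bits, round(mse, 8), round(delta ** 2 / 12, 8))
```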

  5. Fast parallel DNA-based algorithms for molecular computation: quadratic congruence and factoring integers.

    Science.gov (United States)

    Chang, Weng-Long

    2012-03-01

Assume that n is a positive integer. If there is an integer M such that M² ≡ C (mod n), i.e., the congruence has a solution, then C is said to be a quadratic congruence (mod n). If the congruence has no solution, then C is said to be a quadratic noncongruence (mod n). The task of solving this problem is central to many important applications, the most obvious being cryptography. In this article, we describe a DNA-based algorithm for solving quadratic congruences and factoring integers. In addition to this novel contribution, we also show the utility of our encoding scheme and of the algorithm's submodules. We demonstrate how a variety of arithmetic, shift, and comparison operations — namely bitwise and full addition, subtraction, left shift, and comparison — can be performed using strands of DNA.
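The notion of a quadratic congruence can be checked directly by brute force on a conventional computer (a sketch of the problem statement only, not of the DNA algorithm):

```python
def is_quadratic_congruence(C, n):
    """True iff M*M ≡ C (mod n) has a solution M (brute-force search)."""
    return any(m * m % n == C % n for m in range(n))

print(is_quadratic_congruence(2, 7))  # True: 3*3 = 9 ≡ 2 (mod 7)
print(is_quadratic_congruence(3, 7))  # False: 3 is a quadratic noncongruence
```

The brute-force search is exponential in the bit length of n, which is exactly why massively parallel models such as DNA computing are of interest for this problem.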

  6. A motion-based integer ambiguity resolution method for attitude determination using the global positioning system (GPS)

    International Nuclear Information System (INIS)

    Wang, Bo; Deng, Zhihong; Wang, Shunting; Fu, Mengyin

    2010-01-01

Loss of the satellite signal and noise disturbance can cause cycle slips in the carrier phase observation of a GPS attitude determination system, especially in dynamic situations. Therefore, in order to reject the error caused by cycle slips, the integer ambiguity should be re-computed. A motion-model-based Kalman predictor is used for ambiguity re-computation in dynamic applications. This method utilizes the correct observation of the last step to predict the current ambiguities. With the baseline length as a constraint to reject invalid values, we can solve the current integer ambiguity and the attitude angles by substituting the obtained ambiguities into the constrained LAMBDA method. Experimental results demonstrate that the proposed method is more efficient in dynamic situations, taking less time to obtain new fixed ambiguities with a higher mean success rate.

  7. New Approach Based on Compressive Sampling for Sample Rate Enhancement in DASs for Low-Cost Sensing Nodes

    Directory of Open Access Journals (Sweden)

    Francesco Bonavolontà

    2014-10-01

The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low-cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with the ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.

  8. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

Use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices with a higher static stiffness index combine a relatively small and comfortable pressure at rest with a standing pressure strong enough to restore the "valve mechanism" generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass through muscle-strengthening programs and endurance training should be encouraged. In the acute phase of deep venous thrombosis, anticoagulation combined with inelastic compression therapy can reduce the extension of the thrombus. Nevertheless, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  9. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
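A median PSNR of 33 dB is quoted above; PSNR is computed from the mean squared error against the peak signal value. A minimal sketch (the 4-sample "frame" is made-up data):

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit samples (e.g. flattened video frames)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed))
    mse /= len(original)
    return math.inf if mse == 0 else 10.0 * math.log10(peak * peak / mse)

frame   = [10, 200, 30, 90]   # hypothetical 4-sample "frame"
decoded = [12, 198, 29, 91]
print(round(psnr(frame, decoded), 1))  # ≈ 44.2 dB
```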

  10. Orbit and Optics Distortion in a Nonscaling Muon FFAG

    International Nuclear Information System (INIS)

    Machida, Shinji

    2008-01-01

Finite chromaticity in a nonscaling FFAG makes the transverse tunes move during acceleration. Particles have to cross many integer and half-integer resonances, although all of these are imperfection resonances. A plausible argument is that the acceleration is so fast that the effects of those resonance crossings are marginal. We did a tracking study in a lattice with alignment and gradient errors to see how orbit and optics distortion is excited. We found that on the time scale of muon acceleration, namely a total tune change of about one unit per turn, resonance is not a proper way to describe the beam dynamics. Instead, a random-walk model, in which we assume that alignment and gradient errors kick a particle randomly in phase space, explains the tracking results well.
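The random-walk picture above predicts that the RMS orbit amplitude grows like the square root of the number of uncorrelated error kicks, rather than exponentially as near a stationary resonance. A toy ensemble simulation (hypothetical kick strength, no lattice model):

```python
import math
import random

def rms_amplitude(n_kicks, kick_rms, n_particles=2000, seed=1):
    """RMS offset of an ensemble after n_kicks uncorrelated random error
    kicks: in the random-walk picture the kicks add in quadrature."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        x = sum(rng.gauss(0.0, kick_rms) for _ in range(n_kicks))
        total += x * x
    return math.sqrt(total / n_particles)

for n in (4, 16, 64):
    print(n, round(rms_amplitude(n, 0.1), 3))  # grows like 0.1 * sqrt(n)
```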

  11. Integer anatomy

    Energy Technology Data Exchange (ETDEWEB)

    Doolittle, R. [ONR, Arlington, VA (United States)

    1994-11-15

The title "integer anatomy" is intended to convey the idea of a systematic method for displaying the prime decomposition of the integers. Just as the biological study of anatomy does not teach us everything about the behavior of species, neither would we expect to learn everything about number theory from a study of its anatomy. But some number-theoretic theorems are illustrated by inspection of integer anatomy, which tends to validate the underlying structure and the form as developed and displayed in this treatise. The first statement to be made in this development is: the way the structure of the natural numbers is displayed depends upon the allowed operations.
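The "anatomy" of an integer — its prime decomposition — can be displayed with a few lines of code (an illustration of the idea, not the treatise's own notation):

```python
from collections import Counter

def anatomy(n):
    """Prime decomposition of n >= 2 as {prime: exponent}, by trial division."""
    c, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            c[d] += 1
            n //= d
        d += 1
    if n > 1:
        c[n] += 1
    return dict(c)

for n in (12, 13, 360):
    print(n, anatomy(n))  # e.g. 360 -> {2: 3, 3: 2, 5: 1}
```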

  12. Chest compression rates and survival following out-of-hospital cardiac arrest.

    Science.gov (United States)

    Idris, Ahamed H; Guffey, Danielle; Pepe, Paul E; Brown, Siobhan P; Brooks, Steven C; Callaway, Clifton W; Christenson, Jim; Davis, Daniel P; Daya, Mohamud R; Gray, Randal; Kudenchuk, Peter J; Larsen, Jonathan; Lin, Steve; Menegazzi, James J; Sheehan, Kellie; Sopko, George; Stiell, Ian; Nichol, Graham; Aufderheide, Tom P

    2015-04-01

Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions/min. A recent clinical study reported optimal return of spontaneous circulation with rates between 100 and 120/min during cardiopulmonary resuscitation for out-of-hospital cardiac arrest. However, the relationship between compression rate and survival is still undetermined. Prospective, observational study. Data are from the Resuscitation Outcomes Consortium Prehospital Resuscitation IMpedance threshold device and Early versus Delayed analysis clinical trial. Adults with out-of-hospital cardiac arrest treated by emergency medical service providers. None. Data were abstracted from monitor-defibrillator recordings for the first five minutes of emergency medical service cardiopulmonary resuscitation. Multiple logistic regression assessed the odds ratio for survival by compression rate category, adjusting for chest compression fraction and depth, first rhythm, and study site. Compression rate data were available for 10,371 patients; 6,399 also had chest compression fraction and depth data. Age (mean±SD) was 67±16 years. Chest compression rate was 111±19 per minute, compression fraction was 0.70±0.17, and compression depth was 42±12 mm. Circulation was restored in 34%; 9% survived to hospital discharge. After adjustment for covariates without chest compression depth and fraction (n=10,371), a global test found no significant relationship between compression rate and survival (p=0.19). However, after adjustment for covariates including chest compression depth and fraction (n=6,399), the global test found a significant relationship between compression rate and survival (p=0.02), with the reference group (100-119 compressions/min) having the greatest likelihood for survival. After adjustment for chest compression fraction and depth, compression rates between 100 and 120 per minute were associated with the greatest survival to hospital discharge.

  13. SEM and TEM characterization of the microstructure of post-compressed TiB2/2024Al composite.

    Science.gov (United States)

    Guo, Q; Jiang, L T; Chen, G Q; Feng, D; Sun, D L; Wu, G H

    2012-02-01

In the present work, 55 vol.% TiB2/2024Al composites were obtained by the pressure infiltration method. Compressive properties of the 55 vol.% TiB2/2024Al composite at strain rates of 10⁻³ and 1 s⁻¹ and at different temperatures were measured, and the microstructure of the post-compressed TiB2/2024Al composite was characterized by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). No trace of Al3Ti compound flakes was found. The TiB2-Al interface was smooth, without significant reaction products, and orientation relationships ([Formula: see text] and [Formula: see text]) were revealed by HRTEM. The compressive strength of the TiB2/2024Al composites decreased with temperature regardless of strain rate. The strain-rate sensitivity of the composites increased with increasing temperature. Fracture surfaces of specimens compressed at 25 and 250 °C under 10⁻³ s⁻¹ were characterized by furrows. Under 10⁻³ s⁻¹, high-density dislocations formed in the Al matrix when compressed at 25 °C, and dynamic recrystallization occurred at 250 °C. Segregation of Mg and Cu on the subgrain boundaries was also revealed at 550 °C. Dislocations, whose density increased with temperature, formed in the TiB2 particles under 1 s⁻¹. Deformation of the composites is affected by the matrix, the reinforcement, and the strain rate. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. COMPRESSING BIOMEDICAL IMAGE BY USING INTEGER WAVELET TRANSFORM AND PREDICTIVE ENCODER

    OpenAIRE

Anushree Srivastava, Narendra Kumar Chaurasia

    2016-01-01

Image compression has become an important process in today's world of information exchange. It helps in the effective utilization of high-speed network resources. Medical image compression has an important role in the medical field because the images are used for future reference of patients. Medical data are compressed in such a way that the diagnostic capabilities are not compromised and no medical information is lost. Medical imaging poses the great challenge of having compression algorithms that redu...

  15. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  16. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
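The rate-distortion function referred to above has a closed form for the simplest memoryless source: for a Bernoulli(p) source under Hamming distortion, R(D) = h(p) − h(D) on 0 ≤ D ≤ min(p, 1−p), where h is the binary entropy. The (negative, steepening) slope dR/dD is the quantity that maps onto the contracting force in the analogy. A quick numerical sketch:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_distortion(p, D):
    """R(D) of a Bernoulli(p) source under Hamming distortion:
    h(p) - h(D) on the active region, zero beyond D = min(p, 1-p)."""
    return h(p) - h(D) if D < min(p, 1 - p) else 0.0

# Rate falls from h(p) at D = 0 to zero at D = min(p, 1-p):
for D in (0.0, 0.1, 0.25, 0.5):
    print(D, round(rate_distortion(0.5, D), 4))
```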

  17. Analysis misconception of integers in microteaching activities

    Science.gov (United States)

    Setyawati, R. D.; Indiati, I.

    2018-05-01

This study aimed to analyse student misconceptions about integers in microteaching activities. The research used a qualitative design. An integers test contained questions from eight main areas. The test material includes (a) converting the image into fractions, (b) examples of positive numbers including rational numbers, (c) operations in fractions, (d) sorting fractions from the largest to the smallest, and vice versa, (e) equating denominators, (f) the concept of the ratio mark, (g) the definition of a fraction, and (h) the difference between fractions and parts. The results indicated the following misconceptions: (1) students have not been able to define concepts well based on the classification of facts in organized parts; (2) correlational concepts: students have not been able to combine interrelated events in the form of general principles; and (3) theoretical concepts: students have not been able to use concepts that facilitate learning the facts or events in an organized system.

  18. DESIGN OF DYADIC-INTEGER-COEFFICIENTS BASED BI-ORTHOGONAL WAVELET FILTERS FOR IMAGE SUPER-RESOLUTION USING SUB-PIXEL IMAGE REGISTRATION

    Directory of Open Access Journals (Sweden)

    P.B. Chopade

    2014-05-01

This paper presents an image super-resolution scheme based on sub-pixel image registration and the design of a specific class of dyadic-integer-coefficient biorthogonal wavelet filters derived from the construction of a half-band polynomial. First, the integer-coefficient half-band polynomial is designed by the splitting approach. Next, this half-band polynomial is factorized and assigned a specific number of vanishing moments and roots to obtain the dyadic-integer-coefficient low-pass analysis and synthesis filters. The potential of these dyadic-integer-coefficient wavelet filters is explored in the field of image super-resolution using sub-pixel image registration. The two low-resolution frames are registered at a specific shift from one another to restore the resolution lost by the CCD array of the camera. The discrete wavelet transform (DWT) obtained from the designed coefficients is applied to these two low-resolution images to obtain the high-resolution image. The developed approach is validated by comparing its quality metrics with those of existing filter banks.
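The half-band construction can be sanity-checked numerically: a valid analysis/synthesis low-pass pair must multiply to a half-band product filter. Since the paper's exact designed coefficients are not given in the abstract, the widely used LeGall/CDF 5/3 pair — also dyadic-integer-coefficient filters — serves here as a stand-in:

```python
def convolve(a, b):
    """Full linear convolution of two coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

# LeGall/CDF 5/3 low-pass pair: dyadic (power-of-two denominator) coefficients
analysis_lp  = [-1/8, 2/8, 6/8, 2/8, -1/8]
synthesis_lp = [1/2, 2/2, 1/2]

P = convolve(analysis_lp, synthesis_lp)   # product filter
print(P)  # [-1/16, 0, 9/16, 1, 9/16, 0, -1/16]
centre = len(P) // 2
# Half-band property: taps at even nonzero offsets from the centre vanish
assert P[centre] == 1.0
assert P[centre - 2] == 0.0 and P[centre + 2] == 0.0
```

Because every coefficient is dyadic, all of the arithmetic above is exact in binary floating point.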

  19. Integer programming

    CERN Document Server

    Conforti, Michele; Zambelli, Giacomo

    2014-01-01

    This book is an elegant and rigorous presentation of integer programming, exposing the subject’s mathematical depth and broad applicability. Special attention is given to the theory behind the algorithms used in state-of-the-art solvers. An abundance of concrete examples and exercises of both theoretical and real-world interest explore the wide range of applications and ramifications of the theory. Each chapter is accompanied by an expertly informed guide to the literature and special topics, rounding out the reader’s understanding and serving as a gateway to deeper study. Key topics include: formulations polyhedral theory cutting planes decomposition enumeration semidefinite relaxations Written by renowned experts in integer programming and combinatorial optimization, Integer Programming is destined to become an essential text in the field.

  20. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

Lossy data compression generates distortion, or error, in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists read the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images produced with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  1. Influence of Compacting Rate on the Properties of Compressed Earth Blocks

    Directory of Open Access Journals (Sweden)

    Humphrey Danso

    2016-01-01

Compaction of blocks contributes significantly to the strength properties of compressed earth blocks. This paper investigates the influence of the compacting rate on the properties of compressed earth blocks. Experiments were conducted to determine the density, compressive strength, splitting tensile strength, and erosion properties of compressed earth blocks produced at different compacting speeds. The study concludes that although the low rate of compaction achieved slightly better performance characteristics, there is no statistically significant difference between the soil blocks produced at low and high compacting rates; the compacting rate thus has little influence on the properties of compressed earth blocks. It was further found that there are strong linear correlations between compressive strength and density, and between density and erosion. However, weak linear correlations were found between tensile strength and compressive strength, and between tensile strength and density.
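The linear correlations reported above are presumably Pearson coefficients; for reference, the statistic is easy to compute directly. The density/strength numbers below are made-up illustration data, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical block measurements: density (kg/m3) vs strength (MPa)
density  = [1650, 1700, 1730, 1780, 1820, 1860]
strength = [2.1, 2.4, 2.5, 2.9, 3.1, 3.4]
print(round(pearson_r(density, strength), 3))  # close to +1: strong linear link
```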

  2. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly; however, commonly used compression methods cannot achieve satisfying results. Methods: In this paper, building on existing experiments and conclusions, the lifting approach is used for wavelet decomposition. The physical and anatomical structure of human vision is considered and the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments were done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.
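The abstract does not give the CSF formula used; a common choice in HVS-weighted compression is the Mannos-Sakrison model, sketched here as an assumed stand-in for weighting wavelet subbands by their centre spatial frequency:

```python
import math

def csf(f):
    """Mannos-Sakrison contrast sensitivity model, f in cycles/degree:
    A(f) = 2.6 * (0.0192 + 0.114 f) * exp(-(0.114 f)^1.1)."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))

# Sensitivity peaks near 8 cycles/degree and falls off at both ends, so
# mid-frequency subbands would receive the mildest quantization:
for f in (2.0, 8.0, 30.0):
    print(f, round(csf(f), 3))
```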

  3. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly, yet commonly used compression methods do not yield satisfactory results. Methods: Building on existing experiments and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human visual system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies HVS characteristics, including the CSF, to the decorrelating transform and quantization stages and proposes a new HVS-based medical image compression model. Results: Experiments were performed on medical images, including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, SPIHT scores significantly higher than our algorithm on the PSNR metric, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: Under common objective conditions, our compression algorithm achieves better subjective visual quality and outperforms SPIHT in terms of compression ratio and coding/decoding time.

  4. SRComp: short read sequence compression using burstsort and Elias omega coding.

    Directory of Open Access Journals (Sweden)

    Jeremy John Selva

Full Text Available Next-generation sequencing (NGS) technologies permit the rapid production of vast amounts of data at low cost. Economical data storage and transmission hence become an increasingly important challenge for NGS experiments. In this paper, we introduce a new non-reference-based read sequence compression tool called SRComp. It works by first employing a fast string-sorting algorithm called burstsort to sort read sequences in lexicographical order, and then Elias omega-based integer coding to encode the sorted read sequences. SRComp has been benchmarked on four large NGS datasets, where experimental results show that it can run 5-35 times faster than current state-of-the-art read sequence compression tools such as BEETL and SCALCE, while retaining comparable compression efficiency for large collections of short read sequences. SRComp is a read sequence compression tool that is particularly valuable in applications where compression time is a major concern.
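The Elias omega code used by SRComp is a universal integer code that recursively prefixes a number with the binary form of its own bit length, so small integers get very short codewords. A minimal sketch (Python's built-in sort stands in for burstsort, and bit strings stand in for a packed bitstream):

```python
def omega_encode(n: int) -> str:
    """Elias omega code of a positive integer n, as a bit string."""
    assert n >= 1
    code = "0"                 # terminating zero bit
    while n > 1:
        b = format(n, "b")     # binary, no leading zeros
        code = b + code        # prepend the current group
        n = len(b) - 1         # next group encodes this group's length
    return code

def omega_decode(bits: str) -> int:
    """Inverse of omega_encode; reads groups until a 0 bit is seen."""
    n, i = 1, 0
    while bits[i] == "1":
        length = n + 1
        n = int(bits[i:i + length], 2)
        i += length
    return n
```

For example, `omega_encode(1)` is `"0"` and `omega_encode(4)` is `"101000"`; sorting the reads first clusters similar values, which is what makes such a simple integer coder competitive here.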

  5. Garbage-free reversible constant multipliers for arbitrary integers

    DEFF Research Database (Denmark)

    Mogensen, Torben Ægidius

    2013-01-01

We present a method for constructing reversible circuitry for multiplying integers by arbitrary integer constants. The method is based on Mealy machines and gives circuits whose size is, in the worst case, linear in the size of the constant. This makes the method unsuitable for large constants...

  6. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial.

    Science.gov (United States)

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-02-12

To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) is affected by the chest compression rate familiarised through training before the performance, and to determine a possible mechanism for any effect shown. Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Participants were recruited from a medical school and two paramedic schools of South Korea. 42 senior students of a medical school and two paramedic schools were enrolled, but five dropped out due to physical restraints. Senior medical and paramedic students performed 1 min of metronome-guided CPR with chest compressions only at a speed of 120 compressions/min, after training for chest compression at three different rates (100, 120 and 140 compressions/min). Friedman's test was used to compare average compression depths based on the different rates used during training. Average compression depths were significantly different according to the rate used in training, with significant differences between training at a speed of 100 compressions/min and training at speeds of 120 and 140 compressions/min. The quality of metronome-guided CPR is affected by the relative difference between the rate of metronome guidance and the chest compression rate practised in previous training. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  7. Diversity and non-integer differentiation for system dynamics

    CERN Document Server

    Oustaloup, Alain

    2014-01-01

Based on a structured approach to diversity, notably inspired by various forms of diversity of natural origin, Diversity and Non-integer Derivation Applied to System Dynamics provides a study framework for the introduction of the non-integer derivative as a modeling tool. Modeling tools that highlight unsuspected dynamical performances (notably damping performances) in an "integer" approach to mechanics and automation are also included. Written to enable a two-tier reading, this is an essential resource for scientists, researchers, and industrial engineers interested in this subject area.

  8. Fast mode decision based on human noticeable luminance difference and rate distortion cost for H.264/AVC

    Science.gov (United States)

    Li, Mian-Shiuan; Chen, Mei-Juan; Tai, Kuang-Han; Sue, Kuen-Liang

    2013-12-01

This article proposes a fast mode decision algorithm based on the correlation between the just-noticeable difference (JND) and the rate-distortion cost (RD cost) to reduce the computational complexity of H.264/AVC. First, the relationship between the average RD cost and the number of JND pixels is established by Gaussian distributions, and the RD cost of the Inter 16 × 16 mode is compared with thresholds predicted from these models for fast mode selection. In addition, we use the image content, the residual data, and the JND visual model for horizontal/vertical detection, and then utilize the result to predict the partition of a macroblock. Experimental results show that the proposed algorithm achieves substantial time savings while effectively maintaining rate-distortion performance and visual quality.

  9. Superposition of two optical vortices with opposite integer or non-integer orbital angular momentum

    Directory of Open Access Journals (Sweden)

    Carlos Fernando Díaz Meza

    2016-01-01

Full Text Available This work develops a brief proposal to achieve the superposition of two opposite vortex beams, both with integer or non-integer mean value of the orbital angular momentum. The first part concerns the generation of this kind of spatial light distribution through a modified Brown and Lohmann hologram. The inclusion of a simple mathematical expression into the pixelated grid's transmittance function, based on Fourier-domain properties, shifts the diffraction orders counterclockwise and clockwise to the same point and allows the addition of different modes. The strategy is theoretically and experimentally validated for the case of two opposite-rotation helical wavefronts.

  10. Critical Imperative for the Reform of British Interpretation of Fetal Heart Rate Decelerations: Analysis of FIGO and NICE Guidelines, Post-Truth Foundations, Cognitive Fallacies, Myths and Occam's Razor.

    Science.gov (United States)

    Sholapurkar, Shashikant L

    2017-04-01

Cardiotocography (CTG) has disappointingly failed to show good predictability for fetal acidemia or neonatal outcomes in several large studies. A complete rethink of CTG interpretation would not be out of place. Fetal heart rate (FHR) decelerations are the most common deviations, benign as well as a manifestation of impending fetal hypoxemia/acidemia, much more commonly than changes in FHR baseline or variability. Their specific nomenclature is important (center-stage) because it provides the basic concepts and framework on which the complex "pattern recognition" of CTG interpretation by clinicians depends. Unfortunately, the discrimination of FHR decelerations seems to have been muddled since British obstetrics adopted the concept that the vast majority of FHR decelerations are "variable" (cord-compression). With the proliferation of confusing waveform criteria, "atypical variables" became the commonest cause of suspicious/pathological CTG. However, the National Institute for Health and Care Excellence (NICE) (2014) had to disband the "typical" and "atypical" terminology because of flawed classifying criteria. This analytical review makes a strong case that there are major and fundamental framing and confirmation fallacies (not just biases) in the interpretation of FHR decelerations by NICE (2014) and the International Federation of Gynecology and Obstetrics (FIGO) (2015), probably the biggest in modern medicine. This "post-truth" approach is incompatible with scientific practice. Moreover, it amounts to setting oneself up for failure. The inertia to change could best be described as a "backfire effect". There is abundant evidence that head-compression (and other non-hypoxic mediators) causes rapid rather than shallow/gradual decelerations. Currently, the vast majority of decelerations are attributed to unproven cord compression, underpinned by flawed and disproven pathophysiological hypotheses. Their further discrimination based on abstract, random, trial-and-error criteria remains unresolved suggesting a

  11. PageRank of integers

    International Nuclear Information System (INIS)

    Frahm, K M; Shepelyansky, D L; Chepelianskii, A D

    2012-01-01

We build a directed network by tracing links from a given integer to its divisors and analyze the properties of the Google matrix of this network. The PageRank vector of this matrix is computed numerically and it is shown that its probability is approximately inversely proportional to the PageRank index, thus being similar to the Zipf law and the dependence established for the World Wide Web. The spectrum of the Google matrix of integers is characterized by a large gap and a relatively small number of nonzero eigenvalues. A simple semi-analytical expression for the PageRank of integers is derived that allows us to find this vector for matrices of billion size. This network provides a new PageRank order of integers. (paper)
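The construction described above can be sketched in a few lines: each integer links to its proper divisors, dangling nodes spread their weight uniformly, and the resulting Google matrix is iterated with the usual damping factor. This is a toy power-iteration sketch at small N, not the billion-size semi-analytical computation of the paper:

```python
def pagerank_of_integers(N=100, alpha=0.85, iters=200):
    """PageRank over the divisor network: integer n links to its proper
    divisors; dangling nodes (here only n = 1) spread weight uniformly."""
    out = {n: [d for d in range(1, n) if n % d == 0] for n in range(1, N + 1)}
    p = {n: 1.0 / N for n in out}
    for _ in range(iters):
        new = {n: (1 - alpha) / N for n in out}
        for n, divisors in out.items():
            if divisors:
                share = alpha * p[n] / len(divisors)
                for d in divisors:
                    new[d] += share
            else:  # dangling node: distribute uniformly
                for m in new:
                    new[m] += alpha * p[n] / N
        p = new
    return p
```

As expected from the divisor structure, 1 (a divisor of every integer) collects the largest PageRank, and small highly-dividing integers outrank large ones.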

  12. Integer programming theory, applications, and computations

    CERN Document Server

    Taha, Hamdy A

    1975-01-01

Integer Programming: Theory, Applications, and Computations provides information pertinent to the theory, applications, and computations of integer programming. This book presents the computational advantages of the various techniques of integer programming. Organized into eight chapters, the book begins with an overview of the general categorization of integer applications and explains the three fundamental techniques of integer programming. The text then explores the concept of implicit enumeration, which is general in the sense that it is applicable to any well-defined binary program. Other

  13. Parallel efficient rate control methods for JPEG 2000

    Science.gov (United States)

    Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko

    2017-09-01

Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies for estimating ahead of time where a block will get truncated, in order to stop the execution prematurely and save time. However, none of them has been defined with a parallel implementation in mind. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. To that end, the design of our GPU-based codec is extended to allow stopping the process at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
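The core of PCRD-Opt can be illustrated independently of the GPU details: for each code block, keep only the truncation points on the convex hull of its rate-distortion curve, then spend the global rate budget on the candidate increments in order of decreasing distortion-rate slope. A simplified sketch with hypothetical (rate, distortion) tuples; a real codec works with coding-pass byte counts and MSE estimates:

```python
def hull_slopes(points):
    """Keep only truncation points on the convex hull of the
    (rate, distortion) curve; returns hull indices and their R-D slopes."""
    hull, slopes = [0], []
    for i in range(1, len(points)):
        while True:
            j = hull[-1]
            s = (points[j][1] - points[i][1]) / (points[i][0] - points[j][0])
            if slopes and s >= slopes[-1]:   # previous point is not on the hull
                hull.pop()
                slopes.pop()
            else:
                break
        hull.append(i)
        slopes.append(s)
    return hull, slopes

def pcrd_truncate(blocks, budget):
    """Spend the global rate budget on hull increments in order of
    decreasing distortion-rate slope; returns the chosen rate per block."""
    candidates = []
    for b, pts in enumerate(blocks):
        hull, slopes = hull_slopes(pts)
        for k in range(1, len(hull)):
            dr = pts[hull[k]][0] - pts[hull[k - 1]][0]
            candidates.append((slopes[k - 1], b, pts[hull[k]][0], dr))
    chosen = [0] * len(blocks)
    spent = 0
    for slope, b, rate, dr in sorted(candidates, reverse=True):
        if spent + dr > budget:
            break
        spent += dr
        chosen[b] = rate
    return chosen
```

Because hull slopes decrease within each block, the greedy pass takes each block's increments in order, which is what makes the global slope threshold optimal.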

  14. Hard equality constrained integer knapsacks

    NARCIS (Netherlands)

    Aardal, K.I.; Lenstra, A.K.; Cook, W.J.; Schulz, A.S.

    2002-01-01

We consider the following integer feasibility problem: "Given positive integer numbers a_0, a_1, ..., a_n, with gcd(a_1, ..., a_n) = 1 and a = (a_1, ..., a_n), does there exist a nonnegative integer vector x satisfying ax = a_0?" Some instances of this type have been found to be extremely hard to solve.
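Feasibility of such an equality knapsack can be decided by a simple reachability recurrence over the values 0..a_0 (pseudo-polynomial in a_0); the point of the paper is that branch-and-bound LP-based solvers, unlike this direct DP, can be defeated even by small instances of this form:

```python
def feasible(a, a0):
    """Is there a nonnegative integer vector x with a . x = a0?
    Reachability DP over the values 0..a0 (the 'coin problem')."""
    reach = [False] * (a0 + 1)
    reach[0] = True                      # x = 0 reaches value 0
    for v in range(1, a0 + 1):
        reach[v] = any(v >= ai and reach[v - ai] for ai in a)
    return reach[a0]
```

For instance, with a = (6, 10, 15) the value 29 is infeasible while 30 is not, even though gcd(6, 10, 15) = 1.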

  15. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

This book presents a survey of the state-of-the-art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  16. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included

  17. Effects of image distortion correction on voxel-based morphometry

    International Nuclear Information System (INIS)

    Goto, Masami; Abe, Osamu; Kabasawa, Hiroyuki

    2012-01-01

We aimed to show that correcting image distortion significantly affects brain volumetry using voxel-based morphometry (VBM) and to assess whether the processing of distortion correction reduces system dependency. We obtained contiguous sagittal T1-weighted images of the brain from 22 healthy participants using 1.5- and 3-tesla magnetic resonance (MR) scanners, preprocessed images using Statistical Parametric Mapping 5, and tested the relation between distortion correction and brain volume using VBM. Local brain volume significantly increased or decreased on corrected images compared with uncorrected images. In addition, the method used to correct image distortion for gradient nonlinearity produced fewer volumetric errors from MR system variation. This is the first VBM study to show more precise volumetry using VBM with corrected images. These results indicate that multi-scanner or multi-site imaging trials require correction for distortion induced by gradient nonlinearity. (author)

  18. Probable mode prediction for H.264 advanced video coding P slices using removable SKIP mode distortion estimation

    Science.gov (United States)

    You, Jongmin; Jeong, Jechang

    2010-02-01

    The H.264/AVC (advanced video coding) is used in a wide variety of applications including digital broadcasting and mobile applications, because of its high compression efficiency. The variable block mode scheme in H.264/AVC contributes much to its high compression efficiency but causes a selection problem. In general, rate-distortion optimization (RDO) is the optimal mode selection strategy, but it is computationally intensive. For this reason, the H.264/AVC encoder requires a fast mode selection algorithm for use in applications that require low-power and real-time processing. A probable mode prediction algorithm for the H.264/AVC encoder is proposed. To reduce the computational complexity of RDO, the proposed method selects probable modes among all allowed block modes using removable SKIP mode distortion estimation. Removable SKIP mode distortion is used to estimate whether or not a further divided block mode is appropriate for a macroblock. It is calculated using a no-motion reference block with a few computations. Then the proposed method reduces complexity by performing the RDO process only for probable modes. Experimental results show that the proposed algorithm can reduce encoding time by an average of 55.22% without significant visual quality degradation and increased bit rate.

  19. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  20. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
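The "lossy plus residual coding" guarantee is easy to see in miniature: whatever the lossy layer produces, quantizing the integer residual with step 2ε+1 bounds the per-sample reconstruction error by ε. A sketch in which a crude predictor stands in for the paper's matrix/tensor decomposition layer:

```python
def near_lossless(signal, approx, eps):
    """Residual layer of a 'lossy plus residual' coder for integer samples.
    The indices would be entropy coded; reconstruction error <= eps/sample."""
    q = 2 * eps + 1                                   # odd quantizer step
    idx = [(s - a + eps) // q for s, a in zip(signal, approx)]
    recon = [a + i * q for a, i in zip(approx, idx)]  # decoder side
    return idx, recon
```

Writing the residual as r = qk + t with |t| <= eps shows the quantization index is exactly k, so the reconstruction misses by at most eps; setting eps = 0 degenerates to lossless coding of the raw residual.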

  1. Mixed integer evolution strategies for parameter optimization.

    Science.gov (United States)

    Li, Rui; Emmerich, Michael T M; Eggermont, Jeroen; Bäck, Thomas; Schütz, M; Dijkstra, J; Reiber, J H C

    2013-01-01

    Evolution strategies (ESs) are powerful probabilistic search and optimization algorithms gleaned from biological evolution theory. They have been successfully applied to a wide range of real world applications. The modern ESs are mainly designed for solving continuous parameter optimization problems. Their ability to adapt the parameters of the multivariate normal distribution used for mutation during the optimization run makes them well suited for this domain. In this article we describe and study mixed integer evolution strategies (MIES), which are natural extensions of ES for mixed integer optimization problems. MIES can deal with parameter vectors consisting not only of continuous variables but also with nominal discrete and integer variables. Following the design principles of the canonical evolution strategies, they use specialized mutation operators tailored for the aforementioned mixed parameter classes. For each type of variable, the choice of mutation operators is governed by a natural metric for this variable type, maximal entropy, and symmetry considerations. All distributions used for mutation can be controlled in their shape by means of scaling parameters, allowing self-adaptation to be implemented. After introducing and motivating the conceptual design of the MIES, we study the optimality of the self-adaptation of step sizes and mutation rates on a generalized (weighted) sphere model. Moreover, we prove global convergence of the MIES on a very general class of problems. The remainder of the article is devoted to performance studies on artificial landscapes (barrier functions and mixed integer NK landscapes), and a case study in the optimization of medical image analysis systems. In addition, we show that with proper constraint handling techniques, MIES can also be applied to classical mixed integer nonlinear programming problems.
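The type-specific mutation operators can be sketched as follows: Gaussian steps for real variables, a symmetric difference of two geometric variables for integers (the maximum-entropy choice the authors motivate), and uniform resampling for nominal variables. This is a simplified, non-self-adaptive sketch; the mapping from mean step size s to the geometric parameter p is a plausible heuristic, not the paper's exact formula:

```python
import math
import random

def mutate_mixed(x, kinds, sigma=1.0, s=2.0, domains=None):
    """One MIES-style mutation of a mixed vector x, where kinds[i] is
    'r' (real: Gaussian step), 'z' (integer: difference of two geometric
    variables), or 'd' (nominal: uniform resampling from domains[i])."""
    p = 1 - s / (1 + math.sqrt(1 + s * s))   # heuristic; keeps p in (0, 1)

    def geo():  # geometric sample on {0, 1, 2, ...} by inversion
        return int(math.log1p(-random.random()) / math.log(1 - p))

    y = []
    for i, (xi, kind) in enumerate(zip(x, kinds)):
        if kind == "r":
            y.append(xi + random.gauss(0, sigma))
        elif kind == "z":
            y.append(xi + geo() - geo())     # symmetric, integer-valued
        else:
            y.append(random.choice(domains[i]))
    return y
```

The geometric-difference step is symmetric around zero and stays on the integers, which is exactly the property that makes it the natural analogue of the Gaussian for the integer subspace.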

  2. Compression of fiber supercontinuum pulses to the Fourier-limit in a high-numerical-aperture focus

    DEFF Research Database (Denmark)

    Tu, Haohua; Liu, Yuan; Turchinovich, Dmitry

    2011-01-01

A multiphoton intrapulse interference phase scan (MIIPS) adaptively and automatically compensates the combined phase distortion from a fiber supercontinuum source, a spatial light modulator pulse shaper, and a high-NA microscope objective, allowing Fourier-transform-limited compression of the supercontinuum pulses in the focus, with a power of 18-70 mW and a repetition rate of 76 MHz, permitting the application of this source to nonlinear optical microscopy and coherently controlled microspectroscopy.

  3. Face detection on distorted images using perceptual quality-aware features

    Science.gov (United States)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white gaussian noise, gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.

  4. smallWig: parallel compression of RNA-seq WIG files.

    Science.gov (United States)

    Wang, Zhiying; Weissman, Tsachy; Milenkovic, Olgica

    2016-01-15

    We developed a new lossless compression method for WIG data, named smallWig, offering the best known compression rates for RNA-seq data and featuring random access functionalities that enable visualization, summary statistics analysis and fast queries from the compressed files. Our approach results in order of magnitude improvements compared with bigWig and ensures compression rates only a fraction of those produced by cWig. The key features of the smallWig algorithm are statistical data analysis and a combination of source coding methods that ensure high flexibility and make the algorithm suitable for different applications. Furthermore, for general-purpose file compression, the compression rate of smallWig approaches the empirical entropy of the tested WIG data. For compression with random query features, smallWig uses a simple block-based compression scheme that introduces only a minor overhead in the compression rate. For archival or storage space-sensitive applications, the method relies on context mixing techniques that lead to further improvements of the compression rate. Implementations of smallWig can be executed in parallel on different sets of chromosomes using multiple processors, thereby enabling desirable scaling for future transcriptome Big Data platforms. The development of next-generation sequencing technologies has led to a dramatic decrease in the cost of DNA/RNA sequencing and expression profiling. RNA-seq has emerged as an important and inexpensive technology that provides information about whole transcriptomes of various species and organisms, as well as different organs and cellular communities. The vast volume of data generated by RNA-seq experiments has significantly increased data storage costs and communication bandwidth requirements. Current compression tools for RNA-seq data such as bigWig and cWig either use general-purpose compressors (gzip) or suboptimal compression schemes that leave significant room for improvement. 
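smallWig's random-query mode rests on a general trick that is easy to demonstrate: compress fixed-size blocks independently and keep them indexed, paying a small rate overhead so a query decompresses one block instead of the whole file. A toy illustration with zlib on a numeric track; smallWig's actual coders are statistical context-mixing schemes:

```python
import zlib

class BlockCompressor:
    """Toy random-access track compressor: fixed-size blocks compressed
    independently, so a lookup touches a single block."""

    def __init__(self, values, block_size=1000):
        self.block_size = block_size
        self.blocks = []
        for i in range(0, len(values), block_size):
            chunk = ",".join(map(str, values[i:i + block_size])).encode()
            self.blocks.append(zlib.compress(chunk, 9))

    def query(self, pos):
        """Return the value at index pos, decompressing only its block."""
        blk = zlib.decompress(self.blocks[pos // self.block_size])
        return float(blk.decode().split(",")[pos % self.block_size])
```

Smaller blocks make queries cheaper but raise the per-block overhead, which is the "minor overhead in the compression rate" trade-off the abstract refers to.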

  5. Applied Integer Programming Modeling and Solution

    CERN Document Server

    Chen, Der-San; Dang, Yu

    2011-01-01

    An accessible treatment of the modeling and solution of integer programming problems, featuring modern applications and software In order to fully comprehend the algorithms associated with integer programming, it is important to understand not only how algorithms work, but also why they work. Applied Integer Programming features a unique emphasis on this point, focusing on problem modeling and solution using commercial software. Taking an application-oriented approach, this book addresses the art and science of mathematical modeling related to the mixed integer programming (MIP) framework and

  6. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits of the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches to the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
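The payoff of a sparse measurement matrix is visible even in software: if each column of the binary matrix Phi holds exactly d ones, computing y = Phi x costs d additions per input sample instead of m multiply-accumulates. A sketch with a d-sparse random binary matrix, where column lists of row indices stand in for the hardware wiring (the QCAC construction would fix these indices deterministically):

```python
import random

def sparse_binary_columns(m, n, d=4, seed=1):
    """Column-sparse binary measurement matrix: each of the n columns
    is given as a list of the d row indices holding a one."""
    rng = random.Random(seed)
    return [rng.sample(range(m), d) for _ in range(n)]

def cs_encode(x, cols, m):
    """y = Phi @ x using only additions (Phi is binary and column-sparse)."""
    y = [0.0] * m
    for j, xj in enumerate(x):
        for i in cols[j]:      # exactly d rows touched per sample
            y[i] += xj
    return y
```

Recovery from y would use a standard CS decoder (e.g., a sparse-recovery solver); only the encoder side, which is what lives on-chip, is sketched here.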

  7. Integer, fractional, and anomalous quantum Hall effects explained with Eyring's rate process theory and free volume concept.

    Science.gov (United States)

    Hao, Tian

    2017-02-22

The Hall effects, especially the integer, fractional and anomalous quantum Hall effects, have been addressed using Eyring's rate process theory and the free volume concept. The basic assumptions are that the conduction process is a common rate-controlled "reaction" process that can be described with Eyring's absolute rate process theory, and that the mobility of electrons depends on the free volume available to conduction electrons. The obtained Hall conductivity is clearly quantized in units of e^2/h, with prefactors related to both the magnetic flux quantum number and the magnetic quantum number via the azimuthal quantum number, with and without an externally applied magnetic field. This article focuses on two-dimensional (2D) systems, but the approaches developed here can be extended to 3D systems.

  8. Volterra Series Based Distortion Effect

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2010-01-01

A large part of the characteristic sound of the electric guitar comes from nonlinearities in the signal path. Such nonlinearities may come from the input- or output-stage of the amplifier, which is often equipped with vacuum tubes, or from a dedicated distortion pedal. In this paper the Volterra series expansion for nonlinear systems is investigated with respect to generating good distortion. The Volterra series allows for unlimited adjustment of the level and frequency dependency of each distortion component. Subjectively relevant ways of linking the different orders are discussed.

  9. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    .264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can...

  10. Mixed-Integer-Linear-Programming-Based Energy Management System for Hybrid PV-Wind-Battery Microgrids

    DEFF Research Database (Denmark)

    Hernández, Adriana Carolina Luna; Aldana, Nelson Leonardo Diaz; Graells, Moises

    2017-01-01

    -side strategy, defined as a general mixed-integer linear programming by taking into account two stages for proper charging of the storage units. This model is considered as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour ahead forecast data...

  11. Entropy and Compression Capture Different Complexity Features: The Case of Fetal Heart Rate

    Directory of Open Access Journals (Sweden)

    João Monteiro-Santos

    2017-12-01

    Full Text Available Entropy and compression have been used to distinguish fetuses at risk of hypoxia from their healthy counterparts through the analysis of Fetal Heart Rate (FHR). The low correlation observed between these two approaches suggests that they capture different complexity features. This study aims at characterizing the complexity of FHR features captured by entropy and compression, using international guidelines as reference. Single and multi-scale approaches were considered in the computation of entropy and compression. The following physiologic-based features were considered: FHR baseline; percentage of abnormal long (%abLTV) and short (%abSTV) term variability; average short term variability; and number of accelerations and decelerations. All of the features were computed on a set of 68 intrapartum FHR tracings, divided into normal, mildly, and moderately-severely acidemic born fetuses. The correlation between entropy/compression features and the physiologic-based features was assessed. There were correlations between compressions and accelerations and decelerations, but neither accelerations nor decelerations were significantly correlated with entropies. The %abSTV was significantly correlated with entropies (ranging between −0.54 and −0.62), and to a higher extent with compression (ranging between −0.80 and −0.94). Distinction between groups was clearer in the lower scales using entropy and in the higher scales using compression. Entropy and compression are complementary complexity measures.
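
Sample entropy is a common single-scale entropy estimator in this FHR literature; a compact (slightly simplified) reference implementation follows. The parameter choices m = 2 and r = 0.2·SD are conventional defaults, not values taken from the study:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts template matches of length m
    and A of length m+1, within tolerance r*std (Chebyshev distance)."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def count(mm):
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(-1)
        n = len(templ)
        return ((d <= tol).sum() - n) // 2    # unordered pairs, no self-matches

    b, a = count(m), count(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly predictable series
noisy = rng.standard_normal(500)                     # irregular series
print(sample_entropy(regular) < sample_entropy(noisy))  # True: noise is "more complex"
```

A multiscale variant simply coarse-grains the series (averaging non-overlapping windows) before applying the same estimator at each scale.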

  12. Sensitivity of boundary-layer stability to base-state distortions at high Mach numbers

    Science.gov (United States)

    Park, Junho; Zaki, Tamer

    2017-11-01

    The stability diagram of high-speed boundary layers has been established by evaluating the linear instability modes of the similarity profile, over wide ranges of Reynolds and Mach numbers. In real flows, however, the base state can deviate from the similarity profile. Both the base velocity and temperature can be distorted, for example due to roughness and thermal wall treatments. We review the stability problem of high-speed boundary layers, and derive a new formulation of the sensitivity to base-state distortions using forward and adjoint parabolized stability equations. The new formulation provides qualitative and quantitative interpretations of the change in growth rate due to modifications of the mean flow and mean temperature in heated high-speed boundary layers, and establishes the foundation for future control strategies. This work has been funded by the Air Force Office of Scientific Research (AFOSR) Grant: FA9550-16-1-0103.

  13. 76 FR 41238 - Post Rock Wind Power Project, LLC; Supplemental Notice That Initial Market-Based Rate Filing...

    Science.gov (United States)

    2011-07-13

    ... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Docket No. ER11-3959-000] Post Rock Wind Power Project, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes Request for... Rock Wind Power Project, LLC's application for market-based rate authority, with an accompanying rate...

  14. Slip and Slide Method of Factoring Trinomials with Integer Coefficients over the Integers

    Science.gov (United States)

    Donnell, William A.

    2012-01-01

    In intermediate and college algebra courses there are a number of methods for factoring quadratic trinomials with integer coefficients over the integers. Some of these methods have been given names, such as trial and error, reversing FOIL, AC method, middle term splitting method and slip and slide method. The purpose of this article is to discuss…
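
A sketch of the slip-and-slide method itself: "slip" the leading coefficient onto the constant term, factor the resulting monic quadratic, then "slide" the roots back by dividing by a and clearing denominators (helper names are ours):

```python
from fractions import Fraction
from math import isqrt

def slip_and_slide(a, b, c):
    """Factor a*x^2 + b*x + c over the integers by the slip-and-slide method.
    Returns ((a1, b1), (a2, b2)) meaning (a1*x + b1)(a2*x + b2), or None."""
    # "Slip": move a onto the constant term and factor the monic x^2 + b*x + a*c.
    disc = b * b - 4 * a * c
    if disc < 0 or isqrt(disc) ** 2 != disc:
        return None                                  # not factorable over the integers
    r = isqrt(disc)
    roots = (Fraction(-b + r, 2), Fraction(-b - r, 2))
    # "Slide": divide each root by a; the reduced fraction p/q gives factor (q*x - p).
    factors = []
    for root in roots:
        f = root / a
        factors.append((f.denominator, -f.numerator))
    return tuple(factors)

print(slip_and_slide(6, 11, 4))    # 6x^2 + 11x + 4 = (2x + 1)(3x + 4)
```

Multiplying the returned factors back out reproduces the original coefficients, which is a quick self-check for any input.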

  15. Behaviour of venous flow rates in intermittent sequential pneumatic compression of the legs using different compression strengths

    International Nuclear Information System (INIS)

    Fassmann-Glaser, I.

    1984-01-01

    A study with 25 patients was performed in order to find out whether intermittent, sequential, pneumatic leg compression is of value in the preventive management of thrombosis due to its effect on the venous flow rates. For this purpose, xenon 133 was injected into one of the foot veins and the flow rate in each case determined for the distance between instep and inguen using different compression strengths, with pressure being exerted on the ankle, calf and thigh. Increased flow rates were already measured at an average pressure value of 34.5 mmHg, while the maximum effect was achieved by exerting a pressure of 92.5 mmHg, which increased the flow rate by 366% as compared to the baseline value. The results point to a significant improvement of the venous flow rates due to intermittent, sequential, pneumatic leg compression and thus provide evidence to prove the value of this method in the prevention of hemostasis and thrombosis. (TRV) [de

  16. Critical Imperative for the Reform of British Interpretation of Fetal Heart Rate Decelerations: Analysis of FIGO and NICE Guidelines, Post-Truth Foundations, Cognitive Fallacies, Myths and Occam’s Razor

    Science.gov (United States)

    Sholapurkar, Shashikant L.

    2017-01-01

    Cardiotocography (CTG) has disappointingly failed to show good predictability for fetal acidemia or neonatal outcomes in several large studies. A complete rethink of CTG interpretation will not be out of place. Fetal heart rate (FHR) decelerations are the most common deviations, benign as well as manifestation of impending fetal hypoxemia/acidemia, much more commonly than FHR baseline or variability. Their specific nomenclature is important (center-stage) because it provides the basic concepts and framework on which the complex “pattern recognition” of CTG interpretation by clinicians depends. Unfortunately, the discrimination of FHR decelerations seems to be muddled since the British obstetrics adopted the concept of vast majority of FHR decelerations being “variable” (cord-compression). With proliferation of confusing waveform criteria, “atypical variables” became the commonest cause of suspicious/pathological CTG. However, National Institute for Health and Care Excellence (NICE) (2014) had to disband the “typical” and “atypical” terminology because of flawed classifying criteria. This analytical review makes a strong case that there are major and fundamental framing and confirmation fallacies (not just biases) in interpretation of FHR decelerations by NICE (2014) and International Federation of Gynecology and Obstetrics (FIGO) (2015), probably the biggest in modern medicine. This “post-truth” approach is incompatible with scientific practice. Moreover, it amounts to setting oneself for failure. The inertia to change could be best described as “backfire effect”. There is abundant evidence that head-compression (and other non-hypoxic mediators) causes rapid rather than shallow/gradual decelerations. Currently, the vast majority of decelerations are attributed to unproven cord compression underpinned by flawed disproven pathophysiological hypotheses. Their further discrimination based on abstract, random, trial and error criteria remains

  17. Informational analysis for compressive sampling in radar imaging.

    Science.gov (United States)

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal, relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, operates with optimization-based algorithms for signal reconstruction and is thus able to complete data compression, while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition, while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretically oriented CS-radar system analysis and performance evaluation.
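
The familiar CS rule of thumb that the number of measurements scales like k·log(n/k) can be written down directly; the constant c below is an assumption standing in for the SNR- and distortion-dependent factors a full information-theoretic analysis would derive:

```python
import math

def min_measurements(n, k, c=2.0):
    """Rule-of-thumb sub-Nyquist measurement count m ~ c * k * log(n/k)
    for a k-sparse scene of dimension n; c is an assumed constant."""
    return math.ceil(c * k * math.log(n / k))

# Denser scenes (larger k) need more measurements for the same scene size.
for k in (5, 20, 50):
    print(k, min_measurements(1024, k))
```

The point of the printout is qualitative: even at k = 50 the measurement count stays well below n = 1024, which is the sub-Nyquist saving.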

  18. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
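
The relative compression ratio reported above (percentage of space saved) is easy to reproduce with a general-purpose lossless coder; the delta step below is a crude stand-in for the paper's predictors, and the synthetic signal is not EEG data:

```python
import zlib
import numpy as np

def relative_compression_ratio(raw: bytes, compressed: bytes) -> float:
    """Space saving in percent: 100 * (1 - compressed/raw)."""
    return 100.0 * (1.0 - len(compressed) / len(raw))

rng = np.random.default_rng(1)
# A smooth, oversampled 16-bit signal stands in for one channel.
t = np.arange(50_000)
signal = (1000 * np.sin(t / 40) + rng.normal(0, 2, t.size)).astype(np.int16)

# Predicting each sample from the previous one (delta coding) shrinks the
# residuals, which a general-purpose entropy coder then compresses far better.
deltas = np.diff(signal, prepend=signal[:1])
for name, data in [("raw", signal), ("delta", deltas)]:
    ratio = relative_compression_ratio(data.tobytes(), zlib.compress(data.tobytes(), 9))
    print(f"{name}: {ratio:.1f}% saved")
```

The paper's MVAR and context-based models play the same role as the delta predictor here, just with much better residual modeling across channels.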

  19. Cortical compression rapidly trimmed transcallosal projections and altered axonal anterograde transport machinery.

    Science.gov (United States)

    Chen, Li-Jin; Wang, Yueh-Jan; Tseng, Guo-Fang

    2017-10-24

    Trauma and tumor compressing the brain distort underlying cortical neurons. Compressed cortical neurons remodel their dendrites instantly. The effects on axons however remain unclear. Using a rat epidural bead implantation model, we studied the effects of unilateral somatosensory cortical compression on its transcallosal projection and the reversibility of the changes following decompression. Compression reduced the density, branching profuseness and boutons of the projection axons in the contralateral homotopic cortex 1week and 1month post-compression. Projection fiber density was higher 1-month than 1-week post-compression, suggesting adaptive temporal changes. Compression reduced contralateral cortical synaptophysin, vesicular glutamate transporter 1 (VGLUT1) and postsynaptic density protein-95 (PSD95) expressions in a week and the first two marker proteins further by 1month. βIII-tubulin and kinesin light chain (KLC) expressions in the corpus callosum (CC) where transcallosal axons traveled were also decreased. Kinesin heavy chain (KHC) level in CC was temporarily increased 1week after compression. Decompression increased transcallosal axon density and branching profuseness to higher than sham while bouton density returned to sham levels. This was accompanied by restoration of synaptophysin, VGLUT1 and PSD95 expressions in the contralateral cortex of the 1-week, but not the 1-month, compression rats. Decompression restored βIII-tubulin, but not KLC and KHC expressions in CC. However, KLC and KHC expressions in the cell bodies of the layer II/III pyramidal neurons partially recovered. Our results show cerebral compression compromised cortical axonal outputs and reduced transcallosal projection. Some of these changes did not recover in long-term decompression. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  20. Fish's Muscles Distortion and Pectoral Fins Propulsion of Lift-Based Mode

    Science.gov (United States)

    Yang, S. B.; Han, X. Y.; Qiu, J.

    As a sort of MPF (median and/or paired fin propulsion), pectoral fin propulsion makes fish easier to maneuver than other forms of propulsion, according to the well-established classification scheme proposed by Webb in 1984. Pectoral fin propulsion is classified into oscillatory propulsion, undulatory propulsion and compound propulsion. Pectoral fin oscillatory propulsion is further ascribable to two modes: drag-based mode and lift-based mode. Fish exhibit strong cruise ability by using the lift-based mode. Therefore, applying the pectoral-fin lift-based mode to robotic fish design could bring a new revolution to resource exploration in the sea. On the basis of the wave plate theory, a kinematic model of the fish's pectoral-fin lift-based mode is established in the present work, associated with the behaviors of the cownose ray (Rhinoptera bonasus). Since the power for fish locomotion comes from muscle distortion, it is helpful to reveal the mechanism by which fish locomotion varies with muscle distortion. This study therefore puts forward a pattern of muscle distortion of the pectoral fins according to the character of the skeletons and muscles of the cownose ray in morphology, and simulates the kinematics of the lift-based mode using nonlinear analysis software. In the symmetrical fluid field, the model is simulated left-right symmetrically or asymmetrically. The results qualitatively show how muscle distortion determines the performance of fish locomotion. Finally, the efficient muscle distortion associated with the preliminary dynamics is induced.

  1. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  2. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long term archiving solutions as well as the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has until now sided with lossless compression, most applications suffer from compression ratios which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. So, we propose a new lossy coding scheme based on the 3D (3 dimensional) Wavelet Transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can out-perform visually and numerically the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on CAD (Computer-Aided Decision) detection performance of solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  3. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity and thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and what compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package that would be applicable to other software packages with similar data compression needs.
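
A minimal version of such an evaluation: compare candidate general-purpose codecs on compression ratio and encoding time over a representative byte payload (the codecs and payload here are illustrative, not those of the CDF study):

```python
import bz2
import lzma
import time
import zlib

def benchmark(data: bytes):
    """Compare lossless general-purpose codecs on ratio and encode time,
    the same selection criteria the CDF study weighs (speed, simplicity)."""
    results = {}
    for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
        t0 = time.perf_counter()
        out = fn(data)
        results[name] = (len(data) / len(out), time.perf_counter() - t0)
    return results

# Mixed payload: a highly repetitive run plus a less compressible byte table.
payload = (b"\x00\x01\x02\x03" * 10_000) + bytes(range(256)) * 100
for name, (ratio, secs) in benchmark(payload).items():
    print(f"{name}: ratio {ratio:.1f}x in {secs * 1e3:.1f} ms")
```

Since all three codecs are lossless, round-tripping the payload through any of them must return it byte-for-byte, matching the "fully reconstructible" requirement above.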

  4. Distorted wave approach to calculate Auger transition rates of ions in metals

    Energy Technology Data Exchange (ETDEWEB)

    Deutscher, Stefan A. E-mail: sad@utk.edu; Diez Muino, R.; Arnau, A.; Salin, A.; Zaremba, E

    2001-08-01

    We evaluate the role of target distortion in the determination of Auger transition rates for multicharged ions in metals. The required two electron matrix elements are calculated using numerical solutions of the Kohn-Sham equations for both the bound and continuum states. Comparisons with calculations performed using plane waves and hydrogenic orbitals are presented.

  5. A Constraint-Based Model for Fast Post-Disaster Emergency Vehicle Routing

    Directory of Open Access Journals (Sweden)

    Roberto Amadini

    2013-12-01

    Full Text Available Disasters like terrorist attacks, earthquakes, hurricanes, and volcano eruptions are usually unpredictable events that affect a high number of people. We propose an approach that could be used as a decision support tool for post-disaster response that allows the assignment of victims to hospitals and organizes their transportation via emergency vehicles. By exploiting the synergy between Mixed Integer Programming and Constraint Programming techniques, we are able to compute the routing of the vehicles so as to rescue many more victims than both heuristic-based and complete approaches in a very reasonable time.

  6. Rate Dependence of the Compressive Response of Ti Foams

    Directory of Open Access Journals (Sweden)

    Nik Petrinic

    2012-06-01

    Full Text Available Titanium foams of relative density ranging from 0.3 to 0.9 were produced by titanium powder sintering procedures and tested in uniaxial compression at strain rates ranging from 0.01 to 2,000 s−1. The material microstructure was examined by X-ray tomography and Scanning Electron Microscopy (SEM) observations. The foams investigated are strain rate sensitive, with both the yield stress and the strain hardening increasing with applied strain rate, and the strain rate sensitivity is more pronounced in foams of lower relative density. Finite element simulations were conducted modelling explicitly the material's microstructure at the micron level, via a 3D Voronoi tessellation. Low and high strain rate simulations were conducted in order to predict the material's compressive response, employing both rate-dependent and rate-independent constitutive models. Results from numerical analyses suggest that the primary source of rate sensitivity is the intrinsic sensitivity of the foam's parent material.

  7. A statistical mechanical approach to restricted integer partition functions

    Science.gov (United States)

    Zhou, Chi-Chun; Dai, Wu-Sheng

    2018-05-01

    The main aim of this paper is twofold: (1) suggesting a statistical mechanical approach to the calculation of the generating function of restricted integer partition functions, which count the number of partitions—a way of writing an integer as a sum of other integers under certain restrictions. In this approach, the generating function of restricted integer partition functions is constructed from the canonical partition functions of various quantum gases. (2) Introducing a new type of restricted integer partition function corresponding to general statistics, which is a generalization of Gentile statistics in statistical mechanics; many kinds of restricted integer partition functions are special cases of it. Moreover, with statistical mechanics as a bridge, we reveal a mathematical fact: the generating function of a restricted integer partition function is just a symmetric function, a class of functions invariant under the action of permutation groups. Using this approach, we provide some expressions of restricted integer partition functions as examples.
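
Computationally, extracting coefficients from the generating function of a restricted partition function, the product of geometric series 1/(1 − x^p) over the allowed parts p, is a small dynamic program. A sketch restricted to a finite set of allowed parts:

```python
def restricted_partitions(n, parts):
    """Number of ways to write n as a sum of integers drawn from `parts`
    (each part usable any number of times): the coefficient of x^n in
    the product over p in parts of 1/(1 - x^p)."""
    coeffs = [0] * (n + 1)
    coeffs[0] = 1
    for p in parts:                      # multiply in the series for part p
        for total in range(p, n + 1):
            coeffs[total] += coeffs[total - p]
    return coeffs[n]

print(restricted_partitions(5, [1, 2, 3, 4, 5]))   # unrestricted p(5) = 7
print(restricted_partitions(10, [1, 2]))            # 10 as 1s and 2s: 6 ways
```

Allowing all parts up to n recovers the unrestricted partition numbers p(n); restricting the set of parts is exactly the kind of constraint the abstract's "certain restrictions" refers to.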

  8. Modeling Kinetics of Distortion in Porous Bi-layered Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Bjørk, Rasmus

    2013-01-01

    because of different sintering rates of the materials resulting in undesired distortions of the component. An analytical model based on the continuum theory of sintering has been developed to describe the kinetics of densification and distortion in the sintering processes. A new approach is used...... to extract the material parameters controlling shape distortion through optimizing the model to experimental data of free shrinkage strains. The significant influence of the weight of the sample (gravity) on the kinetics of distortion is taken into consideration. The modeling predictions indicate good agreement......

  9. Post-prior discrepancies in the continuum distorted wave-eikonal initial state approximation for ion-helium ionization

    Energy Technology Data Exchange (ETDEWEB)

    Ciappina, M F [CONICET and Departamento de Fisica, Universidad Nacional del Sur, 8000 Bahia Blanca (Argentina); Cravero, W R [CONICET and Departamento de Fisica, Universidad Nacional del Sur, 8000 Bahia Blanca (Argentina); Garibotti, C R [CONICET and Division Colisiones Atomicas, Centro Atomico Bariloche, 8400 Bariloche (Argentina)

    2003-09-28

    We have explored post-prior discrepancies within continuum distorted wave-eikonal initial state theory for ion-atom ionization. Although there are no post-prior discrepancies when electron-target initial and final states are exact solutions of the respective Hamiltonians, discrepancies do arise for multielectronic targets, when a hydrogenic continuum with effective charge is used for the final electron-residual target wavefunction. We have found that the prior version calculations give better results than the post version, particularly for highly charged projectiles. We have explored the reasons for this behaviour and found that the prior version shows less sensitivity to the choice of the final state. The fact that the perturbation potentials operate upon the initial state suggests that the selection of the initial bound state is relatively more important than the final continuum state for the prior version.

  10. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    Science.gov (United States)

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for the generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.
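
A common parametric stand-in for such a rate-quantization model in the H.264 literature is R(Q) = a/Q + b/Q²; the sketch below fits it to synthetic coding data by least squares. The paper's DZ+UTSQ/NURQ model is more elaborate, so treat this only as an illustration of the model-fitting step in two-pass rate control:

```python
import numpy as np

def fit_rq(qs, rates):
    """Least-squares fit of the quadratic rate model rate = a/Q + b/Q^2."""
    A = np.column_stack([1.0 / qs, 1.0 / qs**2])
    (a, b), *_ = np.linalg.lstsq(A, rates, rcond=None)
    return a, b

# Synthetic first-pass data: known model parameters plus measurement noise.
qs = np.array([8.0, 16.0, 24.0, 32.0])
rates = 2000.0 / qs + 9000.0 / qs**2 + np.random.default_rng(0).normal(0, 5, 4)

a, b = fit_rq(qs, rates)
# Second pass would invert the fitted model to pick Q for a target rate.
print(f"fitted a ~ {a:.0f}, b ~ {b:.0f}")
```

The design point is that once (a, b) are estimated from first-pass statistics, choosing a quantization step for a bit budget is a closed-form or one-dimensional search problem, which keeps the controller low-complexity.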

  11. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.
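
The two stages can be sketched in miniature: a one-level Haar subband split followed by brute-force vector quantization of a subband against a fixed toy codebook. Real designs train per-subband codebooks (e.g. with the LBG algorithm) and allocate bits across subbands, which this sketch omits:

```python
import numpy as np

def haar_1d(x):
    """One level of the Haar wavelet transform: (approximation, detail) subbands."""
    evens, odds = x[0::2], x[1::2]
    return (evens + odds) / np.sqrt(2), (evens - odds) / np.sqrt(2)

def vq_encode(vectors, codebook):
    """Nearest-codeword index for each vector (brute-force vector quantization)."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.05 * rng.standard_normal(1024)
approx, detail = haar_1d(signal)

# Quantize the detail subband as 2-vectors against a toy 4-word codebook,
# i.e. 2 bits per pair of detail coefficients.
vectors = detail.reshape(-1, 2)
codebook = np.array([[-0.1, -0.1], [-0.1, 0.1], [0.1, -0.1], [0.1, 0.1]])
indices = vq_encode(vectors, codebook)
print(indices.shape)    # one small index per coefficient pair is what gets stored
```

The rate-distortion optimization described in the abstract decides, per subband, both the vector dimension (here fixed at 2) and the codebook size (here fixed at 4).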

  12. Dural venous sinuses distortion and compression with supratentorial mass lesions: a mechanism for refractory intracranial hypertension?

    Science.gov (United States)

    Qureshi, Adnan I.; Qureshi, Mushtaq H.; Majidi, Shahram; Gilani, Waqas I.; Siddiq, Farhan

    2014-01-01

    Objective To determine the effect of supratentorial intraparenchymal mass lesions of various volumes on dural venous sinuses structure and transluminal pressures. Methods Three sets of preparations were made using an adult isolated head derived from a fresh human cadaver. A supratentorial intraparenchymal balloon was introduced and inflated at various volumes and the effect on dural venous sinuses was assessed by serial intravascular ultrasound, computed tomographic (CT), and magnetic resonance (MR) venograms. Contrast was injected through a catheter placed in the sigmoid sinus for both CT and MR venograms. Serial transluminal pressures were measured from the middle part of the superior sagittal sinus in another set of experiments. Results At intraparenchymal balloon inflation of 90 cm3, there was attenuation of contrast enhancement of the superior sagittal sinus with compression visualized in the posterior part of the sinus without any evidence of compression in the remaining sinus. At intraparenchymal balloon inflation of 180 and 210 cm3, there was compression and obliteration of the superior sagittal sinus throughout the length of the sinus. In the coronal sections, at intraparenchymal balloon inflations of 90 and 120 cm3, compression and obliteration of the posterior part of the superior sagittal sinus were visualized. In the axial images, basal veins were not visualized with intraparenchymal balloon inflation of 90 cm3 or greater, although the straight sinus was visualized at all levels of inflation. Transluminal pressure in the middle part of the superior sagittal sinus demonstrated a mild increase from 0 cm H2O to 0.4 cm H2O and 0.5 cm H2O with inflation of the balloon to volumes of 150 and 180 cm3, respectively. There was a rapid increase in transluminal pressure from 6.8 cm H2O to 25.6 cm H2O as the supratentorial mass lesion increased from 180 to 200 cm3.
Conclusions Our experiments identified distortion and segmental and global obliteration of dural venous sinuses secondary to supratentorial mass lesion and

  13. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    Science.gov (United States)

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme derived from H.264 intraframe coding. In the scheme, two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map for compression. Every block selects its coding mode from the two new modes and the previous intramodes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images and keeps performance comparable to H.264 for natural images.
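
The BCIM idea, stripped to essentials: choose a few base colors and replace each pixel with the index of its nearest base color. The selection rule below (most frequent values) is a simplification chosen for illustration, not the authors' actual design:

```python
import numpy as np

def base_colors_index_map(block, n_colors):
    """Toy BCIM-style coding of a block: take the n most frequent pixel values
    as base colors and map every pixel to its nearest base color's index."""
    values, counts = np.unique(block, return_counts=True)
    base = values[np.argsort(counts)[-n_colors:]]
    index_map = np.abs(block[..., None] - base[None, None, :]).argmin(-1)
    return base, index_map

# A text-like 8x8 block: few distinct levels (background 255, ink 0, edge 128).
block = np.full((8, 8), 255)
block[2:6, 2:6] = 0
block[2:6, 6] = 128

base, idx = base_colors_index_map(block, 3)
print(sorted(base.tolist()), idx.shape)   # [0, 128, 255] (8, 8)
```

Because text and graphics blocks genuinely contain only a handful of colors, `base[idx]` reconstructs this block exactly, which is why the mode suits compound images where a transform would smear sharp edges.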

  14. Strain Rate Dependence of Compressive Yield and Relaxation in DGEBA Epoxies

    Science.gov (United States)

    Arechederra, Gabriel K.; Reprogle, Riley C.; Clarkson, Caitlyn M.; McCoy, John D.; Kropka, Jamie M.; Long, Kevin N.; Chambers, Robert S.

    2015-03-01

    The mechanical response in uniaxial compression of two diglycidyl ether of bisphenol-A epoxies was studied: 828DEA (Epon 828 cured with diethanolamine (DEA)) and 828T403 (Epon 828 cured with Jeffamine T-403). Two types of uniaxial compression tests were performed: A) constant strain rate compression and B) constant strain rate compression followed by constant-strain relaxation. The peak (yield) stress was analyzed as a function of strain rate using Eyring theory to obtain an activation volume. Runs at different temperatures permitted the construction of a mastercurve, and the resulting shift factors yielded an activation energy. Strain-and-hold tests were performed at a low strain rate, where a peak stress was lacking, and at a higher strain rate, where the peak stress was apparent. Relaxation from strains at different places along the stress-strain curve was tracked and compared. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
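    The Eyring analysis mentioned above reduces, in its simplest form, to a linear fit of yield stress against the logarithm of strain rate; the slope is kT/V*, so the activation volume follows directly. A minimal sketch with synthetic data (the rates, stresses, and temperature below are illustrative assumptions, not the paper's measurements):

```python
import math

# Eyring-style analysis sketch: yield stress grows linearly with ln(strain
# rate); the slope of that line equals kT/V*, giving the activation volume.

K_B = 1.380649e-23  # Boltzmann constant, J/K
T = 296.0           # test temperature, K (assumed)

strain_rates = [1e-4, 1e-3, 1e-2, 1e-1]   # 1/s (synthetic)
yield_stress = [80e6, 85e6, 90e6, 95e6]   # Pa (synthetic, linear in ln rate)

# Ordinary least-squares slope of stress vs. ln(strain rate):
x = [math.log(r) for r in strain_rates]
n = len(x)
xbar, ybar = sum(x) / n, sum(yield_stress) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, yield_stress)) \
        / sum((xi - xbar) ** 2 for xi in x)

activation_volume = K_B * T / slope  # m^3, on the order of nm^3 for glassy polymers
```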

  15. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction

    NARCIS (Netherlands)

    Motaal, Abdallah G.; Coolen, Bram F.; Abdurrachim, Desiree; Castro, Rui M.; Prompers, Jeanine J.; Florack, Luc M. J.; Nicolay, Klaas; Strijkers, Gustav J.

    2013-01-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our

  16. A Summation Analysis of Compliance and Complications of Compression Hosiery for Patients with Chronic Venous Disease or Post-thrombotic Syndrome.

    Science.gov (United States)

    Kankam, Hadyn K N; Lim, Chung S; Fiorentino, Francesca; Davies, Alun H; Gohel, Manj S

    2018-03-01

    Compression stockings are commonly prescribed for patients with a range of venous disorders, but are difficult to don and uncomfortable to wear. This study aimed to investigate compliance and complications of compression stockings in patients with chronic venous disease (CVD) and post-thrombotic syndrome (PTS). A literature search of the following databases was carried out: MEDLINE (via PubMed), EMBASE (via OvidSP, 1974 to present), and CINAHL (via EBSCOhost). Studies evaluating the use of compression stockings in patients with CVD (CEAP C2-C5) or for the prevention or treatment of PTS were included. After scrutinising full text articles, compliance with compression and associated complications were assessed. Compliance rates were compared based on study type and degree of compression. Good compliance was defined as patients wearing compression stockings for >50% of the time. From an initial search result of 4303 articles, 58 clinical studies (37 randomised trials and 21 prospective studies) were selected. A total of 10,245 limbs were included, with compression ranging from 15 to 40 mmHg (not stated in 12 studies) and a median follow-up of 12 months (range 1-60 months). In 19 cohorts, compliance was not assessed and in a further nine, compliance was poorly specified. Overall, good compliance with compression was reported for 5371 out of 8104 (66.2%) patients. The mean compliance, weighted by study size, appeared to be greater for compression ≤25 mmHg (77%) versus > 25 mmHg (65%) and greater in the randomised studies (74%) than in prospective observational studies (64%). Complications of stockings were not mentioned in 43 out of 62 cohorts reviewed. Where complications were considered, skin irritation was a common event. In published trials, good compliance with compression is reported in around two thirds of patients, with inferior compliance in those given higher degrees of compression. Further studies are required to identify predictors of non

  17. Color image lossy compression based on blind evaluation and prediction of noise characteristics

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Egiazarian, Karen O.; Lepisto, Leena

    2011-03-01

    The paper deals with JPEG adaptive lossy compression of color images formed by digital cameras. Adaptation to the noise characteristics and blur estimated for each given image is carried out. The dominant factor degrading image quality is determined in a blind manner, its characteristics are then estimated, and finally a scaling factor that determines the quantization steps for the default JPEG table is adaptively selected. Within this general framework, two possible strategies are considered. The first presumes blind estimation for an image after all operations in the digital image processing chain, just before compressing a given raster image. The second is based on prediction of noise and blur parameters from analysis of the RAW image, under quite general assumptions concerning the transformations the image will be subject to at further processing stages. The advantages of both strategies are discussed. The first strategy provides more accurate estimation and a larger benefit in image compression ratio (CR) compared to the super-high quality (SHQ) mode, but is more complicated and requires more resources. The second strategy is simpler but less beneficial. The proposed approaches are tested on a large set of real-life color images acquired by digital cameras and shown to provide more than a two-fold increase in average CR compared to the SHQ mode without introducing visible distortions with respect to SHQ-compressed images.

  18. Blade row dynamic digital compression program. Volume 2: J85 circumferential distortion redistribution model, effect of Stator characteristics, and stage characteristics sensitivity study

    Science.gov (United States)

    Tesch, W. A.; Steenken, W. G.

    1978-01-01

    The results of dynamic digital blade row compressor model studies of a J85-13 engine are reported. The initial portion of the study was concerned with the calculation of the circumferential redistribution effects in the blade-free volumes forward and aft of the compression component. Although blade-free redistribution effects were estimated, no significant improvement over the parallel-compressor type solution in the prediction of total-pressure inlet distortion stability limit was obtained for the J85-13 engine. Further analysis was directed to identifying the rotor dynamic response to spatial circumferential distortions. Inclusion of the rotor dynamic response led to a considerable gain in the ability of the model to match the test data. The impact of variable stator loss on the prediction of the stability limit was evaluated. An assessment of measurement error on the derivation of the stage characteristics and predicted stability limit of the compressor was also performed.

  19. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has so far favoured lossless compression, most applications suffer from the compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on the 3D (3-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform visually and numerically the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
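    The dead-zone quantization at the heart of DZLVQ can be illustrated in one dimension: an enlarged zero bin suppresses small (mostly noisy) coefficients, while larger ones are uniformly quantized. A scalar sketch under assumed step and dead-zone sizes (the thesis applies the idea to lattice vectors in 3D, which this does not reproduce):

```python
# One-dimensional dead-zone quantizer sketch. Step and dead-zone widths are
# illustrative assumptions, not values from the thesis.

def deadzone_quantize(x, step, deadzone):
    """Return the quantization index of x (0 inside the dead zone)."""
    if abs(x) < deadzone:
        return 0
    sign = 1 if x > 0 else -1
    return sign * int((abs(x) - deadzone) // step + 1)

def dequantize(q, step, deadzone):
    """Reconstruct at the midpoint of the selected bin."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (deadzone + (abs(q) - 0.5) * step)

coeffs = [0.2, -0.7, 3.4, -5.1]                 # synthetic wavelet coefficients
qs = [deadzone_quantize(c, step=1.0, deadzone=1.0) for c in coeffs]
# small coefficients map to 0; larger ones are quantized uniformly
```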

  20. Compression-rate-dependent nonlinear mechanics of normal and impaired porcine knee joints.

    Science.gov (United States)

    Rodriguez, Marcel Leonardo; Li, LePing

    2017-11-14

    The knee joint performs mechanical functions with various loading and unloading processes. Past studies have focused on the kinematics and elastic response of the joint, with less understanding of the rate-dependent load response associated with viscoelastic and poromechanical behaviors. Forty-five fresh porcine knee joints were used in the present study to determine the loading-rate-dependent force-compression relationship, creep, and relaxation of normal, dehydrated, and meniscectomized joints. The mechanical tests of all normal intact joints showed similar, strongly compression-rate-dependent behavior: for a given compression magnitude up to 1.2 mm, the reaction force varied six-fold over compression rates. While the static response was essentially linear, the nonlinear behavior strengthened with increased compression rate, approaching an asymptote or limit at approximately 2 mm/s. On the other hand, joint stiffness varied approximately three-fold across joints, when accounting for the maturity and breed of the animals. Both a loss of joint hydration and a total meniscectomy greatly compromised load support in the joint, resulting in a reduction of load support by as much as 60% from the corresponding intact joint. However, the former only weakened the transient load support, whereas the latter also greatly weakened the equilibrium load support. A total meniscectomy did not diminish the compression-rate dependence of the joint, though. These findings are consistent with the fluid-pressurization loading mechanism, which may have a significant implication for joint mechanical function and cartilage mechanobiology.

  1. The role of self-serving cognitive distortions in reactive and proactive aggression.

    Science.gov (United States)

    Oostermeijer, Sanne; Smeets, Kirsten C; Jansen, Lucres M C; Jambroes, Tijs; Rommelse, Nanda N J; Scheepers, Floor E; Buitelaar, Jan K; Popma, Arne

    2017-12-01

    Aggression is often divided into reactive and proactive forms. Reactive aggression is typically thought to encompass 'blaming others' and 'assuming the worst', while proactive aggression relates to 'self-centeredness' and 'minimising/mislabelling'. Our aim was to evaluate relationships between reactive and proactive aggression and cognitive distortions and to test whether changes in these cognitions relate to changes in aggression. A total of 151 adolescents (60% boys; mean age 15.05 years, standard deviation 1.28) were enrolled in an evidence-based intervention to reduce aggression. Due to attrition and anomalous responses, the post-intervention sample involved 80 adolescents. Correlation and linear regression analyses were used to investigate the relationship between cognitive distortions and aggression. Blaming others was related to reactive aggression before the intervention, while all cognitive distortions were related to proactive aggression both pre- and post-intervention. Changes in reactive aggression were uniquely predicted by blaming others, while changes in proactive aggression were predicted by changes in cognitive distortions overall. To our knowledge, this study is the first to show a relationship between changes in cognitive distortions and changes in aggression. Treatment of reactive aggression may benefit from focusing primarily on reducing cognitive distortions involving misattribution of blame to others. Copyright © 2017 John Wiley & Sons, Ltd.

  2. EBLAST: an efficient high-compression image transformation. 3. Application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

    A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce a significant blocking effect at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and compression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper of the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST. Performance analysis criteria include time and space complexity and quality of the decompressed image. The latter is determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  3. JET ENGINE INLET DISTORTION SCREEN AND DESCRIPTOR EVALUATION

    Directory of Open Access Journals (Sweden)

    Jiří Pečinka

    2017-02-01

    Full Text Available Total pressure distortion is one of the three basic flow distortions (total pressure, total temperature, and swirl distortion) that might appear at the inlet of a gas turbine engine (GTE) during operation. Different numerical parameters are used for assessing total pressure distortion intensity and extent. These summary descriptors are based on the distribution of total pressure in the aerodynamic interface plane. Two descriptors are in widespread use around the world; however, three or four others are still in use and can be found in current references. The staff at the University of Defence decided to compare the most common descriptors using basic flow distortion patterns in order to select the most appropriate descriptor for future department research. The most common descriptors were identified based on their prevalence in widely accessible publications. The construction and use of these descriptors are reviewed in the paper. Subsequently, they are applied to radial, angular, and combined distortion patterns of different intensities and with varied mass flow rates. The tests were performed on a specially designed test bench using an electrically driven standalone industrial centrifugal compressor sucking air through the inlet of a TJ100 small turbojet engine. Distortion screens were placed in the inlet channel to create the desired total pressure distortions. Of the three basic distortions, only the total pressure distortion descriptors were evaluated; however, both total and static pressures were collected using a multi-probe rotational measurement system.
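    Most of the common summary descriptors reduce, per measurement ring, to comparing the mean total pressure of the worst sector against the ring average. A minimal sketch of such a DC(60)-style intensity (simplified: real descriptors typically normalize by dynamic head and combine several rings; the probe count and pressures below are illustrative assumptions):

```python
# Simplified circumferential distortion intensity: relative drop of the worst
# contiguous sector mean below the ring-average total pressure.

def distortion_intensity(ring_pressures, sector_size):
    """ring_pressures: equally spaced total-pressure probes around one ring.
    sector_size: probes per sector (e.g. 2 of 12 probes ~ a 60-degree sector).
    Returns (ring mean - worst sector mean) / ring mean."""
    n = len(ring_pressures)
    ring_mean = sum(ring_pressures) / n
    worst = min(
        sum(ring_pressures[(i + j) % n] for j in range(sector_size)) / sector_size
        for i in range(n)  # wrap around the ring
    )
    return (ring_mean - worst) / ring_mean

# 8 probes with one depressed sector (synthetic values, kPa):
ring = [101.0, 101.0, 101.0, 95.0, 95.0, 101.0, 101.0, 101.0]
intensity = distortion_intensity(ring, sector_size=2)
```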

  4. Optimal chest compression rate in cardiopulmonary resuscitation: a prospective, randomized crossover study using a manikin model.

    Science.gov (United States)

    Lee, Seong Hwa; Ryu, Ji Ho; Min, Mun Ki; Kim, Yong In; Park, Maeng Real; Yeom, Seok Ran; Han, Sang Kyoon; Park, Seong Wook

    2016-08-01

    When performing cardiopulmonary resuscitation (CPR), the 2010 American Heart Association guidelines recommend a chest compression rate of at least 100 compressions/min, whereas the 2010 European Resuscitation Council guidelines recommend a rate of between 100 and 120 compressions/min. The aim of this study was to examine the rate of chest compression that fulfilled various quality indicators, thereby determining the optimal rate of compression. Thirty-two trainee emergency medical technicians and six paramedics were enrolled in this study. All participants had been trained in basic life support. Each participant performed 2 min of continuous compressions on a skill reporter manikin, while listening to a metronome sound at rates of 100, 120, 140, and 160 beats/min, in a random order. Mean compression depth, incomplete chest recoil, and the proportion of correctly performed chest compressions during the 2 min were measured and recorded. The rate of incomplete chest recoil was lower at compression rates of 100 and 120/min compared with that at 160/min (P=0.001). The number of compressions that fulfilled the criteria for high-quality CPR at a rate of 120/min was significantly higher than that at 100/min (P=0.016). The number of high-quality CPR compressions was highest at a compression rate of 120/min, and incomplete recoil increased with increasing compression rate. However, further studies are needed to confirm the results.

  5. Time-Varying Distortions of Binaural Information by Bilateral Hearing Aids

    Science.gov (United States)

    Rodriguez, Francisco A.; Portnuff, Cory D. F.; Goupell, Matthew J.; Tollin, Daniel J.

    2016-01-01

    In patients with bilateral hearing loss, the use of two hearing aids (HAs) offers the potential to restore the benefits of binaural hearing, including sound source localization and segregation. However, existing evidence suggests that bilateral HA users’ access to binaural information, namely interaural time and level differences (ITDs and ILDs), can be compromised by device processing. Our objective was to characterize the nature and magnitude of binaural distortions caused by modern digital behind-the-ear HAs using a variety of stimuli and HA program settings. Of particular interest was a common frequency-lowering algorithm known as nonlinear frequency compression, which has not previously been assessed for its effects on binaural information. A binaural beamforming algorithm was also assessed. Wide dynamic range compression was enabled in all programs. HAs were placed on a binaural manikin, and stimuli were presented from an arc of loudspeakers inside an anechoic chamber. Stimuli were broadband noise bursts, 10-Hz sinusoidally amplitude-modulated noise bursts, or consonant–vowel–consonant speech tokens. Binaural information was analyzed in terms of ITDs, ILDs, and interaural coherence, both for whole stimuli and in a time-varying sense (i.e., within a running temporal window) across four different frequency bands (1, 2, 4, and 6 kHz). Key findings were: (a) Nonlinear frequency compression caused distortions of high-frequency envelope ITDs and significantly reduced interaural coherence. (b) For modulated stimuli, all programs caused time-varying distortion of ILDs. (c) HAs altered the relationship between ITDs and ILDs, introducing large ITD–ILD conflicts in some cases. Potential perceptual consequences of measured distortions are discussed. PMID:27698258

  6. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of FIC because it is time-consuming. These attempts reduce the size of the search pool for range-domain matching, but most of them degrade the quality or lower the compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block. The results of the full search algorithm and the proposed entropy-based algorithm are then compared to determine which gives the best results, namely reduced encoding time with acceptable values of both compression quality parameters: C.R (compression ratio) and PSNR (image quality). The experimental results show that the proposed entropy technique reduces the encoding time while keeping the compression ratio and reconstructed image quality as good as possible.
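    The entropy screening idea can be sketched as follows: compute a Shannon entropy per block and let a range block search only domains of similar entropy, shrinking the full-search pool. Block contents, sizes, and the tolerance below are illustrative assumptions, not the paper's settings.

```python
import math
from collections import Counter

# Entropy-based domain-pool reduction sketch for fractal image compression.

def block_entropy(block):
    """Shannon entropy (bits/pixel) of a flat list of pixel values."""
    counts = Counter(block)
    n = len(block)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def reduced_domain_pool(range_block, domain_blocks, tol=0.5):
    """Keep only domains whose entropy is within `tol` bits of the range's."""
    h_r = block_entropy(range_block)
    return [d for d in domain_blocks if abs(block_entropy(d) - h_r) <= tol]

range_blk = [0, 0, 255, 255]     # high-contrast block, entropy = 1 bit
domains = [
    [0, 0, 0, 0],                # flat block: entropy 0, filtered out
    [10, 10, 200, 200],          # entropy 1 bit, kept for full search
    [1, 2, 3, 4],                # entropy 2 bits, filtered out
]
pool = reduced_domain_pool(range_blk, domains)
```

    Only the surviving pool is then searched exhaustively for the best affine range-domain match, which is where the encoding-time saving comes from.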

  7. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    Science.gov (United States)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.
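    The core idea of multi-rate integration, taking small steps only where the fast physics lives, can be shown with a first-order two-rate scheme on a toy stiff/non-stiff pair (the talk's MRAB schemes are higher-order Adams-Bashforth; the ODE system, rates, and step sizes here are assumptions for illustration):

```python
# Minimal two-rate time integration sketch: the fast variable takes M
# substeps of forward Euler (first-order Adams-Bashforth) per macro step,
# while the slow variable advances once per macro step.

def multirate_step(y_slow, y_fast, dt, substeps):
    # fast dynamics: dy_f/dt = -50 * y_f        (time scale ~ 1/50)
    # slow dynamics: dy_s/dt = -y_s + 0.1 * y_f (time scale ~ 1)
    for _ in range(substeps):
        y_fast += (dt / substeps) * (-50.0 * y_fast)
    y_slow += dt * (-y_slow + 0.1 * y_fast)  # uses the freshest fast value
    return y_slow, y_fast

y_s, y_f = 1.0, 1.0
dt, M = 0.02, 20          # fast substep = dt/M = 0.001, stable for rate 50
for _ in range(100):      # integrate to t = 2.0
    y_s, y_f = multirate_step(y_s, y_f, dt, M)
```

    With a single-rate scheme the macro step dt = 0.02 would be unstable for the fast component (|1 - 50*dt| = 0 is marginal here, and larger dt diverges); the multirate split keeps the slow variable on the cheap large step.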

  8. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    Gaudeau, Y.

    2006-12-01

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has so far favoured lossless compression, most applications suffer from the compression ratios that are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on the 3D (3-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform visually and numerically the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  9. 7th Conference on Non-Integer Order Calculus and Its Applications

    CERN Document Server

    Dworak, Paweł

    2016-01-01

    This volume is devoted to the presentation of new results of research on systems of non-integer order, also called fractional systems. Their analysis and practical implementation have been the object of spontaneous development for the last few decades. Fractional order models can depict a physical plant better than classical integer order ones. This covers different research fields such as insulator properties, visco-elastic materials, electrodynamics, electrothermal and electrochemical processes, economic process modelling, etc. On the other hand, fractional controllers often outperform their integer order counterparts. This volume contains new ideas and examples of implementation, and theoretical and purely practical aspects of using non-integer order calculus. It is divided into four parts covering: mathematical fundamentals, modeling and approximations, controllability, observability and stability problems, and practical applications of fractional control systems. The first part expands the base of tools and methods of th...

  10. System for γ-γ-coincidence spectra processing with data compression

    International Nuclear Information System (INIS)

    Byalko, A.A.; Volkov, N.G.; Tsupko-Sitnikov, V.M.; Churakov, A.K.

    1982-01-01

    A computational algorithm and program for analyzing gamma-gamma coincidence spectra, based on the method of expansion in singular values (the SVD method) for data compression, are described. Results of testing the program on a coincidence spectrum for the low-energy region of transitions corresponding to the decay 164 Lu → 164 Yb are given. The program is written in FORTRAN and runs on the ES-1040 computer; the computation time is about 20 min. It is concluded that the SVD method permits correction of the data by filtering out distortions caused by statistical deviations and random interference, without distorting the underlying data. The compressed data correspond more closely to the theoretical line shapes of the semiconductor detector and to the two-dimensional line in the coincidence spectrum
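    The SVD compression step can be sketched as a rank truncation: keep the leading singular components of the 2D coincidence matrix and drop the rest, which for counting data carry mostly statistical noise. A minimal numpy sketch on a synthetic, exactly rank-1 "spectrum" (the matrix and chosen rank are illustrative, not the paper's data):

```python
import numpy as np

# Rank truncation via SVD: the best rank-k approximation in the least-squares
# sense (Eckart-Young) keeps only the k largest singular components.

def svd_compress(matrix, rank):
    """Return the best rank-`rank` approximation of `matrix`."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# A noise-free "coincidence spectrum" that is exactly rank 1: the outer
# product of two 1D peak profiles, so a rank-1 truncation recovers it exactly.
peak = np.array([0.0, 1.0, 4.0, 1.0, 0.0])
spectrum = np.outer(peak, peak)
approx = svd_compress(spectrum, rank=1)
```

    For a real, noisy spectrum the truncation is lossy: the discarded small singular values act as a noise filter, which matches the filtering behaviour the abstract describes.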

  11. Expansion and Compression of Time Correlate with Information Processing in an Enumeration Task.

    Directory of Open Access Journals (Sweden)

    Andreas Wutz

    Full Text Available Perception of temporal duration is subjective and is influenced by factors such as attention and context. For example, unexpected or emotional events are often experienced as if time subjectively expands, suggesting that the amount of information processed in a unit of time can be increased. Time dilation effects have been measured with an oddball paradigm in which an infrequent stimulus is perceived to last longer than standard stimuli in the rest of the sequence. Likewise, time compression for the oddball occurs when the duration of the standard items is relatively brief. Here, we investigated whether the amount of information processing changes when time is perceived as distorted. On each trial, an oddball stimulus of varying numerosity (1-14 items) and duration was presented along with standard items that were either short (70 ms) or long (1050 ms). Observers were instructed to count the number of dots within the oddball stimulus and to judge its relative duration with respect to the standards on that trial. Consistent with previous results, oddballs were reliably perceived as temporally distorted: expanded in blocks with longer standard stimuli and compressed with shorter standards. The occurrence of these distortions of time perception correlated with perceptual processing; i.e., enumeration accuracy increased when time was perceived as expanded and decreased with temporal compression. These results suggest that subjective time distortions are not epiphenomenal, but reflect real changes in sensory processing. Such short-term plasticity in information processing rate could be evolutionarily advantageous in optimizing perception and action during critical moments.

  12. Integer-valued trawl processes

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.; Lunde, Asger; Shephard, Neil

    2014-01-01

    This paper introduces a new continuous-time framework for modelling serially correlated count and integer-valued data. The key component in our new model is the class of integer-valued trawl processes, which are serially correlated, stationary, infinitely divisible processes. We analyse the probabilistic properties of such processes in detail and, in addition, study volatility modulation and multivariate extensions within the new modelling framework. Moreover, we describe how the parameters of a trawl process can be estimated and obtain promising estimation results in our simulation study. Finally...

  13. Integer and combinatorial optimization

    CERN Document Server

    Nemhauser, George L

    1999-01-01

    Rave reviews for INTEGER AND COMBINATORIAL OPTIMIZATION: "This book provides an excellent introduction and survey of traditional fields of combinatorial optimization . . . It is indeed one of the best and most complete texts on combinatorial optimization . . . available. [And] with more than 700 entries, [it] has quite an exhaustive reference list." -Optima. "A unifying approach to optimization problems is to formulate them like linear programming problems, while restricting some or all of the variables to the integers. This book is an encyclopedic resource for such f

  14. A wavelet-based ECG delineation algorithm for 32-bit integer online processing.

    Science.gov (United States)

    Di Marco, Luigi Y; Chiari, Lorenzo

    2011-04-03

    Since the first well-known electrocardiogram (ECG) delineator based on Wavelet Transform (WT) presented by Li et al. in 1995, a significant research effort has been devoted to the exploitation of this promising method. Its ability to reliably delineate the major waveform components (mono- or bi-phasic P wave, QRS, and mono- or bi-phasic T wave) would make it a suitable candidate for efficient online processing of ambulatory ECG signals. Unfortunately, previous implementations of this method adopt non-linear operators such as root mean square (RMS) or floating point algebra, which are computationally demanding. This paper presents a 32-bit integer, linear algebra advanced approach to online QRS detection and P-QRS-T waves delineation of a single lead ECG signal, based on WT. The QRS detector performance was validated on the MIT-BIH Arrhythmia Database (sensitivity Se = 99.77%, positive predictive value P+ = 99.86%, on 109010 annotated beats) and on the European ST-T Database (Se = 99.81%, P+ = 99.56%, on 788050 annotated beats). The ECG delineator was validated on the QT Database, showing a mean error between manual and automatic annotation below 1.5 samples for all fiducial points: P-onset, P-peak, P-offset, QRS-onset, QRS-offset, T-peak, T-offset, and a mean standard deviation comparable to other established methods. The proposed algorithm exhibits reliable QRS detection as well as accurate ECG delineation, in spite of a simple structure built on integer linear algebra.
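    The integer-only arithmetic the paper relies on can be illustrated with one lifting step: a Haar-style predict/update pair using only integer adds, subtracts, and shifts, which is exactly invertible. This is a generic integer-lifting sketch, not the paper's quadratic-spline wavelet transform.

```python
# One level of the lifting Haar transform in pure integer arithmetic:
# predict (difference) then update (running average via arithmetic shift).
# Perfectly invertible, so no floating point or rounding error is introduced.

def haar_lift_forward(x):
    """x: even-length list of ints -> (approx, detail), all integers."""
    approx, detail = [], []
    for i in range(0, len(x), 2):
        d = x[i + 1] - x[i]     # predict step: detail coefficient
        a = x[i] + (d >> 1)     # update step: integer (floor) shift
        detail.append(d)
        approx.append(a)
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Exact inverse of haar_lift_forward."""
    x = []
    for a, d in zip(approx, detail):
        even = a - (d >> 1)     # undo update with the same shift
        x += [even, even + d]   # undo predict
    return x

sig = [10, 12, 9, 5, 7, 7, 3, 8]       # synthetic integer samples
a, d = haar_lift_forward(sig)
rec = haar_lift_inverse(a, d)          # rec == sig exactly
```

    Because both steps use the same `d >> 1` shift, forward and inverse cancel exactly on integers, which is what makes a fixed-point 32-bit online implementation feasible without drift.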

  15. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method by combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation for medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively. Graphical abstract.
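The LS-based prediction step can be illustrated with a minimal causal predictor that fits weights for the west and north neighbours by solving 2×2 normal equations over a causal training window. The window size and two-neighbour set are simplifications of what a full region-based codec would use.

```python
def ls_predict(img, y, x, win=6):
    """Least-squares prediction of img[y][x] from causal neighbours W and N,
    trained on pixels in a causal window (all strictly before (y, x))."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for j in range(max(1, y - win), y + 1):
        imax = x if j == y else len(img[0])    # current row: only columns < x
        for i in range(1, imax):
            w, n, v = img[j][i - 1], img[j - 1][i], img[j][i]
            s11 += w * w; s12 += w * n; s22 += n * n
            b1 += w * v;  b2 += n * v
    det = s11 * s22 - s12 * s12
    if abs(det) < 1e-9:                        # degenerate window: fall back
        return (img[y][x - 1] + img[y - 1][x]) / 2.0
    a = (b1 * s22 - b2 * s12) / det            # Cramer's rule for the
    b = (s11 * b2 - s12 * b1) / det            # 2x2 normal equations
    return a * img[y][x - 1] + b * img[y - 1][x]
```

On a linear ramp image the trained weights reproduce the plane exactly, so the residual (which is what gets entropy-coded) collapses to zero.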

  16. Custom Gradient Compression Stockings May Prevent Orthostatic Intolerance in Astronauts After Space Flight

    Science.gov (United States)

    Stenger, Michael B.; Lee, Stuart M. C.; Westby, Christian M.; Platts, Steven H.

    2010-01-01

    Orthostatic intolerance after space flight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. NASA astronauts currently wear an inflatable anti-gravity suit (AGS) during re-entry, but this device is uncomfortable and loses effectiveness upon egress from the Shuttle. We recently determined that thigh-high, gradient compression stockings were comfortable and effective after space flight, though to a lesser degree than the AGS. We also recently showed that addition of splanchnic compression to this thigh-high compression stocking paradigm improved orthostatic tolerance to a level similar to the AGS, in a ground-based model. Purpose: The purpose of this study was to evaluate a new, three-piece breast-high gradient compression garment as a countermeasure to post-space flight orthostatic intolerance. Methods: Eight U.S. astronauts volunteered for this experiment and were individually fitted for a three-piece, breast-high compression garment to provide 55 mmHg compression at the ankle, decreasing to approximately 20 mmHg at the top of the leg, with 15 mmHg over the abdomen. Orthostatic testing occurred 30 days pre-flight (without garment) and 2 hours after flight (with garment) on landing day. Blood pressure (BP), heart rate (HR) and stroke volume (SV) were acquired for 2 minutes while the subject lay prone and then for 3.5 minutes after the subject stood up. To date, two astronauts have completed pre- and post-space flight testing. Data are mean ± SD. Results: BP [pre (prone to stand): 137±1.6 to 129±2.5; post: 130±2.4 to 122±1.6 mmHg] and SV [pre (prone to stand): 61±1.6 to 38±0.2; post: 58±6.4 to 37±6.0 ml] decreased with standing, but no differences were seen post-flight with compression garments compared to pre-flight without garments. HR [pre (prone to stand): 66±1.6 to 74±3.0; post: 67±5.6 to 78±6.8 bpm] increased with standing, but no differences were seen pre- to post-flight. Conclusion: After space

  17. A Statistically-Hiding Integer Commitment Scheme Based on Groups with Hidden Order

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Fujisaki, Eiichiro

    2002-01-01

    We present a statistically-hiding commitment scheme allowing commitment to arbitrary size integers, based on any (Abelian) group with certain properties, most importantly, that it is hard for the committer to compute its order. We also give efficient zero-knowledge protocols for proving knowledge...... input is chosen by the (possibly cheating) prover. -  - Our results apply to any group with suitable properties. In particular, they apply to a much larger class of RSA moduli than the safe prime products proposed in [14] - Potential examples include RSA moduli, class groups and, with a slight...

  18. INCREASE OF STABILITY AT JPEG COMPRESSION OF THE DIGITAL WATERMARKS EMBEDDED IN STILL IMAGES

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-07-01

    Full Text Available Subject of Research. The paper deals with creation and research of method for increasing stability at JPEG compressing of digital watermarks embedded in still images. Method. A new algorithm of digital watermarking for still images which embeds digital watermark into a still image via modification of frequency coefficients for Hadamard discrete transformation is presented. The choice of frequency coefficients for embedding of a digital watermark is based on existence of sharp change of their values after modification at the maximum compression of JPEG. The choice of blocks of pixels for embedding is based on the value of their entropy. The new algorithm was subjected to the analysis of resistance to an image compression, noising, filtration, change of size, color and histogram equalization. Elham algorithm possessing a good resistance to JPEG compression was chosen for comparative analysis. Nine gray-scale images were selected as objects for protection. Obscurity of the distortions embedded in them was defined on the basis of the peak value of a signal to noise ratio which should be not lower than 43 dB for obscurity of the brought distortions. Resistibility of embedded watermark was determined by the Pearson correlation coefficient, which value should not be below 0.5 for the minimum allowed stability. The algorithm of computing experiment comprises: watermark embedding into each test image by the new algorithm and Elham algorithm; introducing distortions to the object of protection; extracting of embedded information with its subsequent comparison with the original. Parameters of the algorithms were chosen so as to provide approximately the same level of distortions introduced into the images. Main Results. The method of preliminary processing of digital watermark presented in the paper makes it possible to reduce significantly the volume of information embedded in the still image. The results of numerical experiment have shown that the
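The frequency-coefficient embedding idea can be sketched with a Sylvester-ordered Walsh–Hadamard transform on an 8×8 block. Forcing the sign of one mid-frequency coefficient (with a hypothetical strength constant) is a much cruder rule than the entropy- and robustness-driven coefficient selection described above, but it shows the embed/extract round trip.

```python
def hadamard(n):
    """Sylvester construction of the order-n (power of two) Hadamard matrix."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def wht2d(X, H):
    """Forward 2D Walsh-Hadamard transform: Y = H X H."""
    return matmul(matmul(H, X), H)

def embed_bit(X, bit, H, strength=256):
    """Embed one bit by forcing the sign of a mid-frequency coefficient."""
    Y = wht2d(X, H)
    Y[1][2] = strength if bit else -strength
    n = len(H)
    Z = matmul(matmul(H, Y), H)                   # inverse up to a factor n^2
    return [[round(v / (n * n)) for v in row] for row in Z]

def extract_bit(X, H):
    return 1 if wht2d(X, H)[1][2] > 0 else 0
```

Because H·H = nI, the rounding error of the inverse perturbs each coefficient by at most n²/2 = 32, well below the assumed strength of 256, so the bit survives reconstruction.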

  19. Energy Cascade Rate in Compressible Fast and Slow Solar Wind Turbulence

    Energy Technology Data Exchange (ETDEWEB)

    Hadid, L. Z.; Sahraoui, F.; Galtier, S., E-mail: lina.hadid@lpp.polytechnique.fr [LPP, CNRS, Ecole Polytechnique, UPMC Univ Paris 06, Univ. Paris-Sud, Observatoire de Paris, Université Paris-Saclay, Sorbonne Universités, PSL Research University, F-91128 Palaiseau (France)

    2017-03-20

    Estimation of the energy cascade rate in the inertial range of solar wind turbulence has been done so far mostly within incompressible magnetohydrodynamics (MHD) theory. Here, we go beyond that approximation to include plasma compressibility using a reduced form of a recently derived exact law for compressible, isothermal MHD turbulence. Using in situ data from the THEMIS / ARTEMIS spacecraft in the fast and slow solar wind, we investigate in detail the role of the compressible fluctuations in modifying the energy cascade rate with respect to the prediction of the incompressible MHD model. In particular, we found that the energy cascade rate (1) is amplified particularly in the slow solar wind; (2) exhibits weaker fluctuations in spatial scales, which leads to a broader inertial range than the previous reported ones; (3) has a power-law scaling with the turbulent Mach number; (4) has a lower level of spatial anisotropy. Other features of solar wind turbulence are discussed along with their comparison with previous studies that used incompressible or heuristic (nonexact) compressible MHD models.
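The incompressible baseline against which the compressible cascade rate is compared is usually the Politano–Pouquet exact law; a standard statement (quoted from the general literature, not from this paper) is:

```latex
% Politano--Pouquet exact law for incompressible MHD turbulence:
% \delta z^{\pm} = \delta v \pm \delta b are Elsasser-field increments across
% a lag \ell, subscript L denotes the longitudinal component, and
% \varepsilon^{\pm} are the pseudo-energy cascade rates.
\left\langle \delta z^{\mp}_{L} \,\lvert \delta \mathbf{z}^{\pm} \rvert^{2} \right\rangle
  \;=\; -\tfrac{4}{3}\,\varepsilon^{\pm}\,\ell .
```

The compressible, isothermal generalisation used in the paper adds density-weighted terms to this third-order structure-function relation.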

  20. Energy Cascade Rate in Compressible Fast and Slow Solar Wind Turbulence

    International Nuclear Information System (INIS)

    Hadid, L. Z.; Sahraoui, F.; Galtier, S.

    2017-01-01

    Estimation of the energy cascade rate in the inertial range of solar wind turbulence has been done so far mostly within incompressible magnetohydrodynamics (MHD) theory. Here, we go beyond that approximation to include plasma compressibility using a reduced form of a recently derived exact law for compressible, isothermal MHD turbulence. Using in situ data from the THEMIS / ARTEMIS spacecraft in the fast and slow solar wind, we investigate in detail the role of the compressible fluctuations in modifying the energy cascade rate with respect to the prediction of the incompressible MHD model. In particular, we found that the energy cascade rate (1) is amplified particularly in the slow solar wind; (2) exhibits weaker fluctuations in spatial scales, which leads to a broader inertial range than the previous reported ones; (3) has a power-law scaling with the turbulent Mach number; (4) has a lower level of spatial anisotropy. Other features of solar wind turbulence are discussed along with their comparison with previous studies that used incompressible or heuristic (nonexact) compressible MHD models.

  1. An integer ambiguity resolution method for the global positioning system (GPS)-based land vehicle attitude determination

    International Nuclear Information System (INIS)

    Wang, Bo; Miao, Lingjuan; Wang, Shunting; Shen, Jun

    2009-01-01

    During attitude determination using a global positioning system (GPS), cycle slips occur due to the loss of lock and noise disturbance. Therefore, the integer ambiguity needs re-computation to isolate the error in carrier phase. This paper presents a fast method for integer ambiguity resolution for land vehicle application. After the cycle slips are detected, the velocity vector is utilized to obtain the rough baseline vector. The obtained baseline vector is substituted into carrier phase observation equations to solve the float ambiguity solution which can be used as a constraint to accelerate the integer ambiguity search procedure at next epochs. The probability of correct integer estimation in the expanded search space is analyzed. Experimental results demonstrate that the proposed method gives a fast approach to obtain new fixed ambiguities while the regular method takes longer time and sometimes results in incorrect solutions
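The search over integer candidates around a float solution can be sketched as a bounded enumeration minimising the ambiguity quadratic form. Real GPS processing would use a decorrelating search such as LAMBDA rather than this brute-force box; the 2-D case and the search radius here are simplifications.

```python
def ils_search(a_float, Qinv, radius=2):
    """Integer least squares by enumeration: try integer vectors in a box
    around the rounded float ambiguities and keep the minimiser of
    (a - a_float)^T Qinv (a - a_float).  Qinv is the 2x2 weight matrix."""
    base = [round(v) for v in a_float]
    best, best_cost = None, float("inf")
    for d0 in range(-radius, radius + 1):
        for d1 in range(-radius, radius + 1):
            cand = [base[0] + d0, base[1] + d1]
            r = [cand[i] - a_float[i] for i in range(2)]
            cost = sum(r[i] * Qinv[i][j] * r[j]
                       for i in range(2) for j in range(2))
            if cost < best_cost:
                best, best_cost = cand, cost
    return best, best_cost
```

With an identity weight matrix the minimiser is simply the component-wise rounding; with a strongly correlated weight matrix (the realistic GPS case) the optimum can differ from rounding, which is why a proper search is needed.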

  2. Physics-Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345. Physics-Based Modeling of Compressible Turbulence. Parviz Moin, Leland Stanford Junior Univ CA. Final Report, 09/13/2016, on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011

  3. After microvascular decompression to treat trigeminal neuralgia, both immediate pain relief and recurrence rates are higher in patients with arterial compression than with venous compression.

    Science.gov (United States)

    Shi, Lei; Gu, Xiaoyan; Sun, Guan; Guo, Jun; Lin, Xin; Zhang, Shuguang; Qian, Chunfa

    2017-07-04

    We explored differences in postoperative pain relief achieved through decompression of the trigeminal nerve compressed by arteries and veins. Clinical characteristics, intraoperative findings, and postoperative curative effects were analyzed in 72 patients with trigeminal neuralgia who were treated by microvascular decompression. The patients were divided into arterial and venous compression groups based on intraoperative findings. Surgical curative effects included immediate relief, delayed relief, obvious reduction, and invalid result. Among the 40 patients in the arterial compression group, 32 had immediate relief of pain (80.0%), 5 cases had delayed relief (12.5%), and 3 cases had an obvious reduction (7.5%). In the venous compression group, 12 patients had immediate relief of pain (37.5%), 13 cases had delayed relief (40.6%), and 7 cases had an obvious reduction (21.9%). During 2-year follow-up period, 6 patients in the arterial compression group experienced recurrence of trigeminal neuralgia, but there were no recurrences in the venous compression group. Simple artery compression was followed by early relief of trigeminal neuralgia more often than simple venous compression. However, the trigeminal neuralgia recurrence rate was higher in the artery compression group than in the venous compression group.

  4. 'Distorted structure modelling' - a more physical approach to Rapid Distortion Theory

    International Nuclear Information System (INIS)

    Savill, A.M.

    1979-11-01

    Rapid Distortion Theory is reviewed in the light of the modern mechanistic approach to turbulent motion. The apparent failure of current models, based on this theory, to predict stress intensity ratios accurately in distorted shear flows is attributed to their oversimplistic assumptions concerning the inherent turbulence structure of such flows. A more realistic picture of this structure and the manner in which it responds to distortion is presented in terms of interactions between the mean flow and three principal types of eddies. If Rapid Distortion Theory is modified to account for this it is shown that the stress intensity ratios can be accurately predicted in three test flows. It is concluded that a computational scheme based on Rapid Distortion Theory might ultimately be capable of predicting turbulence parameters in the highly complex geometries of reactor cooling systems. (author)

  5. Plastic cap evolution law derived from induced transverse isotropy in dilatational triaxial compression.

    Energy Technology Data Exchange (ETDEWEB)

    Macon, David James [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brannon, Rebecca Moss [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Strack, Otto Eric [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-02-01

    Mechanical testing of porous materials generates physical data that contain contributions from more than one underlying physical phenomenon. All that is measurable is the "ensemble" hardening modulus. This thesis is concerned with the phenomenon of dilatation in triaxial compression of porous media, which has been modeled very accurately in the literature for monotonic loading using models that predict dilatation under triaxial compression (TXC) by presuming that dilatation causes the cap to move outwards. These existing models, however, predict a counter-intuitive (and never validated) increase in hydrostatic compression strength. This work explores an alternative approach for modeling TXC dilatation based on allowing induced elastic anisotropy (which makes the material both less stiff and less strong in the lateral direction) with no increase in hydrostatic strength. Induced elastic anisotropy is introduced through the use of a distortion operator. This operator is a fourth-order tensor consisting of a combination of the undeformed stiffness and deformed compliance and has the same eigenprojectors as the elastic compliance. In the undeformed state, the distortion operator is equal to the fourth-order identity. Through the use of the distortion operator, an evolved stress tensor is introduced. When the evolved stress tensor is substituted into an isotropic yield function, a new anisotropic yield function results. In the case of the von Mises isotropic yield function (which contains only deviatoric components), it is shown that the distortion operator introduces a dilatational contribution without requiring an increase in hydrostatic strength. In the thesis, an introduction and literature review of the cap function is given. A transversely isotropic compliance is presented, based on a linear combination of natural bases constructed about a transverse-symmetry axis. Using a probabilistic distribution of cracks constructed for the case of transverse isotropy, a

  6. Nonlinear feedback synchronisation control between fractional-order and integer-order chaotic systems

    International Nuclear Information System (INIS)

    Jia Li-Xin; Dai Hao; Hui Meng

    2010-01-01

    This paper focuses on the synchronisation between fractional-order and integer-order chaotic systems. Based on Lyapunov stability theory and numerical differentiation, a nonlinear feedback controller is obtained to achieve the synchronisation between fractional-order and integer-order chaotic systems. Numerical simulation results are presented to illustrate the effectiveness of this method
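The numerical-differentiation ingredient of such controllers can be illustrated with the Grünwald–Letnikov discretisation commonly used for fractional-order terms (a standard scheme from the fractional-calculus literature, not necessarily the one used in the paper):

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights c_j = (-1)^j C(alpha, j),
    via the standard recurrence c_0 = 1, c_j = c_{j-1} (1 - (alpha+1)/j)."""
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
    return c

def gl_derivative(f_hist, alpha, h):
    """Approximate the order-alpha derivative at the newest sample:
    D^alpha f(t) ~ h^(-alpha) * sum_j c_j f(t - j h).
    f_hist[0] is the oldest sample, f_hist[-1] the newest; h is the step."""
    c = gl_weights(alpha, len(f_hist))
    return sum(cj * f_hist[-1 - j] for j, cj in enumerate(c)) / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, ...), recovering the ordinary backward difference, which is a quick sanity check.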

  7. Harmonic oscillator states with integer and non-integer orbital angular momentum

    International Nuclear Information System (INIS)

    Land, Martin

    2011-01-01

    We study the quantum mechanical harmonic oscillator in two and three dimensions, with particular attention to the solutions as basis states for representing their respective symmetry groups — O(2), O(1,1), O(3), and O(2,1). The goal of this study is to establish a correspondence between Hilbert space descriptions found by solving the Schrodinger equation in polar coordinates, and Fock space descriptions constructed by expressing the symmetry operators in terms of creation/annihilation operators. We obtain wavefunctions characterized by a principal quantum number, the group Casimir eigenvalue, and one group generator whose eigenvalue is m + s, for integer m and real constant parameter s. For the three groups that contain O(2), the solutions split into two inequivalent representations, one associated with s = 0, from which we recover the familiar description of the oscillator as a product of one-dimensional solutions, and the other with s > 0 (in three dimensions, solutions are found for s = 0 and s = 1/2) whose solutions are non-separable in Cartesian coordinates, and are hence overlooked by the standard Fock space approach. The O(1,1) solutions are singlet states, restricted to zero eigenvalue of the symmetry operator, which represents the boost, not angular momentum. For O(2), a single set of creation and annihilation operators forms a ladder representation for the allowed oscillator states for any s, and the degeneracy of energy states is always finite. However, in three dimensions, the integer and half-integer eigenstates are qualitatively different: the former can be expressed as finite dimensional irreducible tensors under O(3) or O(2,1) while the latter exhibit infinite degeneracy. Creation operators that produce the allowed integer states by acting on the non-degenerate ground state are constructed as irreducible tensor products of the fundamental vector representation. 
However, the half-integer eigenstates are infinite-dimensional, as expected for the non
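The familiar s = 0 Fock-space construction referred to above can be made concrete in two dimensions with circular (chiral) ladder operators; in standard notation (not specific to this paper):

```latex
% Circular ladder operators for the 2D oscillator (the s = 0 case):
a_{\pm} = \frac{a_x \mp i\,a_y}{\sqrt{2}}, \qquad
H = \hbar\omega\left(a_{+}^{\dagger}a_{+} + a_{-}^{\dagger}a_{-} + 1\right), \qquad
L_z = \hbar\left(a_{+}^{\dagger}a_{+} - a_{-}^{\dagger}a_{-}\right),
% so L_z eigenvalues are m\hbar with integer m = n_{+} - n_{-}.
```

The s > 0 representations discussed in the abstract shift these L_z eigenvalues to (m + s)ħ and are precisely the sectors this Cartesian construction misses.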

  8. Reversible Integer Wavelet Transform for the Joint of Image Encryption and Watermarking

    Directory of Open Access Journals (Sweden)

    Bin Wang

    2015-01-01

    Full Text Available In recent years, signal processing in the encrypted domain has attracted considerable research interest, especially embedding watermarking in encrypted image. In this work, a novel joint of image encryption and watermarking based on reversible integer wavelet transform is proposed. Firstly, the plain-image is encrypted by chaotic maps and reversible integer wavelet transform. Then the lossless watermarking is embedded in the encrypted image by reversible integer wavelet transform and histogram modification. Finally an encrypted image containing watermarking is obtained by the inverse integer wavelet transform. What is more, the original image and watermarking can be completely recovered by inverse process. Numerical experimental results and comparing with previous works show that the proposed scheme possesses higher security and embedding capacity than previous works. It is suitable for protecting the image information.
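The reversible integer wavelet at the heart of such schemes can be illustrated with one lifting level of the integer Haar (S) transform: the floor divisions lose information pointwise, but the lifting structure reconstructs losslessly. (Even-length input is assumed in this sketch.)

```python
def haar_fwd(x):
    """One level of the reversible integer Haar (S) transform."""
    s = [(a + b) >> 1 for a, b in zip(x[::2], x[1::2])]   # rounded averages
    d = [a - b for a, b in zip(x[::2], x[1::2])]          # differences
    return s, d

def haar_inv(s, d):
    """Exact inverse: since s = b + floor(d/2), b = s - (d >> 1)."""
    out = []
    for si, di in zip(s, d):
        b = si - (di >> 1)
        out += [b + di, b]
    return out
```

The round trip is exact for arbitrary (including negative) integer samples, which is the property that makes the joint encryption/watermarking pipeline fully reversible.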

  9. Quasi-greedy systems of integer translates

    DEFF Research Database (Denmark)

    Nielsen, Morten; Sikic, Hrvoje

    We consider quasi-greedy systems of integer translates in a finitely generated shift invariant subspace of L2(Rd), that is systems for which the thresholding approximation procedure is well behaved. We prove that every quasi-greedy system of integer translates is also a Riesz basis for its closed...

  10. Quasi-greedy systems of integer translates

    DEFF Research Database (Denmark)

    Nielsen, Morten; Sikic, Hrvoje

    2008-01-01

    We consider quasi-greedy systems of integer translates in a finitely generated shift-invariant subspace of L2(Rd), that is systems for which the thresholding approximation procedure is well behaved. We prove that every quasi-greedy system of integer translates is also a Riesz basis for its closed...

  11. Forecasting of magnitude and duration of currency crises based on the analysis of distortions of fractal scaling in exchange rate fluctuations

    Science.gov (United States)

    Uritskaya, Olga Y.

    2005-05-01

    Results of fractal stability analysis of daily exchange rate fluctuations of more than 30 floating currencies for a 10-year period are presented. It is shown for the first time that small- and large-scale dynamical instabilities of national monetary systems correlate with deviations of the detrended fluctuation analysis (DFA) exponent from the value 1.5 predicted by the efficient market hypothesis. The observed dependence is used for classification of long-term stability of floating exchange rates as well as for revealing various forms of distortion of stable currency dynamics prior to large-scale crises. A normal range of DFA exponents consistent with crisis-free long-term exchange rate fluctuations is determined, and several typical scenarios of unstable currency dynamics with DFA exponents fluctuating beyond the normal range are identified. It is shown that monetary crashes are usually preceded by prolonged periods of abnormal (decreased or increased) DFA exponent, with the after-crash exponent tending to the value 1.5 indicating a more reliable exchange rate dynamics. Statistically significant regression relations (R = 0.99) between the magnitude and duration of currency crises and the degree of distortion of monofractal patterns of exchange rate dynamics are found. It is demonstrated that the parameters of these relations characterizing small- and large-scale crises are nearly equal, which implies a common instability mechanism underlying these events. The obtained dependences have been used as a basic ingredient of a forecasting technique which provided correct in-sample predictions of monetary crisis magnitude and duration over various time scales. The developed technique can be recommended for real-time monitoring of dynamical stability of floating exchange rate systems and creating advanced early-warning-system models for currency crisis prevention.
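The DFA exponent used throughout can be computed as follows (first-order detrending; the box sizes here are illustrative):

```python
import math
import random

def dfa_exponent(x, box_sizes):
    """Detrended fluctuation analysis with linear detrending:
    the exponent is the slope of log F(n) versus log n."""
    mean = sum(x) / len(x)
    prof, c = [], 0.0
    for v in x:                       # integrate the mean-removed series
        c += v - mean
        prof.append(c)
    pts = []
    for n in box_sizes:
        m = len(prof) // n
        sq = 0.0
        for k in range(m):
            seg = prof[k * n:(k + 1) * n]
            tm, sm = (n - 1) / 2.0, sum(seg) / n
            cov = sum((t - tm) * (s - sm) for t, s in enumerate(seg))
            var = sum((t - tm) ** 2 for t in range(n))
            slope = cov / var          # least-squares trend inside the box
            icpt = sm - slope * tm
            sq += sum((s - (icpt + slope * t)) ** 2 for t, s in enumerate(seg))
        pts.append((math.log(n), math.log(math.sqrt(sq / (m * n)))))
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    return (sum((a - mx) * (b - my) for a, b in pts)
            / sum((a - mx) ** 2 for a, _ in pts))
```

For uncorrelated (white) noise the exponent is close to 0.5; values near 1.5 correspond to the integrated, efficient-market benchmark discussed in the abstract.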

  12. Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol

    Science.gov (United States)

    Huang, Xiaowan; Singh, Anu; Smolka, Scott A.

    2010-01-01

    We use the UPPAAL model checker for Timed Automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN). The TPSN protocol seeks to provide network-wide synchronization of the distributed clocks in a sensor network. Clock-synchronization algorithms for sensor networks such as TPSN must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of Timed Automata. To overcome this formal-modeling obstacle, we augment the UPPAAL specification language with the integer clock derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN protocol. With this integer-clock-based model of TPSN in hand, we use UPPAAL to verify that the protocol achieves network-wide time synchronization and is devoid of deadlock. We also use the UPPAAL Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution
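The clock arithmetic that integer clocks must support is TPSN's two-way timestamp exchange; with integer tick counts the standard offset and delay estimates are:

```python
def tpsn_sync(t1, t2, t3, t4):
    """One TPSN two-way exchange between nodes A and B.
    A sends at t1 (A's clock); B receives at t2 and replies at t3 (B's clock);
    A receives the reply at t4 (A's clock).  All values are integer ticks;
    // floors the result when the sums are odd."""
    offset = ((t2 - t1) - (t4 - t3)) // 2   # B's clock minus A's clock
    delay  = ((t2 - t1) + (t4 - t3)) // 2   # one-way propagation delay
    return offset, delay
```

For example, with a true offset of 5 ticks and a symmetric 3-tick propagation delay, the exchange (t1, t2, t3, t4) = (100, 108, 110, 108) recovers both values exactly.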

  13. Toward topology-based characterization of small-scale mixing in compressible turbulence

    Science.gov (United States)

    Suman, Sawan; Girimaji, Sharath

    2011-11-01

    Turbulent mixing rate at small scales of motion (molecular mixing) is governed by the steepness of the scalar-gradient field which in turn is dependent upon the prevailing velocity gradients. Thus motivated, we propose a velocity-gradient topology-based approach for characterizing small-scale mixing in compressible turbulence. We define a mixing efficiency metric that is dependent upon the topology of the solenoidal and dilatational deformation rates of a fluid element. The mixing characteristics of solenoidal and dilatational velocity fluctuations are clearly delineated. We validate this new approach by employing mixing data from direct numerical simulations (DNS) of compressible decaying turbulence with passive scalar. For each velocity-gradient topology, we compare the mixing efficiency predicted by the topology-based model with the corresponding conditional scalar variance obtained from DNS. The new mixing metric accurately distinguishes good and poor mixing topologies and indeed reasonably captures the numerical values. The results clearly demonstrate the viability of the proposed approach for characterizing and predicting mixing in compressible flows.
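The velocity-gradient ingredients of such a characterisation can be sketched as the invariants (P, Q, R) used for topology classification, after splitting off the dilatational (isotropic) part of the gradient tensor:

```python
def invariants(A):
    """First three invariants of a 3x3 velocity-gradient tensor:
    P = -tr(A), Q = (P^2 - tr(A^2))/2, R = -det(A)."""
    tr = A[0][0] + A[1][1] + A[2][2]
    A2 = [[sum(A[i][k] * A[k][j] for k in range(3)) for j in range(3)]
          for i in range(3)]
    trA2 = A2[0][0] + A2[1][1] + A2[2][2]
    det = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
         - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
         + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
    P = -tr
    return P, (P * P - trA2) / 2.0, -det

def split_dilatational(A):
    """Remove the isotropic part, leaving the solenoidal (traceless) gradient
    and the dilatation rate tr(A)."""
    theta = (A[0][0] + A[1][1] + A[2][2]) / 3.0
    S = [[A[i][j] - (theta if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    return S, 3.0 * theta
```

A pure solid-body rotation, for instance, has P = R = 0 and Q > 0 (vorticity-dominated topology) with zero dilatation, which places it in the poorly mixing part of the (Q, R) plane.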

  14. Effect of Compression Garments on Physiological Responses After Uphill Running.

    Science.gov (United States)

    Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc

    2018-03-01

    Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running with different pressure and distributions of applied compression. In a randomised, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation rate, with the intensity of 75% VO2max, while wearing low or medium grade compression garments or high reverse grade compression. In all the trials, compression garments were worn during 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments, with the time difference being: medium grade compression garments vs. high reverse grade compression garments. A positive trend in increasing peak torque of plantar flexion (60°·s⁻¹, 120°·s⁻¹) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle was in the medium grade compression garments, 24 hours post run, with the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent in medium grade compression garments.

  15. Effect of Compression Garments on Physiological Responses After Uphill Running

    Directory of Open Access Journals (Sweden)

    Struhár Ivan

    2018-03-01

    Full Text Available Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running with different pressure and distributions of applied compression. In a randomised, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation rate, with the intensity of 75% VO2max, while wearing low or medium grade compression garments or high reverse grade compression. In all the trials, compression garments were worn during 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments, with the time difference being: medium grade compression garments vs. high reverse grade compression garments. A positive trend in increasing peak torque of plantar flexion (60°·s⁻¹, 120°·s⁻¹) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle was in the medium grade compression garments, 24 hours post run, with the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent in medium grade compression garments.

  16. A quantum architecture for multiplying signed integers

    International Nuclear Information System (INIS)

    Alvarez-Sanchez, J J; Alvarez-Bravo, J V; Nieto, L M

    2008-01-01

    A new quantum architecture for multiplying signed integers is presented based on Booth's algorithm, which is well known in classical computation. It is shown how a quantum binary chain might be encoded by its flank changes, giving the final product in 2's-complement representation.
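For reference, the classical (non-quantum) Booth recoding that the architecture builds on can be written with the usual A/S/P register formulation on two's-complement words (the register layout below is the textbook one, not the paper's quantum encoding):

```python
def booth_multiply(m, r, bits=8):
    """Booth's algorithm for signed `bits`-bit operands.
    Avoid m = -2**(bits-1), whose negation overflows the register."""
    mask = (1 << bits) - 1
    total = 2 * bits + 1                  # width of the P register
    tmask = (1 << total) - 1
    A = (m & mask) << (bits + 1)          # multiplicand, aligned left
    S = ((-m) & mask) << (bits + 1)       # negated multiplicand
    P = (r & mask) << 1                   # multiplier with an appended 0 bit
    for _ in range(bits):
        pair = P & 3                      # inspect the two lowest bits
        if pair == 1:                     # 01 -> add the multiplicand
            P = (P + A) & tmask
        elif pair == 2:                   # 10 -> subtract the multiplicand
            P = (P + S) & tmask
        msb = P >> (total - 1)            # arithmetic right shift by one
        P = (P >> 1) | (msb << (total - 1))
    P >>= 1                               # drop the appended bit
    if P >> (2 * bits - 1):               # reinterpret as a signed product
        P -= 1 << (2 * bits)
    return P
```

Runs of equal bits in the multiplier trigger neither an add nor a subtract, which is exactly the redundancy Booth recoding exploits.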

  17. Encoding technique for high data compaction in data bases of fusion devices

    International Nuclear Information System (INIS)

    Vega, J.; Cremy, C.; Sanchez, E.; Portas, A.; Dormido, S.

    1996-01-01

    At present, data requirements of hundreds of Mbytes/discharge are typical in devices such as JET, TFTR, DIII-D, etc., and these requirements continue to increase. With these rates, the amount of storage required to maintain discharge information is enormous. Compaction techniques are now essential to reduce storage. However, general compression techniques may distort signals, but this is undesirable for fusion diagnostics. We have developed a general technique for data compression which is described here. The technique, which is based on delta compression, does not require an examination of the data as in delayed methods. Delta values are compacted according to general encoding forms which satisfy a prefix code property and which are defined prior to data capture. Several prefix codes, which are bit oriented and which have variable code lengths, have been developed. These encoding methods are independent of the signal analog characteristics and enable one to store undistorted signals. The technique has been applied to databases of the TJ-I tokamak and the TJ-IU torsatron. Compaction rates of over 80% with negligible computational effort were achieved. Computer programs were written in ANSI C, thus ensuring portability and easy maintenance. We also present an interpretation, based on information theory, of the high compression rates achieved without signal distortion. copyright 1996 American Institute of Physics
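A minimal sketch of the delta-plus-prefix-code idea: deltas against the previous sample are mapped to non-negative integers and emitted with a bit-oriented, variable-length code satisfying the prefix property (Elias gamma here, one of many possible codes; the zigzag mapping is an assumption of this sketch, not necessarily the encoding forms used at TJ-I/TJ-IU):

```python
def gamma_encode(n):
    """Elias gamma code for n >= 1: (len-1) zeros, then n in binary."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def encode(samples):
    bits, prev = [], 0
    for s in samples:
        d = s - prev                     # delta against the previous sample
        prev = s
        z = (d << 1) ^ (d >> 63)         # zigzag: signed -> non-negative
        bits.append(gamma_encode(z + 1)) # gamma needs values >= 1
    return "".join(bits)

def decode(stream):
    out, prev, i = [], 0, 0
    while i < len(stream):
        z = 0
        while stream[i] == "0":          # count leading zeros
            z += 1; i += 1
        n = int(stream[i:i + z + 1], 2)  # read z+1 payload bits
        i += z + 1
        v = n - 1
        d = (v >> 1) ^ -(v & 1)          # undo zigzag
        prev += d
        out.append(prev)
    return out
```

The round trip is exact, which is the point of the article's scheme: high compaction on slowly varying signals without any distortion.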

  18. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    Science.gov (United States)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
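The integer-adapted exploratory and pattern moves can be sketched as follows (unit steps, with a feasibility predicate standing in for the constraints; this is a simplification relative to IESIP's rounding and greedy refinements):

```python
def exploratory(x, f, feasible):
    """Hooke-Jeeves exploratory move with unit (integer) steps."""
    x = list(x)
    for i in range(len(x)):
        for step in (1, -1):
            y = list(x)
            y[i] += step
            if feasible(y) and f(y) < f(x):
                x = y
                break                      # keep the improving step
    return x

def integer_search(x0, f, feasible, iters=100):
    base = list(x0)
    for _ in range(iters):
        x = exploratory(base, f, feasible)
        if x == base:
            return base                    # no unit move improves: stop
        # pattern move: double the successful direction, then re-explore
        pattern = [2 * xi - bi for xi, bi in zip(x, base)]
        if feasible(pattern) and f(pattern) < f(x):
            base = exploratory(pattern, f, feasible)
        else:
            base = x
    return base
```

Starting from the origin on a separable quadratic, the search walks to the integer minimiser in a couple of pattern moves.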

  19. Influence of chest compression rate guidance on the quality of cardiopulmonary resuscitation performed on manikins.

    Science.gov (United States)

    Jäntti, H; Silfvast, T; Turpeinen, A; Kiviniemi, V; Uusaro, A

    2009-04-01

    The adequate chest compression rate during CPR is associated with improved haemodynamics and primary survival. To explore whether the use of a metronome would affect chest compression depth in addition to rate, we evaluated CPR quality using a metronome in a simulated CPR scenario. Forty-four experienced intensive care unit nurses participated in two-rescuer basic life support given to manikins in 10 min scenarios. The target chest compression to ventilation ratio was 30:2, performed with bag and mask ventilation. The rescuer performing the compressions was changed every 2 min. CPR was performed first without and then with a metronome that beeped 100 times per minute. The quality of CPR was analysed with manikin software. The effect of rescuer fatigue on CPR quality was analysed separately. The mean compression rate between ventilation pauses was 137±18 compressions per minute (cpm) without and 98±2 cpm with metronome guidance (p<0.001); compression depth did not differ significantly with metronome guidance (p=0.09). The total number of chest compressions performed was 1022 without metronome guidance, 42% at the correct depth, and 780 with metronome guidance, 61% at the correct depth (p=0.09 for the difference in percentage of compressions with correct depth). Metronome guidance corrected chest compression rates for each compression cycle to within guideline recommendations, but did not affect chest compression quality or rescuer fatigue.

  20. Information theoretic bounds for compressed sensing in SAR imaging

    International Nuclear Information System (INIS)

    Jingxiong, Zhang; Ke, Yang; Jianzhong, Guo

    2014-01-01

    Compressed sensing (CS) is a new framework for sampling and reconstructing sparse signals from measurements significantly fewer than those prescribed by Nyquist rate in the Shannon sampling theorem. This new strategy, applied in various application areas including synthetic aperture radar (SAR), relies on two principles: sparsity, which is related to the signals of interest, and incoherence, which refers to the sensing modality. An important question in CS-based SAR system design concerns sampling rate necessary and sufficient for exact or approximate recovery of sparse signals. In the literature, bounds of measurements (or sampling rate) in CS have been proposed from the perspective of information theory. However, these information-theoretic bounds need to be reviewed and, if necessary, validated for CS-based SAR imaging, as there are various assumptions made in the derivations of lower and upper bounds on sub-Nyquist sampling rates, which may not hold true in CS-based SAR imaging. In this paper, information-theoretic bounds of sampling rate will be analyzed. For this, the SAR measurement system is modeled as an information channel, with channel capacity and rate-distortion characteristics evaluated to enable the determination of sampling rates required for recovery of sparse scenes. Experiments based on simulated data will be undertaken to test the theoretic bounds against empirical results about sampling rates required to achieve certain detection error probabilities

  1. BETTER FINGERPRINT IMAGE COMPRESSION AT LOWER BIT-RATES: AN APPROACH USING MULTIWAVELETS WITH OPTIMISED PREFILTER COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    N R Rema

    2017-08-01

    Full Text Available In this paper, a multiwavelet-based fingerprint compression technique using the set partitioning in hierarchical trees (SPIHT) algorithm with optimised prefilter coefficients is proposed. While wavelet-based progressive compression techniques give a blurred image at lower bit rates due to lack of high-frequency information, multiwavelets can be used efficiently to represent high-frequency information. The SA4 (Symmetric Antisymmetric) multiwavelet, when combined with SPIHT, reduces the number of nodes during initialization to 1/4th compared to SPIHT with a wavelet. This reduction in nodes leads to improvement in PSNR at lower bit rates. The PSNR can be further improved by optimizing the prefilter coefficients. In this work a genetic algorithm (GA) is used for optimizing the prefilter coefficients. Using the proposed technique, there is a considerable improvement in PSNR at lower bit rates, compared to existing techniques in the literature. An overall average improvement of 4.23 dB and 2.52 dB for bit rates between 0.01 and 1 has been achieved for the images in the databases FVC 2000 DB1 and FVC 2002 DB3 respectively. The quality of the reconstructed image is better even at higher compression ratios like 80:1 and 100:1. The level of decomposition required for a multiwavelet is lower compared to a wavelet.
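    PSNR, the quality measure reported above, is simple to compute from the mean squared error; a minimal helper for 8-bit images (illustrative only, not code from the paper):

```python
import math

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10 * math.log10(max_val ** 2 / mse)

print(round(psnr([100, 120, 140], [101, 119, 141]), 2))  # prints 48.13
```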

  2. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may exceed the limits of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
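    For scale, the four-letter DNA alphabet needs only 2 bits per base versus 8 bits in ASCII, so trivial 4:1 packing is the baseline any specialized compressor must beat. The sketch below is that naive baseline only, not the SeqCompress algorithm (which uses a statistical model with arithmetic coding):

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack a DNA string into bytes, 4 bases per byte (2 bits per base)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))  # left-align a short final group
        out.append(b)
    return bytes(out)

def unpack(data, n):
    """Recover the first n bases from packed bytes."""
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASE[(b >> shift) & 3])
    return "".join(seq[:n])

s = "ACGTACGTTG"
packed = pack(s)
assert unpack(packed, len(s)) == s
print(len(s), "bytes as ASCII ->", len(packed), "bytes packed")  # 10 bytes -> 3 bytes
```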

  3. Variable Frame Rate and Length Analysis for Data Compression in Distributed Speech Recognition

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua

    2014-01-01

    This paper addresses the issue of data compression in distributed speech recognition on the basis of a variable frame rate and length analysis method. The method first conducts frame selection by using a posteriori signal-to-noise ratio weighted energy distance to find the right time resolution...... length for steady regions. The method is applied to scalable source coding in distributed speech recognition where the target bitrate is met by adjusting the frame rate. Speech recognition results show that the proposed approach outperforms other compression methods in terms of recognition accuracy...... for noisy speech while achieving higher compression rates....

  4. Objective video quality assessment method for freeze distortion based on freeze aggregation

    Science.gov (United States)

    Watanabe, Keishiro; Okamoto, Jun; Kurita, Takaaki

    2006-01-01

    With the development of the broadband network, video communications such as videophone, video distribution, and IPTV services are beginning to become common. In order to provide these services appropriately, we must manage them based on subjective video quality, in addition to designing a network system based on it. Currently, subjective quality assessment is the main method used to quantify video quality. However, it is time-consuming and expensive. Therefore, we need an objective quality assessment technology that can estimate video quality from video characteristics effectively. Video degradation can be categorized into two types: spatial and temporal. Objective quality assessment methods for spatial degradation have been studied extensively, but methods for temporal degradation have hardly been examined even though it occurs frequently due to network degradation and has a large impact on subjective quality. In this paper, we propose an objective quality assessment method for temporal degradation. Our approach is to aggregate multiple freeze distortions into an equivalent freeze distortion and then derive the objective video quality from the equivalent freeze distortion. Specifically, our method considers the total length of all freeze distortions in a video sequence as the length of the equivalent single freeze distortion. In addition, we propose a method using the perceptual characteristics of short freeze distortions. We verified that our method can estimate the objective video quality well within the deviation of subjective video quality.

  5. Brief compression-only cardiopulmonary resuscitation training video and simulation with homemade mannequin improves CPR skills.

    Science.gov (United States)

    Wanner, Gregory K; Osborne, Arayel; Greene, Charlotte H

    2016-11-29

    Cardiopulmonary resuscitation (CPR) training has traditionally involved classroom-based courses or, more recently, home-based video self-instruction. These methods typically require preparation and a purchase fee, which can dissuade many potential bystanders from receiving training. This study aimed to evaluate the effectiveness of teaching compression-only CPR to previously untrained individuals using our 6-min online CPR training video and skills practice on a homemade mannequin, reproduced by viewers with commonly available items (towel, toilet paper roll, t-shirt). Participants viewed the training video and practiced with the homemade mannequin. This was a parallel-design study with pre- and post-training evaluations of CPR skills (compression rate, depth, hand position, release) and hands-off time (time without compressions). CPR skills were evaluated using a sensor-equipped mannequin, and two blinded CPR experts observed testing of participants. Twenty-four participants were included: 12 never-trained and 12 currently certified in CPR. Comparing pre- and post-training, the never-trained group had improvements in average compression rate per minute (64.3 to 103.9, p = 0.006), compressions with correct hand position in 1 min (8.3 to 54.3, p = 0.002), and correct compression release in 1 min (21.2 to 76.3). The CPR-certified group already compressed at an adequate rate (100/min or more) and showed an improved number of compressions with correct release (53.5 to 94.7). Achieving sufficient compression depth (50 mm) remained problematic in both groups. Comparisons made between groups indicated significant improvements in compression depth, hand position, and hands-off time in never-trained compared to CPR-certified participants. Inter-rater agreement values were also calculated between the CPR experts and the sensor-equipped mannequin. A brief internet-based video coupled with skill practice on a homemade mannequin improved compression-only CPR skills, especially in the previously untrained participants. This training method allows for widespread compression-only CPR training.

  6. High bit depth infrared image compression via low bit depth codecs

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-01-01

    images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed...... H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can...
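    The byte-plane mapping described above can be sketched as follows (an illustration of the lossless split only; in the paper each 8-bit plane would then be compressed by a codec such as H.264/AVC):

```python
def split_planes(pixels16):
    """Split 16-bit samples into an MSB plane and an LSB plane (8 bits each)."""
    msb = [p >> 8 for p in pixels16]
    lsb = [p & 0xFF for p in pixels16]
    return msb, lsb

def merge_planes(msb, lsb):
    """Reassemble 16-bit samples from the two 8-bit planes."""
    return [(m << 8) | l for m, l in zip(msb, lsb)]

frame = [0, 255, 256, 65535, 30000]
msb, lsb = split_planes(frame)
assert merge_planes(msb, lsb) == frame  # the mapping itself is lossless
```

    Any loss in the overall scheme comes from the 8-bit codecs applied to each plane, not from this decomposition.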

  7. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For PACS (picture archiving and communication systems), large-capacity storage media and a fast data transfer network are necessary. When a PACS is in operation, these technology requirements become a large problem, so we need image data compression to improve recording efficiency and transmission rates. There are two kinds of data compression methods: reversible and irreversible. With reversible compression, the compressed-expanded image is exactly equal to the original image; the achievable compression ratio is roughly between 1/2 and 1/3. With irreversible compression, on the other hand, the compressed-expanded image is distorted, and a high compression ratio can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; compression ratios of 1/10 to 1/20 are achieved in practice. It is important to decide the compression ratio according to the purpose and modality of the image. The suitable compression ratio must be selected carefully because it differs among uses of the image for education, clinical diagnosis and reference. (author)

  8. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    Directory of Open Access Journals (Sweden)

    Sofía Ruiz de Gauna

    2016-01-01

    Full Text Available Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin’s back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm in the foam and 1.7 mm in the sprung mattress (p<0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p<0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces.

  9. Compressive strength measurements of hybrid dental composites treated with dry heat and light emitting diodes (LED post cure treatment

    Directory of Open Access Journals (Sweden)

    Jenny Krisnawaty

    2014-11-01

    Full Text Available Hybrid composites are mostly used on large cavities as restorative dental materials, whether used directly or indirectly. The mechanical properties of composite resin increase if it is given a post-cure treatment. The aim of this study is to evaluate differences in compressive strength between dry heat and light emitting diode (LED) post-cure treatments of a hybrid dental composite. A quasi-experimental design was applied in this research, with a total of 30 samples divided into two groups. Each sample was tested using a LLOYD Universal Testing Machine at 1 mm/min speed to evaluate the compressive strength. The compressive strength was recorded at the point the sample broke. The results of the two groups were then analyzed using a t-test. The results of this study show that the compressive strength after post-cure treatment of the hybrid composite using the LED light box (194.138 MPa) was lower than after dry heat treatment (227.339 MPa); the difference was statistically significant. It can be concluded that the compressive strength after LED light box post-cure treatment was lower than after dry heat post-cure treatment of the hybrid composite resin.

  10. Ramsey theory on the integers

    CERN Document Server

    Landman, Bruce M

    2003-01-01

    Ramsey theory is the study of the structure of mathematical objects that is preserved under partitions. In its full generality, Ramsey theory is quite powerful, but can quickly become complicated. By limiting the focus of this book to Ramsey theory applied to the set of integers, the authors have produced a gentle, but meaningful, introduction to an important and enticing branch of modern mathematics. Ramsey Theory on the Integers offers students something quite rare for a book at this level: a glimpse into the world of mathematical research and the opportunity to begin pondering unsolved problems themselves. In addition to being the first truly accessible book on Ramsey theory, this innovative book also provides the first cohesive study of Ramsey theory on the integers. It contains perhaps the most substantial account of solved and unsolved problems in this blossoming subarea of Ramsey theory. The result is a breakthrough book that will engage students, teachers, and researchers alike.

  11. Fractal electrodynamics via non-integer dimensional space approach

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-09-01

    Using the recently suggested vector calculus for non-integer dimensional space, we consider electrodynamics problems in the isotropic case. This calculus allows us to describe fractal media in the framework of continuum models with non-integer dimensional space. We consider electric and magnetic fields of fractal media with charges and currents in the framework of continuum models with non-integer dimensional spaces. Applications of the fractal Gauss's law, the fractal Ampere's circuital law, the fractal Poisson equation for electric potential, and an equation for the fractal stream of charges are suggested. Lorentz invariance and the speed of light in fractal electrodynamics are discussed. An expression for the effective refractive index of non-integer dimensional space is suggested.
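    For intuition, the qualitative consequence of a Gauss's law in non-integer dimension can be written down directly (this is the generic dimensional-scaling argument; the exact normalization constants in Tarasov's vector calculus may differ):

```latex
% Gauss's law for a point charge Q in an isotropic space of dimension D:
% the flux through a sphere of radius r scales with its (D-1)-dimensional area,
% so the radial field falls off as a non-integer power of r.
\oint_{S_r} \mathbf{E}\cdot d\mathbf{S} \;\propto\; E(r)\, r^{\,D-1} \;=\; \frac{Q}{\varepsilon_0}
\quad\Longrightarrow\quad
E(r) \;\propto\; \frac{Q}{\varepsilon_0\, r^{\,D-1}}.
```

    For D = 3 this reduces to the familiar inverse-square Coulomb field.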

  12. DESIGN STUDY: INTEGER SUBTRACTION OPERATION TEACHING LEARNING USING MULTIMEDIA IN PRIMARY SCHOOL

    Directory of Open Access Journals (Sweden)

    Rendi Muhammad Aris

    2016-12-01

    Full Text Available This study aims to develop a learning trajectory to help fourth-grade students understand the concept of subtraction of integers using multimedia. The study uses PMRI-based thematic integrative learning under the Indonesian Curriculum 2013. The method used is design research and consists of three stages: preparing for the experiment, the design experiment, and retrospective analysis. The study was conducted with 20 fourth-grade students of SDN 1 Muara Batun, OKI. The activities of the students in this study consisted of six learning trajectories. The first activity asks the students to classify heroic and non-heroic acts, summarize, and classify integers and non-integers. The second activity asks the students to answer questions about the film shown. The third activity asks students to count the gravel remaining in the film. The fourth activity asks students to count the money remaining after spending in the film. The fifth activity invites students to play with rubber seeds in a bag. The last activity asks students to answer the questions in the student worksheet. The media used along the learning activities are a ruler, rubber seeds, a student worksheet, money, gravel, and film. The results indicate that the learning trajectory using multimedia helps students understand the concept of integer subtraction. Keywords: Integer Subtraction, PMRI, Multimedia DOI: http://dx.doi.org/10.22342/jme.8.1.3233.95-102

  13. Dynamic High-Temperature Characterization of an Iridium Alloy in Compression at High Strain Rates

    Energy Technology Data Exchange (ETDEWEB)

    Song, Bo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Experimental Environment Simulation Dept.; Nelson, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Mechanics of Materials Dept.; Lipinski, Ronald J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Advanced Nuclear Fuel Cycle Technology Dept.; Bignell, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Structural and Thermal Analysis Dept.; Ulrich, G. B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Radioisotope Power Systems Program; George, E. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Radioisotope Power Systems Program

    2014-06-01

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-temperature high-strain-rate performance are needed for understanding high-speed impacts in severe elevated-temperature environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. Current high-temperature Kolsky compression bar techniques are not capable of obtaining a satisfactory high-temperature high-strain-rate stress-strain response for the thin iridium specimens investigated in this study. We analyzed the difficulties encountered in high-temperature Kolsky compression bar testing of thin iridium alloy specimens. Appropriate modifications were made to the current high-temperature Kolsky compression bar technique to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300–10,000 s^-1) and temperatures (750°C and 1030°C). Uncertainties in such high-temperature high-strain-rate experiments on thin iridium specimens were also analyzed. The compressive stress-strain response of the iridium alloy showed significant sensitivity to strain rate and temperature.

  14. FORMALIZING PRODUCT COST DISTORTION: The Impact of Volume-Related Allocation Bases on Cost Information

    Directory of Open Access Journals (Sweden)

    Johnny Jermias

    2003-09-01

    Full Text Available The purpose of this study is to formally analyze product cost distortions resulting from the process of allocating costs to products based on Activity-Based Costing (ABC) and conventional product costing systems. The model developed in this paper rigorously shows the impact of treating costs that are not volume related as if they were. The model demonstrates that the source of product cost distortion is the difference between the proportion of the driver used by each product in ABC and the proportion of the base used by the same product in the conventional costing systems. The difference arises because the conventional costing systems ignore the existence of batch-related and product-related costs. The model predicts a positive association between volume and size diversity and product cost distortions. When an interaction between volume and size diversity exists, the distortion is either mitigated or exacerbated. The magnitude of the distortion is jointly determined by the size of the differences and the size of the total indirect costs.
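    The model's source of distortion, the gap between a product's share of the activity driver (ABC) and its share of the volume-based allocation base, can be illustrated numerically; the products, shares, and cost figures below are invented for the example:

```python
def cost_distortion(indirect_cost, driver_share, base_share):
    """Per-product distortion: (volume-based share - ABC driver share) * cost.

    Positive values mean the conventional system over-costs the product
    relative to ABC; negative values mean it under-costs the product.
    """
    return {p: (base_share[p] - driver_share[p]) * indirect_cost
            for p in driver_share}

# A high-volume product consumes 50% of setups (the ABC driver) but
# 80% of machine hours (the conventional volume-based allocation base).
driver = {"high_volume": 0.5, "low_volume": 0.5}
base = {"high_volume": 0.8, "low_volume": 0.2}
d = cost_distortion(100000, driver, base)
print({p: round(v) for p, v in d.items()})  # prints {'high_volume': 30000, 'low_volume': -30000}
```

    The distortions sum to zero across products: conventional allocation shifts cost between products, it does not change the total.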

  15. Theory of the Thermal Diffusion of Microgel Particles in Highly Compressed Suspensions

    Science.gov (United States)

    Sokoloff, Jeffrey; Maloney, Craig; Ciamarra, Massimo; Bi, Dapeng

    One amazing property of microgel colloids is the ability of the particles to thermally diffuse, even when they are compressed to a volume well below their swollen state volume, despite the fact that they are surrounded by and pressed against other particles. A glass transition is expected to occur when the colloid is sufficiently compressed for diffusion to cease. It is proposed that the diffusion is due to the ability of the highly compressed particles to change shape with little cost in free energy. It will be shown that most of the free energy required to compress microgel particles is due to osmotic pressure resulting from either counterions or monomers inside of the gel, which depends on the particle's volume. There is still, however, a cost in free energy due to polymer elasticity when particles undergo the distortions necessary for them to move around each other as they diffuse through the compressed colloid, even if it occurs at constant volume. Using a scaling theory based on simple models for the linking of polymers belonging to the microgel particles, we examine the conditions under which the cost in free energy needed for a particle to diffuse is smaller than or comparable to thermal energy, which is a necessary condition for particle diffusion. Based on our scaling theory, we predict that thermally activated diffusion should be possible when the mean number of links along the axis along which a distortion occurs is much larger than N^(1/5), where N is the mean number of monomers in a polymer chain connecting two links in the gel.

  16. Linear and integer programming made easy

    CERN Document Server

    Hu, T C

    2016-01-01

    Linear and integer programming are fundamental toolkits for data and information science and technology, particularly in the context of today’s megatrends toward statistical optimization, machine learning, and big data analytics. Drawn from over 30 years of classroom teaching and applied research experience, this textbook provides a crisp and practical introduction to the basics of linear and integer programming. The authors’ approach is accessible to students from all fields of engineering, including operations research, statistics, machine learning, control system design, scheduling, formal verification, and computer vision. Readers will learn to cast hard combinatorial problems as mathematical programming optimizations, understand how to achieve formulations where the objective and constraints are linear, choose appropriate solution methods, and interpret results appropriately. •Provides a concise introduction to linear and integer programming, appropriate for undergraduates, graduates, a short cours...
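    The flavor of problem such a book treats can be made concrete with a toy solver; the sketch below applies naive depth-first branch and bound to a 0/1 knapsack-style integer program (an illustrative toy, not a method drawn from the book):

```python
def solve_binary_ip(values, weights, capacity):
    """Maximize sum(values[i]*x[i]) s.t. sum(weights[i]*x[i]) <= capacity, x binary.

    Naive depth-first branch and bound; fine for toy instances only.
    """
    n = len(values)
    best = [0, []]  # best objective found so far, and its assignment

    def bound(i, val):
        # Optimistic bound: pretend every remaining item fits for free.
        return val + sum(values[i:])

    def branch(i, used, val, x):
        if used > capacity:
            return  # infeasible partial assignment
        if i == n:
            if val > best[0]:
                best[0], best[1] = val, x[:]
            return
        if bound(i, val) <= best[0]:
            return  # prune: this subtree cannot beat the incumbent
        branch(i + 1, used + weights[i], val + values[i], x + [1])
        branch(i + 1, used, val, x + [0])

    branch(0, 0, 0, [])
    return best[0], best[1]

val, x = solve_binary_ip([10, 6, 4], [5, 4, 3], 8)
print(val, x)  # prints: 14 [1, 0, 1]
```

    Production solvers replace this additive bound with an LP relaxation and add cutting planes, but the branch-prune-incumbent skeleton is the same.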

  17. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.

    Science.gov (United States)

    Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2013-04-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical. Copyright © 2012 John Wiley & Sons, Ltd.
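    The randomized, undersampled k-t filling that retrospective triggering produces can be mimicked with a simple mask generator; a schematic sketch (parameter names and sizes are illustrative, not the authors' acquisition code):

```python
import random

def kt_mask(n_pe, n_frames, acceleration, seed=0):
    """Random k-t sampling mask: each cardiac frame keeps a different random
    subset of n_pe/acceleration phase-encode lines, giving the incoherent
    undersampling that compressed sensing reconstruction relies on."""
    rng = random.Random(seed)
    keep = n_pe // acceleration
    return [sorted(rng.sample(range(n_pe), keep)) for _ in range(n_frames)]

# 90-frame Cine with 128 phase-encode lines at 3x undersampling.
mask = kt_mask(n_pe=128, n_frames=90, acceleration=3)
assert all(len(frame) == 128 // 3 for frame in mask)
assert mask[0] != mask[1]  # the pattern varies from frame to frame
```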

  18. Distortion correction algorithm for UAV remote sensing image based on CUDA

    International Nuclear Information System (INIS)

    Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu

    2014-01-01

    In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important means of obtaining disaster information. UAVs are equipped with non-metric digital cameras whose lens distortion results in large geometric deformation of the acquired images, affecting the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, compared with the traditional CPU algorithm and excluding image loading and saving times, the maximum acceleration ratio using our proposed algorithm reaches 58 times that of the traditional algorithm. Thus, data processing time can be reduced by one to two hours, thereby considerably improving disaster emergency response capability.
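    The per-pixel core of lens distortion correction is typically the evaluation of a radial polynomial (Brown's model), which is exactly the kind of independent arithmetic that maps well onto CUDA threads. A scalar sketch with invented coefficients (the paper does not specify its camera model):

```python
def radial_model(x, y, k1, k2, cx=0.0, cy=0.0):
    """Two-term radial lens model: scales the offset from the principal point
    (cx, cy) by (1 + k1*r^2 + k2*r^4), where r is the distance to that point.
    Correction resamples each pixel through (the inverse of) this mapping."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return cx + dx * scale, cy + dy * scale

# With zero coefficients the mapping is the identity.
assert radial_model(0.3, -0.4, 0.0, 0.0) == (0.3, -0.4)
# Barrel distortion (k1 < 0) pulls points toward the principal point.
xu, yu = radial_model(0.3, -0.4, -0.1, 0.0)
print(xu, yu)
```

    On a GPU, one thread evaluates this polynomial per output pixel, which is why the speedup over a serial CPU loop is so large.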

  19. The Formation and Evolution of Shear Bands in Plane Strain Compressed Nickel-Base Superalloy

    Directory of Open Access Journals (Sweden)

    Bin Tang

    2018-02-01

    Full Text Available The formation and evolution of shear bands in Inconel 718 nickel-base superalloy under plane strain compression was investigated in the present work. It is found that the propagation of shear bands under plane strain compression is more intense in comparison with conventional uniaxial compression. The morphology of shear bands was identified to generally fall into two categories: an “S” shape at severe conditions (low temperatures and high strain rates) and an “X” shape at mild conditions (high temperatures and low strain rates). However, uniform deformation at the mesoscale without shear bands was also obtained by compressing at 1050 °C/0.001 s−1. By using the finite element method (FEM), the formation mechanism of the shear bands in the present study was explored for the special deformation mode of plane strain compression. Furthermore, the effect of processing parameters, i.e., strain rate and temperature, on the morphology and evolution of shear bands was discussed following a phenomenological approach. The plane strain compression attempt in the present work yields important information for processing parameter optimization and failure prediction under plane strain loading conditions of the Inconel 718 superalloy.

  20. Discovery of Boolean metabolic networks: integer linear programming based approach.

    Science.gov (United States)

    Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing

    2018-04-11

    Traditional drug discovery methods focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy are produced when unintended targets are affected in metabolic networks. Thus, identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic. Efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on the Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip .

  1. A design approach for systems based on magnetic pulse compression

    International Nuclear Information System (INIS)

    Praveen Kumar, D. Durga; Mitra, S.; Senthil, K.; Sharma, D. K.; Rajan, Rehim N.; Sharma, Archana; Nagesh, K. V.; Chakravarthy, D. P.

    2008-01-01

    A design approach giving the optimum number of stages in a magnetic pulse compression circuit and gain per stage is given. The limitation on the maximum gain per stage is discussed. The total system volume minimization is done by considering the energy storage capacitor volume and magnetic core volume at each stage. At the end of this paper, the design of a magnetic pulse compression based linear induction accelerator of 200 kV, 5 kA, and 100 ns with a repetition rate of 100 Hz is discussed with its experimental results
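The stage-count trade-off the abstract describes can be illustrated with a deliberately crude model: assume identical stages, so the per-stage gain is the n-th root of the total gain, and assume (purely for illustration, not the paper's volume model) that stage volume grows linearly with its gain. Under that proxy the optimum per-stage gain lands near e:

```python
import math

def stage_plan(total_gain, max_stages=10):
    """For n identical stages the per-stage gain is total_gain**(1/n).
    Under the assumed linear volume proxy, total volume ~ n * g;
    return the (n, g, volume) that minimizes it."""
    best = None
    for n in range(1, max_stages + 1):
        g = total_gain ** (1 / n)
        volume = n * g                 # crude proxy, not the paper's model
        if best is None or volume < best[2]:
            best = (n, g, volume)
    return best

n, g, v = stage_plan(1000.0)           # total compression gain of 1000
print(n, round(g, 2))
```

Minimizing n·G^(1/n) analytically gives ln g = 1, i.e. a per-stage gain of about e ≈ 2.72, a classic rule of thumb for magnetic pulse compressors under such simplified assumptions.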

  2. A COMPARATIVE STUDY OF THE PERFORMANCE OF THE POUR HEURISTIC ALGORITHM AGAINST MIXED INTEGER PROGRAMMING IN SOLVING FLOWSHOP SCHEDULING

    Directory of Open Access Journals (Sweden)

    Tessa Vanina Soetanto

    2004-01-01

    Full Text Available This paper presents a study of the performance of a new heuristic algorithm (Pour) compared to the Mixed Integer Programming (MIP) method in solving the flowshop scheduling problem to reach minimum makespan. Performance appraisal is based on the Efficiency Index (EI), Relative Error (RE), and elapsed runtime. Abstract in Bahasa Indonesia: This paper presents research on the performance of the Pour heuristic algorithm against the Mixed Integer Programming (MIP) method in solving the flowshop scheduling problem with the goal of minimizing makespan. Performance is assessed on the basis of the Efficiency Index (EI), Relative Error (RE), and elapsed runtime. Keywords: flowshop, makespan, Pour heuristic algorithm, Mixed Integer Programming.

  3. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, by breaking through the Nyquist sampling theorem, provides a strong theoretical basis for carrying out compression and sampling of image signals simultaneously. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand for detector resolution can also be greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes, verifying the stability of the algorithm. This paper also compares and analyzes typical reconstruction algorithms under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages: at low measurement rates it quickly and accurately recovers the target image.
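The TV objective at the heart of such methods can be shown on a 1-D signal. This is a much simpler subgradient-descent sketch of the same minimum-total-variation idea, not the augmented-Lagrangian/alternating-direction solver the abstract describes:

```python
def tv(x):
    """Discrete total variation of a 1-D signal."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_denoise(y, lam=0.5, step=0.05, iters=500):
    """Subgradient descent on 0.5*||x - y||^2 + lam*TV(x)."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]          # fidelity gradient
        for i in range(len(x) - 1):
            s = (x[i + 1] > x[i]) - (x[i + 1] < x[i])  # sign of the jump
            g[i] -= lam * s                            # TV subgradient
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

noisy = [0.0, 0.9, 0.1, 1.0, 0.2, 1.1, 2.0, 2.9, 2.1, 3.0]
den = tv_denoise(noisy)
```

The oscillating input is flattened toward a piecewise-smooth profile, i.e. its total variation drops while the values stay close to the data, which is exactly the edge-preserving behaviour TV regularization is chosen for.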

  4. Positive integer solutions of certain diophantine equations

    Indian Academy of Sciences (India)

    BIJAN KUMAR PATEL

    2018-03-19

    Mar 19, 2018 ... integer solutions. They also found all the positive integer solutions of the given equations in terms of Fibonacci and Lucas numbers. Another interesting number sequence which is closely related to the sequence of. Fibonacci numbers is the sequence of balancing numbers. In 1999, Behera et al. [1] intro-.

  5. Computer Corner: Spreadsheets, Power Series, Generating Functions, and Integers.

    Science.gov (United States)

    Snow, Donald R.

    1989-01-01

    Implements a table algorithm on a spreadsheet program and obtains functions for several number sequences such as the Fibonacci and Catalan numbers. Considers other applications of the table algorithm to integers represented in various number bases. (YP)
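The table algorithm described here amounts to reading off the power-series coefficients of a rational generating function. A minimal sequential sketch (not Snow's spreadsheet layout) using the standard recurrence for 1/denom(x):

```python
def series_coeffs(denom, n):
    """First n power-series coefficients of 1/denom(x), denom[0] == 1.
    Uses the recurrence c_k = -sum_j denom[j] * c_{k-j}, which is the
    'table algorithm' written sequentially instead of in spreadsheet cells."""
    c = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):
        c[k] = -sum(denom[j] * c[k - j]
                    for j in range(1, min(k, len(denom) - 1) + 1))
    return [int(round(v)) for v in c]

# Generating function of the Fibonacci numbers: 1/(1 - x - x^2)
print(series_coeffs([1, -1, -1], 10))
```

With denominator 1 − x − x² the recurrence collapses to c_k = c_{k−1} + c_{k−2}, so the table reproduces the Fibonacci sequence.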

  6. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for application in an Internet-based telemedicine system. We first study how well suited the wavelet-based coding is as it applies to the compression of pathological images, since these images often contain fine textures that are often critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies are performed in close collaboration with expert pathologists who have conducted the evaluation of the compressed pathological images and communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that the wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with the Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which the wavelet-based coding is adopted for the compression to achieve bandwidth efficient transmission and therefore speed up the communications between the remote terminal and the central server of the telemedicine system.

  7. Imaging the Conductance of Integer and Fractional Quantum Hall Edge States

    Directory of Open Access Journals (Sweden)

    Nikola Pascher

    2014-01-01

    Full Text Available We measure the conductance of a quantum point contact while the biased tip of a scanning probe microscope induces a depleted region in the electron gas underneath. At a finite magnetic field, we find plateaus in the real-space maps of the conductance as a function of tip position at integer (ν=1, 2, 3, 4, 6, 8) and fractional (ν=1/3, 2/3, 5/3, 4/5) values of transmission. They resemble theoretically predicted compressible and incompressible stripes of quantum Hall edge states. The scanning tip allows us to shift the constriction limiting the conductance in real space over distances of many microns. The resulting stripes of integer and fractional filling factors are rugged on scales of a few hundred nanometers, i.e., on a scale much smaller than the zero-field elastic mean free path of the electrons. Our experiments demonstrate that microscopic inhomogeneities are relevant even in high-quality samples and lead to locally strongly fluctuating widths of incompressible regions even down to their complete suppression for certain tip positions. The macroscopic quantization of the Hall resistance measured experimentally in a nonlocal contact configuration survives in the presence of these inhomogeneities, and the relevant local energy scale for the ν=2 state turns out to be independent of tip position.

  8. S-parts of terms of integer linear recurrence sequences

    NARCIS (Netherlands)

    Bugeaud, Y.; Evertse, J.H.

    2017-01-01

    Let S = {q1, . . . , qs} be a finite, non-empty set of distinct prime numbers. For a non-zero integer m, write m = q1^r1 · · · qs^rs M, where r1, . . . , rs are non-negative integers and M is an integer relatively prime to q1 · · · qs. We define the S-part [m]_S of m by [m]_S := q1^r1 · · · qs^rs.
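The S-part defined above is just the largest divisor of m built only from the primes in S, and is easy to compute directly:

```python
def s_part(m, S):
    """[m]_S: the largest divisor of m composed only of primes in S."""
    m = abs(m)
    part = 1
    for q in S:
        while m % q == 0:   # strip out every factor of q
            m //= q
            part *= q
    return part

print(s_part(360, [2, 3]))   # 360 = 2^3 * 3^2 * 5, so [360]_S = 8 * 9 = 72
```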

  9. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    Science.gov (United States)

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
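The adaptive measurement step can be sketched with a hypothetical allocation rule: score each block by its gradient energy (a stand-in for the paper's block-based gradient field model of sparse degree) and split the measurement budget proportionally. Names and the proportional rule are assumptions, not the paper's exact scheme:

```python
def block_gradient_energy(img, bs):
    """Sum of absolute horizontal and vertical differences per bs x bs block."""
    h, w = len(img), len(img[0])
    energies = {}
    for by in range(0, h, bs):
        for bx in range(0, w, bs):
            e = 0
            for y in range(by, min(by + bs, h)):
                for x in range(bx, min(bx + bs, w)):
                    if x + 1 < w:
                        e += abs(img[y][x + 1] - img[y][x])
                    if y + 1 < h:
                        e += abs(img[y + 1][x] - img[y][x])
            energies[(by, bx)] = e
    return energies

def allocate_measurements(energies, total):
    """Give each block a share of 'total' proportional to its energy,
    with a floor of one measurement per block."""
    s = sum(energies.values()) or 1
    return {b: max(1, int(total * e / s)) for b, e in energies.items()}

img = [[0] * 4 + [255] * 4 for _ in range(8)]   # sharp vertical edge mid-image
en = block_gradient_energy(img, 4)
alloc = allocate_measurements(en, 100)
```

Blocks containing the edge receive far more measurements than the flat blocks, mirroring the abstract's idea that measurement rate should track local sparse degree.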

  10. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT)

    Directory of Open Access Journals (Sweden)

    Ran Li

    2018-04-01

    Full Text Available Aimed at a low-energy consumption of Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides compressive encoder and real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparse degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so that they only consume a small amount of energy. Experimental results show that the proposed scheme not only has a low encoding and decoding complexity when compared with traditional methods, but it also provides good objective and subjective reconstruction qualities. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.

  11. The Effect of Al on the Compressibility of Silicate Perovskite

    Science.gov (United States)

    Walter, M. J.; Kubo, A.; Yoshino, T.; Koga, K. T.; Ohishi, Y.

    2003-12-01

    Experimental data on the compressibility of aluminous silicate perovskite show widely disparate results. Several studies show that Al causes a dramatic increase in compressibility [1-3], while another study indicates a mild decrease in compressibility [4]. Here we report new results for the effect of Al on the room-temperature compressibility of perovskite using in situ X-ray diffraction in the diamond anvil cell from 30 to 100 GPa. We studied the compressibility of perovskite in the system MgSiO3-Al2O3 in compositions with 0 to 25 mol% Al. Perovskite was synthesized from starting glasses using laser heating in the DAC, with KBr as a pressure medium. Diffraction patterns were obtained using monochromatic radiation and an imaging plate detector at beamline BL10XU, SPring8, Japan. Addition of Al into the perovskite structure causes systematic increases in orthorhombic distortion and unit cell volume at ambient conditions (V0). Compression of the perovskite unit cell is anisotropic, with the a axis about 25% and 3% more compressible than the b and c axes, respectively. The magnitude of orthorhombic distortion increases with pressure, but aluminous perovskite remains stable to at least 100 GPa. Our results show that Al causes only a mild increase in compressibility, with the bulk modulus (K0) decreasing at a rate of 0.7 GPa per 0.01 XAl. This increase in compressibility is consistent with recent ab initio calculations if Al mixes into both the 6- and 8-coordinated sites by coupled substitution [5], where 2 Al3+ = Mg2+ + Si4+. Our results together with those of [4] indicate that this substitution mechanism predominates throughout the lower mantle. Previous mineralogic models indicating the upper and lower mantle are compositionally similar in terms of major elements remain effectively unchanged, because solution of 5 mol% Al into perovskite has a minor effect on density. 1. Zhang & Weidner (1999). Science 284, 782-784. 2. Kubo et al. (2000) Proc. Jap. Acad. 76B, 103-107. 3. Daniel et al

  12. Electrostatic and Quantum Transport Simulations of Quantum Point Contacts in the Integer Quantum Hall Regime

    Science.gov (United States)

    Sahasrabudhe, Harshad; Fallahi, Saeed; Nakamura, James; Povolotskyi, Michael; Novakovic, Bozidar; Rahman, Rajib; Manfra, Michael; Klimeck, Gerhard

    Quantum Point Contacts (QPCs) are extensively used in semiconductor devices for charge sensing, tunneling and interference experiments. Fabry-Pérot interferometers containing 2 QPCs have applications in quantum computing, in which electrons/quasi-particles undergo interference due to back-scattering from the QPCs. Such experiments have turned out to be difficult because of the complex structure of edge states near the QPC boundary. We present realistic simulations of the edge states in QPCs based on GaAs/AlGaAs heterostructures, which can be used to predict conductance and edge state velocities. The conduction band profile is obtained by solving decoupled effective mass Schrödinger and Poisson equations self-consistently on a finite element mesh of a realistic geometry. In the integer quantum Hall regime, we obtain compressible and incompressible regions near the edges. We then use the recursive Green's function algorithm to solve the Schrödinger equation with open boundary conditions for calculating transmission and local current density in the QPCs. Impurities are treated by inserting bumps in the potential with a Gaussian distribution. We compare observables with experiments for fitting some adjustable parameters. The authors would like to thank Purdue Research Foundation and Purdue Center for Topological Materials for their support.

  13. A method based on moving least squares for XRII image distortion correction

    International Nuclear Information System (INIS)

    Yan Shiju; Wang Chengtao; Ye Ming

    2007-01-01

    This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on the methods of moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effect of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points was tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was found to be sensitive neither to pincushion distortion nor to sigmoidal distortion. The sensitivity of the proposed method to local distortion was lower than or comparable with that of the traditional global method. The sensitivity of the proposed method to noise was higher than that of all three traditional methods. Nevertheless, provided the standard deviation of noise was not greater than 0.1 pixels, accuracy of the proposed method is still higher than the traditional methods. The sensitivity of the proposed method to the number of control points was greatly lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out by using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively)
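The moving-least-squares building block can be shown in one dimension: at each evaluation point, fit a low-order polynomial by least squares with weights that decay with distance, then evaluate the local fit there. This 1-D linear version with Gaussian weights is only a sketch of the general idea; the paper works in 2-D with polynomial fitting:

```python
import math

def mls_fit(x_eval, xs, ys, h=1.0):
    """Weighted linear least squares around x_eval with Gaussian weights:
    minimize sum_i w_i * (a + b*x_i - y_i)^2, then return a + b*x_eval."""
    w = [math.exp(-((x_eval - x) / h) ** 2) for x in xs]
    sw   = sum(w)
    swx  = sum(wi * x for wi, x in zip(w, xs))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swy  = sum(wi * y for wi, y in zip(w, ys))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    det = sw * swxx - swx * swx
    a = (swy * swxx - swx * swxy) / det     # Cramer's rule on the 2x2
    b = (sw * swxy - swx * swy) / det       # normal equations
    return a + b * x_eval

# Control points sampling a smooth "distortion field" y = 0.1*x^2
xs = [float(i) for i in range(11)]
ys = [0.1 * x * x for x in xs]
print(round(mls_fit(3.5, xs, ys), 3))
```

Because the weights localize the fit, the method interpolates smoothly between control points (exactly so for locally linear fields), which is what makes MLS attractive for distortion correction at intermediate points.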

  14. Compression stockings significantly improve hemodynamic performance in post-thrombotic syndrome irrespective of class or length.

    Science.gov (United States)

    Lattimer, Christopher R; Azzam, Mustapha; Kalodiki, Evi; Makris, Gregory C; Geroulakos, George

    2013-07-01

    Graduated elastic compression (GEC) stockings have been demonstrated to reduce the morbidity associated with post-thrombotic syndrome. The ideal length or compression strength required to achieve this is speculative and related to physician preference and patient compliance. The aim of this study was to evaluate the hemodynamic performance of four different stockings and determine the patient's preference. Thirty-four consecutive patients (40 legs, 34 male) with post-thrombotic syndrome were tested with four different stockings (Mediven plus open toe, Bayreuth, Germany) of their size in random order: class I (18-21 mm Hg) and class II (23-32 mm Hg), below-knee (BK) and above-knee thigh-length (AK). The median age, Venous Clinical Severity Score, Venous Segmental Disease Score, and Villalta scale were 62 years (range, 31-81 years), 8 (range, 1-21), 5 (range, 2-10), and 10 (range, 2-22), respectively. The clinical class (C of the CEAP C0-6, Es, As,d,p, Pr,o classification) was C0 = 2, C2 = 1, C3 = 3, C4a = 12, C4b = 7, C5 = 12, C6 = 3. Obstruction and reflux was observed on duplex in 47.5% of legs, with deep venous reflux alone in 45%. Air plethysmography was used to measure the venous filling index (VFI), venous volume, and time to fill 90% of the venous volume. Direct pressure measurements were obtained while lying and standing using the PicoPress device (Microlab Elettronica, Nicolò, Italy). The pressure sensor was placed underneath the test stocking 5 cm above and 2 cm posterior to the medial malleolus. At the end of the study session, patients stated their preferred stocking based on comfort. The VFI, venous volume, and time to fill 90% of the venous volume improved significantly with all types of stocking versus no compression. In class I, the VFI (mL/s) improved from a median of 4.9 (range, 1.7-16.3) without compression to 3.7 (range, 0-14) BK (24.5%) and 3.6 (range, 0.6-14.5) AK (26.5%). With class II, the corresponding improvement was to 4.0 (range, 0.3-16.2) BK (18.8%) and 3.7 (range, 0.5-14.2) AK (24

  15. A 172 μW Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end to perform feature extraction to estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed through nonuniformly subsampling the PPG signal, while feature extraction is performed using least square spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of relevant information for accurate HR estimation.
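The Lomb-Scargle step translates directly into code: the periodogram handles nonuniform sample times natively, so HR can be read off the subsampled data without reconstruction. The toy signal, sampling pattern, and grid below are illustrative only; the ASIC's fixed-point implementation differs:

```python
import math

def lomb_scargle(t, x, freqs):
    """Classic Lomb-Scargle periodogram for nonuniformly sampled data."""
    xm = sum(x) / len(x)
    xc = [v - xm for v in x]                      # mean-subtracted signal
    power = []
    for f in freqs:
        w = 2 * math.pi * f
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        ct = [math.cos(w * (ti - tau)) for ti in t]
        st = [math.sin(w * (ti - tau)) for ti in t]
        num_c = sum(a * b for a, b in zip(xc, ct)) ** 2
        num_s = sum(a * b for a, b in zip(xc, st)) ** 2
        power.append(0.5 * (num_c / sum(c * c for c in ct)
                            + num_s / sum(s * s for s in st)))
    return power

# 1.5 Hz (90 bpm) tone, nonuniformly subsampled 5x from a 100 Hz grid
keep = [i for i in range(400) if i % 10 in (0, 3)]
t = [i / 100.0 for i in keep]
x = [math.sin(2 * math.pi * 1.5 * ti) for ti in t]
freqs = [0.5 + 0.01 * k for k in range(251)]      # scan 0.5 .. 3.0 Hz
power = lomb_scargle(t, x, freqs)
hr_hz = freqs[max(range(len(freqs)), key=lambda k: power[k])]
```

The periodogram peak sits at the pulse frequency, so the estimated rate is hr_hz * 60 beats per minute.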

  16. Effect of magnet sorting using a simple resonance cancellation method on the RMS orbit distortion at the APS injector synchrotron

    International Nuclear Information System (INIS)

    Lopez, F.; Koul, R.; Mills, F.E.

    1993-01-01

    The Advanced Photon Source injector synchrotron is a 7-GeV positron machine with a standard alternating gradient lattice. The calculated effect of dipole magnet strength errors on the orbit distortion, simulated by Monte Carlo, was reduced by sorting pairs of magnets having the closest simulated measured strengths to reduce the driving term of the integer resonance nearest the operating point. This method resulted in a factor of four average reduction in the rms orbit distortion when all 68 magnets were sorted at once. The simulated effect of magnet measurement experimental resolution was found to limit the actual improvement. The β-beat factors were similarly reduced by sorting the quadrupole magnets according to their gradients

  17. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-07-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.

  18. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.; Al-Naffouri, Tareq Y.

    2012-01-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.

  19. Neomysis integer: a review

    OpenAIRE

    Fockedey, N.

    2005-01-01

    The present chapter aims to be a literature review on the brackish water mysid Neomysis integer, with focus on its feeding ecology, life history aspects, behaviour, physiology, biochemical composition, bioenergetics and ecotoxicology. All records on the species, available from literature, are listed as an appendix. The review aims to identify the state-of-the-art and the gaps in our knowledge on the species. Abundant information is available on the distribution patterns of Neomysis integer in...

  20. Applying image quality in cell phone cameras: lens distortion

    Science.gov (United States)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes, up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, and therefore a radial mapping/modeling cannot be used in this case.

  1. Current prescribing patterns of elastic compression stockings post-deep venous thrombosis.

    LENUS (Irish Health Repository)

    Roche-Nagle, G

    2012-02-01

    OBJECTIVES: Post-thrombotic syndrome (PTS) is a complication of deep vein thrombosis (DVT) characterized by chronic pain, swelling and heaviness, and may result in ulceration. Elastic compression stockings (ECS) worn daily after DVT appear to reduce the incidence and severity of PTS. The aims of our study were to investigate practices and perceptions of DVT patients and physicians regarding the use of ECS after DVT. METHODS: Two surveys were conducted. The first was sent to 225 staff and trainee clinicians and the second was administered to 150 DVT patients. RESULTS: The results demonstrated that the majority of senior staff (75%) believed that ECS were effective in preventing PTS and in managing venous symptoms. However, this was in contrast with junior trainees (21%) (P < 0.05). This resulted in only 63% of patients being prescribed ECS post-DVT. There was a lack of consensus as regards the optimal timing of initiation of ECS, duration of therapy and compression strength. Nearly all DVT patients who were prescribed ECS purchased them, 74% wore them daily, and most (61%) reported that ECS relieved swelling and symptoms. Physicians correctly predicted the main reasons for non-compliance, but misjudged the scale of patient compliance with ECS. CONCLUSIONS: Our findings suggest that there is a lack of consensus among doctors regarding ECS use after DVT and widespread education regarding the latest evidence of the benefit of ECS after DVT.

  2. Metronome improves compression and ventilation rates during CPR on a manikin in a randomized trial.

    Science.gov (United States)

    Kern, Karl B; Stickney, Ronald E; Gallison, Leanne; Smith, Robert E

    2010-02-01

    We hypothesized that a unique tock-and-voice metronome could prevent both suboptimal chest compression rates and hyperventilation. A prospective, randomized, parallel design study involving 34 pairs of paid firefighter/emergency medical technicians (EMTs) performing two-rescuer CPR using a Laerdal SkillReporter Resusci Anne manikin with and without metronome guidance was performed. Each CPR session consisted of 2 min of 30:2 CPR with an unsecured airway, then 4 min of CPR with a secured airway (continuous compressions at 100 min(-1) with 8-10 ventilations/min), repeated after the rescuers switched roles. The metronome provided "tock" prompts for compressions, transition prompts between compressions and ventilations, and a spoken "ventilate" prompt. During CPR with a bag/valve/mask, the target compression rate of 90-110 min(-1) was achieved in 5/34 CPR sessions (15%) for the control group and 34/34 sessions (100%) for the metronome group, a significant difference. Ventilation rates did not differ between the metronome and control groups during CPR with a bag/valve/mask. During CPR with a bag/endotracheal tube, the target of both a compression rate of 90-110 min(-1) and a ventilation rate of 8-11 min(-1) was achieved in 3/34 CPR sessions (9%) for the control group and 33/34 sessions (97%) for the metronome group, again a significant difference. Metronome use with the secured airway scenario significantly decreased the incidence of over-ventilation (11/34 EMT pairs vs. 0/34 EMT pairs). The metronome was effective at directing correct chest compression and ventilation rates both before and after intubation. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  3. Mixed Integer Programming and Heuristic Scheduling for Space Communication

    Science.gov (United States)

    Lee, Charles H.; Cheung, Kar-Ming

    2013-01-01

    Optimal planning and scheduling for a communication network was created, where the nodes within the network communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, which was then solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately an order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem, whose solution is possible.
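The penalty idea, turning an integrality constraint into a smooth term so continuous methods apply, can be shown on a one-variable toy problem. The sin² penalty below is an assumed textbook choice (it vanishes exactly at integers), not necessarily the paper's special penalty function:

```python
import math

def penalized(x, mu):
    """Continuous objective: a distance cost plus an integrality penalty
    mu * sin^2(pi*x), which is zero exactly when x is an integer."""
    return (x - 2.3) ** 2 + mu * math.sin(math.pi * x) ** 2

def grid_argmin(f, lo, hi, steps=10000):
    """Crude stand-in for the heuristic optimizer: dense grid search."""
    best_x, best_v = lo, f(lo)
    for k in range(1, steps + 1):
        x = lo + (hi - lo) * k / steps
        v = f(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

x_relaxed = grid_argmin(lambda x: penalized(x, 0.0), 0.0, 5.0)   # no penalty
x_penalty = grid_argmin(lambda x: penalized(x, 50.0), 0.0, 5.0)  # heavy penalty
```

Without the penalty the continuous minimizer sits at the fractional value 2.3; with a heavy penalty the minimizer is driven to (essentially) the best integer, 2, while the objective stays continuous and amenable to heuristic search.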

  4. Distortion

    OpenAIRE

    Schultz, Isabella Odorico; Zmylon, Nanna Nielsen; Britze, Juliane

    2014-01-01

    This paper investigates the audience’s perception of the music festival Distortion. By conducting a field-study focusing on the subject’s perception of Distortion, their perception of the Distortion-attendants, and their perception on the promotion of Distortion, the paper will relate the audience perception to the promotion of the event. Using the group’s own research on the promotion of Distortion, the paper points out both the consistencies and the inconsistencies between the promotion and...

  5. Subsampling-based compression and flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank

    2016-01-19

    As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective and that, among selection techniques, our informed selection provides the most accurate results, followed by multi-resolution selection, with the worst accuracy coming from random subsamples.
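
    A minimal numpy-only sketch of the random-subsampling baseline described above, with nearest-neighbor reconstruction. The grid size, sample budget, and synthetic field are arbitrary choices, not the paper's data:

    ```python
    import numpy as np

    # Keep a small budget of randomly chosen samples from a 2D scalar field
    # and reconstruct every other cell from its nearest retained sample.
    n = 64
    ys, xs = np.mgrid[0:n, 0:n]
    field = np.sin(xs / 7.0) * np.cos(ys / 5.0)   # stand-in "velocity" component

    rng = np.random.default_rng(0)
    budget = n * n // 16                          # keep 1/16 of the samples
    keep = rng.choice(n * n, size=budget, replace=False)

    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    # Brute-force nearest retained sample for every grid cell.
    d2 = ((pts[:, None, :] - pts[keep][None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=1)
    recon = field.ravel()[keep][nearest].reshape(n, n)

    rmse = float(np.sqrt(np.mean((recon - field) ** 2)))
    print(f"kept {budget}/{n*n} samples, RMSE = {rmse:.4f}")
    ```

    The paper's informed and adaptive strategies differ only in how `keep` is chosen; the reconstruction-and-error loop stays the same.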

  6. Behavior of quenched and tempered steels under high strain rate compression loading

    International Nuclear Information System (INIS)

    Meyer, L.W.; Seifert, K.; Abdel-Malek, S.

    1997-01-01

    Two quenched and tempered steels were tested under compression loading at strain rates of ε̇ = 2×10² s⁻¹ and ε̇ = 2×10³ s⁻¹. By applying thermal activation theory, the flow stress at very high strain rates of 10⁵ to 10⁶ s⁻¹ is derived from low-temperature and high-strain-rate tests. The dynamic true stress-true strain behaviour shows that stress increases with increasing strain up to a maximum and then decreases. Because the process is adiabatic under dynamic loading, the maximum flow stress occurs at a lower strain if the strain rate is increased. Considering strain hardening, strain rate hardening and strain softening, a constitutive equation with different additive terms is successfully used to describe the behaviour of the material under dynamic compression loading. Results are compared with other models of constitutive equations. (orig.)
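
    The record names the additive terms but not their functional forms; schematically, the decomposition described is of the kind below (the specific term shapes are an assumption, not the authors' equation):

    ```latex
    \sigma(\varepsilon,\dot{\varepsilon},T)
      = \sigma_0
      + \underbrace{\Delta\sigma_h(\varepsilon)}_{\text{strain hardening}}
      + \underbrace{\Delta\sigma_r(\dot{\varepsilon})}_{\text{strain rate hardening}}
      - \underbrace{\Delta\sigma_s(\varepsilon,T)}_{\text{adiabatic strain softening}}
    ```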

  7. An Efficient Integer Coding and Computing Method for Multiscale Time Segment

    Directory of Open Access Journals (Sweden)

    TONG Xiaochong

    2016-12-01

    Full Text Available This article focuses on the problems and status of current time-segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach uses the tree structure and size ordering that arise naturally among integers to reflect the relationships among multi-scale time segments: order, inclusion/containment, intersection, etc., and thereby achieves a unified integer-coding scheme for multi-scale time. On this foundation, the research also studies computing methods for the time relationships of MTSIC, to support efficient calculation and querying based on time segments, and gives a preliminary discussion of the application and prospects of MTSIC. Tests indicate that the implementation of MTSIC is convenient and reliable, that conversion between it and traditional representations is straightforward, and that it achieves very high efficiency in querying and calculation.
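
    The record does not spell out the coding rule. One simple scheme consistent with the description (a tree structure and size ordering among integers, with containment tests on dyadic time segments) is heap-style numbering; the functions below are my own illustration, not the authors' MTSIC:

    ```python
    def encode(level: int, index: int) -> int:
        """Code for the index-th segment at a given dyadic scale.

        Level 0 is the whole time span; level l splits it into 2**l equal
        segments. The codes form an implicit binary tree (heap numbering),
        so coarser segments get smaller integers.
        """
        assert 0 <= index < 2 ** level
        return 2 ** level + index

    def contains(coarse: int, fine: int) -> bool:
        """True if the segment coded `coarse` contains the segment coded `fine`."""
        while fine > coarse:
            fine //= 2          # move to the parent segment
        return fine == coarse

    root = encode(0, 0)                              # the whole span, code 1
    assert contains(root, encode(3, 5))              # every segment is inside the span
    assert contains(encode(1, 1), encode(3, 5))      # [1/2,1) contains [5/8,6/8)
    assert not contains(encode(1, 0), encode(3, 5))  # [0,1/2) does not
    ```

    With this numbering, containment is a walk up the integer tree, so segment queries reduce to cheap integer arithmetic, which is the kind of efficiency the record claims.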

  8. Binary Positive Semidefinite Matrices and Associated Integer Polytopes

    DEFF Research Database (Denmark)

    Letchford, Adam N.; Sørensen, Michael Malmros

    2012-01-01

    We consider the positive semidefinite (psd) matrices with binary entries, along with the corresponding integer polytopes. We begin by establishing some basic properties of these matrices and polytopes. Then, we show that several families of integer polytopes in the literature-the cut, boolean qua...

  9. Neutrosophic Integer Programming Problem

    Directory of Open Access Journals (Sweden)

    Mai Mohamed

    2017-02-01

    Full Text Available In this paper, we introduce integer programming in a neutrosophic environment, by considering the coefficients of the problem as triangular neutrosophic numbers. The degrees of acceptance, indeterminacy and rejection of the objectives are considered simultaneously.

  10. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    Science.gov (United States)

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 μA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
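
    The paper's algorithms are not reproduced here; the following is a generic sketch of the near-lossless idea the record describes: predict each sample from the previous one, quantize the prediction residual with a guaranteed per-sample error bound delta, and note that delta = 0 degenerates to lossless coding. All names and parameters are illustrative:

    ```python
    import numpy as np

    def near_lossless_encode(x: np.ndarray, delta: int) -> np.ndarray:
        """DPCM-style residual quantization with max per-sample error <= delta."""
        step = 2 * delta + 1
        codes = np.empty_like(x)
        pred = 0
        for i, s in enumerate(x):
            r = int(s) - pred
            q = (r + delta) // step if r >= 0 else -((-r + delta) // step)
            codes[i] = q
            pred += q * step          # encoder tracks the decoder's reconstruction
        return codes

    def near_lossless_decode(codes: np.ndarray, delta: int) -> np.ndarray:
        step = 2 * delta + 1
        return np.cumsum(codes * step)

    rng = np.random.default_rng(1)
    x = np.cumsum(rng.integers(-40, 40, size=1000))   # random-walk stand-in for EEG
    for delta in (0, 2, 8):
        y = near_lossless_decode(near_lossless_encode(x, delta), delta)
        assert np.max(np.abs(x - y)) <= delta          # the guaranteed error bound
    ```

    The small quantized residuals are what a subsequent entropy coder would compress; larger delta shrinks them further, trading guaranteed distortion for throughput, as in the record.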

  11. Spoofing cyber attack detection in probe-based traffic monitoring systems using mixed integer linear programming

    KAUST Repository

    Canepa, Edward S.

    2013-01-01

    Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for some decision variable. We use this fact to pose the problem of detecting spoofing cyber attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © 2013 IEEE.

  12. Spoofing cyber attack detection in probe-based traffic monitoring systems using mixed integer linear programming

    KAUST Repository

    Canepa, Edward S.

    2013-09-01

    Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data generated by multiple sensors of different types, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for a specific decision variable. We use this fact to pose the problem of detecting spoofing cyber attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © American Institute of Mathematical Sciences.
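
    The detection principle — consistent data leaves the mixed integer linear program feasible, spoofed data makes it infeasible — can be shown with a deliberately simplified stand-in. This is not the LWR-based constraint set from the paper: the density bounds, tolerance, and congestion indicator are invented, and SciPy >= 1.9 is assumed for `scipy.optimize.milp`:

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    def reports_consistent(reports, tol=5.0):
        """Feasibility test: is there a density x in [0, 100] (with a binary
        congestion indicator z) within `tol` of every probe report?"""
        # Variables: [x, z]; z binary, x <= 50 + 50*z ties z to the regime.
        rows, lb, ub = [], [], []
        for r in reports:
            rows.append([1.0, 0.0]); lb.append(r - tol); ub.append(r + tol)
        rows.append([1.0, -50.0]); lb.append(-np.inf); ub.append(50.0)  # x - 50z <= 50
        res = milp(c=np.zeros(2),
                   constraints=LinearConstraint(np.array(rows), lb, ub),
                   integrality=np.array([0, 1]),
                   bounds=Bounds([0.0, 0.0], [100.0, 1.0]))
        return res.status == 0   # 0 = feasible/optimal, 2 = infeasible

    assert reports_consistent([30.0, 32.0])        # honest probes agree
    assert not reports_consistent([30.0, 90.0])    # an injected report breaks feasibility
    ```

    In the paper the inequalities come from the conservation-law dynamics rather than a fixed tolerance, but the detector has the same shape: solve the feasibility program and flag infeasibility as spoofing.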

  13. Effect of loading rate on the compressive mechanics of the immature baboon cervical spine.

    Science.gov (United States)

    Elias, Paul Z; Nuckley, David J; Ching, Randal P

    2006-02-01

    Thirty-four cervical spine segments were harvested from 12 juvenile male baboons and compressed to failure at displacement rates of 5, 50, 500, or 5000 mm/s. Compressive stiffness, failure load, and failure displacement were measured for comparison across loading rate groups. Stiffness showed a significant concomitant increase with loading rate, increasing by 62% between rates of 5 and 5000 mm/s. Failure load also demonstrated an increasing relationship with loading rate, while displacement at failure showed no rate dependence. These data may help in the development of improved pediatric automotive safety standards and more biofidelic physical and computational models.

  14. Determining on-fault earthquake magnitude distributions from integer programming

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2018-01-01

    Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10⁶ variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
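
    A heavily scaled-down sketch of the assignment structure described above: binary variables place synthetic earthquakes on faults while each fault's summed slip stays within explicit lower/upper bounds. All numbers are invented, and `scipy.optimize.milp` (SciPy >= 1.9) stands in for the general solver the authors used:

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy stand-in: 3 synthetic earthquakes with slip contributions
    # (mm/yr-equivalent) and 2 faults with bounded target slip rates.
    slip = np.array([2.0, 3.0, 5.0])
    fault_lb = [4.0, 2.0]
    fault_ub = [6.0, 3.0]
    n_eq, n_f = len(slip), len(fault_lb)

    # Decision vector x[e*n_f + f] = 1 if earthquake e is placed on fault f.
    c = -np.ones(n_eq * n_f)                 # maximize the number of placed quakes
    rows, lb, ub = [], [], []
    for e in range(n_eq):                    # each quake used at most once
        row = np.zeros(n_eq * n_f); row[e * n_f:(e + 1) * n_f] = 1.0
        rows.append(row); lb.append(0.0); ub.append(1.0)
    for f in range(n_f):                     # per-fault slip-rate bounds
        row = np.zeros(n_eq * n_f)
        for e in range(n_eq):
            row[e * n_f + f] = slip[e]
        rows.append(row); lb.append(fault_lb[f]); ub.append(fault_ub[f])

    res = milp(c=c,
               constraints=LinearConstraint(np.array(rows), lb, ub),
               integrality=np.ones(n_eq * n_f),
               bounds=Bounds(0.0, 1.0))
    assert res.status == 0
    print("quakes placed:", round(-res.fun))   # 2 for this instance
    ```

    The real problem minimizes slip-rate misfit over >10⁶ such binaries; the toy keeps only the bounding constraints to show how the decision vector is laid out.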

  15. A fixed recourse integer programming approach towards a ...

    African Journals Online (AJOL)

    Regardless of the success that linear programming and integer linear programming have had in applications in engineering, business and economics, one has to challenge the assumed reality that these optimization models represent. In this paper the certainty assumptions of an integer linear program application are ...

  16. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  17. A fuzzy mixed integer programming for marketing planning

    Directory of Open Access Journals (Sweden)

    Abolfazl Danaei

    2014-03-01

    Full Text Available One of the primary concerns in marketing a product is to find an appropriate channel to reach target customers. Recent advances in information technology have created new products with tremendous opportunities. This paper presents a mixed integer programming technique based on McCarthy's 4Ps to locate suitable billboards for marketing the newly introduced iPhone product. The paper considers two types of information, age and income, and tries to find the best places such that potential consumers aged 25-35 with high income see the billboards while the cost of advertisement is minimized. The model is formulated in terms of mixed integer programming and has been applied to potential customers who live in the city of Tabriz, Iran. Using a typical software package, the model detects appropriate places in various parts of the city.

  18. Hemiparesis caused by vertebral artery compression of the medulla oblongata

    International Nuclear Information System (INIS)

    Kim, Phyo; Takahashi, Hiroshi; Shimizu, Hiroyuki; Yokochi, Masayuki; Ishijima, Buichi

    1984-01-01

    A case is reported of a patient with progressive left hemiparesis due to the vascular compression of the medulla oblongata. Metrizamide CT cisternography revealed the left vertebral artery to be compressing and distorting the left lateral surface of the medulla. This compression was relieved surgically, and the symptoms improved postoperatively. Neurological and symptomatic considerations are discussed in relation to the topographical anatomy of the lateral corticospinal tract. (author)

  19. A Base Integer Programming Model and Benchmark Suite for Liner-Shipping Network Design

    DEFF Research Database (Denmark)

    Brouer, Berit Dangaard; Alvarez, Fernando; Plum, Christian Edinger Munk

    2014-01-01

    The liner-shipping network design problem is to create a set of nonsimple cyclic sailing routes for a designated fleet of container vessels that jointly transports multiple commodities. The objective is to maximize the revenue of cargo transport while minimizing the costs of operation... The potential for making cost-effective and energy-efficient liner-shipping networks using operations research (OR) is huge and neglected. The implementation of logistic planning tools based upon OR has enhanced performance of airlines, railways, and general transportation companies, but within the field... sources of liner shipping for OR researchers in general. We describe and analyze the liner-shipping domain applied to network design and present a rich integer programming model based on services that constitute the fixed schedule of a liner shipping company. We prove the liner-shipping network design...

  20. Integers annual volume 2013

    CERN Document Server

    Landman, Bruce

    2014-01-01

    "Integers" is a refereed online journal devoted to research in the area of combinatorial number theory. It publishes original research articles in combinatorics and number theory. This work presents all papers of the 2013 volume in book form.

  1. A mixed integer linear program for an integrated fishery | Hasan ...

    African Journals Online (AJOL)

    ... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...

  2. A General Approach for Orthogonal 4-Tap Integer Multiwavelet Transforms

    Directory of Open Access Journals (Sweden)

    Mingli Jing

    2010-01-01

    Full Text Available An algorithm for orthogonal 4-tap integer multiwavelet transforms is proposed. We compute the singular value decomposition (SVD) of the block recursive matrices of the transform matrix, which can then be rewritten as a product of two block diagonal matrices and a permutation matrix. Furthermore, we factorize the blocks of the block diagonal matrices into triangular elementary reversible matrices (TERMs), which map integers to integers by rounding arithmetic. The cost of factorizing a block matrix into TERMs does not increase with the dimension of the transform matrix, and the proposed algorithm works in place, without allocating auxiliary memory. Examples of integer multiwavelet transforms using DGHM and CL are given, verifying that the proposed algorithm is executable and outperforms the existing algorithm for orthogonal 4-tap integer multiwavelet transforms.
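
    TERM factorizations and the lifting scheme share the same trick: each elementary (triangular) step adds a rounded function of some coordinates to another coordinate, so the step is exactly invertible over the integers. A minimal Haar-style example of that trick (not the DGHM/CL construction from the paper):

    ```python
    def forward(x0: int, x1: int):
        """Integer-to-integer Haar-style lifting: two triangular steps."""
        d = x1 - x0            # first elementary step (difference)
        s = x0 + (d >> 1)      # second step adds a rounded prediction
        return s, d

    def inverse(s: int, d: int):
        """Undo the steps in reverse order; the rounding cancels exactly."""
        x0 = s - (d >> 1)
        x1 = x0 + d
        return x0, x1

    # Exact round-trip over the integers, despite the rounding inside.
    for x0 in range(-8, 8):
        for x1 in range(-8, 8):
            assert inverse(*forward(x0, x1)) == (x0, x1)
    ```

    Invertibility holds because each step changes one coordinate by a function of the others only; subtracting the same rounded quantity on the way back restores the input exactly.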

  3. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    Full Text Available We advocate a facial image compression technique in the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to the existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over the solutions where side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  4. RSM 1.0 - A RESUPPLY SCHEDULER USING INTEGER OPTIMIZATION

    Science.gov (United States)

    Viterna, L. A.

    1994-01-01

    RSM, Resupply Scheduling Modeler, is a fully menu-driven program that uses integer programming techniques to determine an optimum schedule for replacing components on or before the end of a fixed replacement period. Although written to analyze the electrical power system on Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. RSM is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more computationally intensive, integer programming was required for accuracy when modeling systems with small quantities of components. Input values for component life can be real numbers; RSM converts them to integers by dividing the lifetime by the period duration, then reducing the result to the next lowest integer. For each component, there is a set of constraints that ensures it is replaced before its lifetime expires. RSM includes user-defined constraints such as transportation mass and volume limits, as well as component life, available repair crew time and assembly sequences. A weighting factor allows the program to minimize factors such as cost. The program then performs an iterative analysis, which is displayed during the processing. A message gives the first period in which resources are exceeded on each iteration. If the scheduling problem is infeasible, the final message will also indicate the first period in which resources were exceeded. RSM is written in APL2 for IBM PC series computers and compatibles. A stand-alone executable version of RSM is provided; however, this is a "packed" version of RSM which can only utilize the memory within the 640K DOS limit. This executable requires at least 640K of memory and DOS 3.1 or higher. Source code for an APL2/PC workspace version is also provided. This version of RSM can make full use of any

  5. Rate-independent dissipation and loading direction effects in compressed carbon nanotube arrays

    International Nuclear Information System (INIS)

    Raney, J R; Fraternali, F; Daraio, C

    2013-01-01

    Arrays of nominally-aligned carbon nanotubes (CNTs) under compression deform locally via buckling, exhibit a foam-like, dissipative response, and can often recover most of their original height. We synthesize millimeter-scale CNT arrays and report the results of compression experiments at different strain rates, from 10⁻⁴ to 10⁻¹ s⁻¹, and for multiple compressive cycles to different strains. We observe that the stress–strain response proceeds independently of the strain rate for all tests, but that it is highly dependent on loading history. Additionally, we examine the effect of loading direction on the mechanical response of the system. The mechanical behavior is modeled using a multiscale series of bistable springs. This model captures the rate independence of the constitutive response, the local deformation, and the history-dependent effects. We develop here a macroscopic formulation of the model to represent a continuum limit of the mesoscale elements developed previously. Utilizing the model and our experimental observations we discuss various possible physical mechanisms contributing to the system's dissipative response. (paper)

  6. [Belated diagnosis of medullar compression in a case of post-polio syndrome].

    Science.gov (United States)

    Boulay, C; Hamonet, C; Galaup, N; Djindjian, M; Montagne, A; Vivant, R

    2001-03-01

    In his practice, the physiatrist sees individuals with sequelae of old poliomyelitis. Some of them have unusual fatigue, muscular pain and weakness. The hypothesis of an evolving neuro-biological mechanism, suggested by a few authors, has not actually been demonstrated. More probably, the lesional and functional changes observed, with their associated disability, are consequences of aging and of decreasing physical activity. We report a case of spinal cord compression by an intramedullary tumor, associated with a post-polio syndrome.

  7. Process induced residual stresses and distortions in pultrusion

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Nielsen, Michael Wenani

    2013-01-01

    In the present study, a coupled 3D transient Eulerian thermo-chemical analysis together with a 2D plane strain Lagrangian mechanical analysis of the pultrusion process, which has not been considered until now, is carried out. The development of the process induced residual stresses and strains...... together with the distortions are predicted during the pultrusion in which the cure hardening instantaneous linear elastic (CHILE) approach is implemented. At the end of the process, tension stresses prevail for the inner region of the composite since the curing rate is higher here as compared to the outer...... regions where compression stresses are obtained. The separation between the heating die and the part due to shrinkage is also investigated using a mechanical contact formulation at the die-part interface. The proposed approach is found to be efficient and fast for the calculation of the residual stresses...

  8. Spreading Sequences Generated Using Asymmetrical Integer-Number Maps

    Directory of Open Access Journals (Sweden)

    V. Sebesta

    2007-09-01

    Full Text Available Chaotic sequences produced by piecewise linear maps can be transformed to binary sequences. For certain shapes of the maps, the binary sequences are optimal for asynchronous DS/CDMA systems. This paper is devoted to one-to-one integer-number maps derived from suitable asymmetrical piecewise linear maps. Such maps give periodic integer-number sequences, which can be transformed to binary sequences. The binary sequences produced via the proposed modified integer-number maps are perfectly balanced and have good autocorrelation and crosscorrelation properties. The number of different binary sequences is sizable. The sequences are suitable as spreading sequences in DS/CDMA systems.
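
    The paper's asymmetrical piecewise linear maps are not reproduced here. As a stand-in, the sketch below uses a full-period LCG as the one-to-one integer map: any such permutation map visits each state exactly once per period, which is exactly the property that makes the derived binary sequence perfectly balanced:

    ```python
    import numpy as np

    # A full-period LCG (a=5, c=3 satisfy the Hull-Dobell conditions for
    # N=256) serves as a one-to-one integer-number map for illustration.
    N, a, c = 256, 5, 3
    x = 1
    states = []
    for _ in range(N):
        states.append(x)
        x = (a * x + c) % N

    # Threshold the integer sequence to a +/-1 chip sequence.
    seq = np.where(np.array(states) < N // 2, 1, -1)

    assert len(set(states)) == N        # one-to-one: a permutation of 0..N-1
    assert seq.sum() == 0               # perfectly balanced over one period
    ```

    The correlation properties claimed in the record depend on the specific asymmetrical map, which this toy does not reproduce; only the balance argument carries over.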

  9. Integer-valued time series

    NARCIS (Netherlands)

    van den Akker, R.

    2007-01-01

    This thesis addresses statistical problems in econometrics. The first part contributes statistical methodology for nonnegative integer-valued time series. The second part of this thesis discusses semiparametric estimation in copula models and develops semiparametric lower bounds for a large class of

  10. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements, because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds because the distortion comes with an energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD from extracted handcrafted features. The other JNQD model, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, ours is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  11. Integer programming techniques for educational timetabling

    DEFF Research Database (Denmark)

    Fonseca, George H.G.; Santos, Haroldo G.; Carrano, Eduardo G.

    2017-01-01

    in recent studies in the field. This work presents new cuts and reformulations for the existing integer programming model for XHSTT. The proposed cuts greatly improved the linear relaxation of the formulation, leading to an average gap reduction of 32%. Applied to the XHSTT-2014 instance set, the alternative... formulation provided four new best known lower bounds and, used in a matheuristic framework, improved eleven best known solutions. The computational experiments also show that the resulting integer programming models from the proposed formulation are more effectively solved for most of the instances.

  12. Does accelerometer feedback on high-quality chest compression improve survival rate? An in-hospital cardiac arrest simulation.

    Science.gov (United States)

    Jung, Min Hee; Oh, Je Hyeok; Kim, Chan Woong; Kim, Sung Eun; Lee, Dong Hoon; Chang, Wen Joen

    2015-08-01

    We investigated whether visual feedback from an accelerometer device facilitated high-quality chest compressions during an in-hospital cardiac arrest simulation using a manikin. Thirty health care providers participated in an in-hospital cardiac arrest simulation with 1 minute of continuous chest compressions. Chest compressions were performed on a manikin lying on a bed according to visual feedback from an accelerometer feedback device. The manikin and accelerometer recorded chest compression data simultaneously. The simulated patient was deemed to have survived when the chest compression data satisfied all of the preset high-quality chest compression criteria (depth ≥51 mm, rate >100 per minute, and ≥95% full recoil). Survival rates were calculated from the feedback device and manikin data. The survival rate according to the feedback device data was 80%; however, the manikin data indicated a significantly lower survival rate (46.7%; P = .015). The difference between the accelerometer and manikin survival rates was not significant for participants with a body mass index greater than or equal to 20 kg/m² (93.3 vs 73.3%, respectively; P = .330); however, the difference in survival rate was significant in participants with body mass index less than 20 kg/m² (66.7 vs 20.0%, respectively; P = .025). The use of accelerometer feedback devices to facilitate high-quality chest compression may not be appropriate for lightweight rescuers because of the potential for compression depth overestimation. Clinical Research Information Service (KCT0001449). Copyright © 2015 Elsevier Inc. All rights reserved.
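
    The preset criteria translate directly into code. The checker below is my own illustration of the study's survival rule, not software from the study, and it adopts one literal reading (every compression must reach the depth threshold; the record does not say whether depth is judged per compression or on average):

    ```python
    def survived(depths_mm, rate_per_min, full_recoil_fraction):
        """Apply the study's high-quality chest compression criteria:
        depth >= 51 mm, rate > 100 per minute, >= 95% full recoil."""
        return (min(depths_mm) >= 51
                and rate_per_min > 100
                and full_recoil_fraction >= 0.95)

    assert survived([53, 55, 52], rate_per_min=110, full_recoil_fraction=0.97)
    assert not survived([49, 55, 52], rate_per_min=110, full_recoil_fraction=0.97)
    ```

    Applying such a rule separately to the accelerometer data and the manikin data is how the two survival rates in the record would diverge when the accelerometer overestimates depth.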

  13. Effects of the Mach number on the evolution of vortex-surface fields in compressible Taylor-Green flows

    Science.gov (United States)

    Peng, Naifu; Yang, Yue

    2018-01-01

    We investigate the evolution of vortex-surface fields (VSFs) in compressible Taylor-Green flows at Mach numbers (Ma) ranging from 0.5 to 2.0 using direct numerical simulation. The formulation of VSFs in incompressible flows is extended to compressible flows, and a mass-based renormalization of VSFs is used to facilitate characterizing the evolution of a particular vortex surface. The effects of the Mach number on the VSF evolution are different in three stages. In the early stage, the jumps of the compressive velocity component near shocklets generate sinks to contract surrounding vortex surfaces, which shrink vortex volume and distort vortex surfaces. The subsequent reconnection of vortex surfaces, quantified by the minimal distance between approaching vortex surfaces and the exchange of vorticity fluxes, occurs earlier and has a higher reconnection degree for larger Ma owing to the dilatational dissipation and shocklet-induced reconnection of vortex lines. In the late stage, the positive dissipation rate and negative pressure work accelerate the loss of kinetic energy and suppress vortex twisting with increasing Ma.

  14. Numerical and semi-analytical modelling of the process induced distortions in pultrusion

    DEFF Research Database (Denmark)

    Baran, Ismet; Carlone, P.; Hattel, Jesper Henri

    2013-01-01

    , the transient distortions are inferred adopting a semi-analytical procedure, i.e. post processing numerical results by means of analytical methods. The predictions of the process induced distortion development using the aforementioned methods are found to be qualitatively close to each other...

  15. Quantum recurrence and integer ratios in neutron resonances

    Energy Technology Data Exchange (ETDEWEB)

    Ohkubo, Makio

    1998-03-01

    Quantum recurrence of the compound nucleus in neutron resonance reactions is described for normal modes that are excited on the compound nucleus simultaneously. From the structure of the recurrence time, integer relations among dominant level spacings are derived. The "base modes" are assumed to be stable combinations of the normal modes, preferably excited in many nuclei. (author)
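
    The integer relations among level spacings can be probed numerically by rational approximation (an illustrative sketch, not the author's procedure; the function name, tolerance and toy spacing values are ours):

```python
from fractions import Fraction

# Approximate each spacing, relative to the smallest one, by a fraction with a
# small denominator; near-integer ratios signal a common underlying unit.
def small_integer_ratios(spacings, max_den=10, tol=0.02):
    base = min(spacings)
    ratios = []
    for s in spacings:
        frac = Fraction(s / base).limit_denominator(max_den)
        ratios.append(frac if abs(float(frac) - s / base) < tol else None)
    return ratios

# Toy spacings in the ratio 2:3:4.
print(small_integer_ratios([100.0, 150.0, 200.0]))  # [Fraction(1, 1), Fraction(3, 2), Fraction(2, 1)]
```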

  16. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, Vasily E., E-mail: tarasov@theory.sinp.msu.ru [Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, Moscow 119991 (Russian Federation)

    2014-08-15

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of the first and second orders for fractional space and non-integer dimensional space are suggested; they are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.
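
    For a function depending only on the radial distance, integration over a space of non-integer dimension D reduces to a single radial integral via the standard dimensional-continuation measure (a textbook formula consistent with the product-measure approach reviewed above, not a quotation from the paper):

```latex
\int d^{D}\mathbf{r}\, f(r)
  \;=\; \frac{2\pi^{D/2}}{\Gamma\!\left(D/2\right)}
        \int_{0}^{\infty} f(r)\, r^{D-1}\, dr ,
\qquad D \in \mathbb{R}_{>0}.
```

    The product-measure construction applies this one-dimensional measure separately along each direction, which is what permits different effective dimensions, and hence anisotropy, along different axes.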

  17. Anisotropic fractal media by vector calculus in non-integer dimensional space

    Science.gov (United States)

    Tarasov, Vasily E.

    2014-08-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of the first and second orders for fractional space and non-integer dimensional space are suggested; they are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  18. Anisotropic fractal media by vector calculus in non-integer dimensional space

    International Nuclear Information System (INIS)

    Tarasov, Vasily E.

    2014-01-01

    A review of different approaches to describe anisotropic fractal media is proposed. In this paper, differentiation and integration in non-integer dimensional and multi-fractional spaces are considered as tools to describe anisotropic fractal materials and media. We suggest a generalization of vector calculus for non-integer dimensional space by using a product measure method. The product of fractional and non-integer dimensional spaces allows us to take into account the anisotropy of fractal media in the framework of continuum models. The integration over non-integer-dimensional spaces is considered. Differential operators of the first and second orders for fractional space and non-integer dimensional space are suggested; they are defined as inverse operations to integration in spaces with non-integer dimensions. A non-integer dimensional space that is a product of spaces with different dimensions allows us to give continuum models for anisotropic media. The Poisson equation for a fractal medium, the Euler-Bernoulli fractal beam, and the Timoshenko beam equations for fractal material are considered as examples of application of the suggested generalization of vector calculus for anisotropic fractal materials and media.

  19. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.
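
    The baseline the bits/base figures are measured against is the naive 2-bit-per-base packing sketched below (a toy illustration with names of our own choosing, not DNABIT Compress itself, which improves on this bound by assigning unique bit codes to exact and reverse repeats):

```python
# Fixed 2-bit codes per base give exactly 2 bits/base; DNABIT Compress reaches
# ~1.58 bits/base by additionally coding repeated fragments.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack_2bit(seq):
    """Pack a DNA string into bytes, 4 bases per byte (2 bits each)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | CODE[base]
        out.append(byte)
    return bytes(out)

print(len(pack_2bit("ACGTACGTACGTACGT")))  # 16 bases -> 4 bytes (2 bits/base)
```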

  20. INTEG INSPEC, Accident Frequencies and Safety Analysis for Nuclear Power Plant

    International Nuclear Information System (INIS)

    Arnett, L.M.

    1976-01-01

    1 - Description of problem or function: These programs analyze the characteristics of a general model developed to represent the safety aspects of an operating nuclear reactor. These characteristics are the frequencies of incidents that are departures from the expected behavior of the reactor. Each incident is assumed to be preceded by a sequence of events starting at some initiating event. At each member in this sequence there may be functions such as safety circuits, and personnel operations that stop the sequence at that member. When mechanical devices fail they are assumed to remain inoperative until repaired. The model accounts for scheduled inspection and maintenance of all equipment in the system. 2 - Method of solution: In INTEG, the discontinuous density function is integrated by the trapezoidal rule from time equals zero to time equals t. INSPEC is based on the simulation of reactor operation as a Markov process. A vector of probabilities is successively multiplied by a transition matrix. 3 - Restrictions on the complexity of the problem: INSPEC is limited to subsystems with no more than 7 safety circuits. The transition matrix can be made up as desired so that any intercorrelations between failures of circuits can be accommodated. In INTEG, failure rates of safety circuits are restricted to independence
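
    The INSPEC solution method (a probability vector successively multiplied by a transition matrix) can be sketched in a few lines; the two-state example and its numbers are our own, not from the code's documentation:

```python
# One Markov step: new_p[j] = sum_i p[i] * T[i][j].
def markov_step(p, T):
    n = len(T[0])
    return [sum(p[i] * T[i][j] for i in range(len(p))) for j in range(n)]

# Toy system: safety circuit "working" / "failed"; it fails with probability
# 0.01 per step and a repair restores it with probability 0.5 per step.
T = [[0.99, 0.01],
     [0.50, 0.50]]
p = [1.0, 0.0]
for _ in range(3):
    p = markov_step(p, T)
print(round(sum(p), 6))  # probabilities still sum to 1
```

    Because each row of T sums to 1, repeated multiplication conserves total probability, which is what makes this a valid simulation of reactor operation as a Markov process.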

  1. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Influences of inflation rate and duration on vasodilatory effect by intermittent pneumatic compression in distant skeletal muscle.

    Science.gov (United States)

    Liu, K; Chen, L E; Seaber, A V; Urbaniak, J R

    1999-05-01

    Previous study has demonstrated that application of intermittent pneumatic compression on legs can cause vasodilation in distant skeletal muscle at the microcirculation level. This study evaluated the influence of inflation rate and peak-pressure duration on the vasodilatory effects of intermittent pneumatic compression. The cremaster muscles of 50 male rats were exposed and divided into five groups of 10 each. A specially designed intermittent pneumatic-compression device was applied in a medial-lateral fashion to both legs of all rats for 60 minutes, with an inflation rate and peak-pressure duration of 0.5 and 5 seconds, respectively, in group A, 5 and 0 seconds in group B, 5 and 5 seconds in group C, 10 and 0 seconds in group D, and 10 and 5 seconds in group E. Diameters of arterial segments were measured in vessels of three size categories (10-20, 21-40, and 41-70 microm) for 120 minutes. The results showed that the greatest increase in diameter was produced by intermittent pneumatic compression with the shortest inflation rate (0.5 seconds). A moderate increase resulted from compression with an inflation rate of 5 seconds, and no effective vasodilation occurred during compression with the longest inflation rate (10 seconds). When the groups with different inflation rates but the same peak-pressure duration were compared, there was a significant difference between any two groups among groups A, C, and E and between groups B and D. When the groups with different peak-pressure durations but the same inflation rate were compared, compression with a peak-pressure duration of 5 seconds caused a generally similar degree of diameter change as did compression without inflation at peak pressure. The findings suggest that inflation rate plays an important role in the modulation of distant microcirculation induced by intermittent pneumatic compression whereas peak-pressure duration does not significantly influence the vasodilatory effects of the compression. 

  3. Artifact reduction of compressed images and video combining adaptive fuzzy filtering and directional anisotropic diffusion

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Forchhammer, Søren; Korhonen, Jari

    2011-01-01

    Fuzzy filtering is one of the recently developed methods for reducing distortion in compressed images and video. In this paper, we combine the powerful anisotropic diffusion equations with fuzzy filtering in order to reduce the impact of artifacts. Based on the directional nature of the blocking and ringing artifacts, we have applied directional anisotropic diffusion. Besides that, the selection of the adaptive threshold parameter for the diffusion coefficient has also improved the performance of the algorithm. Experimental results on JPEG compressed images as well as MJPEG and H.264 compressed videos show improvement in artifact reduction of the proposed algorithm over other directional and spatial fuzzy filters.

  4. [Comparative investigation of compressive resistance of glass-cermet cements used as a core material in post-core systems].

    Science.gov (United States)

    Ersoy, E; Cetiner, S; Koçak, F

    1989-09-01

    In post-core applications, addition to the cast designs restorations that are performed on fabrication posts with restorative materials are being used. To improve the physical properties of glass-ionomer cements that are popular today, glass-cermet cements have been introduced and those materials have been proposed to be an alternative restorative material in post-core applications. In this study, the compressive resistance of Ketac-Silver as a core material was investigated comparatively with amalgam and composite resins.

  5. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
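
    The block-level temporal-coherence cue can be sketched as a Pearson correlation between co-located blocks in adjacent frames (a minimal pure-Python sketch; the flattened-block representation and the example pixel values are our simplifying assumptions, not the paper's implementation):

```python
# Pearson correlation between two equal-length pixel blocks; low correlation
# between co-located blocks in adjacent frames flags a temporal-coherence break.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

block_t = [10, 20, 30, 40]          # flattened block in frame t
block_t1_good = [12, 22, 32, 42]    # same block, slightly brighter, in frame t+1
block_t1_bad = [40, 10, 35, 12]     # motion-compensation error in frame t+1
print(round(pearson(block_t, block_t1_good), 3))  # 1.0 (same gradient)
```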

  6. Efficient Algorithms for gcd and Cubic Residuosity in the Ring of Eisenstein Integers

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present simple and efficient algorithms for computing gcd and cubic residuosity in the ring of Eisenstein integers, Z[ω], i.e. the integers extended with ω, a complex primitive third root of unity. The algorithms are similar and may be seen as generalisations of the binary integer gcd and derived...
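
    The "binary integer gcd" that these Eisenstein-integer algorithms generalise is Stein's algorithm, which uses only shifts, parity tests and subtraction, no division; a standard sketch:

```python
# Stein's binary gcd: strip common factors of two, then repeatedly subtract
# the smaller odd value from the larger and strip factors of two again.
def binary_gcd(a, b):
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:        # factor out common powers of two
        a, b, shift = a >> 1, b >> 1, shift + 1
    while a & 1 == 0:
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a
    return a << shift

print(binary_gcd(48, 180))  # 12
```

    In Z[ω] the role of 2 is played by the prime 1 - ω, but the shape of the algorithm (divide out a fixed small prime, then reduce by subtraction of unit multiples) is the same.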

  7. Compressive sensing-based wideband capacitance measurement with a fixed sampling rate lower than the highest exciting frequency

    International Nuclear Information System (INIS)

    Xu, Lijun; Ren, Ying; Sun, Shijie; Cao, Zhang

    2016-01-01

    In this paper, an under-sampling method for wideband capacitance measurement was proposed by using the compressive sensing strategy. As the excitation signal is sparse in the frequency domain, the compressed sampling method that uses a random demodulator was adopted, which could greatly decrease the sampling rate. Besides, four switches were used to replace the multiplier in the random demodulator. As a result, not only the sampling rate can be much smaller than the signal excitation frequency, but also the circuit’s structure is simpler and its power consumption is lower. A hardware prototype was constructed to validate the method. In the prototype, an excitation voltage with a frequency up to 200 kHz was applied to a capacitance-to-voltage converter. The output signal of the converter was randomly modulated by a pseudo-random sequence through four switches. After a low-pass filter, the signal was sampled by an analog-to-digital converter at a sampling rate of 50 kHz, which was three times lower than the highest exciting frequency. The frequency and amplitude of the signal were then reconstructed to obtain the measured capacitance. Both theoretical analysis and experiments were carried out to show the feasibility of the proposed method and to evaluate the performance of the prototype, including its linearity, sensitivity, repeatability, accuracy and stability within a given measurement range. (paper)
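
    The front end described above can be sketched in a few lines: multiplying by a ±1 pseudo-random sequence is just sign switching (hence the four switches), followed by low-pass filtering and low-rate sampling. The signal parameters mirror the prototype (200 kHz tone, 50 kHz sampling), but the chipping rate, integrate-and-dump filter and all names are our illustrative assumptions, and the sparse reconstruction step is omitted:

```python
import math
import random

random.seed(0)
fs_chip = 1_000_000                 # chipping rate of the pseudo-random sequence (Hz)
f_sig = 200_000                     # excitation tone (Hz), above the sampling rate
n = 4096

chips = [random.choice((-1, 1)) for _ in range(n)]   # +/-1 sequence = switching
signal = [math.sin(2 * math.pi * f_sig * k / fs_chip) for k in range(n)]
mixed = [s * c for s, c in zip(signal, chips)]       # random demodulation

# Integrate-and-dump low-pass + decimate: one output per 20 chips,
# i.e. an effective 50 kHz sampling rate, below the 200 kHz tone.
decim = 20
samples = [sum(mixed[i:i + decim]) / decim for i in range(0, n, decim)]
print(len(samples))  # 205 low-rate measurements from 4096 chips
```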

  8. Dominant distortion classification for pre-processing of vowels in remote biomedical voice analysis

    DEFF Research Database (Denmark)

    Poorjam, Amir Hossein; Jensen, Jesper Rindom; Little, Max A

    2017-01-01

    for pathological voice assessments and investigate the impact of four major types of distortion that are commonly present during recording or transmission in voice analysis, namely: background noise, reverberation, clipping and compression, on Mel-frequency cepstral coefficients (MFCCs) – the most widely...

  9. Patterns of neurovascular compression in patients with classic trigeminal neuralgia: A high-resolution MRI-based study

    International Nuclear Information System (INIS)

    Lorenzoni, José; David, Philippe; Levivier, Marc

    2012-01-01

    Purpose: To describe the anatomical characteristics and patterns of neurovascular compression in patients suffering from classic trigeminal neuralgia (CTN), using high-resolution magnetic resonance imaging (MRI). Materials and methods: The analysis of the anatomy of the trigeminal nerve, brain stem and the vascular structures related to this nerve was made in 100 consecutive patients treated with Gamma Knife radiosurgery for CTN between December 1999 and September 2004. MRI studies (T1, T1 enhanced and T2-SPIR) with simultaneous axial, coronal and sagittal visualization were dynamically assessed using the software GammaPlan™. Three-dimensional reconstructions were also developed in some representative cases. Results: In 93 patients (93%), there were one or several vascular structures in contact either with the trigeminal nerve or close to its origin in the pons. The superior cerebellar artery was involved in 71 cases (76%). Other vessels identified were the antero-inferior cerebellar artery, the basilar artery, the vertebral artery, and some venous structures. Vascular compression was found anywhere along the trigeminal nerve. The mean distance between the nerve compression and the origin of the nerve in the brainstem was 3.76 ± 2.9 mm (range 0–9.8 mm). In 39 patients (42%), the vascular compression was located proximally and in 42 (45%) the compression was located distally. Nerve dislocation or distortion by the vessel was observed in 30 cases (32%). Conclusions: The findings of this study are similar to those reported in surgical and autopsy series. This non-invasive MRI-based approach could be useful for diagnostic and therapeutic decisions in CTN, and it could help to understand its pathogenesis.

  10. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved first through extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then through incorporation of these experimental results into quantization strategies and compression algorithms.

  11. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708
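
    The greedy sliding-window step described above (prefer the longest dictionary n-gram, up to five grams, at each position) can be sketched as follows; the toy dictionary, code bytes and fallback rule are our own illustrative assumptions, not the paper's 12 GB dictionaries or its exact code layout:

```python
# Toy n-gram dictionary mapping word tuples to short byte codes.
DICT = {("thank", "you", "very", "much"): b"\x01\x00",
        ("thank", "you"): b"\x02\x00",
        ("very",): b"\x03\x00"}

def encode(words, max_n=5):
    """Greedily replace the longest matching n-gram with its dictionary code."""
    out, i = [], 0
    while i < len(words):
        for n in range(min(max_n, len(words) - i), 0, -1):
            gram = tuple(words[i:i + n])
            if gram in DICT:
                out.append(DICT[gram])
                i += n
                break
        else:                       # no dictionary hit: emit the raw word
            out.append(words[i].encode())
            i += 1
    return out

print(len(encode("thank you very much".split())))  # 1 token: the 4-gram wins
```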

  12. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods.

  13. Spinor Field Realizations of the half-integer $W_{2,s}$ Strings

    OpenAIRE

    Wei, Shao-Wen; Liu, Yu-Xiao; Zhang, Li-Jie; Ren, Ji-Rong

    2008-01-01

    The grading Becchi-Rouet-Stora-Tyutin (BRST) method gives a way to construct the integer $W_{2,s}$ strings, where the BRST charge is written as $Q_B=Q_0+Q_1$. Using this method, we reconstruct the nilpotent BRST charges $Q_{0}$ for the integer $W_{2,s}$ strings and the half-integer $W_{2,s}$ strings. Then we construct the exact grading BRST charge with spinor fields and give the new realizations of the half-integer $W_{2,s}$ strings for the cases of $s=3/2$, 5/2, and 7/2.

  14. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available; these techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data were identified.
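
    A minimal version of the evaluation described above measures compression rate and run-time for several codecs on the same buffer. The sketch below uses only stdlib codecs (zlib, LZMA via the same algorithm XZ Utils uses) on synthetic data; Blosc, FPZIP and ZFP are third-party libraries and are omitted:

```python
import lzma
import time
import zlib

# Synthetic, highly regular "particle" buffer: 100k little-endian 32-bit ints.
data = b"".join(i.to_bytes(4, "little") for i in range(100_000))

for name, compress in (("zlib", zlib.compress), ("lzma", lzma.compress)):
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name}: ratio {len(data) / len(out):.1f}x in {dt * 1000:.1f} ms")
```

    Reporting ratio and wall-clock time side by side captures the trade-off the study highlights: LZMA typically compresses harder but slower, which matters when the in-situ time budget per step is fixed.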

  15. Integer programming for the generalized high school timetabling problem

    DEFF Research Database (Denmark)

    Kristiansen, Simon; Sørensen, Matias; Stidsen, Thomas Riis

    2015-01-01

    , the XHSTT format serves as a common ground for researchers within this area. This paper describes the first exact method capable of handling an arbitrary instance of the XHSTT format. The method is based on a mixed-integer linear programming (MIP) model, which is solved in two steps with a commercial...

  16. Strain Rate Dependent Behavior and Modeling for Compression Response of Hybrid Fiber Reinforced Concrete

    Directory of Open Access Journals (Sweden)

    S.M. Ibrahim

    This paper investigates the stress-strain characteristics of hybrid fiber reinforced concrete (HFRC) composites under dynamic compression using a Split Hopkinson Pressure Bar (SHPB) for strain rates in the range of 25 to 125 s-1. Three types of fibers - hooked-end steel fibers, monofilament crimped polypropylene fibers and staple Kevlar fibers - were used in the production of HFRC composites. The influence of different fibers in HFRC composites on the failure mode, dynamic increase factor (DIF) of strength, toughness and strain is also studied. The degree of fragmentation of HFRC composite specimens increases with increasing strain rate. Although the use of a high percentage of steel fibers leads to the best performance, among the hybrid fiber combinations studied, HFRC composites with a relatively higher percentage of steel fibers and smaller percentages of polypropylene and Kevlar fibers seem to reflect equally good synergistic effects of the fibers under dynamic compression. A rate-dependent analytical model is proposed for predicting complete stress-strain curves of HFRC composites. The model is based on a comprehensive fiber reinforcing index and agrees well with the experimental results.
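
    The dynamic increase factor used above is simply the ratio of a property measured at high strain rate to its quasi-static value; the strength numbers in this sketch are illustrative, not from the paper:

```python
# DIF = dynamic value / quasi-static value; DIF > 1 means rate strengthening.
def dif(dynamic_value, static_value):
    return dynamic_value / static_value

print(dif(78.0, 60.0))  # e.g. compressive strength 60 MPa static, 78 MPa dynamic -> 1.3
```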

  17. Evaluation of the distortions of the digital chest image caused by the data compression

    International Nuclear Information System (INIS)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi.

    1988-01-01

    The image data compression methods using orthogonal transforms (discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform, slant transform) were analyzed. In terms of both the error and the speed of the data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was applied to the digital chest image. The quality of the compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, etc.). By our score analysis, satisfactory compression ratios are 1/5 and 1/10. An ROC analysis using normal chest images superimposed with artificial coin lesions was made. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is thought to be useful for clinical use, and the 1/5 compression ratio is a tolerable ratio. (author)
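
    The transform at the core of the block-quantization scheme above is the DCT; a naive 1-D DCT-II sketch (our own minimal implementation for illustration; real codecs apply it separably to 8×8 blocks and then quantize the coefficients):

```python
import math

def dct_ii(x):
    """Orthonormal 1-D DCT-II of a sequence x."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out

# A constant block compacts into the single DC coefficient, which is why the
# DCT concentrates image energy into few coefficients before quantization.
coeffs = dct_ii([10.0] * 8)
print(round(coeffs[0], 3), round(max(abs(c) for c in coeffs[1:]), 3))  # 28.284 0.0
```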

  18. Evaluation of the distortions of the digital chest image caused by the data compression

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Yutaka; Kunieda, Etsuo; Ogawa, Koichi; Tukamoto, Nobuhiro; Hashimoto, Shozo; Aoki, Makoto; Kurotani, Kenichi

    1988-08-01

    The image data compression methods using orthogonal transforms (discrete cosine transform, discrete Fourier transform, Hadamard transform, Haar transform, slant transform) were analyzed. In terms of both the error and the speed of the data conversion, the discrete cosine transform (DCT) method is superior to the other methods. Block quantization by the DCT was applied to the digital chest image. The quality of the compressed and reconstructed images was examined by score analysis and ROC curve analysis. A chest image with esophageal cancer and metastatic lung tumors was evaluated at 17 checkpoints (the tumor, the vascular markings, the borders of the heart and ribs, the mediastinal structures, etc.). By our score analysis, satisfactory compression ratios are 1/5 and 1/10. An ROC analysis using normal chest images superimposed with artificial coin lesions was made. The ROC curve at the 1/5 compression ratio is almost the same as that of the original. To summarize our study, the image data compression method using the DCT is thought to be useful for clinical use, and the 1/5 compression ratio is a tolerable ratio.

  19. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, and reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to quickly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
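The run-length-limited coding of quality scores mentioned above can be illustrated with a minimal run-length encoder over (symbol, count) pairs. The cap of 255 on run length is a hypothetical choice standing in for the "limited" part; LW-FQZip's actual scheme is more elaborate:

```python
def rle_encode(s, max_run=255):
    """Run-length encode a string as (char, run_length) pairs,
    splitting any run longer than max_run."""
    out = []
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i] and j - i < max_run:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Exact inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

quals = "IIIIIIIHHH###"                  # toy quality-score string
encoded = rle_encode(quals)              # [('I', 7), ('H', 3), ('#', 3)]
assert rle_decode(encoded) == quals
```

Quality strings from modern sequencers often contain long runs of identical scores, which is why a run-length stage pays off before the general-purpose back end (e.g. LZMA) is applied.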

  20. Object-based warping: an illusory distortion of space within objects.

    Science.gov (United States)

    Vickery, Timothy J; Chun, Marvin M

    2010-12-01

    Visual objects are high-level primitives that are fundamental to numerous perceptual functions, such as guidance of attention. We report that objects warp visual perception of space in such a way that spatial distances within objects appear to be larger than spatial distances in ground regions. When two dots were placed inside a rectangular object, they appeared farther apart from one another than two dots with identical spacing outside of the object. To investigate whether this effect was object based, we measured the distortion while manipulating the structure surrounding the dots. Object displays were constructed with a single object, multiple objects, a partially occluded object, and an illusory object. Nonobject displays were constructed to be comparable to object displays in low-level visual attributes. In all cases, the object displays resulted in a more powerful distortion of spatial perception than comparable non-object-based displays. These results suggest that perception of space within objects is warped.

  1. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    Science.gov (United States)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

    This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
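The conversion described above — quantizing prediction residuals and forcing the encoder to predict from reconstructed rather than original samples — can be sketched as follows. The previous-sample predictor and uniform quantizer are simplifying assumptions for illustration, not the exact design of the NASA technique:

```python
def encode(samples, delta):
    """Near-lossless predictive encoder emitting quantized residuals.
    A uniform quantizer of step `delta` bounds the per-sample
    reconstruction error by delta // 2 (for integer inputs)."""
    residuals = []
    prev = 0                       # predictor state: *reconstructed* value
    for x in samples:
        r = round((x - prev) / delta)
        residuals.append(r)
        prev = prev + r * delta    # identical to the decoder's update
    return residuals

def decode(residuals, delta):
    """Decoder mirroring the encoder's reconstruction loop."""
    out, prev = [], 0
    for r in residuals:
        prev = prev + r * delta
        out.append(prev)
    return out

samples = [100, 102, 105, 110, 108, 90]
delta = 5                          # error tolerance delta // 2 == 2
rec = decode(encode(samples, delta), delta)
assert max(abs(a - b) for a, b in zip(samples, rec)) <= delta // 2
```

Because the encoder predicts from the same reconstructed values the decoder will see, quantization errors do not accumulate from sample to sample, which is exactly the point the abstract makes about the encoder having to "determine the values that will be reconstructed".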

  2. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    Science.gov (United States)

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, increasingly utilize wireless sensor network (WSN) technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms constitute a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing compression algorithms as well as novel proposed ones. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world Electroencephalography (EEG) and Electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, thus the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the relative negative effect on compression latency as opposed to the increased compression rate. The proposed schemes offer a considerable advantage, especially in achieving the optimum trade-off between compression rate and latency. Specifically, the proposed algorithm combines a highly competitive level of compression with minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the

  3. Upper Bounds for the Rate Distortion Function of Finite-Length Data Blocks of Gaussian WSS Sources

    Directory of Open Access Journals (Sweden)

    Jesús Gutiérrez-Gutiérrez

    2017-10-01

    In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources, and we propose coding strategies to achieve such bounds. In order to obtain those bounds, we first derive new results on the discrete Fourier transform (DFT) of WSS processes.
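For context, in the infinite-blocklength limit the RDF of a Gaussian WSS source with power spectral density $S(\omega)$ is given by the classical reverse water-filling solution, which finite-length upper bounds of this kind approach as the block size grows (this standard result is stated here for orientation, not taken from the paper itself):

```latex
D(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \min\{\theta,\, S(\omega)\}\,d\omega,
\qquad
R(\theta) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \max\Bigl\{0,\ \tfrac{1}{2}\log_2\frac{S(\omega)}{\theta}\Bigr\}\,d\omega
```

Here $\theta$ is the "water level" parameterizing the $R(D)$ curve: spectral components below $\theta$ are not coded at all, and components above it are coded down to distortion $\theta$.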

  4. A novel frame-level constant-distortion bit allocation for smooth H.264/AVC video quality

    Science.gov (United States)

    Liu, Li; Zhuang, Xinhua

    2009-01-01

    It is known that quality fluctuation has a major negative effect on visual perception. In previous work, we introduced a constant-distortion bit allocation method [1] for the H.263+ encoder. However, the method in [1] cannot be adapted to the newest H.264/AVC encoder directly because of the well-known chicken-and-egg dilemma resulting from the rate-distortion optimization (RDO) decision process. To solve this problem, we propose a new two-stage constant-distortion bit allocation (CDBA) algorithm with enhanced rate control for the H.264/AVC encoder. In stage 1, the algorithm performs the RD optimization process with a constant quantization parameter QP. Based on the prediction residual signals from stage 1 and the target distortion for smooth video quality, the frame-level bit target is allocated using a closed-form approximation of the rate-distortion relationship similar to [1], and a fast stage-2 encoding process is performed with enhanced basic-unit rate control. Experimental results show that, compared with the original rate control algorithm provided by the H.264/AVC reference software JM12.1, the proposed constant-distortion frame-level bit allocation scheme reduces quality fluctuation and delivers much smoother PSNR on all test sequences.
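The idea of allocating frame-level bits so every frame hits a common distortion target can be sketched with the textbook Gaussian rate-distortion model R(D) = max(0, 0.5·log2(sigma^2/D)) bits per sample. This model and the numbers below are illustrative stand-ins, not the closed-form approximation used in [1]:

```python
import math

def bits_for_target_distortion(residual_variance, target_D, num_samples):
    """Frame-level bit budget so the frame reaches distortion target_D,
    under the Gaussian R-D model R(D) = max(0, 0.5*log2(sigma^2/D))."""
    if target_D >= residual_variance:
        return 0                   # target already met without coding
    r = 0.5 * math.log2(residual_variance / target_D)
    return int(round(r * num_samples))

# Frames with different prediction-residual variances receive budgets
# that equalize distortion rather than equalizing rate.
variances = [40.0, 160.0, 10.0]
budgets = [bits_for_target_distortion(v, target_D=10.0, num_samples=1000)
           for v in variances]
# budgets == [1000, 2000, 0]
```

The point mirrors the abstract: a hard-to-predict frame (high residual variance) gets more bits, an easy frame gets fewer, so PSNR stays flat instead of the rate.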

  5. Uniaxial Compressive Strength and Fracture Mode of Lake Ice at Moderate Strain Rates Based on a Digital Speckle Correlation Method for Deformation Measurement

    Directory of Open Access Journals (Sweden)

    Jijian Lian

    2017-05-01

    Better understanding of the complex mechanical properties of ice is the foundation for predicting the ice failure process and avoiding potential ice threats. In the present study, the uniaxial compressive strength and fracture mode of natural lake ice are investigated over a moderate strain-rate range of 0.4–10 s−1 at −5 °C and −10 °C. The digital speckle correlation method (DSCM) is used for deformation measurement by constructing an artificial speckle pattern on the ice sample surface in advance, and two dynamic load cells are employed to monitor the equilibrium of the forces at the two ends under high-speed loading. The relationships between uniaxial compressive strength and strain rate, temperature, loading direction, and air porosity are investigated, and the fracture mode of ice at moderate rates is also discussed. The experimental results show that there is a significant difference between the true strain rate and the nominal strain rate derived from actuator displacement under dynamic loading conditions. Over the employed strain-rate range, the dynamic uniaxial compressive strength of lake ice shows positive strain-rate sensitivity and decreases with increasing temperature. Ice attains greater strength when it has lower air porosity and is loaded vertically. The fracture mode of ice at moderate rates appears to be a combination of splitting failure and crushing failure.

  6. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer encryption data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by the hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simultaneously simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.

  7. Towards distortion-free robust image authentication

    International Nuclear Information System (INIS)

    Coltuc, D

    2007-01-01

    This paper investigates a general framework for distortion-free robust image authentication by multiple marking. First, a subsampled version of the image edges is embedded by robust watermarking. Then, the information needed to recover the original image is inserted by reversible watermarking. The hiding capacity of the reversible watermarking is the essential requirement for this approach. Thus, in case of no attacks, not only is the image authenticated but the original is also exactly recovered. In case of attacks, reversibility is lost, but the image can still be authenticated. Preliminary results providing very good robustness against JPEG compression are presented.

  8. Effect of the loading rate on compressive properties of goose eggs.

    Science.gov (United States)

    Nedomová, Š; Kumbár, V; Trnka, J; Buchar, J

    2016-03-01

    The resistance of goose (Anser anser f. domestica) eggs to damage was determined by measuring the average rupture force, specific deformation, and rupture energy during compression at different compression speeds (0.0167, 0.167, 0.334, 1.67, 6.68, and 13.36 mm/s). Eggs were loaded between their poles (along the X axis) and in the equator plane (along the Z axis). The greatest force was required to break the eggs when loaded along the X axis, and the least when loaded along the Z axis. This effect of loading orientation can be described in terms of the eggshell contour curvature. The rate sensitivity of the eggshell rupture force is higher than that observed for Japanese quail eggs.

  9. SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-12-01

    The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.

  10. H∞ Robust Current Control for DFIG Based Wind Turbine subject to Grid Voltage Distortions

    DEFF Research Database (Denmark)

    Wang, Yun; Wu, Qiuwei; Gong, Wenming

    2016-01-01

    This paper proposes an H∞ robust current controller for doubly fed induction generator (DFIG) based wind turbines (WTs) subject to grid voltage distortions. The controller is to mitigate the impact of the grid voltage distortions on rotor currents with DFIG parameter perturbation. The grid voltage...... distortions considered include asymmetric voltage dips and grid background harmonics. An uncertain DFIG model is developed with uncertain factors originating from distorted stator voltage, and changed generator parameters due to the flux saturation effect, the skin effect, etc. Weighting functions...... are designed to efficiently track the unbalanced current components and the 5th and 7th background harmonics. The robust stability (RS) and robust performance (RP) of the proposed controller are verified by the structured singular value µ. The performance of the H∞ robust current controller was demonstrated...

  11. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  12. Right Propositional Neighborhood Logic over Natural Numbers with Integer Constraints for Interval Lengths

    DEFF Research Database (Denmark)

    Bresolin, Davide; Goranko, Valentin; Montanari, Angelo

    2009-01-01

    Interval temporal logics are based on interval structures over linearly (or partially) ordered domains, where time intervals, rather than time instants, are the primitive ontological entities. In this paper we introduce and study Right Propositional Neighborhood Logic over natural numbers...... with integer constraints for interval lengths, which is a propositional interval temporal logic featuring a modality for the 'right neighborhood' relation between intervals and explicit integer constraints for interval lengths. We prove that it has the bounded model property with respect to ultimately periodic...

  13. Effect of quantum well position on the distortion characteristics of transistor laser

    Science.gov (United States)

    Piramasubramanian, S.; Ganesh Madhan, M.; Radha, V.; Shajithaparveen, S. M. S.; Nivetha, G.

    2018-05-01

    The effect of quantum well position on the modulation and distortion characteristics of a 1300 nm transistor laser is analyzed in this paper. Standard three-level rate equations are numerically solved to study these characteristics. The modulation depth, second-order harmonic distortion, and third-order intermodulation distortion of the transistor laser are evaluated for different quantum well positions under 900 MHz RF signal modulation. From the DC analysis, it is observed that the optical power is maximum when the quantum well is positioned near the base-emitter interface. The threshold current of the device is found to increase with increasing distance between the quantum well and the base-emitter junction. A maximum modulation depth of 0.81 is predicted when the quantum well is placed 10 nm from the base-emitter junction, under RF modulation. The magnitudes of harmonic and intermodulation distortion are found to decrease with increasing current and with increasing quantum well distance from the emitter-base junction. A minimum second-harmonic distortion magnitude of -25.96 dBc is predicted for a quantum well position (230 nm) near the base-collector interface at 900 MHz modulation frequency and a bias current of 20 Ibth. Similarly, a minimum third-order intermodulation distortion of -38.2 dBc is obtained for the same position and similar biasing conditions.

  14. Allocating the Fixed Resources and Setting Targets in Integer Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Kobra Gholami

    2013-11-01

    Data envelopment analysis (DEA) is a non-parametric approach to evaluating a set of decision making units (DMUs) that consume multiple inputs to produce multiple outputs. Formally, DEA is used to estimate efficiency scores relative to an empirical efficient frontier. DEA can also be used to allocate resources and set targets for future forecasts. Data are continuous in the standard DEA model, whereas many real-life problems require integer data, such as the number of employees, machines, or experts. Thus, in this paper we propose, for the first time, an approach to allocate fixed resources and set fixed targets under a selective integer assumption, based on integer data envelopment analysis (IDEA). The major aim of this approach is to preserve the efficiency scores of the DMUs, and we use the concept of benchmarking to reach this aim. A numerical example illustrates the applicability of the proposed method.

  15. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    Science.gov (United States)

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results from two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on the speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant to increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an Audiologist in clinical NFC hearing aid fittings for achieving a balance between high frequency audibility and sound quality.
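The two NFC parameters studied above, cutoff frequency and compression ratio, define a mapping from input to output frequency. A minimal piecewise-linear sketch with illustrative parameter values is shown below; note that commercial NFC implementations typically compress on a logarithmic frequency scale, so this is a deliberately simplified stand-in:

```python
def nfc_map(f_in, cutoff, ratio):
    """Piecewise frequency mapping: frequencies at or below the cutoff
    pass through unchanged; above it, the distance from the cutoff is
    divided by the compression ratio."""
    if f_in <= cutoff:
        return f_in
    return cutoff + (f_in - cutoff) / ratio

# Illustrative fitting: 2 kHz cutoff, 2:1 compression ratio.
assert nfc_map(1000, 2000, 2.0) == 1000     # below cutoff: unchanged
assert nfc_map(6000, 2000, 2.0) == 4000.0   # 2 kHz + 4 kHz / 2
```

The sketch makes the trade-off in the studies concrete: lowering the cutoff or raising the ratio squeezes more of the high-frequency band into the audible range (better audibility) but distorts more of the spectrum (worse rated sound quality).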

  16. Pilotless recovery of clipped OFDM signals by compressive sensing over reliable data carriers

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-06-01

    In this paper we propose a novel method of clipping mitigation in OFDM using compressive sensing that completely avoids using reserved tones or channel-estimation pilots. The method builds on selecting the most reliable perturbations from the constellation lattice upon decoding at the receiver (in the frequency domain), and performs compressive sensing over these observations in order to completely recover the sparse nonlinear distortion in the time domain. As such, the method provides a practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, and the ability to recover the distorted signal in one shot. © 2012 IEEE.

  17. Pilotless recovery of clipped OFDM signals by compressive sensing over reliable data carriers

    KAUST Repository

    Al-Safadi, Ebrahim B.; Al-Naffouri, Tareq Y.

    2012-01-01

    In this paper we propose a novel method of clipping mitigation in OFDM using compressive sensing that completely avoids using reserved tones or channel-estimation pilots. The method builds on selecting the most reliable perturbations from the constellation lattice upon decoding at the receiver (in the frequency domain), and performs compressive sensing over these observations in order to completely recover the sparse nonlinear distortion in the time domain. As such, the method provides a practical solution to the problem of initial erroneous decoding decisions in iterative ML methods, and the ability to recover the distorted signal in one shot. © 2012 IEEE.

  18. Strength and Absorption Rate of Compressed Stabilized Earth Bricks (CSEBs) Due to Different Mixture Ratios and Degree of Compaction

    Directory of Open Access Journals (Sweden)

    Abdullah Abd Halid

    2017-01-01

    Compressed Stabilized Earth Brick (CSEB) is produced by compressing a mixture of water with three main materials: Ordinary Portland Cement (OPC), soil, and sand. It has become popular for its good strength, better insulation properties, and sustainability, owing to its easy production with low carbon emissions and less skilled labour required. Different types of local soils will produce CSEBs with different physical properties in terms of strength, durability, and water absorption rate. This study focuses on laterite soil taken from the surrounding local area in Parit Raja, Johor, and CSEB samples are produced based on a prototype brick size of 100×50×30 mm. The investigations are based on four degrees of compaction (1500, 2000, 2500, and 3000 psi) and three mix proportion ratios of cement:sand:laterite soil (1:1:9, 1:2:8, 1:3:7). A total of 144 CSEB samples were tested after 7 and 28 days of curing to determine the compressive strength (BS 3921:1985) and water absorption rate (MS 76:1972). It was found that the maximum compressive strength of CSEB was 14.68 N/mm2, for the 1:3:7 mixture ratio at 2500 psi compaction, whereas the minimum strength was 6.87 N/mm2, for the 1:1:9 mixture ratio at 1500 psi. Meanwhile, the lowest water absorption was 12.35%, for the 1:2:8 mixture ratio at 3000 psi, while the 1:1:9 mixture ratio at 1500 psi gave the highest rate, 16.81%. This study affirms that the sand content in the mixture and the degree of compaction affect the compressive strength and water absorption of CSEB.

  19. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
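The objective such exact algorithms minimize — the reconstruction error when only a bounded subset of samples is stored and the rest are linearly interpolated — can be made concrete. The brute-force search below over a tiny signal only illustrates the optimization problem; the paper solves it with dynamic programming, not enumeration:

```python
from itertools import combinations

def interp_error(signal, keep):
    """Sum of squared errors when only the samples at indices `keep`
    (which must include both endpoints) are stored and the remaining
    samples are linearly interpolated between them."""
    err = 0.0
    for a, b in zip(keep, keep[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            approx = (1 - t) * signal[a] + t * signal[b]
            err += (signal[i] - approx) ** 2
    return err

def best_subset(signal, m):
    """Exhaustively find the m-sample subset (endpoints fixed) with
    minimal reconstruction error -- feasible only for tiny signals."""
    n = len(signal)
    inner = range(1, n - 1)
    return min(
        ([0] + list(c) + [n - 1] for c in combinations(inner, m - 2)),
        key=lambda keep: interp_error(signal, keep),
    )

sig = [0, 1, 4, 9, 4, 1, 0]
keep = best_subset(sig, 4)       # optimal 4-sample representation
```

Traditional time-domain heuristics pick `keep` greedily; the paper's point is that an exact method over the same objective roughly halves the distortion at comparable compression ratios.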

  20. Presolving and regularization in mixed-integer second-order cone optimization

    DEFF Research Database (Denmark)

    Friberg, Henrik Alsing

    Mixed-integer second-order cone optimization is a powerful mathematical framework capable of representing both logical conditions and nonlinear relationships in mathematical models of industrial optimization problems. What is more, solution methods are already part of many major commercial solvers...... both continuous and mixed-integer conic optimization in general, is discovered and treated. This part of the thesis continues the studies of facial reduction preceding the work of Borwein and Wolkowicz [17] in 1981, when the first algorithmic cure for these kinds of reliability issues were formulated....... An important distinction to make between continuous and mixed-integer optimization, however, is that the reliability issues occurring in mixed-integer optimization cannot be blamed on the practitioner’s formulation of the problem. Specifically, as shown, the causes for these issues may well lie within...

  1. An improvement analysis on video compression using file segmentation

    Science.gov (United States)

    Sharma, Shubhankar; Singh, K. John; Priya, M.

    2017-11-01

    Over the past two decades, the extreme evolution of the Internet has led to a massive rise in video technology and in video consumption over the Internet, which constitutes the bulk of data traffic in general. Because video accounts for so much of the data on the World Wide Web, reducing the bandwidth it consumes eases the burden on the Internet and lets users access video data more easily. For this, many video codecs have been developed, such as HEVC/H.265 and VP9, although with codecs like these one faces the dilemma of which is the better technology in terms of rate distortion and coding standard. This paper offers a solution to the difficulty of achieving low delay in video compression and video applications, e.g., ad-hoc video conferencing/streaming or surveillance. It also benchmarks the HEVC and VP9 video compression techniques using subjective evaluations of High Definition video content played back in web browsers. Moreover, it presents an experimental approach of dividing the video file into several segments for compression and putting them back together, to improve the efficiency of video compression on the web as well as in offline mode.

  2. Using pre-distorted PAM-4 signal and parallel resistance circuit to enhance the passive solar cell based visible light communication

    Science.gov (United States)

    Wang, Hao-Yu; Wu, Jhao-Ting; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung; Liao, Xin-Lan; Lin, Kun-Hsien; Wu, Wei-Liang; Chen, Yi-Yuan

    2018-01-01

    Using solar cell (or photovoltaic cell) for visible light communication (VLC) is attractive. Apart from acting as a VLC receiver (Rx), the solar cell can provide energy harvesting. This can be used in self-powered smart devices, particularly in the emerging "Internet of Things" (IoT) networks. Here, we propose and demonstrate for the first time using pre-distortion pulse-amplitude-modulation (PAM)-4 signal and parallel resistance circuit to enhance the transmission performance of solar cell Rx based VLC. Pre-distortion is a simple non-adaptive equalization technique that can significantly mitigate the slow charging and discharging of the solar cell. The equivalent circuit model of the solar cell and the operation of using parallel resistance to increase the bandwidth of the solar cell are discussed. By using the proposed schemes, the experimental results show that the data rate of the solar cell Rx based VLC can increase from 20 kbit/s to 1.25 Mbit/s (about 60 times) with the bit error-rate (BER) satisfying the 7% forward error correction (FEC) limit.

  3. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications, and we have applied each technique to two major embedded processor architectures.
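
    The Huffman stage these techniques build on can be sketched in a few lines; the instruction "patterns" below are hypothetical four-bit strings, not the paper's ARM/MIPS splits:

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tie-breaker, tuple of member symbols).
    heap = [(f, i, (s,)) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    code = {s: "" for s in freq}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, i2, t2 = heapq.heappop(heap)
        for s in t1:                          # prepend a bit on each merge
            code[s] = "0" + code[s]
        for s in t2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (f1 + f2, i2, t1 + t2))
    return code

# Hypothetical bit patterns produced by splitting instructions:
patterns = ["1010", "1010", "1010", "0001", "0001", "1111"]
table = huffman_code(patterns)
encoded = "".join(table[p] for p in patterns)
```

Splitting instructions into short patterns keeps the symbol alphabet (and hence the decoding table built from `table`) small, which is the memory saving the article targets.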

  4. An Improved Distortion Operator for Insurance Risks

    Institute of Scientific and Technical Information of China (English)

    GAO Jian-wei; QIU Wan-hua

    2002-01-01

    This paper reviews the distortion function approach developed in the actuarial literature for insurance risks. The main aim of this paper is to derive an extensive distortion operator, and to propose a new premium principle based on this extensive distortion operator. Furthermore, the non-robustness of general distortion operator is also discussed. Examples are provided using Bernoulli, Pareto, Lognormal and Gamma distribution assumptions.
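
    A minimal sketch of a distortion-operator premium, using Wang's transform g(u) = Φ(Φ⁻¹(u) + λ) as a concrete distortion operator (a common choice in this literature; the paper's extended operator is not reproduced here):

```python
from statistics import NormalDist

_N = NormalDist()

def wang_transform(u: float, lam: float) -> float:
    """Wang's distortion operator g(u) = Phi(Phi^-1(u) + lambda)."""
    if u <= 0.0:
        return 0.0
    if u >= 1.0:
        return 1.0
    return _N.cdf(_N.inv_cdf(u) + lam)

def premium_bernoulli(q: float, lam: float) -> float:
    """Distortion premium for a Bernoulli(q) unit loss: the integral of g
    over the survival function reduces to g(q)."""
    return wang_transform(q, lam)

premium = premium_bernoulli(0.10, 0.5)    # exceeds the expected loss 0.10
```

For λ > 0 the distorted premium loads the expected loss upward, which is the defining behaviour a premium principle based on a distortion operator must exhibit.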

  5. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
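
    The baseline idea of fixed-width bit codes for bases can be sketched as plain 2-bits-per-base packing; DNABIT Compress's variable codes for repeat fragments are not reproduced here:

```python
# Fixed 2-bit codes for the four bases. DNABIT Compress additionally assigns
# bit codes to repeat fragments (exact and reverse repeats), omitted here.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq: str) -> bytes:
    """Pack a DNA string into 2 bits per base (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for b in group:
            byte = (byte << 2) | CODE[b]
        byte <<= 2 * (4 - len(group))     # left-align a short final group
        out.append(byte)
    return bytes(out)

def unpack(data: bytes, n_bases: int) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(byte >> shift) & 0b11])
    return "".join(bases[:n_bases])

seq = "ACGTAC"
packed = pack(seq)                        # 2 bytes instead of 6 characters
```

Plain packing tops out at 2 bits/base; ratios such as the reported 1.58 bits/base require exploiting repeats on top of this.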

  6. Video quality pooling adaptive to perceptual distortion severity.

    Science.gov (United States)

    Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad

    2013-02-01

    It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
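
    The general idea of emphasizing the "worst" local scores can be sketched as fixed-percentile pooling; the paper's strategy is content adaptive and also models cohesive motion, which this sketch omits:

```python
def worst_fraction_pool(local_scores, fraction=0.1):
    """Average only the worst `fraction` of local quality scores
    (lower score = worse quality in this sketch)."""
    ranked = sorted(local_scores)         # ascending: worst first
    k = max(1, round(fraction * len(ranked)))
    return sum(ranked[:k]) / k

# Toy per-region scores for one frame; two transient distortions dominate:
frame_scores = [0.9, 0.85, 0.2, 0.95, 0.88, 0.3, 0.92, 0.91, 0.87, 0.9]
pooled = worst_fraction_pool(frame_scores, fraction=0.2)
```

The pooled score (0.25 here) is far below the plain mean, reflecting how transient severe distortions dominate perceived quality.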

  7. A quadratic approximation-based algorithm for the solution of multiparametric mixed-integer nonlinear programming problems

    KAUST Repository

    Domí nguez, Luis F.; Pistikopoulos, Efstratios N.

    2012-01-01

    An algorithm for the solution of convex multiparametric mixed-integer nonlinear programming problems arising in process engineering problems under uncertainty is introduced. The proposed algorithm iterates between a multiparametric nonlinear

  8. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. 
Further analysis showed that the increase

  9. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For the block with minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on its MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, so it is desirable for real-time image compression applications.
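
    The AMBTC step the method builds on can be sketched for a single block; the quadtree partition and adaptive bit-plane selection are omitted:

```python
def ambtc_encode(block):
    """Absolute moment BTC for one block: keep a bit plane plus the means of
    the pixels above and below the block mean."""
    n = len(block)
    mean = sum(block) / n
    bitplane = [1 if p >= mean else 0 for p in block]
    q = sum(bitplane)
    if q in (0, n):                       # flat block: one level suffices
        return mean, mean, bitplane
    hi = sum(p for p in block if p >= mean) / q
    lo = sum(p for p in block if p < mean) / (n - q)
    return lo, hi, bitplane

def ambtc_decode(lo, hi, bitplane):
    return [hi if b else lo for b in bitplane]

block = [10, 12, 200, 210, 11, 205, 13, 198]
lo, hi, plane = ambtc_encode(block)
recon = ambtc_decode(lo, hi, plane)       # block mean is preserved exactly
```

Each block is reduced to two levels and one bit per pixel, which is what makes BTC-family codecs so cheap in the spatial domain.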

  10. BioTwist : overcoming severe distortions in ridge-based biometrics for succesful identification

    NARCIS (Netherlands)

    Kotzerke, J.

    2016-01-01

    This thesis focuses on ridge-based and highly distorted biometrics, the different challenges involved in a verification of identity scenario, and how to overcome them. More specifically, we work on ridge-based biometrics in two different contexts: (i) newborn and infant biometrics and (ii) quality

  11. Use of Vortex Generators to Reduce Distortion for Mach 1.6 Streamline-Traced Supersonic Inlets

    Science.gov (United States)

    Baydar, Ezgihan; Lu, Frank; Slater, John W.; Trefny, Chuck

    2016-01-01

    Reduce the total pressure distortion at the engine-fan face due to low-momentum flow caused by the interaction of an external terminal shock with the turbulent boundary layer along a streamline-traced external-compression (STEX) inlet for Mach 1.6.

  12. Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases

    DEFF Research Database (Denmark)

    Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.

    2017-01-01

    is sparse is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different...... wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal using sparsifying bases in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple...... estimates of a 2-D signal (image) from compressive measurements utilizing multiple sparsifying bases as well as the fact that the images usually have a sparse gradient....

  13. Rate dependent image distortions in proportional counters

    International Nuclear Information System (INIS)

    Trow, M.W.; Bento, A.C.; Smith, A.

    1994-01-01

    The positional linearity of imaging proportional counters is affected by the intensity distribution of the incident radiation. A mechanism for this effect is described, in which drifting positive ions in the gas produce a distorting electric field which perturbs the trajectories of the primary electrons. In certain cases, the phenomenon causes an apparent improvement of the position resolution. We demonstrate the effect in a detector filled with a xenon-argon-CO2 mixture. The images obtained are compared with the results of a simulation. If quantitative predictions for a particular detector are required, accurate values of the absolute detector gain, ion mobility and electron drift velocity are needed.

  14. Involvement of upper torso stress amplification, tissue compression and distortion in the pathogenesis of keloids.

    Science.gov (United States)

    Bux, Shamin; Madaree, Anil

    2012-03-01

    Keloids are benign tumours composed of fibrous tissue produced during excessive tissue repair triggered by minor injury, trauma or surgical incision. Although it is recognized that keloids have a propensity to form in the upper torso of the body, the predisposing factors responsible for this have not been investigated. It is crucial that the aetiopathological factors implicated in keloid formation be established to provide guidelines for well-informed, more successful treatment. We compared keloid-prone and keloid-protected skin, identified pertinent morphological differences and explored how inherent structural characteristics and intrinsic factors may promote keloid formation. It was determined that keloid-prone areas were covered with high-tension skin that had low stretch and a low elastic modulus when compared with skin in keloid-protected areas, where the skin was lax with a high elastic modulus and low pre-stress level. Factors contributing to elevated internal stress in keloid-susceptible skin were the protrusion of hard connective tissue such as bony prominences or cartilage into the dermis of the skin, as well as inherent skin characteristics such as the bundled arrangement of collagen in the reticular dermis, the existing high tension, the low elastic modulus, low stretchability, contractile forces exerted by wound-healing fibroblastic cells and external forces. Stress promotes keloid formation by causing dermal distortion and compression, which subsequently stimulate proliferation and enhanced protein synthesis in wound-healing fibroblastic cells. The strain caused by stress also compresses and occludes microvessels, causing ischaemic effects and reperfusion injury which stimulate growth when blood rich in growth factors returns to the tissue. The growth-promoting effects of increased internal stress, primarily, and growth factors released by reperfusing blood, manifest in keloid formation. Other inherent skin characteristics promoting keloid growth during the

  15. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. A preliminary result shows that two 8 bit H.264/AVC codecs can achieve results similar to a 16 bit HEVC codec.
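
    The MSB/LSB mapping of a 16 bit image into two 8 bit images described above can be sketched as:

```python
def split_16bit(pixels):
    """Map a 16 bit image into an MSB image and an LSB image, each 8 bit."""
    msb = [(p >> 8) & 0xFF for p in pixels]
    lsb = [p & 0xFF for p in pixels]
    return msb, lsb

def merge_16bit(msb, lsb):
    """Lossless inverse mapping (before any codec quantisation)."""
    return [(m << 8) | l for m, l in zip(msb, lsb)]

pixels = [0, 255, 256, 40000, 65535]      # toy 16 bit "image"
msb, lsb = split_16bit(pixels)
restored = merge_16bit(msb, lsb)
```

The split itself is lossless; the rate-distortion trade-off studied in the paper comes entirely from how aggressively each 8 bit codec quantises the MSB and LSB streams.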

  16. Rewrite systems for integer arithmetic

    NARCIS (Netherlands)

    H.R. Walters (Pum); H. Zantema (Hans)

    1995-01-01

    textabstractWe present three term rewrite systems for integer arithmetic with addition, multiplication, and, in two cases, subtraction. All systems are ground confluent and terminating; termination is proved by semantic labelling and recursive path order. The first system represents numbers by

  17. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that the method of visual weighting does not lift the coefficients in the wavelet domain, but is implemented through code stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution-progressive coding, good robustness to error-bit spread and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.

  18. Quark enables semi-reference-based compression of RNA-seq data.

    Science.gov (United States)

    Sarkar, Hirak; Patro, Rob

    2017-11-01

    The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data is expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C ++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  19. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Fields (CRFs) and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  20. Hot compressive deformation behavior of the as-quenched A357 aluminum alloy

    International Nuclear Information System (INIS)

    Yang, X.W.; Lai, Z.H.; Zhu, J.C.; Liu, Y.; He, D.

    2012-01-01

    Highlights: ► We create a thermal history curve which is applied to carry out compression tests. ► We analyse the deformation performance of the as-quenched A357 alloy. ► We establish a constitutive equation which has good accuracy. - Abstract: The objective of the present work was to establish an accurate thermal-stress mathematical model of the quenching operation for the A357 (Al–7Si–0.6Mg) alloy and to investigate the deformation behavior of this alloy. Isothermal compression tests of the as-quenched A357 alloy were performed in the temperature range of 350–500 °C and at strain rates of 0.001–1 s⁻¹. Experimental results show that the flow stress of the as-quenched A357 alloy decreases with increasing temperature and decreasing strain rate. Based on the hyperbolic sine equation, a constitutive equation relating the 0.2 pct yield stress to the deformation conditions (strain rate and deformation temperature) was established. The corresponding hot deformation activation energy (Q) for the as-quenched A357 alloy is 252.095 kJ/mol. Under different small strains (≤0.01), the constitutive equation parameters of the as-quenched A357 alloy were calculated. Values of flow stress calculated by the constitutive equation were in very good agreement with experimental results. Therefore, it can be used as an accurate thermal-stress model to solve the problems of quench distortion of parts.
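
    A hyperbolic-sine constitutive law of this kind can be sketched via the Zener-Hollomon parameter. The constants A, alpha and n below are placeholder magnitudes, not the paper's fitted values, although Q uses the reported 252.095 kJ/mol:

```python
import math

R = 8.314          # J/(mol*K), universal gas constant
Q = 252_095.0      # J/mol, activation energy reported for this alloy

def flow_stress(strain_rate, temp_c, A=1.0e17, alpha=0.012, n=7.0):
    """Invert the hyperbolic-sine law Z = A*[sinh(alpha*sigma)]**n for sigma,
    where Z = strain_rate * exp(Q / (R*T)) is the Zener-Hollomon parameter.
    A, alpha and n are placeholder magnitudes, not fitted constants."""
    T = temp_c + 273.15
    Z = strain_rate * math.exp(Q / (R * T))
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

# Flow stress should fall with temperature and rise with strain rate:
s_hot = flow_stress(0.01, 500)     # MPa-scale number, illustrative only
s_cold = flow_stress(0.01, 350)
s_fast = flow_stress(1.0, 500)
```

Even with placeholder constants, the model reproduces the qualitative trend the abstract reports: lower stress at higher temperature and lower strain rate.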

  1. Spatial “Artistic” Networks: From Deconstructing Integer-Functions to Visual Arts

    Directory of Open Access Journals (Sweden)

    Ernesto Estrada

    2018-01-01

    Full Text Available Deconstructivism is an aesthetically appealing architectonic style. Here, we identify some general characteristics of this style, such as decomposition of the whole into parts, superposition of layers, and conservation of the memory of the whole. Using these attributes, we propose a method to deconstruct functions based on integers. Using this integer-function deconstruction we generate spatial networks which display a few artistic attributes such as (i) biomorphic shapes, (ii) symmetry, and (iii) beauty. In building these networks, the deconstructed integer-functions are used as the coordinates of the nodes in a unit square, which are then joined according to a given connection radius, as in random geometric graphs (RGGs). Some graph-theoretic invariants of these networks are calculated and compared with the classical RGGs. We then show how these networks inspire an artist to create artistic compositions using mixed techniques on canvas and on paper. Finally, we argue that the applicability of (network) sciences should not come at the expense of curiosity-driven and aesthetics-driven research. We claim that the aesthetics of network research, and not only its applicability, would be an attractor of new minds to this field.
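
    The classical RGG construction these networks are compared against can be sketched as follows; uniform random coordinates stand in for the deconstructed integer-function coordinates of the paper:

```python
import math
import random

def random_geometric_graph(n, radius, seed=0):
    """Classical RGG: n uniform points in the unit square, edges between
    pairs closer than `radius`. The paper derives node coordinates from
    deconstructed integer functions instead of uniform sampling."""
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j)
             for i in range(n) for j in range(i + 1, n)
             if math.dist(pos[i], pos[j]) < radius]
    return pos, edges

pos, edges = random_geometric_graph(100, 0.15)
avg_degree = 2 * len(edges) / len(pos)
```

Swapping the uniform coordinates for the deconstructed integer-function coordinates changes the spatial point pattern while keeping the same connection rule, which is what produces the biomorphic shapes described above.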

  2. Rewrite systems for integer arithmetic

    NARCIS (Netherlands)

    Walters, H.R.; Zantema, H.

    1994-01-01

    We present three term rewrite systems for integer arithmetic with addition, multiplication, and, in two cases, subtraction. All systems are ground confluent and terminating; termination is proved by semantic labelling and recursive path order. The first system represents numbers by successor and

  3. A few Smarandache Integer Sequences

    OpenAIRE

    Ibstedt, Henry

    2010-01-01

    This paper deals with the analysis of a few Smarandache Integer Sequences which first appeared in Properties of the Numbers, F. Smarandache, University of Craiova Archives, 1975. The first four sequences are recurrence-generated sequences while the last three are concatenation sequences.

  4. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    Science.gov (United States)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object based video is relevant for 3D Immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state of art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task based analysis in a full prototype field trial shows increased presence, emotion, user and state recognition of the reconstructed 3D Human representation compared to animated computer avatars.

  5. A time series model: First-order integer-valued autoregressive (INAR(1))

    Science.gov (United States)

    Simarmata, D. M.; Novkaniza, F.; Widyaningsih, Y.

    2017-07-01

    Nonnegative integer-valued time series arise in many applications. A time series model, the first-order Integer-valued AutoRegressive model (INAR(1)), is constructed with the binomial thinning operator to model nonnegative integer-valued time series. INAR(1) depends on the process one period before. The parameter of the model can be estimated by Conditional Least Squares (CLS). The specification of INAR(1) follows that of AR(1). Forecasting in INAR(1) uses a median or Bayesian forecasting methodology. The median forecasting methodology finds the least integer s whose cumulative distribution function (CDF) value is at least 0.5. The Bayesian forecasting methodology forecasts h steps ahead by generating the model parameter and the innovation-term parameter using Adaptive Rejection Metropolis Sampling within Gibbs sampling (ARMS), then finding the least integer s where the CDF up to s is at least u, where u is a value drawn from the Uniform(0,1) distribution. INAR(1) is applied to monthly pneumonia cases in Penjaringan, Jakarta Utara, from January 2008 until April 2016.
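
    An INAR(1) path with binomial thinning can be simulated directly; Poisson innovations are assumed here for concreteness, since the record leaves the innovation distribution general:

```python
import math
import random

def simulate_inar1(alpha: float, lam: float, n: int, seed: int = 42):
    """Simulate X_t = alpha ∘ X_{t-1} + e_t, where ∘ is binomial thinning
    (each of the X_{t-1} counts survives with probability alpha) and the
    innovations e_t are Poisson(lam) — one common assumption."""
    rng = random.Random(seed)

    def poisson(mu):                      # Knuth's method, fine for small mu
        L, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    x = poisson(lam / (1.0 - alpha))      # start near the stationary mean
    path = [x]
    for _ in range(n - 1):
        survivors = sum(1 for _ in range(x) if rng.random() < alpha)
        x = survivors + poisson(lam)      # thinning plus innovation
        path.append(x)
    return path

# Stationary mean with Poisson innovations is lam / (1 - alpha) = 4 here.
path = simulate_inar1(alpha=0.5, lam=2.0, n=500)
```

Because thinning keeps an integer number of survivors and the innovation is an integer count, every simulated value is a nonnegative integer, unlike a Gaussian AR(1).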

  6. An effective approach to attenuate random noise based on compressive sensing and curvelet transform

    International Nuclear Information System (INIS)

    Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang

    2016-01-01

    Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate the random noise attenuation problem as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparse transform in the optimization problem to regularize the sparse coefficients in order to separate signal and noise and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem with an easy implementation and a fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edge of seismic events during the noise attenuation process and has high computational efficiency compared with the traditional curvelet thresholding and iterative soft thresholding based denoising methods. Besides, compared with f-x deconvolution, the proposed denoising method is capable of eliminating the random noise more effectively while preserving more useful signals.

  7. Integers without large prime factors in short intervals: Conditional ...

    Indian Academy of Sciences (India)

    α > 0 the interval (X, X + √X (log X)^(1/2+o(1))] contains an integer having no prime factor exceeding X^α for all X sufficiently large. Keywords. Smooth numbers; Riemann zeta function. 1. Introduction. Suppose P(n) denotes the largest prime factor of an integer n > 1 and let us declare P(1) = 1. Given a positive real number y, ...

  8. Disk-based compression of data from genome sequencing.

    Science.gov (United States)

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions for this problem are disk based, where the better of these two, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of a conceptually simple and easily parallelizable idea of minimizers, to obtain 0.317 bits per base as the compression ratio, allowing to fit the 134.0 Gbp dataset into only 5.31 GB of space. http://sun.aei.polsl.pl/orcom under a free license. sebastian.deorowicz@polsl.pl Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
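
    The minimizer idea the method is built on can be sketched in a few lines; the bucketing below is a simplification of the full disk-based pipeline:

```python
def minimizer(read: str, k: int) -> str:
    """Lexicographically smallest k-mer of a read (its minimizer).
    Overlapping reads usually share a minimizer, so binning by minimizer
    groups highly similar reads, as in ORCOM-style disk-based compressors."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

reads = ["ACGTACGT", "CGTACGTT", "TTTTGGGG"]
bins = {}
for r in reads:
    bins.setdefault(minimizer(r, 4), []).append(r)
# The two overlapping reads land in the same bin.
```

Because each bin holds near-identical reads, a downstream entropy coder sees long runs of shared context, which is how redundancy between overlapping reads is captured without holding the whole dataset in memory.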

  9. Parallel Integer Factorization Using Quadratic Forms

    National Research Council Canada - National Science Library

    McMath, Stephen S

    2005-01-01

    Factorization is important for both practical and theoretical reasons. In secure digital communication, security of the commonly used RSA public key cryptosystem depends on the difficulty of factoring large integers...

  10. Optimal Diet Planning for Eczema Patient Using Integer Programming

    Science.gov (United States)

    Zhen Sheng, Low; Sufahani, Suliadi

    2018-04-01

    Human diet planning is conducted by choosing appropriate food items that fulfill the nutritional requirements of the diet formulation. This paper discusses the application of integer programming to build a mathematical model of diet planning for eczema patients. The model developed is used to solve the diet problem of eczema patients from a young age group. Integer programming is a scientific approach to selecting suitable food items which seeks to minimize cost under the conditions of meeting desired nutrient quantities, avoiding food allergens and including certain foods that bring relief to the eczema condition. This paper illustrates that the integer programming approach is able to produce an optimal and feasible solution to the diet problem of an eczema patient.
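
    A toy version of such an integer program can be brute-forced for illustration; the food data below are hypothetical, not the paper's, and a real model would use an ILP solver rather than enumeration:

```python
from itertools import product

# Hypothetical food data: cost, protein (g) and calories per serving.
foods = {
    "rice":    (1.0,  3, 200),
    "chicken": (3.0, 25, 240),
    "spinach": (2.0,  3,  25),
}
MAX_SERVINGS = 4          # integer decision variables: 0..4 servings of each

def optimal_diet(min_protein=30, min_calories=600):
    """Brute-force the small integer program: minimise cost subject to
    nutrient lower bounds, with servings restricted to integers."""
    best = None
    for qty in product(range(MAX_SERVINGS + 1), repeat=len(foods)):
        cost = protein = calories = 0
        for q, (c, p, k) in zip(qty, foods.values()):
            cost += q * c
            protein += q * p
            calories += q * k
        if protein >= min_protein and calories >= min_calories:
            if best is None or cost < best[0]:
                best = (cost, dict(zip(foods, qty)))
    return best

cost, servings = optimal_diet()
```

Restricting servings to integers is the essential modelling choice: rounding a fractional LP solution can violate the nutrient bounds, which is exactly why an integer formulation is used.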

  11. Compression of Human Motion Animation Using the Reduction of Interjoint Correlation

    Directory of Open Access Journals (Sweden)

    Shiyu Li

    2008-01-01

    Full Text Available We propose two compression methods for the human motion in 3D space, based on the forward and inverse kinematics. In a motion chain, a movement of each joint is represented by a series of vector signals in 3D space. In general, specific types of joints such as end effectors often require higher precision than other general types of joints in, for example, CG animation and robot manipulation. The first method, which combines wavelet transform and forward kinematics, enables users to reconstruct the end effectors more precisely. Moreover, progressive decoding can be realized. The distortion of parent joint coming from quantization affects its child joint in turn and is accumulated to the end effector. To address this problem and to control the movement of the whole body, we propose a prediction method further based on the inverse kinematics. This method achieves efficient compression with a higher compression ratio and higher quality of the motion data. By comparing with some conventional methods, we demonstrate the advantage of ours with typical motions.

  13. Development of Gradient Compression Garments for Protection Against Post Flight Orthostatic Intolerance

    Science.gov (United States)

    Stenger, M. B.; Lee, S. M. C.; Westby, C. M.; Platts, S. H.

    2010-01-01

    Orthostatic intolerance after space flight is still an issue for astronaut health. No in-flight countermeasure has been 100% effective to date. NASA currently uses an inflatable anti-gravity suit (AGS) during reentry, but this device is uncomfortable and loses effectiveness upon egress from the Shuttle. The Russian Space Agency currently uses a mechanical counter-pressure garment (Kentavr) that is difficult to adjust alone, and prolonged use may result in painful swelling at points where the garment is not continuous (feet, knees, and groin). To improve comfort, reduce upmass and stowage requirements, and control fabrication and maintenance costs, we have been evaluating a variety of gradient compression, mechanical counter-pressure garments, constructed from spandex and nylon, as a possible replacement for the current AGS. We have examined comfort and cardiovascular responses to knee-high garments in normovolemic subjects; thigh-high garments in hypovolemic subjects and in astronauts after space flight; and 1-piece, breast-high garments in hypovolemic subjects. These gradient compression garments provide 55 mmHg of compression over the ankle, decreasing linearly to 35 mmHg at the knee. In thigh-high versions the compression continues to decrease to 20 mmHg at the top of the leg, and for breast-high versions, to 15 mmHg over the abdomen. Measures of efficacy include increased tilt survival time, elevated blood pressure and stroke volume, and lower heart-rate response to orthostatic stress. Results from these studies indicate that the greater the magnitude of compression and the greater the area of coverage, the more effective the compression garment becomes. Therefore, we are currently testing a 3-piece breast-high compression garment on astronauts after short-duration flight. We chose a 3-piece garment consisting of thigh-high stockings and shorts, because it is easy to don and comfortable to wear, and should provide the same level of protection as the 1-piece

  14. A new algorithm for benchmarking in integer data envelopment analysis

    Directory of Open Access Journals (Sweden)

    M. M. Omran

    2012-08-01

    The aim of this study is to investigate the effect of integer data in data envelopment analysis (DEA). The inputs and outputs in the different types of DEA models are assumed to be continuous, but in most application-oriented problems some or all data are integers, for example when the inputs/outputs count cars or people, and the continuity assumption is violated. In fact, the benchmark unit is artificial and need not have integer inputs/outputs after projection onto the efficiency frontier. By rounding off the projection point, we may lose feasibility or end up with an inefficient DMU. In such cases, a benchmark unit is required such that the considered unit reaches efficiency. In the present short communication, we propose a novel algorithm that projects an inefficient DMU in such a way that the resulting benchmark takes fully integer input/output values.
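The rounding problem described above can be illustrated with a toy numeric sketch (all numbers hypothetical, not from a solved DEA model):

```python
# Two efficient peer DMUs with integer (input, output) data; the weights
# are hypothetical DEA intensity variables (lambdas). The projection of
# an inefficient DMU onto the frontier is a convex combination of peers,
# so it is generally fractional even when all observed data are integer.
peers = [(2, 5), (6, 9)]
lambdas = [0.7, 0.3]

proj_input = sum(l * p[0] for l, p in zip(lambdas, peers))
proj_output = sum(l * p[1] for l, p in zip(lambdas, peers))

print(proj_input, proj_output)   # about 3.2 and 6.2: not integer, so a
# rounded benchmark such as (3, 6) need not be feasible or efficient,
# which is exactly the situation the proposed algorithm addresses.
```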

  15. End User Perceptual Distorted Scenes Enhancement Algorithm Using Partition-Based Local Color Values for QoE-Guaranteed IPTV

    Science.gov (United States)

    Kim, Jinsul

    In this letter, we propose a distorted-scene enhancement algorithm that provides end users with perceptual QoE-guaranteed IPTV service. Block edge detection with a weight factor, combined with a partition-based local-color-values method, is applied to video frames degraded by network transmission errors such as out-of-order delivery, jitter, and packet loss, improving QoE efficiently. Quality-metric results show that distorted scenes are restored better with the proposed enhancement algorithm than with other methods.

  16. Splanchnic Compression Improves the Efficacy of Compression Stockings to Prevent Orthostatic Intolerance

    Science.gov (United States)

    Platts, Steven H.; Brown, A. K.; Lee, S. M.; Stenger, M. B.

    2009-01-01

    Purpose: Post-spaceflight orthostatic intolerance (OI) is observed in 20-30% of astronauts. Previous data from our laboratory suggest that this is largely a result of decreased venous return. Currently, NASA astronauts wear an anti-gravity suit (AGS) which consists of inflatable air bladders over the calves, thighs and abdomen, typically pressurized from 26 to 78 mmHg. We recently determined that thigh-high graded compression stockings (JOBST(R), 55 mmHg at ankle, 6 mmHg at top of thigh) were effective, though to a lesser degree than the AGS. The purpose of this study was to evaluate the addition of splanchnic compression to prevent orthostatic intolerance. Methods: Ten healthy volunteers (6M, 4F) participated in three 80° head-up tilts on separate days while (1) normovolemic, (2) hypovolemic w/ breast-high compression stockings (BS) (JOBST(R), 55 mmHg at the ankle, 6 mmHg at top of thigh, 12 mmHg over abdomen), and (3) hypovolemic w/o stockings. Hypovolemia was induced by IV infusion of furosemide (0.5 mg/kg) and 48 hrs of a low salt diet to simulate plasma volume loss following space flight. Hypovolemic testing occurred 24 and 48 hrs after furosemide. One-way repeated measures ANOVA, with Bonferroni corrections, was used to test for differences in blood pressure and heart rate responses to head-up tilt; stand times were compared using a Kaplan-Meier survival analysis. Results: BS were effective in preventing OI and presyncope in hypovolemic test subjects (p = 0.015). BS prevented the decrease in systolic blood pressure seen during tilt in normovolemia (p < 0.001) and hypovolemia w/o countermeasure (p = 0.005). BS also prevented the decrease in diastolic blood pressure seen during tilt in normovolemia (p = 0.006) and hypovolemia w/o countermeasure (p = 0.041). Hypovolemia w/o countermeasure showed a higher tilt-induced heart rate increase (p = 0.022) than seen in normovolemia; heart rate while wearing BS was not different than normovolemia (p = 0.353). 
Conclusion: BS may

  17. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on MINMAX quality criteria is introduced. Practical...

  18. Anisotropic crystal structure distortion of the monoclinic polymorph of acetaminophen at high hydrostatic pressures.

    Science.gov (United States)

    Boldyreva, E V; Shakhtshneider, T P; Vasilchenko, M A; Ahsbahs, H; Uchtmann, H

    2000-04-01

    The anisotropy of structural distortion of the monoclinic polymorph of acetaminophen induced by hydrostatic pressure up to 4.0 GPa was studied by single-crystal X-ray diffraction in a Merrill-Bassett diamond anvil cell (DAC). The space group (P2(1)/n) and the general structural pattern remained unchanged with pressure. Despite the overall decrease in the molar volume with pressure, the structure expanded in particular crystallographic directions. One of the linear cell parameters (c) passed through a minimum as the pressure increased. The intramolecular bond lengths changed only slightly with pressure, but the changes in the dihedral and torsion angles were very large. The compressibility of the intermolecular hydrogen bonds NH...O and OH...O was measured. NH...O bonds were shown to be slightly more compressible than OH...O bonds. The anisotropy of structural distortion was analysed in detail in relation to the pressure-induced changes in the molecular conformations, to the compression of the hydrogen-bond network, and to the changes in the orientation of molecules with respect to each other in the pleated sheets in the structure. Dirichlet domains were calculated in order to analyse the relative shifts of the centroids of the hydrogen-bonded cycles and of the centroids of the benzene rings with pressure.

  19. Structural distortions in 5-10 nm silver nanoparticles under high pressure

    Energy Technology Data Exchange (ETDEWEB)

    Koski, Kristie J.; Kamp, Noelle M.; Kunz, Martin; Knight, Jason K.; Alivisatos, A.P.; Smith, R.K.

    2008-10-13

    We present experimental evidence that silver nanoparticles in the size range of 5-10 nm undergo a reversible structural transformation under hydrostatic pressures up to 10 GPa. We have used x-ray diffraction with a synchrotron light source to investigate pressure-dependent and size-dependent trends in the crystal structure of silver nanoparticles in a hydrostatic medium compressed in a diamond-anvil cell. Results suggest a reversible linear pressure-dependent rhombohedral distortion which has not been previously observed in bulk silver. We propose a mechanism for this transition that considers the bond-length distribution in idealized multiply twinned icosahedral particles. To further support this hypothesis, we also show that similar measurements of single-crystal platinum nanoparticles reveal no such distortions.

  20. Stochastic programming with integer recourse

    NARCIS (Netherlands)

    van der Vlerk, Maarten Hendrikus

    1995-01-01

    In this thesis we consider two-stage stochastic linear programming models with integer recourse. Such models are at the intersection of two different branches of mathematical programming. On the one hand some of the model parameters are random, which places the problem in the field of stochastic

  1. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on board satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT or 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC codes in a deeply coupled way, to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC-CCSDS-based algorithm has better compression performance than traditional compression approaches.
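The bit-plane stage of the pipeline above can be sketched as follows. This is a minimal illustration of the bit-plane idea only, not the actual CCSDS BPE, which adds sign coding, block scanning, and entropy coding:

```python
import numpy as np

# Toy wavelet-coefficient magnitudes (hypothetical 8-bit values).
coeffs = np.array([13, 2, 7, 0, 31], dtype=np.uint8)

# Encoder view: transmit one bit plane at a time, MSB plane first.
planes = [(coeffs >> b) & 1 for b in range(7, -1, -1)]

# Decoder view: accumulate the received planes back into magnitudes.
recon = np.zeros_like(coeffs)
for plane in planes:
    recon = (recon << 1) | plane

print(recon.tolist())   # [13, 2, 7, 0, 31]: lossless once all planes arrive
```

Truncating the plane list early yields a coarser, embedded approximation, which is what makes bit-plane coding attractive for rate-controlled downlinks.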

  2. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    Remote sensing multispectral image compression encoders require low complexity, high robustness, and high performance because they usually operate on board satellites, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT or 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the DSC strategy of Slepian-Wolf (SW) coding based on QC-LDPC codes in a deeply coupled way, to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  3. Ramsey theory on the integers

    CERN Document Server

    Landman, Bruce M

    2014-01-01

    Ramsey theory is the study of the structure of mathematical objects that is preserved under partitions. In its full generality, Ramsey theory is quite powerful, but can quickly become complicated. By limiting the focus of this book to Ramsey theory applied to the set of integers, the authors have produced a gentle, but meaningful, introduction to an important and enticing branch of modern mathematics. Ramsey Theory on the Integers offers students a glimpse into the world of mathematical research and the opportunity for them to begin pondering unsolved problems. For this new edition, several sections have been added and others have been significantly updated. Among the newly introduced topics are: rainbow Ramsey theory, an "inequality" version of Schur's theorem, monochromatic solutions of recurrence relations, Ramsey results involving both sums and products, monochromatic sets avoiding certain differences, Ramsey properties for polynomial progressions, generalizations of the Erdős–Ginzburg–Ziv theorem, and t...

  4. Real-time lossless data compression techniques for long-pulse operation

    International Nuclear Information System (INIS)

    Jesus Vega, J.; Sanchez, E.; Portas, A.; Pereira, A.; Ruiz, M.

    2006-01-01

    Data logging and data distribution will be two main tasks connected with data handling in ITER. Data logging refers to the recovery and ultimate storage of all data, independent of the data source. The distribution of control and physics data relates, on the one hand, to on-line data broadcasting for immediate data availability for both analysis and visualization; on the other hand, delayed analyses require off-line data access. Due to the large data volume expected, data compression will be mandatory in order to save storage and bandwidth. On-line data distribution in a long-pulse environment requires a deterministic approach to ensure a proper response time for data availability. An essential feature for all the above purposes is to apply compression techniques that ensure the recovery of the initial signals without spectral distortion when the compacted data are expanded (lossless techniques). Delta compression methods are independent of the analogue characteristics of the waveforms, and a variety of implementations have been applied to the databases of several fusion devices, such as Alcator, JET and TJ-II, among others. Delta compression is carried out in a two-step algorithm. The first step is the delta calculation, i.e. the computation of the differences between the digital codes of adjacent signal samples. The resulting deltas are then encoded according to a constant- or variable-length bit allocation. Several encoding forms can be considered for the second step, and they have to satisfy a prefix-code property. However, in order to meet the requirement of on-line data distribution, the encoding forms have to be defined prior to data capture. This article reviews different lossless data compression techniques based on delta compression. In addition, the concept of cyclic delta transformation is introduced. Furthermore, comparative results concerning compression rates on different
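The two-step scheme described above (delta calculation, then prefix-code bit allocation) can be sketched as follows. The zigzag mapping and the Elias-gamma code are illustrative choices of predefined prefix code, not necessarily those used at the cited fusion devices:

```python
def elias_gamma(n):                 # n >= 1; a predefined prefix-free code
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b   # (len-1) zeros, then the binary digits

def compress(samples):
    # Step 1: deltas between adjacent digital codes (first sample kept).
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    # Step 2: map signed deltas to positive integers and prefix-encode.
    def zigzag(d):
        return 2 * d if d >= 0 else -2 * d - 1
    return "".join(elias_gamma(zigzag(d) + 1) for d in deltas)

def decompress(bits):
    out, i = [], 0
    while i < len(bits):
        z = 0
        while bits[i] == "0":       # count leading zeros of the codeword
            z += 1
            i += 1
        v = int(bits[i:i + z + 1], 2) - 1
        i += z + 1
        d = v // 2 if v % 2 == 0 else -(v + 1) // 2   # undo zigzag
        out.append(d if not out else out[-1] + d)     # undo delta
    return out

signal = [100, 101, 101, 99, 102]
assert decompress(compress(signal)) == signal   # lossless round trip
```

Because the code table is fixed in advance, encoding is deterministic per sample, which matches the on-line distribution requirement discussed above.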

  5. Suppression of tunneling by interference in half-integer--spin particles

    OpenAIRE

    Loss, Daniel; DiVincenzo, David P.; Grinstein, G.

    1992-01-01

    Within a wide class of ferromagnetic and antiferromagnetic systems, quantum tunneling of magnetization direction is spin-parity dependent: it vanishes for magnetic particles with half-integer spin, but is allowed for integer spin. A coherent-state path integral calculation shows that this topological effect results from interference between tunneling paths.

  6. Distortion-Free 1-Bit PWM Coding for Digital Audio Signals

    Directory of Open Access Journals (Sweden)

    John Mourjopoulos

    2007-01-01

    Although uniformly sampled pulse width modulation (UPWM) represents a very efficient digital audio coding scheme for digital-to-analog conversion and full-digital amplification, it suffers from strong harmonic distortions, as opposed to the benign non-harmonic artifacts present in analog PWM (naturally sampled PWM, NPWM). Complete elimination of these distortions usually requires excessive oversampling of the source PCM audio signal, which results in impractical realizations of digital PWM systems. In this paper, a description of the digital PWM distortion generation mechanism is given and a novel principle for its minimization is proposed, based on a process having some similarity to the dithering principle employed in multibit signal quantization. This conditioning signal is termed “jither” and it can be applied either in the PCM amplitude domain or the PWM time domain. It is shown that the proposed method achieves a significant reduction of the harmonic distortions, rendering digital PWM performance equivalent to that of the source PCM audio for mild oversampling (e.g., ×4), resulting in typical PWM clock rates of 90 MHz.

  7. Distortion-Free 1-Bit PWM Coding for Digital Audio Signals

    Directory of Open Access Journals (Sweden)

    Mourjopoulos John

    2007-01-01

    Although uniformly sampled pulse width modulation (UPWM) represents a very efficient digital audio coding scheme for digital-to-analog conversion and full-digital amplification, it suffers from strong harmonic distortions, as opposed to the benign non-harmonic artifacts present in analog PWM (naturally sampled PWM, NPWM). Complete elimination of these distortions usually requires excessive oversampling of the source PCM audio signal, which results in impractical realizations of digital PWM systems. In this paper, a description of the digital PWM distortion generation mechanism is given and a novel principle for its minimization is proposed, based on a process having some similarity to the dithering principle employed in multibit signal quantization. This conditioning signal is termed "jither" and it can be applied either in the PCM amplitude domain or the PWM time domain. It is shown that the proposed method achieves a significant reduction of the harmonic distortions, rendering digital PWM performance equivalent to that of the source PCM audio for mild oversampling (e.g., ×4), resulting in typical PWM clock rates of 90 MHz.

  8. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which demands transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation of the object-identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method that obtains the spectral bands containing most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps. First, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the correlation matrix between the different bands. Then, a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
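The band-correlation idea can be sketched as below, assuming a simple greedy threshold rule on the band-to-band correlation matrix; the paper's exact subspace-decomposition criterion may differ, and the data cube here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.standard_normal((64, 64))
bands = [base + 0.05 * k + 0.01 * rng.standard_normal((64, 64))
         for k in range(8)]               # 8 highly correlated bands
bands[4] = rng.standard_normal((64, 64))  # one spectrally dissimilar band
cube = np.stack(bands)

flat = cube.reshape(8, -1)                # one row of pixels per band
corr = np.abs(np.corrcoef(flat))          # band-to-band |correlation|

selected = [0]                            # greedily keep only "novel" bands
for b in range(1, 8):
    if max(corr[b, s] for s in selected) < 0.9:
        selected.append(b)

print(selected)   # the dissimilar band survives; redundant bands are dropped
```

The retained bands would then feed the wavelet and PCA stages, while the discarded bands are reconstructed from their highly correlated representatives.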

  9. Identification of Coupled Map Lattice Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Dong Xie

    2016-01-01

    A novel approach for the parameter identification of coupled map lattices (CML) based on compressed sensing is presented in this paper. We establish a meaningful connection between these two seemingly unrelated topics and identify the weighted parameters using the relevant recovery algorithms of compressed sensing. Specifically, we first transform the parameter identification problem of the CML into the sparse recovery problem of an underdetermined linear system. Compressed sensing provides a feasible method to solve an underdetermined linear system if the sensing matrix satisfies suitable conditions, such as the restricted isometry property (RIP) and mutual coherence. We then give a lower bound on the mutual coherence of the coefficient matrix generated by the observed values of the CML and also prove that it satisfies the RIP from a theoretical point of view. If the weighted vector of each element is sparse in the CML system, our approach can recover all the weighted parameters using only about M samples, which is far less than the number of lattice elements N. Another significant advantage is that our approach remains effective if the observed data are contaminated with certain types of noise. In the simulations, we mainly show the effects of the coupling parameter and of noise on the recovery rate.
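The sparse-recovery step can be sketched with a generic orthogonal matching pursuit solver on a synthetic underdetermined system y = Aw; the sensing matrix, dimensions, weight values, and solver choice are all illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, k = 50, 25, 3                 # N unknown weights, M << N observations

w = np.zeros(N)                     # sparse coupling weights (hypothetical)
w[[3, 17, 42]] = [0.8, -0.5, 0.3]
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix
y = A @ w                           # observed values

# Orthogonal matching pursuit: greedily grow the support, refit, repeat.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ x

w_hat = np.zeros(N)
w_hat[support] = x
print(np.linalg.norm(w_hat - w))    # near zero on this noiseless instance
```

With only M = 25 noiseless samples, the k = 3 nonzero weights are recovered from an N = 50-dimensional system, which is the effect the abstract describes.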

  10. Gambling-Related Distortions and Problem Gambling in Adolescents: A Model to Explain Mechanisms and Develop Interventions

    Directory of Open Access Journals (Sweden)

    Maria Anna Donati

    2018-01-01

    Although a number of gambling preventive initiatives have been realized with adolescents, many of them have been developed in the absence of a clear and explicitly described theoretical model. The present work aimed to analyze the adequacy of a model that explains gambling behavior in terms of gambling-related cognitive distortions (Study 1), and to verify the effectiveness of a preventive intervention developed on the basis of this model (Study 2). Following dual-process theories of cognitive functioning, in Study 1 we tested a model in which mindware gap, i.e., susceptibility to the gambler's fallacy, and contaminated mindware, i.e., superstitious thinking, were the antecedents of gambling-related cognitive distortions that, in turn, affect gambling frequency and problem gambling. Participants were 306 male adolescents (Mage = 17.2 years). A path analysis indicated that cognitive distortions have a mediating role in the relationship that links probabilistic reasoning fallacy and superstitious thinking with problem gambling. Following these findings, in Study 2 we developed a school-based intervention aimed at reducing gambling-related cognitive distortions by acting on the above-cited mindware problems. A pre- and post-test design with a 6-month follow-up was performed with 34 male adolescents (Mage = 16.8), randomly assigned to two groups (Training and No Training) whose baseline equivalence was verified. A mixed 2 × 2 ANOVA attested to a significant Time × Group interaction, indicating a significant reduction of cognitive distortions from pre-test to post-test only in the Training group. The follow-up attested to the stability of the training effects and a reduction of gambling frequency over time. These findings suggest that prevention strategies should address mindware problems, which can be considered predictors of gambling-related cognitive distortions.

  11. Gambling-Related Distortions and Problem Gambling in Adolescents: A Model to Explain Mechanisms and Develop Interventions

    Science.gov (United States)

    Donati, Maria Anna; Chiesi, Francesca; Iozzi, Adriana; Manfredi, Antonella; Fagni, Fabrizio; Primi, Caterina

    2018-01-01

    Although a number of gambling preventive initiatives have been realized with adolescents, many of them have been developed in absence of a clear and explicitly described theoretical model. The present work was aimed to analyze the adequacy of a model to explain gambling behavior referring to gambling-related cognitive distortions (Study 1), and to verify the effectiveness of a preventive intervention developed on the basis of this model (Study 2). Following dual-process theories on cognitive functioning, in Study 1 we tested a model in which mindware gap, i.e., susceptibility to the gambler’s fallacy, and contaminated mindware, i.e., superstitious thinking, were the antecedents of gambling-related cognitive distortions that, in turn, affect gambling frequency and problem gambling. Participants were 306 male adolescents (Mage = 17.2 years). A path analysis indicated that cognitive distortions have a mediating role in the relationship that links probabilistic reasoning fallacy and superstitious thinking with problem gambling. Following these findings, in Study 2 we developed a school-based intervention aimed to reduce gambling-related cognitive distortions acting on the above cited mindware problems. A pre- and post-test design – with a 6 months follow-up – was performed with 34 male adolescents (Mage = 16.8), randomly assigned to two groups (Training and No Training), and their baseline equivalence was verified. A Mixed 2 × 2 ANOVA attested a significant Time × Group interaction, indicating a significant reduction of the cognitive distortions from pre-test to post-test only in the Training group. The follow-up attested to the stability of the training effects and the reduction of gambling frequency over time. These findings suggest that prevention strategies should address mindware problems, which can be considered as predictors of gambling-related cognitive distortions. PMID:29354081

  12. Non-integer viscoelastic constitutive law to model soft biological tissues to in-vivo indentation.

    Science.gov (United States)

    Demirci, Nagehan; Tönük, Ergin

    2014-01-01

    During the last decades, derivatives and integrals of non-integer order have become more commonly used to describe the constitutive behavior of various viscoelastic materials, including soft biological tissues. Compared to integer-order constitutive relations, non-integer-order viscoelastic material models of soft biological tissues are capable of capturing a wider range of the viscoelastic behavior observed in experiments. Although integer-order models may yield comparably accurate results, non-integer-order material models have fewer parameters to be identified and, in addition, describe an intermediate material that can be adjusted monotonically and continuously between an ideal elastic solid and an ideal viscous fluid. In this work, starting with some preliminaries on non-integer (fractional) calculus, the "spring-pot" (an intermediate mechanical element between a solid and a fluid) and the non-integer-order three-element (Zener) solid model are introduced, and finally a user-defined large-strain non-integer-order viscoelastic constitutive model is constructed for use in finite element simulations. Using the constitutive equation developed, soft tissue material identification was performed by means of the inverse finite element method and in vivo indentation experiments. The results indicate that material coefficients obtained from relaxation experiments, when optimized with creep experimental data, can simulate relaxation, creep, and cyclic loading and unloading experiments accurately. Non-integer-calculus viscoelastic constitutive models, which have a physical interpretation and model experimental data accurately, are a good alternative to classical phenomenological viscoelastic constitutive equations.
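The spring-pot element mentioned above is conventionally written as a fractional-order constitutive law; a standard small-strain textbook form (the paper's user-defined large-strain model generalizes this idea) is:

```latex
% Spring-pot constitutive law (standard fractional-calculus form):
\sigma(t) = E\,\tau^{\alpha}\,\frac{d^{\alpha}\varepsilon(t)}{dt^{\alpha}},
\qquad 0 \le \alpha \le 1 .
% \alpha = 0 recovers an ideal elastic solid (Hooke's law), \alpha = 1 an
% ideal viscous fluid (Newtonian dashpot); intermediate \alpha tunes the
% element monotonically and continuously between the two. Its step-strain
% relaxation modulus decays as a power law:
G(t) = \frac{E\,(t/\tau)^{-\alpha}}{\Gamma(1-\alpha)} .
```

The single exponent α is what lets one element replace chains of integer-order springs and dashpots, hence the smaller number of parameters to identify.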

  13. Cognitive distortions and gambling near-misses in Internet Gaming Disorder: A preliminary study.

    Directory of Open Access Journals (Sweden)

    Yin Wu

    Increased cognitive distortions (i.e. biased processing of chance, probability and skill) are a key psychopathological process in disordered gambling. The present study investigated state and trait aspects of cognitive distortions in 22 individuals with Internet Gaming Disorder (IGD) and 22 healthy controls. Participants completed the Gambling Related Cognitions Scale as a trait measure of cognitive distortions, and played a slot machine task delivering wins, near-misses and full-misses. Ratings of pleasure ("liking") and motivation to play ("wanting") were taken following the different outcomes, and gambling persistence was measured after a mandatory phase. IGD was associated with elevated trait cognitive distortions, in particular skill-oriented cognitions. On the slot machine task, the IGD group showed increased "wanting" ratings compared with control participants, while the two groups did not differ regarding their "liking" of the game. The IGD group displayed increased persistence on the slot machine task. Near-miss outcomes did not elicit stronger motivation to play compared to full-miss outcomes overall, and there was no group difference on this measure. However, a near-miss position effect was observed, such that near-misses stopping before the payline were rated as more motivating than near-misses that stopped after the payline, and this differentiation was attenuated in the IGD group, suggesting possible counterfactual thinking deficits in this group. These data provide preliminary evidence for increased incentive motivation and cognitive distortions in IGD, at least in the context of a chance-based gambling environment.

  14. Cognitive distortions and gambling near-misses in Internet Gaming Disorder: A preliminary study.

    Science.gov (United States)

    Wu, Yin; Sescousse, Guillaume; Yu, Hongbo; Clark, Luke; Li, Hong

    2018-01-01

    Increased cognitive distortions (i.e. biased processing of chance, probability and skill) are a key psychopathological process in disordered gambling. The present study investigated state and trait aspects of cognitive distortions in 22 individuals with Internet Gaming Disorder (IGD) and 22 healthy controls. Participants completed the Gambling Related Cognitions Scale as a trait measure of cognitive distortions, and played a slot machine task delivering wins, near-misses and full-misses. Ratings of pleasure ("liking") and motivation to play ("wanting") were taken following the different outcomes, and gambling persistence was measured after a mandatory phase. IGD was associated with elevated trait cognitive distortions, in particular skill-oriented cognitions. On the slot machine task, the IGD group showed increased "wanting" ratings compared with control participants, while the two groups did not differ regarding their "liking" of the game. The IGD group displayed increased persistence on the slot machine task. Near-miss outcomes did not elicit stronger motivation to play compared to full-miss outcomes overall, and there was no group difference on this measure. However, a near-miss position effect was observed, such that near-misses stopping before the payline were rated as more motivating than near-misses that stopped after the payline, and this differentiation was attenuated in the IGD group, suggesting possible counterfactual thinking deficits in this group. These data provide preliminary evidence for increased incentive motivation and cognitive distortions in IGD, at least in the context of a chance-based gambling environment.

  15. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. By combining an adaptive probability model with predictive coding, the algorithm increases the coding compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates the probabilities of that block's symbols, and the decoder accurately recovers each encoded image block from the code-book information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate. The results show that it is an effective compression technology.
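The adaptive probability model described above can be illustrated by tracking ideal code lengths, i.e. the bits an arithmetic coder would ideally emit for each symbol under the current model. This is a sketch of the modeling idea only, not the paper's coder; the symbol block and alphabet are hypothetical:

```python
import math
from collections import Counter

def ideal_code_length(symbols, alphabet):
    counts = Counter({s: 1 for s in alphabet})   # Laplace-smoothed start
    total, bits = len(alphabet), 0.0
    for s in symbols:
        bits += -math.log2(counts[s] / total)    # cost under current model
        counts[s] += 1                           # adapt after each symbol
        total += 1
    return bits

block = "aaaaabaaaacaaaaa"            # a skewed block of image symbols
adaptive = ideal_code_length(block, "abc")
static = len(block) * math.log2(3)   # fixed uniform model, for comparison
print(adaptive < static)             # True: the adaptive model needs fewer bits
```

The decoder can maintain the identical counts from the symbols it has already decoded, which is why no side information about the probabilities needs to be transmitted.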

  16. Frozen density embedding with non-integer subsystems' particle numbers.

    Science.gov (United States)

    Fabiano, Eduardo; Laricchia, Savio; Della Sala, Fabio

    2014-03-21

    We extend frozen density embedding theory to non-integer subsystem particle numbers. Different features of this formulation are discussed, with special concern for approximate embedding calculations. In particular, we highlight the relation between the non-integer particle-number partition scheme and the resulting embedding errors. Finally, we discuss the implications of the present theory for the derivative discontinuity issue and the calculation of chemical reactivity descriptors.

  17. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

Full Text Available Wireless-network microseismic monitoring suffers from low compression ratios and high communication energy consumption. This paper proposes a segmented compression algorithm, based on compressed sensing (CS) theory and the characteristics of microseismic signals, for use in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; reducing the number of nonzero-element combinations within each segment improves the accuracy of signal reconstruction, while the compressed sensing framework yields a high compression ratio. Experimental results show that, using the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, for signals with a sparsity level above 40 compressed at a ratio of 0.4 or more, the mean square error of reconstruction is less than 0.01, prolonging the network lifetime by a factor of two.
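
The reconstruction side of compressed sensing can be sketched with a generic orthogonal matching pursuit (OMP) recovery from random Gaussian measurements; this illustrates only the CS framework and is unrelated to the Q-CSDR algorithm used in the paper:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of the sensing
    matrix Phi most correlated with the residual, then re-fit the signal on
    the selected support by least squares."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                 # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # compressed measurements

x_rec = omp(Phi, y, k)
print(np.linalg.norm(x - x_rec) < 1e-6)          # exact recovery expected here
```

With 48 measurements for a 4-sparse signal of length 128 the recovery is comfortably over-determined, which is why the greedy method succeeds.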

  18. Clinical evaluation of the JPEG2000 compression rate of CT and MR images for long term archiving in PACS

    International Nuclear Information System (INIS)

    Cha, Soon Joo; Kim, Sung Hwan; Kim, Yong Hoon

    2006-01-01

We wanted to evaluate an acceptable compression rate of JPEG2000 for long-term archiving of CT and MR images in PACS. Nine CT images and 9 MR images that had small or minimal lesions were randomly selected from the PACS at our institute. All the images were compressed at ratios of 5:1, 10:1, 20:1, 40:1 and 80:1 with the JPEG2000 compression protocol. Pairs of original and compressed images were compared by 9 radiologists working independently. We designed a JPEG2000 viewing program that displays two images on one monitor system for quick and easy evaluation. All the observers performed the comparison study twice, on 5-megapixel grey-scale LCD monitors and on 2-megapixel color LCD monitors, respectively. The PSNR (Peak Signal to Noise Ratio) values were calculated to make quantitative comparisons. For both MR and CT, the 5:1 compressed images showed no difference from the original images for all 9 observers, and only one observer could detect a difference on one CT image at 10:1 compression, and only on the 5-megapixel monitor. At the 20:1 compression rate, clinically significant image deterioration was found in 50% of the images in the 5-megapixel monitor study and in 30% of the images on the 2-megapixel monitor. PSNR values larger than 44 dB were calculated for all the compressed images. The clinically acceptable image compression rate for long-term archiving with the JPEG2000 compression protocol is 10:1 for MR and CT; if applied to PACS, it would reduce the cost and storage burden of the system.
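
The PSNR figure of merit used above is straightforward to compute; a minimal sketch for 8-bit images (peak value 255):

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means the compressed image
    is closer to the original. Identical images give infinite PSNR."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10 * np.log10(peak ** 2 / mse)

orig = np.array([[100, 110], [120, 130]], dtype=np.uint8)
comp = np.array([[101, 110], [120, 129]], dtype=np.uint8)  # two pixels off by 1
print(round(psnr(orig, comp), 1))   # 51.1 dB, well above the 44 dB threshold
```

Values above roughly 44 dB, as reported in the study, correspond to a mean squared error well under one grey level.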

  19. The role of visual similarity and memory in body model distortions.

    Science.gov (United States)

    Saulton, Aurelie; Longo, Matthew R; Wong, Hong Yu; Bülthoff, Heinrich H; de la Rosa, Stephan

    2016-02-01

Several studies have shown that the perception of one's own hand size is distorted in proprioceptive localization tasks. It has been suggested that those distortions mirror somatosensory anisotropies. Recent research suggests that non-corporeal items also show some spatial distortions. In order to investigate the psychological processes underlying the localization task, we investigated the influences of visual similarity and memory on distortions observed on corporeal and non-corporeal items. In experiment 1, participants indicated the location of landmarks on: their own hand, a rubber hand (rated as most similar to the real hand), and a rake (rated as least similar to the real hand). Results show no significant differences between rake and rubber hand distortions but both items were significantly less distorted than the hand. Experiments 2 and 3 explored the role of memory in spatial distance judgments of the hand, the rake and the rubber hand. Spatial representations of items measured in experiments 2 and 3 were also distorted but showed the tendency to be smaller than in localization tasks. While memory and visual similarity seem to contribute to explain qualitative similarities in distortions between the hand and non-corporeal items, those factors cannot explain the larger magnitude observed in hand distortions.

  20. REMOTE SENSING IMAGE QUALITY ASSESSMENT EXPERIMENT WITH POST-PROCESSING

    Directory of Open Access Journals (Sweden)

    W. Jiang

    2018-04-01

Full Text Available This paper describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as input to the image processing step are produced by this imaging system under the same parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. Image quality is assessed with the just-noticeable-difference (JND) subjective method based on ISO 20462; subjective assessment of the gathered and processed images reveals the influence of the different imaging parameters and of post-processing on image quality. The six JND subjective assessment datasets validate each other. The main conclusions are that image post-processing can improve image quality, even in combination with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and that with our post-processing method, image quality is best when the camera MTF lies within a small range.

  1. Refinement of the wedge bar technique for compression tests at intermediate strain rates

    Directory of Open Access Journals (Sweden)

    Stander M.

    2012-08-01

Full Text Available A refined development of the wedge-bar technique [1] for compression tests at intermediate strain rates is presented. The concept uses a wedge mechanism to compress small cylindrical specimens at strain rates in the order of 10 s−1 to strains of up to 0.3. Co-linear elastic impact principles are used to accelerate the actuation mechanism from rest to test speed in under 300 μs while maintaining near-uniform strain rates for up to 30 ms, i.e. the transient phase of the test is less than 1% of the total test duration. In particular, new load frame, load cell and sliding anvil designs are presented and shown to significantly reduce the noise generated during testing. Typical dynamic test results for a selection of metals and polymers are reported and compared with quasi-static and split Hopkinson pressure bar results.

  2. Base Station Antenna Pattern Distortion in Practical Urban Deployment Scenarios

    DEFF Research Database (Denmark)

    Rodriguez Larrad, Ignacio; Nguyen, Huan Cong; Sørensen, Troels Bundgaard

    2014-01-01

    In real urban deployments, base station antennas are typically not placed in free space conditions. Therefore, the radiation pattern can be affected by mounting structures and nearby obstacles located in the proximity of the antenna (near-field), which are often not taken into consideration. Also...... presents a combination of near-field and far-field simulations aimed to provide an overview of the distortion experienced by the base station antenna pattern in two different urban deployment scenarios: rooftop and telecommunications tower. The study illustrates how, in comparison with the near...

  3. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated for the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)
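
Oriented-texture analysis with a bank of Gabor filters, the first stage of the pipeline described above, can be sketched as follows (a toy illustration with hypothetical parameter values, not the authors' filter design):

```python
import numpy as np

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Complex Gabor kernel tuned to orientation theta (radians); taking the
    magnitude of the filter response makes the measure phase-invariant."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(2j * np.pi * xr / wavelength)

# Toy oriented texture: vertical stripes with period 8 pixels
img = np.cos(2 * np.pi * np.arange(64) / 8.0)[None, :].repeat(64, axis=0)
patch = img[21:42, 21:42]                            # 21x21 analysis window

# Response energy for 8 orientations; the stripe orientation dominates
thetas = np.linspace(0, np.pi, 8, endpoint=False)
responses = [abs(np.sum(patch * gabor_kernel(t))) for t in thetas]
print(int(np.argmax(responses)))                     # index 0 = theta of stripes
```

A full detector would filter every pixel and feed the resulting orientation field into the phase portrait model; the sketch only shows how a single dominant orientation is picked out.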

  4. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Building on existing blind compressed sensing theory, the optimization problem is solved by an alternating minimization method. The proposed method addresses the difficulty of specifying the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The formulation ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with strong self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
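
The alternating-minimization idea, alternately refitting the sparse coefficient matrix and the dictionary, can be sketched on noiseless synthetic data (a simplified illustration with a soft-threshold sparsity step; the paper's measurement model and update rules may differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 20, 5, 50                        # signal dim, dictionary atoms, samples
D_true = rng.standard_normal((n, k))
S_true = rng.standard_normal((k, m)) * (rng.random((k, m)) < 0.3)  # sparse codes
X = D_true @ S_true                        # observations = dictionary x codes

D = rng.standard_normal((n, k))            # random initial dictionary
lam = 1e-3                                 # soft-threshold level (sparsity prior)
for _ in range(20):
    # Code step: least-squares fit of the coefficients, then soft-threshold
    S = np.linalg.lstsq(D, X, rcond=None)[0]
    S = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    # Dictionary step: least-squares refit, then renormalize the atoms
    D = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T
    norms = np.linalg.norm(D, axis=0) + 1e-12
    D, S = D / norms, S * norms[:, None]   # rescaling leaves D @ S unchanged

err = np.linalg.norm(X - D @ S) / np.linalg.norm(X)
print(err < 0.05)                          # the product D @ S matches X closely
```

The recovered factors are not unique (any permutation/scaling of atoms fits equally well), which is exactly why blind CS needs extra conditions to guarantee a unique solution.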

  5. The failure of brittle materials under overall compression: Effects of loading rate and defect distribution

    Science.gov (United States)

    Paliwal, Bhasker

The constitutive behaviors and failure processes of brittle materials under far-field compressive loading are studied in this work. Several approaches are used: experiments to study the compressive failure behavior of ceramics, design of experimental techniques by means of finite element simulations, and the development of micro-mechanical damage models to analyze and predict mechanical response of brittle materials under far-field compression. Experiments have been conducted on various ceramics (primarily on a transparent polycrystalline ceramic, aluminum oxynitride or AlON) under loading rates ranging from quasi-static (~5×10−6) to dynamic (~200 MPa/μs), using a servo-controlled hydraulic test machine and a modified compression Kolsky bar (MKB) technique respectively. High-speed photography has also been used with exposure times as low as 20 ns to observe the dynamic activation, growth and coalescence of cracks and resulting damage zones in the specimen. The photographs were correlated in time with measurements of the stresses in the specimen. Further, by means of 3D finite element simulations, an experimental technique has been developed to impose a controlled, homogeneous, planar confinement in the specimen. The technique can be used in conjunction with a high-speed camera to study the in situ dynamic failure behavior of materials under confinement. AlON specimens are used for the study. The statically pre-compressed specimen is subjected to axial dynamic compressive loading using the MKB. Results suggest that confinement not only increases the load carrying capacity, it also results in a non-linear stress evolution in the material. High-speed photographs also suggest an inelastic deformation mechanism in AlON under confinement which evolves more slowly than the typical brittle-cracking type of damage in the unconfined case. Next, an interacting micro-crack damage model is developed that explicitly accounts for the interaction among the micro-cracks in

  6. Deterministic integer multiple firing depending on initial state in Wang model

    Energy Technology Data Exchange (ETDEWEB)

    Xie Yong [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)]. E-mail: yxie@mail.xjtu.edu.cn; Xu Jianxue [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China); Jiang Jun [Institute of Nonlinear Dynamics, MSSV, Department of Engineering Mechanics, Xi' an Jiaotong University, Xi' an 710049 (China)

    2006-12-15

We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, with the peaks located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables.
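
The signature of integer multiple firing, interspike intervals clustering at integer multiples of a basic interval, can be checked numerically (the ISI values below are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical interspike intervals (ms): roughly integer multiples of a
# ~12 ms basic interval, as in the integer multiple firing pattern above.
isis = np.array([12.1, 11.9, 24.2, 12.0, 36.1, 23.8, 12.2, 48.0])
base = isis.min()                 # crude estimate of the basic interval

multiples = isis / base           # each ISI relative to the basic interval
nearest = np.round(multiples)     # nearest integer multiple

# Histogram peaks sit at multiples 1, 2, 3, 4 of the basic interval ...
print(sorted(set(int(n) for n in nearest)))
# ... and every ISI is within 5% of its nearest integer multiple
print(np.allclose(multiples, nearest, rtol=0.05, atol=0))
```

On real simulation output, the same check would be applied to ISIs extracted from threshold crossings of the membrane potential.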

  7. Deterministic integer multiple firing depending on initial state in Wang model

    International Nuclear Information System (INIS)

    Xie Yong; Xu Jianxue; Jiang Jun

    2006-01-01

We numerically investigate the dynamical behaviour of the Wang model, which describes the rhythmic activities of thalamic relay neurons. The model neuron exhibits Type I excitability from a global view, but Type II excitability from a local view. There exists a narrow range of bistability, in which a subthreshold oscillation and a suprathreshold firing behaviour coexist. A special firing pattern, integer multiple firing, can be found in a certain part of the bistable range. The characteristic feature of such a firing pattern is that the histogram of interspike intervals has a multipeaked structure, with the peaks located at about integer multiples of a basic interspike interval. Since the Wang model is noise-free, the integer multiple firing is a deterministic firing pattern. The existence of bistability leads to the deterministic integer multiple firing depending on the initial state of the model neuron, i.e., the initial values of the state variables.

  8. 5th Conference on Non-integer Order Calculus and Its Applications

    CERN Document Server

    Kacprzyk, Janusz; Baranowski, Jerzy

    2013-01-01

This volume presents various aspects of non-integer order systems, also known as fractional systems, which have recently attracted increasing attention in the scientific communities of systems science, applied mathematics and control theory. Non-integer systems have become relevant to many fields of science and technology, exemplified by the modeling of signal transmission, electric noise, dielectric polarization, heat transfer, electrochemical reactions, thermal processes, acoustics, etc. The content is divided into six parts, each of which considers one of the currently relevant problems. The first part discusses the realization problem, with a special focus on positive systems. The second part considers the stability of certain classes of non-integer order systems with and without delays. The third part focuses on such important aspects as controllability, observability and optimization, especially in discrete time. The fourth part is focused on distributed systems, where non-integer calculus leads to ...

  9. Performance Analysis for Cooperative Communication System with QC-LDPC Codes Constructed with Integer Sequences

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2015-01-01

Full Text Available This paper presents four different integer sequences for constructing quasi-cyclic low-density parity-check (QC-LDPC) codes, together with the underlying mathematical theory, and describes the coding principle and procedure. The QC-LDPC codes constructed from the four integer sequences are compared with LDPC codes obtained using the PEG algorithm, with array codes, and with MacKay codes, respectively. The integer-sequence QC-LDPC codes are then used in coded cooperative communication. Simulation results show that the integer-sequence QC-LDPC codes are effective, and their overall performance in coded cooperative communication is better than that of the other types of LDPC codes; among them, the QC-LDPC codes constructed from the Dayan integer sequence perform best.
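
The construction underlying QC-LDPC codes, a parity-check matrix assembled from circulant permutation blocks whose shift exponents come from an integer sequence, can be sketched as follows (the shift grid here is hypothetical, not one of the paper's four sequences):

```python
import numpy as np

def circulant_permutation(p, shift):
    """p x p identity matrix with its columns cyclically shifted."""
    return np.roll(np.eye(p, dtype=int), shift % p, axis=1)

def qc_ldpc_parity_matrix(shifts, p):
    """Assemble H from a grid of shift exponents: an entry s >= 0 becomes a
    shifted-identity block, and -1 would mark an all-zero block."""
    rows = []
    for shift_row in shifts:
        blocks = [circulant_permutation(p, s) if s >= 0
                  else np.zeros((p, p), dtype=int) for s in shift_row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Hypothetical 2 x 4 shift grid derived from a simple integer sequence
shifts = [[0, 1, 2, 3],
          [0, 2, 4, 6]]
H = qc_ldpc_parity_matrix(shifts, p=5)
print(H.shape)                                   # (10, 20)
print(H.sum(axis=0).min(), H.sum(axis=0).max())  # regular column weight 2
```

The quasi-cyclic block structure is what makes hardware encoding and decoding of these codes efficient: only the shift exponents need to be stored.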

  10. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

Full Text Available Telemetry data are essential in evaluating the performance of an aircraft and diagnosing its failures. This work combines oversampling with a run-length encoding compression algorithm incorporating an error factor to further enhance the compression of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out on FPGAs, and pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
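
The idea of run-length encoding with an error factor, merging consecutive samples that differ from the current run by less than a tolerance, can be sketched as follows (our illustration of the general idea, not the authors' FPGA scheme):

```python
def rle_with_tolerance(samples, eps=0.0):
    """Run-length encode a sequence, treating values within `eps` of the
    run's first value as equal. With eps > 0 this is slightly lossy, which
    is what buys the extra compression on noisy telemetry channels."""
    runs = []
    for s in samples:
        if runs and abs(s - runs[-1][0]) <= eps:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [(value, count) for value, count in runs]

data = [10.0, 10.1, 10.05, 20.0, 20.2, 5.0]
print(rle_with_tolerance(data, eps=0.3))
# [(10.0, 3), (20.0, 2), (5.0, 1)]
```

With eps=0 the scheme degrades to ordinary lossless RLE, so the error factor is a tunable trade-off between fidelity and compression ratio.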

  11. Searching for optimal integer solutions to set partitioning problems using column generation

    OpenAIRE

    Bredström, David; Jörnsten, Kurt; Rönnqvist, Mikael

    2007-01-01

We describe a new approach to produce integer feasible columns for a set partitioning problem directly while solving the linear programming (LP) relaxation using column generation. Traditionally, column generation aims to solve the LP relaxation as quickly as possible, without any concern for the integer properties of the columns formed. In our approach we aim to generate the columns forming the optimal integer solution while simultaneously solving the LP relaxation. By this we can re...

  12. Design of Solar PV Cell Based Inverter for Unbalanced and Distorted Industrial Loads

    Directory of Open Access Journals (Sweden)

    Naga Ananth D

    2015-04-01

Full Text Available PV cells are gaining importance in low- and medium-power generation owing to easy installation, low maintenance, and national price subsidies. Most loads in a distribution system are unbalanced and distorted, producing unbalanced and distorted voltages and currents at the load that may degrade its overall performance; unbalanced voltage, distorted voltage and current, and a different power factor in each phase can be observed. An efficient algorithm is considered for mitigating unbalanced and distorted load and source voltages and currents in a solar photovoltaic (PV) inverter for an isolated load system. This solar PV system is applicable to remotely located industrial loads such as heating, welding, and small arc-furnace type distorted loads, as well as to unbalanced loads. The PV inverter is designed to maintain a nearly constant voltage magnitude and to mitigate voltage and current harmonics near the load terminals. A MATLAB/Simulink solar PV inverter was simulated and the results compared with a standard three-phase grid-connected AC system. The results show that the inverter has very low voltage and current harmonic content and can maintain a nearly constant voltage profile for a highly unbalanced system.

  13. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near-optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
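
The basic trade-off behind checkpoint compression can be captured with a back-of-envelope model (our illustration with hypothetical numbers, not a model from the article): compression pays off when compression time plus the reduced write time beats writing the raw checkpoint.

```python
def compression_pays_off(size_gb, write_bw_gbs, compress_bw_gbs, ratio):
    """True if compressing a checkpoint reduces total checkpoint time.
    size_gb: checkpoint size; write_bw_gbs: file-system write bandwidth;
    compress_bw_gbs: compressor throughput; ratio: compression ratio."""
    t_raw = size_gb / write_bw_gbs
    t_compressed = size_gb / compress_bw_gbs + (size_gb / ratio) / write_bw_gbs
    return t_compressed < t_raw

# 64 GB checkpoint, 1 GB/s file system, 4 GB/s compressor, 2x ratio:
# 16 s to compress + 32 s to write = 48 s, versus 64 s raw
print(compression_pays_off(64, 1.0, 4.0, 2.0))   # True

# A slow 0.5 GB/s compressor loses: 128 s + 32 s > 64 s
print(compression_pays_off(64, 1.0, 0.5, 2.0))   # False
```

This is consistent with finding (2) above: once the compressor is already faster than the write path, further compressor speedups stop mattering.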

  14. Comparison of peripheral compression estimates using auditory steady-state responses (ASSR) and distortion product otoacoustic emissions (DPOAE)

    DEFF Research Database (Denmark)

    Encina Llamas, Gerard; Epp, Bastian; Dau, Torsten

    The healthy auditory system shows a compressive input/output (I/O) function as a result of healthy outer-hair cell function. Hearing impairment often leads to a decrease in sensitivity and a reduction of compression, mainly caused by loss of inner and/or outer hair cells. Compression is commonly...... (DPOAEs) recordings. Results show compressive ASSR I/O functions for NH subjects. For HI subjects, ASSR reveal the loss of sensitivity at low stimulus levels. Growth slopes are smaller (more compressive) in ASSR than in DPOAE I/O functions....

  15. Optimum image compression rate maintaining diagnostic image quality of digital intraoral radiographs

    International Nuclear Information System (INIS)

    Song, Ju Seop; Koh, Kwang Joon

    2000-01-01

The aims of the present study were to determine the optimum compression rate in terms of file-size reduction and diagnostic quality of the images after compression, and to evaluate the transmission speed of the original and compressed images. The material consisted of 24 extracted human premolars and molars. The occlusal and proximal surfaces of the teeth had a clinical disease spectrum that ranged from sound to varying degrees of fissure discoloration and cavitation. The images from the Digora system were exported in TIFF, and the images from conventional intraoral film were scanned and digitized in TIFF with a Nikon SF-200 scanner (Nikon, Japan). Six compression factors were chosen and applied on the basis of the results from a pilot study, giving a total of 336 images to be assessed. Three radiologists assessed the occlusal and proximal surfaces of the teeth on a 5-rank scale, and each surface was finally diagnosed as either sound or carious by one expert oral pathologist. Sensitivity, specificity and the kappa value for diagnostic agreement were calculated. The area (Az) values under the ROC curve were also calculated, and paired t-tests and one-way ANOVA tests were performed. Thereafter, the transmission times of the image files at each compression level were compared with that of the original image files. No significant difference was found between the original and the corresponding images up to a 7% (1:14) compression ratio for both occlusal and proximal caries (p<0.05). JPEG3 (1:14) image files were transmitted more than 10 times faster than the original image files while maintaining the diagnostic information of the images. A 1:14 compressed image file may therefore be used instead of the original, reducing storage needs and transmission time.

  16. Towards Merging Binary Integer Programming Techniques with Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Reza Zamani

    2017-01-01

Full Text Available This paper presents a framework based on merging a binary integer programming technique with a genetic algorithm. The framework uses both lower and upper bounds to make the employed mathematical formulation of a problem as tight as possible. For problems whose optimal solutions cannot be obtained, precision is traded for speed by substituting the integrality constraints in a binary integer program with a penalty. In this way, instead of constraining a variable u with a binary restriction, u is treated as a real number between 0 and 1, with the penalty Mu(1−u), in which M is a large number. Values not near the boundary extremes of 0 and 1 make the component Mu(1−u) large and are expected to be avoided implicitly. The non-binary values are then converted to priorities, and a genetic algorithm can use these priorities to fill its initial pool for producing feasible solutions. The presented framework can be applied to many combinatorial optimization problems. Here, a procedure based on this framework has been applied to a scheduling problem, and the results of computational experiments are discussed, emphasizing the knowledge generated and the inefficiencies to be circumvented with this framework in future.
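
The penalty relaxation can be illustrated directly: Mu(1−u) vanishes at the binary extremes, peaks at u = 0.5, and the resulting fractional values are ranked into priorities for the genetic algorithm (M and the u values below are hypothetical):

```python
import numpy as np

M = 1000.0

def binarity_penalty(u):
    """Penalty M*u*(1-u): zero at u = 0 or u = 1, large for fractional u,
    so a continuous solver is pushed toward near-binary values."""
    return M * u * (1.0 - u)

print(binarity_penalty(0.5))        # worst case: 250.0
print(binarity_penalty(0.0))        # binary value: 0.0

# Hypothetical fractional solution from the relaxed program
u = np.array([0.97, 0.08, 0.55, 0.91, 0.23])

# Convert the relaxed values into priorities for the genetic algorithm:
# higher u means the corresponding variable is selected/scheduled earlier.
priorities = np.argsort(-u)
print(priorities.tolist())          # [0, 3, 2, 4, 1]
```

The priority list, not the fractional values themselves, is what seeds the genetic algorithm's initial pool with feasible solutions.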

  17. A Constitutive Model for Unsaturated soils based on a Compressibility Framework dependent on Suction and Degree of Saturation

    Directory of Open Access Journals (Sweden)

    Sitarenios Panagiotis

    2016-01-01

Full Text Available The Modified Cam Clay model is extended to account for the behaviour of unsaturated soils using Bishop's stress. To describe the Loading-Collapse behaviour, the model incorporates a compressibility framework with suction- and degree-of-saturation-dependent compression lines. For simplicity, the present paper describes the model in the triaxial stress space, with characteristic simulations of constant-suction compression and triaxial tests, as well as wetting tests. The model reproduces an evolving post-yield compressibility under constant-suction compression and can thus adequately describe a maximum of collapse.

  18. Direct comparison of fractional and integer quantized Hall resistance

    Science.gov (United States)

    Ahlers, Franz J.; Götz, Martin; Pierz, Klaus

    2017-08-01

We present precision measurements of the fractional quantized Hall effect, where the quantized resistance R[1/3] in the fractional quantum Hall state at filling factor 1/3 was compared with a quantized resistance R[2], represented by an integer quantum Hall state at filling factor 2. A cryogenic current comparator bridge capable of currents down to the nanoampere range was used to directly compare two resistance values of two GaAs-based devices located in two cryostats. A value of 1 − (5.3 ± 6.3) × 10−8 (95% confidence level) was obtained for the ratio R[1/3]/(6 R[2]). This constitutes the most precise comparison of integer resistance quantization (in terms of h/e2) in single-particle systems and of fractional quantization in fractionally charged quasi-particle systems. While not relevant for practical metrology, such a test of the validity of the underlying physics is of significance in the context of the upcoming revision of the SI.

  19. Sabrewing: A lightweight architecture for combined floating-point and integer arithmetic

    NARCIS (Netherlands)

    Bruintjes, Tom; Walters, K.H.G.; Gerez, Sabih H.; Molenkamp, Egbert; Smit, Gerardus Johannes Maria

    In spite of the fact that floating-point arithmetic is costly in terms of silicon area, the joint design of hardware for floating-point and integer arithmetic is seldom considered. While components like multipliers and adders can potentially be shared, floating-point and integer units in

  20. Compressive sensing for feedback reduction in MIMO broadcast channels

    KAUST Repository

    Eltayeb, Mohammed E.

    2014-09-01

In multi-antenna broadcast networks, the base stations (BSs) rely on the channel state information (CSI) of the users to perform user scheduling and downlink transmission. However, in networks with a large number of users, obtaining CSI from all users is arduous, if not impossible, in practice. This paper proposes channel feedback reduction techniques based on the theory of compressive sensing (CS), which permits the BS to obtain CSI with acceptable recovery guarantees under substantially reduced feedback overhead. Additionally, assuming noisy CS measurements at the BS, inexpensive ways of improving post-CS detection are explored. The proposed techniques are shown to reduce the feedback overhead, improve CS detection at the BS, and achieve a sum rate close to that obtained by noiseless dedicated feedback channels.

  1. Modified Three-Step Search Block Matching Motion Estimation and Weighted Finite Automata based Fractal Video Compression

    Directory of Open Access Journals (Sweden)

    Shailesh Kamble

    2017-08-01

Full Text Available The major challenge with fractal image/video coding is the long encoding time it requires; how to reduce the encoding time therefore remains the central research question in fractal coding. Block matching motion estimation algorithms are used to reduce the computations performed in the encoding process. The objective of the proposed work is to develop an approach for video coding using a modified three-step search (MTSS) block matching algorithm and weighted finite automata (WFA) coding, with a specific focus on reducing the encoding time. The MTSS block matching algorithm is used for computing motion vectors between two frames, i.e. the displacement of pixels, and WFA is used for coding because it behaves like fractal coding (FC). WFA represents an image (a frame or a motion-compensated prediction error) based on the fractal idea that the image has self-similarity in itself. The self-similarity is sought from the symmetry of an image, so the encoding algorithm divides an image into multiple levels of quad-tree segmentation and creates an automaton from the sub-images. The proposed MTSS block matching algorithm is based on a combination of rectangular and hexagonal search patterns and is compared with the existing New Three-Step Search (NTSS), Three-Step Search (TSS), and Efficient Three-Step Search (ETSS) block matching algorithms. Its performance is evaluated on the basis of two parameters, the mean absolute difference (MAD) and the average number of search points required per frame, with MAD used as the block distortion measure (BDM). Finally, the developed approaches, namely MTSS with WFA, MTSS with FC, and plain FC (applied to every frame), are compared with each other. The experiments are carried out on standard uncompressed video sequences, namely akiyo, bus, mobile, suzie, traffic, football, soccer, ice, etc. Developed
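
The classic three-step search that NTSS, ETSS and the proposed MTSS refine can be sketched as follows, with MAD as the block distortion measure (a textbook TSS, not the MTSS of the paper; the synthetic displacement is chosen so the coarse stage finds an exact match):

```python
import numpy as np

def mad(block_a, block_b):
    """Mean absolute difference: the block distortion measure (BDM)."""
    return np.mean(np.abs(block_a.astype(float) - block_b.astype(float)))

def three_step_search(ref, cur, top, left, bsize=8, start_step=4):
    """Classic TSS: probe the centre and its 8 neighbours at the current
    step size, move to the best position, halve the step, and repeat."""
    h, w = ref.shape
    block = cur[top:top + bsize, left:left + bsize]
    cy, cx = top, left
    step = start_step
    while step >= 1:
        best = (float('inf'), cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= h - bsize and 0 <= x <= w - bsize:
                    cost = mad(block, ref[y:y + bsize, x:x + bsize])
                    if cost < best[0]:
                        best = (cost, y, x)
        _, cy, cx = best
        step //= 2
    return cy - top, cx - left          # motion vector (dy, dx)

# Synthetic frames: the block at (8, 8) in `cur` came from (12, 12) in `ref`,
# so the true motion vector (4, 4) is hit exactly by the first coarse probe.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32)).astype(np.uint8)
cur = np.zeros_like(ref)
cur[8:16, 8:16] = ref[12:20, 12:20]
print(three_step_search(ref, cur, 8, 8))   # (4, 4)
```

TSS visits at most 25 candidate positions instead of the 225 of a ±7 full search, which is exactly the computation saving the block matching family trades against matching accuracy.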

  2. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
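    The core idea of assigning binary bits to DNA bases can be illustrated with a minimal 2-bit-per-base packer; this is a generic sketch, not the actual DNABIT Compress scheme (which additionally exploits repetitive segments):

```python
# Two bits suffice for the four bases, giving 4 bases per byte
# versus 1 base per byte in plain ASCII storage.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq: str) -> bytes:
    """Pack a DNA string at 2 bits per base, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for j, base in enumerate(seq[i:i + 4]):
            b |= CODE[base] << (2 * (3 - j))
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    """Recover the first n bases from packed data."""
    chars = []
    for b in data:
        for j in range(4):
            chars.append(BASES[(b >> (2 * (3 - j))) & 0b11])
    return "".join(chars[:n])
```

    The round trip is lossless and the packed form is one quarter the size of the ASCII sequence.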

  3. Applications of exponential approximation by integer shifts of Gaussian functions

    Directory of Open Access Journals (Sweden)

    S. M. Sitnik

    2013-01-01

    Full Text Available In this paper we consider approximations of functions using integer shifts of Gaussians – quadratic exponentials. A method is proposed to find the coefficients of the node functions by solving a linear system of equations. An explicit formula for the determinant of the system is found; based on it, the solvability of the linear system under consideration and the uniqueness of its solution are proved. We compare our results with known ones and briefly indicate applications to signal theory.
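    A minimal sketch of the coefficient-finding step described above, assuming unit-width Gaussians exp(-(x-k)^2) centered at integer nodes: the interpolation conditions form a linear system whose matrix is the Gaussian kernel evaluated at node differences. The function names are illustrative, not from the paper.

```python
import numpy as np

def gaussian_shift_coeffs(f_vals, nodes):
    """Solve A c = f, where A[i, j] = exp(-(nodes[i] - nodes[j])**2),
    so that sum_j c[j] * exp(-(x - nodes[j])**2) interpolates f at the nodes."""
    nodes = np.asarray(nodes, dtype=float)
    A = np.exp(-(nodes[:, None] - nodes[None, :]) ** 2)
    return np.linalg.solve(A, np.asarray(f_vals, dtype=float))

def evaluate(coeffs, nodes, x):
    """Evaluate the Gaussian-shift expansion at points x."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    nodes = np.asarray(nodes, dtype=float)
    return np.exp(-(x[:, None] - nodes[None, :]) ** 2) @ coeffs
```

    The kernel matrix A is symmetric positive definite, so the system has a unique solution, consistent with the uniqueness result the abstract mentions.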

  4. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.
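    A toy version of the DCT coding idea mentioned above: transform an 8x8 block with an orthonormal DCT-II, keep only the largest-magnitude coefficients, and inverse-transform. Function names and the keep-k rule are illustrative; this is not the report's actual coder.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)  # DC row gets the smaller scale factor
    return C

def code_block(block, keep=8):
    """Crude low-rate coder: 2-D DCT, retain the `keep` largest-magnitude
    coefficients, zero the rest, inverse transform."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return C.T @ coeffs @ C
```

    Because the matrix is orthonormal, the inverse transform is simply the transpose, and a constant block is represented exactly by its DC coefficient alone.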

  5. Fracto-mechanoluminescent light emission of EuD4TEA-PDMS composites subjected to high strain-rate compressive loading

    Science.gov (United States)

    Ryu, Donghyeon; Castaño, Nicolas; Bhakta, Raj; Kimberley, Jamie

    2017-08-01

    The objective of this study is to understand the light emission characteristics of fracto-mechanoluminescent (FML) europium tetrakis(dibenzoylmethide)-triethylammonium (EuD4TEA) crystals under high strain-rate compressive loading. Since the material can play a pivotal role in self-powered impact sensor technology, it is important to understand the transformative light emission characteristics of the FML EuD4TEA crystals under such loading. First, EuD4TEA crystals were synthesized and embedded into polydimethylsiloxane (PDMS) elastomer to fabricate EuD4TEA-PDMS composite test specimens. Second, the prepared EuD4TEA-PDMS composites were tested using a modified Kolsky bar setup equipped with a high-speed camera. Third, FML light emission was captured to yield 12-bit grayscale video footage, which was processed to quantify the FML light emission. Finally, quantitative parameters were generated by taking into account the pixel values and the population of pixels of the 12-bit grayscale images to represent FML light intensity. The FML light intensity was correlated with high strain-rate compressive strain and strain rate to understand the FML light emission characteristics under high strain-rate compressive loading that can result from impact occurrences.
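    The pixel-based quantification described above (pixel values plus the population of lit pixels in a 12-bit frame) might be sketched as follows; the threshold and the combined score are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def fml_intensity(frame, threshold=100, max_val=4095):
    """Summarise one 12-bit grayscale frame: how many pixels emit,
    how brightly on average, and a combined score relative to full scale."""
    frame = np.asarray(frame, dtype=np.uint16)
    lit = frame > threshold                    # "population of pixels"
    n_lit = int(lit.sum())
    mean_lit = float(frame[lit].mean()) if n_lit else 0.0  # "pixel values"
    # Hypothetical combined score: lit fraction times mean lit level.
    score = (n_lit / frame.size) * (mean_lit / max_val)
    return n_lit, mean_lit, score
```

    Applied per video frame, such a score gives a single time series that can be correlated with strain and strain rate.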

  6. Concentration of strain in a marginal rift zone of the Japan backarc during post-rift compression

    Science.gov (United States)

    Sato, H.; Ishiyama, T.; Kato, N.; Abe, S.; Shiraishi, K.; Inaba, M.; Kurashimo, E.; Iwasaki, T.; Van Horne, A.; No, T.; Sato, T.; Kodaira, S.; Matsubara, M.; Takeda, T.; Abe, S.; Kodaira, C.

    2015-12-01

    Late Cenozoic deformation zones in Japan may be divided into two types: (1) arc-arc collision zones like those of Izu and the Hokkaido axial zone, and (2) reactivated back-arc marginal rift (BMR) systems. A BMR develops during a secondary rifting event that follows the opening of a back-arc basin. It forms close to the volcanic front and distant from the spreading center of the basin. In Japan, a BMR system developed along the Sea of Japan coast following the opening of the Japan Sea. The BMR appears to be the weakest, most deformable part of the arc back-arc system. When active rifting in the marginal basins ended, thermal subsidence, and then mechanical subsidence related to the onset of a compressional stress regime, allowed deposition of up to 5 km of post-rift, deep-marine to fluvial sediments. Continued compression produced fault-related folds in the post-rift sediments, in thin-skinned style deformation. Shortening reached a maximum in the BMR system compared to other parts of the back-arc, suggesting that it is the weakest part of the entire system. We examined the structure of the BMR system using active-source seismic investigation and earthquake tomography. The velocity structure beneath the marginal rift basin shows higher P-wave velocity in the upper mantle/lower crust, which suggests significant mafic intrusion and thinning of the upper continental crust. The syn-rift mafic intrusive forms a convex shape, and the boundary between the pre-rift crust and the mafic intrusive dips outward. In the post-rift compressional stress regime, the boundary of the mafic body reactivated as a reverse fault, forming a large-scale wedge thrust and causing further subsidence of the rift basin. The driver of the intense shortening event along the Sea of Japan coast in SW Japan was the arrival of a buoyant young (15 Ma) Shikoku basin at the Nankai Trough. Subduction stalled and the backarc was compressed. As the buoyant basin cooled, subduction resumed, and the rate of

  7. Visual Perception Based Rate Control Algorithm for HEVC

    Science.gov (United States)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and the limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception: for key focus regions, the LCU-level bit allocation is not ideal and the subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the LCU-level bit allocation weight is optimized based on the visual perception of luminance and motion, to improve subjective video quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bit rate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality across various video applications.
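    To make the bit-allocation step concrete: a weight-proportional LCU budget split, plus the widely used R-lambda mapping from bits-per-pixel to λ. The α/β constants below are the commonly quoted HM initial values of the R-lambda model, not parameters taken from this paper, and the helper names are hypothetical.

```python
def allocate_bits(frame_bits, weights):
    """Distribute a frame's bit budget across LCUs in proportion to
    perceptual weights (higher weight -> more bits)."""
    total = sum(weights)
    return [frame_bits * w / total for w in weights]

def lcu_lambda(bpp, alpha=3.2003, beta=-1.367):
    """R-lambda model: lambda = alpha * bpp**beta.
    alpha/beta are the HM defaults commonly cited in the literature."""
    return alpha * bpp ** beta
```

    Raising an LCU's weight increases its bit share (its bpp), which in turn lowers its λ and hence its QP, spending quality where perception demands it.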

  8. The effect of inlet distorted flow on steady and unsteady performance of a centrifugal compressor

    International Nuclear Information System (INIS)

    Park, Jae Hyoung; Kang, Shin Hyoung

    2005-01-01

    The effects of inlet flow distortion on performance, stall, and surge are experimentally investigated for a high-speed centrifugal compressor. Results for the distorted inlet flow cases are compared with those for the undistorted one. The performance of the compressor is slightly deteriorated by the inlet distortion. The inlet distortion affects neither the number of stall cells nor the propagation velocity, and it does not change the stall inception flow rate. However, as the distortion increases, stall starts at a higher flow rate for low speeds and at a lower flow rate for high speeds. At 50,000 rpm stall occurs as the flow rate decreases, but then disappears at smaller flow rates; this is due to the interaction of surge and stall. After the stall and surge interact, the number of stall cells decreases

  9. Half-integer flux quantum effect in cuprate superconductors - a probe of pairing symmetry

    International Nuclear Information System (INIS)

    Tsuei, C.C.; Kirtley, J.R.; Gupta, A.; Sun, J.Z.; Moler, K.A.; Wang, J.H.

    1996-01-01

    Based on macroscopic quantum coherence effects arising from pair tunneling and flux quantization, a series of tricrystal experiments have been designed and carried out to test the order parameter symmetry in high-Tc cuprate superconductors. By using a scanning SQUID microscope, we have directly and non-invasively observed the spontaneously generated half-integer flux quantum effect in controlled-orientation tricrystal cuprate superconducting systems. The presence or absence of the half-integer flux quantum effect as a function of the tricrystal geometry allows us to prove that the order parameter symmetry in the YBCO and Tl2201 systems is consistent with that of the d_{x^2-y^2} pair state. (orig.)

  10. Unsteady Reynolds-averaged Navier-Stokes simulations of inlet distortion in the fan system of a gas-turbine aero-engine

    Science.gov (United States)

    Spotts, Nathan

    As modern trends in commercial aircraft design move toward high-bypass-ratio fan systems of increasing diameter with shorter, nonaxisymmetric nacelle geometries, inlet distortion is becoming common in all operating regimes. The distortion may induce aerodynamic instabilities within the fan system, leading to catastrophic damage to fan blades, should the surge margin be exceeded. Even in the absence of system instability, the heterogeneity of the flow affects aerodynamic performance significantly. Therefore, an understanding of fan-distortion interaction is critical to aircraft engine system design. This thesis research elucidates the complex fluid dynamics and fan-distortion interaction by means of computational fluid dynamics (CFD) modeling of a complete engine fan system; including rotor, stator, spinner, nacelle and nozzle; under conditions typical of those encountered by commercial aircraft. The CFD simulations, based on a Reynolds-averaged Navier-Stokes (RANS) approach, were unsteady, three-dimensional, and of a full-annulus geometry. A thorough, systematic validation has been performed for configurations from a single passage of a rotor to a full-annulus system by comparing the predicted flow characteristics and aerodynamic performance to those found in literature. The original contributions of this research include the integration of a complete engine fan system, based on the NASA rotor 67 transonic stage and representative of the propulsion systems in commercial aircraft, and a benchmark case for unsteady RANS simulations of distorted flow in such a geometry under realistic operating conditions. This study is unique in that the complex flow dynamics, resulting from fan-distortion interaction, were illustrated in a practical geometry under realistic operating conditions. For example, the compressive stage is shown to influence upstream static pressure distributions and thus suppress separation of flow on the nacelle. Knowledge of such flow physics is

  11. Optimal Chest Compression Rate and Compression to Ventilation Ratio in Delivery Room Resuscitation: Evidence from Newborn Piglets and Neonatal Manikins

    Science.gov (United States)

    Solevåg, Anne Lee; Schmölzer, Georg M.

    2017-01-01

    Cardiopulmonary resuscitation (CPR) duration until return of spontaneous circulation (ROSC) influences survival and neurologic outcomes after delivery room (DR) CPR. High-quality chest compressions (CC) improve cerebral and myocardial perfusion, and improved myocardial perfusion increases the likelihood of a faster ROSC. Thus, optimizing CC quality may improve outcomes both by preserving cerebral blood flow during CPR and by reducing the recovery time. CC quality is determined by rate, CC-to-ventilation (C:V) ratio, and applied force, all of which are influenced by the CC provider; thus, provider performance should be taken into account. Neonatal resuscitation guidelines recommend a 3:1 C:V ratio: CC should be delivered at a rate of 90/min, synchronized with ventilations at a rate of 30/min, to achieve a total of 120 events/min. Despite the lack of scientific evidence supporting this, the investigation of alternative CC interventions in human neonates is ethically challenging. Also, the infrequent occurrence of extensive CPR measures in the DR makes randomized controlled trials difficult to perform. Thus, many biomechanical aspects of CC have been investigated in animal and manikin models. Despite mathematical and physiological rationales that higher rates and uninterrupted CC improve CPR hemodynamics, studies indicate that provider fatigue is more pronounced when CC are performed continuously compared to when a pause is inserted after every third CC, as currently recommended. A higher rate (e.g., 120/min) is also more fatiguing, which affects CC quality. In post-transitional piglets with asphyxia-induced cardiac arrest, there was no benefit of performing continuous CC at a rate of 90/min. Not only rate but duty cycle, i.e., the duration of CC/total cycle time, is a known determinant of CC effectiveness. However, duty cycle cannot be controlled with manual CC. Mechanical/automated CC in neonatal CPR has not been explored, and feedback systems are under-investigated in this

  12. Tension–compression asymmetry in an extruded Mg alloy AM30: Temperature and strain rate effects

    International Nuclear Information System (INIS)

    Zachariah, Z.; Tatiparti, Sankara Sarma V.; Mishra, S.K.; Ramakrishnan, N.; Ramamurty, U.

    2013-01-01

    The effect of strain rate, ε, and temperature, T, on the tension–compression asymmetry (TCA) in a dilute wrought Mg alloy, AM30, has been investigated over a temperature range that covers both twin-accommodated deformation (below 250 °C in compression) and dislocation-mediated plasticity (above 250 °C). For this purpose, uniaxial tension and compression tests were conducted at T ranging from 25 to 400 °C with ε varying between 10^−2 and 10 s^−1. In most cases, the stress–strain responses in tension and compression are distinctly different, with the compression responses ‘concaving upward’ due to {10-12} tensile twinning at lower plastic strains followed by slip and strain hardening at higher levels of deformation, for T below 250 °C. This results in significant levels of TCA for T below 250 °C and at strain rates up to 10 s^−1, suggesting that twin-mediated plastic deformation takes precedence at high rates of loading even at sufficiently high T. TCA becomes negligible at T = 350 °C; however, at T = 400 °C, TCA increases again as ε increases. Microscopy of the deformed samples, carried out using electron back-scattered diffraction (EBSD), suggests that at T > 250 °C dynamic recrystallization begins, accompanied by a reduction in the twinned fraction, which contributes to the decrease of the TCA

  13. Spatial and spectral image distortions caused by diffraction of an ordinary polarised light beam by an ultrasonic wave

    Energy Technology Data Exchange (ETDEWEB)

    Machikhin, A S; Pozhar, V E [Scientific and Technological Centre of Unique Instrumentation, Russian Academy of Sciences, Moscow (Russian Federation)

    2015-02-28

    We consider the problem of determining the spatial and spectral image distortions arising from anisotropic diffraction by ultrasonic waves in crystals with ordinary polarised light (o → e). Going beyond the small-birefringence approximation, we obtain analytical solutions that describe the dependence of the diffraction angles and wave mismatch on the acousto-optic (AO) interaction geometry and crystal parameters. The formulas derived allow one to calculate and analyse the magnitude of diffraction-induced spatial and spectral image distortions and to identify the main types of distortions: chromatic compression and trapezoidal deformation. A comparison of the values of these distortions in the diffraction of ordinary and extraordinary polarised light shows that they are almost equal in magnitude and opposite in sign, so that consistent diffraction (o → e → o or e → o → e) in two identical AO cells rotated through 180° in the plane of diffraction can compensate for these distortions. (diffraction of radiation)

  14. Is breast compression associated with breast cancer detection and other early performance measures in a population-based breast cancer screening program?

    Science.gov (United States)

    Moshina, Nataliia; Sebuødegård, Sofie; Hofvind, Solveig

    2017-06-01

    We aimed to investigate early performance measures in a population-based breast cancer screening program, stratified by compression force and pressure at the time of the mammographic screening examination. Early performance measures included recall rate, rates of screen-detected and interval breast cancers, positive predictive value of recall (PPV), sensitivity, specificity, and histopathologic characteristics of screen-detected and interval breast cancers. Information on 261,641 mammographic examinations from 93,444 subsequently screened women was used for the analyses. The study period was 2007-2015. Compression force and pressure were categorized using tertiles as low, medium, or high. The χ² test, t tests, and tests for trend were used to examine differences in early performance measures across categories of compression force and pressure. We applied generalized estimating equations to identify the odds ratios (OR) of screen-detected or interval breast cancer associated with compression force and pressure, adjusting for fibroglandular and/or breast volume and age. The recall rate decreased, while PPV and specificity increased, with increasing compression force (significant tests for trend). The rates of screen-detected cancer, PPV, sensitivity, and specificity decreased with increasing compression pressure (significant tests for trend), and high compression pressure was associated with higher odds of interval breast cancer compared with low compression pressure (OR 1.89; 95% CI 1.43-2.48). High compression force and low compression pressure were associated with more favorable early performance measures in the screening program.

  15. StirMark Benchmark: audio watermarking attacks based on lossy compression

    Science.gov (United States)

    Steinebach, Martin; Lang, Andreas; Dittmann, Jana

    2002-04-01

    StirMark Benchmark is a well-known evaluation tool for watermarking robustness, and additional attacks are added to it continuously. To enable application-based evaluation, in this paper we address attacks against audio watermarks based on lossy audio compression algorithms, to be included in the test environment. We discuss the effect of different lossy compression algorithms such as MPEG-2 Audio Layer 3, Ogg, or VQF on a selection of audio test data. Our focus is on changes to the basic characteristics of the audio data, like spectrum or average power, and on the removal of embedded watermarks. Furthermore, we compare the results of different watermarking algorithms and show that lossy compression is still a challenge for most of them. There are two strategies for adding evaluation of robustness against lossy compression to StirMark Benchmark: (a) use of existing free compression algorithms; (b) implementation of a generic lossy-compression simulation. We discuss how such a model can be implemented based on the results of our tests. This method is less complex, as no real psychoacoustic model has to be applied. Our model can be used for audio watermarking evaluation in numerous application fields. As an example, we describe its importance for e-commerce applications with watermarking security.

  16. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by the airborne BRADAR (Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. In this framework a real-time capability is desirable, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original signal can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available, in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; and because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were computed in terms of sparsity analysis to provide efficient compression and quality of recovery appropriate for InSAR applications
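    The IST reconstruction step named above can be sketched in generic form (soft-thresholded gradient iterations for the lasso problem); the curvelet-domain specifics of the paper are not reproduced here, and the test uses a random sensing matrix purely for illustration.

```python
import numpy as np

def ist(A, y, lam=1e-3, n_iter=3000):
    """Iterative soft-thresholding (IST) for
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

    In the paper's setting, A would combine the measurement matrix with the curvelet synthesis operator, so the recovered x holds curvelet coefficients.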

  17. Dynamic restoration mechanism and physically based constitutive model of 2050 Al–Li alloy during hot compression

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Ruihua; Liu, Qing [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Li, Jinfeng, E-mail: lijinfeng@csu.edu.cn [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Xiang, Sheng [School of Materials Science and Engineering, Central South University, Changsha 410083 (China); Chen, Yonglai; Zhang, Xuhu [Aerospace Research Institute of Materials and Processing Technology, Beijing 100076 (China)

    2015-11-25

    The dynamic restoration mechanism of 2050 Al–Li alloy and its constitutive model were investigated by means of hot compression simulation at deformation temperatures ranging from 340 to 500 °C and strain rates of 0.001–10 s^−1. The microstructures of the compressed samples were observed using optical microscopy and transmission electron microscopy. On the basis of dislocation density theory and Avrami kinetics, a physically based constitutive model was established. The results show that dynamic recovery (DRV) and dynamic recrystallization (DRX) are co-responsible for the dynamic restoration during hot compression under all compression conditions. The dynamic precipitation (DPN) of T1 and σ phases was observed after deformation at 340 °C; this is the first experimental evidence for the DPN of the σ phase in Al–Cu–Li alloys. Particle-stimulated nucleation of DRX (PSN-DRX) due to large Al–Cu–Mn particles was also observed. The error analysis suggests that the established constitutive model can adequately describe the flow stress dependence on strain rate, temperature and strain during the hot deformation process. - Highlights: • The experimental evidence for the DPN of σ phase in Al–Cu–Li alloys was found. • The PSN-DRX due to the large Al–Cu–Mn particle was observed. • A novel method was proposed to calculate the stress multiplier α.

  18. Stochastic integer programming by dynamic programming

    NARCIS (Netherlands)

    Lageweg, B.J.; Lenstra, J.K.; Rinnooy Kan, A.H.G.; Stougie, L.; Ermoliev, Yu.; Wets, R.J.B.

    1988-01-01

    Stochastic integer programming is a suitable tool for modeling hierarchical decision situations with combinatorial features. In continuation of our work on the design and analysis of heuristics for such problems, we now try to find optimal solutions. Dynamic programming techniques can be used to

  19. Sonographically Detected Architectural Distortion: Clinical Significance

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Shin Kee; Seo, Bo Kyoung; Yi, Ann; Cha, Sang Hoon; Kim, Baek Hyun; Cho, Kyu Ran; Kim, Young Sik; Son, Gil Soo; Kim, Young Soo; Kim, Hee Young [Korea University Ansan Hospital, Ansan (Korea, Republic of)

    2008-12-15

    Architectural distortion is a suspicious abnormality for the diagnosis of breast cancer. The aim of this study was to investigate the clinical significance of sonographically detected architectural distortion. From January 2006 to June 2008, 20 patients were identified who had sonographically detected architectural distortions without a history of trauma or surgery and without abnormal mammographic findings related to an architectural distortion. All of the lesions were pathologically verified. We evaluated the clinical and pathological findings and then assessed the clinical significance of the sonographically detected architectural distortions. Based on the clinical findings, one (5%) of the 20 patients had a palpable lump and the remaining 19 patients had no symptoms. No patient had a family history of breast cancer. Based on the pathological findings, three (15%) patients had malignancies. The malignant lesions included invasive ductal carcinomas (n = 2) and ductal carcinoma in situ (n = 1). Four (20%) patients had high-risk lesions: atypical ductal hyperplasia (n = 3) and lobular carcinoma in situ (n = 1). The remaining 13 (65%) patients had benign lesions; however, seven (35%) out of 13 patients had mild-risk lesions (three intraductal papillomas, three moderate or florid epithelial hyperplasias and one sclerosing adenosis). Of the sonographically detected architectural distortions, 35% were breast cancers or high-risk lesions and 35% were mild-risk lesions. Thus, a biopsy might be needed for an architectural distortion without an associated mass as depicted on breast ultrasound, even though the mammographic findings are normal.


  1. Neural underpinnings of distortions in the experience of time across senses

    Directory of Open Access Journals (Sweden)

    Deborah L. Harrington

    2011-07-01

    Full Text Available Auditory signals (A) are perceived as lasting longer than visual signals (V) of the same physical duration when they are compared together. Despite considerable debate about how this illusion arises psychologically, the neural underpinnings have not been studied. We used functional magnetic resonance imaging (fMRI) to investigate the neural bases of audiovisual temporal distortions and, more generally, intersensory timing. Adults underwent fMRI while judging the relative duration of successively presented standard interval (SI)-comparison interval (CI) pairs, which were unimodal (A-A, V-V) or crossmodal (V-A, A-V). Mechanisms of time dilation and compression were identified by comparing the two crossmodal pairs. Mechanisms of intersensory timing were identified by comparing the unimodal and crossmodal conditions. The behavioral results showed that auditory CIs were perceived as lasting longer than visual CIs. There were three novel fMRI results. First, time dilation and compression were distinguished by differential activation of higher sensory areas (superior temporal, posterior insula, middle occipital), which typically showed stronger effective connectivity when time was dilated (V-A). Second, when time was compressed (A-V), activation was greater in frontal cognitive-control centers, which guide decision making. These areas did not exhibit effective connectivity. Third, intrasensory timing was distinguished from intersensory timing partly by decreased striatal and increased superior parietal activation. These regions showed stronger connectivity with visual, memory, and cognitive-control centers during intersensory timing. Altogether, the results indicate that time dilation and compression arise from the connectivity strength of higher sensory systems with other areas. Conversely, more extensive network interactions are needed with core timing (striatum) and attention (superior parietal) centers to integrate time codes for intersensory signals.

  2. Research of Block-Based Motion Estimation Methods for Video Compression

    Directory of Open Access Journals (Sweden)

    Tropchenko Andrey

    2016-08-01

    Full Text Available This work is a review of the block-based algorithms used for motion estimation in video compression. It examines different types of block-based algorithms, ranging from the simplest, Full Search, to fast adaptive algorithms such as Hierarchical Search. The algorithms evaluated in this paper are widely accepted by the video compression community and have been used in implementing various standards, such as MPEG-4 Visual and H.264. The work also presents a very brief introduction to the entire flow of video compression.

  3. Combating Impairments in Multi-carrier Systems: A Compressed Sensing Approach

    KAUST Repository

    Al-Shuhail, Shamael

    2015-05-01

    Multi-carrier systems suffer from several impairments, and communication system engineers use powerful signal processing tools to combat these impairments and keep up with capacity/rate demands. Compressed sensing (CS) is one such tool: it allows recovering any sparse signal from only a few measurements taken in a domain that is incoherent with the domain of sparsity. Almost all signals of interest have some degree of sparsity, and in this work we utilize the sparsity of impairments in orthogonal frequency division multiplexing (OFDM) and its variants (i.e., orthogonal frequency division multiple access (OFDMA) and single-carrier frequency-division multiple access (SC-FDMA)) to combat them using CS. We start with the problem of peak-to-average power ratio (PAPR) reduction in OFDM. OFDM signals suffer from high PAPR, and clipping is the simplest PAPR reduction scheme. However, clipping introduces in-band distortions that compromise performance and hence need to be mitigated at the receiver. Owing to the high-PAPR nature of the OFDM signal, only a few instances are clipped, so these clipping distortions can be recovered at the receiver by employing CS. We then extend the proposed clipping recovery scheme to an interleaved OFDMA system. Interleaved OFDMA presents a special structure that results in only self-inflicted clipping distortions. In this work, we prove that distortions do not spread over multiple users (while utilizing interleaved carrier assignment in OFDMA) and construct a CS system that recovers the clipping distortions of each user. Finally, we address the problem of narrowband interference (NBI) in SC-FDMA. Unlike OFDM and OFDMA systems, SC-FDMA does not suffer from high PAPR, but (as the data is encoded in the time domain) it is seriously vulnerable to information loss owing to NBI. Utilizing the sparse nature of NBI (in the frequency domain), we combat its effect on the SC-FDMA system by CS recovery.
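    The premise that clipping distortion is sparse, which is what makes CS recovery possible, can be checked with a small simulation: generate an OFDM-like multicarrier symbol, clip its magnitude, and count how few samples the distortion touches. The parameters (N, threshold) are illustrative assumptions.

```python
import numpy as np

def clipping_sparsity(N=256, thr=1.5, seed=1):
    """Fraction of time-domain samples affected by magnitude clipping
    of a random-QPSK multicarrier symbol with unit average power."""
    rng = np.random.default_rng(seed)
    # Random QPSK data on N subcarriers -> time-domain symbol via IFFT.
    X = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, N)))
    x = np.fft.ifft(X) * np.sqrt(N)              # unit average power
    mag = np.abs(x)
    # Soft magnitude clipping: keep the phase, cap the envelope at thr.
    x_clip = np.where(mag > thr, thr * x / np.maximum(mag, 1e-12), x)
    d = x - x_clip                               # clipping distortion
    return np.count_nonzero(np.abs(d) > 1e-12) / N
```

    For a Rayleigh-distributed envelope, only roughly exp(-thr^2) of the samples exceed the threshold, so the distortion vector is sparse in time even though the clipped signal differs from the original everywhere in frequency.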

  4. Population transfer HMQC for half-integer quadrupolar nuclei

    International Nuclear Information System (INIS)

    Wang, Qiang; Xu, Jun; Feng, Ningdong; Deng, Feng; Li, Yixuan; Trébosc, Julien; Lafon, Olivier; Hu, Bingwen; Chen, Qun; Amoureux, Jean-Paul

    2015-01-01

This work presents a detailed analysis of a recently proposed nuclear magnetic resonance method [Wang et al., Chem. Commun. 49(59), 6653-6655 (2013)] for accelerating heteronuclear coherence transfers involving half-integer spin quadrupolar nuclei by manipulating their satellite transitions. This method, called Population Transfer Heteronuclear Multiple Quantum Correlation (PT-HMQC), is investigated in detail by combining theoretical analyses, numerical simulations, and experimental investigations. We find that, compared to instant inversion or instant saturation, continuous saturation is the most practical strategy to accelerate coherence transfers on half-integer quadrupolar nuclei. We further demonstrate that this strategy is efficient for enhancing the sensitivity of J-mediated heteronuclear correlation experiments between two half-integer quadrupolar isotopes (e.g., 27Al-17O). In this case, the build-up is strongly affected by relaxation for small T2' and J coupling values, and shortening the mixing time yields a large signal enhancement. Moreover, this concept of population transfer can also be applied to dipolar-mediated HMQC experiments. Indeed, on the AlPO4-14 sample, one still observes experimentally a 2-fold shortening of the optimum mixing time, albeit with no significant signal gain, in the 31P-(27Al) experiments.

  5. Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.

    Science.gov (United States)

    Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush

    2016-08-01

This research investigates an optimal delay-based virtual topology design using integer linear programming (ILP), applied to current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied, and the corresponding virtual topology problem is solved using ILP formulations that include a network delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic loss constraints. The proposed optimization approach provides an efficient deterministic integration of intelligent sensing, decision making, and network learning features for superior smart-grid operations, adaptively responding to time-varying network traffic data as well as operational constraints to maintain optimal virtual topologies. A representative optical backbone network has been utilized to demonstrate the proposed optimization framework, whose simulation results indicate that superior smart-grid network performance can be achieved using commercial networks and integer programming.
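The abstract does not reproduce the ILP itself. As a hedged illustration of the structure of such formulations, the toy below exhaustively searches binary lightpath-activation variables (delays, demands, and resource budget are all invented) for the minimum-delay virtual topology that serves every traffic demand, which is the role a real ILP solver plays at scale:

```python
from itertools import product

# Hypothetical instance: binary decision per candidate lightpath.
lightpaths = ["A-B", "B-C", "A-C"]
delay = {"A-B": 2, "B-C": 3, "A-C": 7}          # propagation delays (made up)
serves = {                                       # demands each path can carry
    "A-B": {"AB"}, "B-C": {"BC"}, "A-C": {"AB", "BC"},
}
demands = {"AB", "BC"}
max_active = 2                                   # wavelength/resource budget

best = None
for choice in product([0, 1], repeat=len(lightpaths)):
    active = [p for p, c in zip(lightpaths, choice) if c]
    if len(active) > max_active:                 # resource constraint
        continue
    covered = set().union(*(serves[p] for p in active)) if active else set()
    if covered < demands:                        # some demand left unserved
        continue
    total = sum(delay[p] for p in active)        # delay-based objective
    if best is None or total < best[0]:
        best = (total, active)

print(best)  # expected: (5, ['A-B', 'B-C'])
```

A production formulation would express the same objective and coverage/continuity constraints as linear inequalities and hand them to an ILP solver instead of enumerating.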

  6. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali

    2013-09-22

Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration based on L2-norm minimization of the misfit function tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied a compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  7. The possibilities of compressed sensing based migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2013-01-01

Linearized waveform inversion, or least-squares migration, helps reduce migration artifacts caused by limited acquisition aperture, coarse sampling of sources and receivers, and low subsurface illumination. However, least-squares migration based on L2-norm minimization of the misfit function tends to produce a smeared (smoothed) depiction of the true subsurface reflectivity. Assuming that the subsurface reflectivity distribution is a sparse signal, we use a compressed-sensing (Basis Pursuit) algorithm to retrieve this sparse distribution from a small number of linear measurements. We applied a compressed-sensing algorithm to image a synthetic fault model using dense and sparse acquisition geometries. Tests on synthetic data demonstrate the ability of compressed sensing to produce highly resolved migrated images. We also studied the robustness of the Basis Pursuit algorithm in the presence of Gaussian random noise.

  8. Methods of compression of digital holograms, based on 1-level wavelet transform

    International Nuclear Information System (INIS)

    Kurbatova, E A; Cheremkhin, P A; Evtikhiev, N N

    2016-01-01

To reduce the memory required for storing information about 3D scenes and to decrease the hologram transmission rate, digital hologram compression can be used. Compression of digital holograms by wavelet transforms is among the most powerful methods. In the paper the most popular wavelet transforms are considered and applied to digital hologram compression. The obtained values of reconstruction quality and the holograms' diffraction efficiencies are compared. (paper)

  9. Compression-based inference on graph data

    NARCIS (Netherlands)

    Bloem, P.; van den Bosch, A.; Heskes, T.; van Leeuwen, D.

    2013-01-01

We investigate the use of compression-based learning on graph data. General-purpose compressors operate on bitstrings or other sequential representations. A single graph can be represented sequentially in many ways, which may influence the performance of sequential compressors. Using Normalized
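The truncated abstract appears to refer to a normalized compression measure; the standard instance is the normalized compression distance (NCD). A sketch with zlib on toy edge-list serializations (the graph encodings are invented, not the paper's) shows how a general-purpose compressor can judge similarity of sequentially represented graphs:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance via a general-purpose compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Sequential (edge-list) representations of two structurally similar graphs
# and one unrelated byte sequence
g1 = b"0-1;1-2;2-3;3-0;" * 20
g2 = b"0-1;1-2;2-3;3-4;" * 20
g3 = bytes(range(256)) * 2

print(ncd(g1, g2), ncd(g1, g3))
```

Because the compressor exploits shared structure in the concatenation, `ncd(g1, g2)` comes out much smaller than `ncd(g1, g3)`; the choice of sequential encoding, as the abstract notes, directly affects these values.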

  10. Edge states and integer quantum Hall effect in topological insulator thin films.

    Science.gov (United States)

    Zhang, Song-Bo; Lu, Hai-Zhou; Shen, Shun-Qing

    2015-08-25

The integer quantum Hall effect is a topological state of quantum matter in two dimensions, and has recently been observed in three-dimensional topological insulator thin films. Here we study the Landau levels and edge states of surface Dirac fermions in topological insulators under strong magnetic field. We examine the formation of the quantum plateaux of the Hall conductance and find two different patterns: in one pattern the filling number covers all integers, while in the other it covers only odd integers. We focus on the quantum plateau closest to zero energy and demonstrate the breakdown of the quantum spin Hall effect resulting from structure inversion asymmetry. The phase diagrams of the quantum Hall states are presented as functions of magnetic field, gate voltage and chemical potential. This work establishes an intuitive picture of the edge states to understand the integer quantum Hall effect for Dirac electrons in topological insulator thin films.

  11. Network interdiction and stochastic integer programming

    CERN Document Server

    2003-01-01

On March 15, 2002 we held a workshop on network interdiction and the more general problem of stochastic mixed integer programming at the University of California, Davis. Jesús De Loera and I co-chaired the event, which included presentations of on-going research and discussion. At the workshop, we decided to produce a volume of timely work on the topics. This volume is the result. Each chapter represents state-of-the-art research and all of them were refereed by leading investigators in the respective fields. Problems associated with protecting and attacking computer, transportation, and social networks gain importance as the world becomes more dependent on interconnected systems. Optimization models that address the stochastic nature of these problems are an important part of the research agenda. This work relies on recent efforts to provide methods for addressing stochastic mixed integer programs. The book is organized with interdiction papers first and the stochastic programming papers in the second part....

  12. A method of loss free compression for the data of nuclear spectrum

    International Nuclear Information System (INIS)

    Sun Mingshan; Wu Shiying; Chen Yantao; Xu Zurun

    2000-01-01

A new method of lossless compression based on the features of nuclear spectrum data is provided, from which a practicable algorithm is successfully derived. A compression ratio varying from 0.50 to 0.25 is obtained, and the distribution of the processed data becomes even more suitable for reprocessing by a further compression method, such as Huffman coding, to improve the compression ratio.
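The paper's algorithm is not reproduced here, but the general idea of exploiting the structure of spectrum data for lossless compression can be sketched: delta-encode the channel counts (adjacent channels are strongly correlated) so that a generic entropy coder compresses them far better, while the round trip stays exact. The synthetic data and the zlib back end are illustrative assumptions:

```python
import random
import struct
import zlib

random.seed(0)
# Synthetic spectrum-like channel counts: a slowly varying random walk,
# so adjacent channels are strongly correlated (as in real spectra).
counts, v = [], 1000
for _ in range(4096):
    v = max(0, v + random.randint(-5, 5))
    counts.append(v)

raw = struct.pack(f"{len(counts)}I", *counts)

# Delta encoding: first value plus successive differences. The differences
# are small and drawn from a tiny alphabet, so an entropy coder loves them.
deltas = [counts[0]] + [counts[i] - counts[i - 1] for i in range(1, len(counts))]
enc = struct.pack(f"{len(deltas)}i", *deltas)

print(len(zlib.compress(raw)), len(zlib.compress(enc)))

# Lossless round trip: integrate the deltas back into counts
dec = list(struct.unpack(f"{len(deltas)}i", enc))
recovered = [dec[0]]
for d in dec[1:]:
    recovered.append(recovered[-1] + d)
assert recovered == counts
```

The preprocessed stream compresses to a fraction of the raw stream's size while remaining exactly recoverable, mirroring the abstract's point that a feature-aware first stage makes a second-stage coder such as Huffman coding more effective.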

  13. Logic integer programming models for signaling networks.

    Science.gov (United States)

    Haus, Utz-Uwe; Niermann, Kathrin; Truemper, Klaus; Weismantel, Robert

    2009-05-01

We propose a static and a dynamic approach to model biological signaling networks, and show how each can be used to answer relevant biological questions. For this, we use the two different mathematical tools of propositional logic and integer programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in molecular biology, which is mostly driven by experimental research relying on first-order or statistical models. The arising logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems the logic models reduce to a polynomial-time solvable satisfiability problem. Additionally, a more dynamic model enables enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
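The logic-model side can be illustrated with a tiny hypothetical signaling network: encode activation and inhibition rules as propositional constraints and enumerate truth assignments to draw the inferences a SAT or integer-programming solver would automate. Species names and rules below are invented for the sketch:

```python
from itertools import product

# Hypothetical toy network as propositional constraints:
#   receptor  -> kinase        (activation)
#   kinase    -> response      (activation)
#   inhibitor -> NOT kinase    (inhibition)
rules = [
    lambda s: (not s["receptor"]) or s["kinase"],
    lambda s: (not s["kinase"]) or s["response"],
    lambda s: (not s["inhibitor"]) or (not s["kinase"]),
]
species = ["receptor", "kinase", "inhibitor", "response"]

# Observation: receptor is active. Enumerate all consistent states.
models = []
for bits in product([False, True], repeat=len(species)):
    s = dict(zip(species, bits))
    if s["receptor"] and all(r(s) for r in rules):
        models.append(s)

# In every consistent state the response is on and the inhibitor off --
# exactly the kind of qualitative inference the paper's models support.
print(len(models), all(m["response"] and not m["inhibitor"] for m in models))
```

Brute-force enumeration is exponential in the number of species; the point of the paper's logic/ILP formulations is to obtain the same answers with standard solvers at realistic scale.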

  14. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    Science.gov (United States)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system to a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption problem, the two-cube problem and the two-cylinder problem obtained with the compressive sensing-based solver agree well with the reference values.
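Orthogonal matching pursuit, the reconstruction technique named above, can be sketched in a dependency-free form for the special case of an orthonormal dictionary, where the least-squares update reduces to inner products. The identity dictionary and 2-sparse signal below are toy assumptions, not the solver's projection matrix:

```python
# Toy orthogonal matching pursuit (OMP): greedily pick the dictionary atom
# most correlated with the residual, record its coefficient, and subtract
# its contribution. With orthonormal atoms the least-squares step is just
# an inner product, which keeps this sketch dependency-free.

def omp_orthonormal(atoms, y, k):
    """atoms: list of orthonormal columns; y: measurement; k: sparsity."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    coeffs = {}
    residual = y[:]
    for _ in range(k):
        j = max(range(len(atoms)), key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[j], residual)          # LS coefficient (orthonormal case)
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * a for r, a in zip(residual, atoms[j])]
    return coeffs

# Orthonormal (identity) dictionary and a 2-sparse signal
n = 8
atoms = [[1.0 if i == j else 0.0 for i in range(n)] for j in range(n)]
y = [0.0] * n
y[2], y[5] = 3.0, -1.5

rec = omp_orthonormal(atoms, y, k=2)
print(rec)  # expected: {2: 3.0, 5: -1.5}
```

For a general (non-orthonormal) sensing matrix, each iteration instead solves a small least-squares problem over the selected atoms, but the greedy support-selection loop is the same.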

  15. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.

  16. Effect of temperature and strain rate on the compressive behaviour of supramolecular polyurethane

    Directory of Open Access Journals (Sweden)

    Tang Xuegang

    2015-01-01

Full Text Available Supramolecular polyurethanes (SPUs) possess thermoresponsive and thermoreversible properties, characteristics that are highly desirable in both bulk-commodity and value-added applications such as adhesives, shape-memory materials, healable coatings and lightweight, impact-resistant structures (e.g. protection for mobile electronics). A better understanding of the mechanical properties of these materials, especially their rate and temperature sensitivity, is required to assess their suitability for different applications. In this paper, a newly developed SPU with tuneable thermal properties was studied, and the response of this SPU to compressive loading over strain rates from 10⁻³ to 10⁴ s⁻¹ is presented. The effect of temperature on the mechanical response is also demonstrated. The sample was tested using an Instron mechanical testing machine for quasi-static loading, a home-made hydraulic system for moderate rates and a traditional split Hopkinson pressure bar (SHPB) for high strain rates. Results showed that the compressive stress-strain behaviour was affected significantly by the thermoresponsive nature of the SPU but that, as expected for polymeric materials, the general trends of the temperature and rate dependence mirror each other. However, this behaviour is more complicated than observed for many other polymeric materials, as a result of the richer range of transitions that influence the behaviour over the range of temperatures and strain rates tested.

  17. Bulk and microscale compressive behavior of a Zr-based metallic glass

    International Nuclear Information System (INIS)

    Lai, Y.H.; Lee, C.J.; Cheng, Y.T.; Chou, H.S.; Chen, H.M.; Du, X.H.; Chang, C.I.; Huang, J.C.; Jian, S.R.; Jang, J.S.C.; Nieh, T.G.

    2008-01-01

Micropillars with diameters of 3.8, 1 and 0.7 μm were fabricated from a two-phase Zr-based metallic glass using a focused ion beam (FIB), and then tested in compression at strain rates from 1 × 10⁻⁴ to 1 × 10⁻² s⁻¹. The apparent yield strength of the micropillars ranges from 1992 to 2972 MPa, a 25-86% increase over that of the bulk specimens. This strength increase can be rationalized by Weibull statistics for brittle materials

  18. Strain and rate-dependent neuronal injury in a 3D in vitro compression model of traumatic brain injury

    Science.gov (United States)

    Bar-Kochba, Eyal; Scimone, Mark T.; Estrada, Jonathan B.; Franck, Christian

    2016-01-01

In the United States over 1.7 million cases of traumatic brain injury are reported yearly, but predictive correlation of cellular injury to impact tissue strain is still lacking, particularly for neuronal injury resulting from compression. Given the prevalence of compressive deformations in most blunt head trauma, this information is critically important for the development of future mitigation and diagnosis strategies. Using a 3D in vitro neuronal compression model, we investigated the role of impact strain and strain rate on neuronal lifetime, viability, and pathomorphology. We find that strain magnitude and rate have profound, yet distinctively different effects on the injury pathology. While strain magnitude affects the time of neuronal death, strain rate influences the pathomorphology and extent of population injury. Cellular injury is not initiated through localized deformation of the cytoskeleton but rather driven by excess strain on the entire cell. Furthermore, we find that mechanoporation, one of the key pathological trigger mechanisms in stretch and shear neuronal injuries, was not observed under compression. PMID:27480807

  19. The value of ultrasonography combined with compression technique in differentiation between benign and malignant breast masses

    International Nuclear Information System (INIS)

    Yoon, Seong Kuk; Lee, Ki Nam; Nam, Kyung Jin; Jung, Won Jung

    2001-01-01

    To determine whether the compression technique is a valuable additional method for differentiating between benign and malignant breast masses. The ultrasonographic findings of 95 benign and 53 malignant masses, all pathologically proven, were prospectively analyzed with regard to five diagnostic criteria: shape (regular/irregular), retrotumoral acoustic phenomena (posterior enhancement/posterior attenuation), internal echo pattern (homogeneous/inhomogeneous), compression effect on shape (distortion/no change), and compression effect on internal echo pattern (more homogeneous/no change). The number of cases of benign and malignant masses, respectively, was as follows: regular/irregular shape: 84/11, 9/44; posterior acoustic enhancement/posterior attenuation: 82/13, 16/37; homogeneous/inhomogeneous internal echo pattern: 78/17, 14/39; distortion/no change in shape: 76/19, 5/48; and more homogeneous/no change in internal echo pattern: 71/24, 3/50. For all diagnostic criteria for the differentiation of benign and malignant masses, the differences were statistically significant (p<.05). Ultrasonography is helpful for differentiating between benign and malignant breast masses. The compression technique is a valuable additional diagnostic method

  20. With the Advent of Tomosynthesis in the Workup of Mammographic Abnormality, is Spot Compression Mammography Now Obsolete? An Initial Clinical Experience.

    Science.gov (United States)

    Ni Mhuircheartaigh, Neasa; Coffey, Louise; Fleming, Hannah; O' Doherty, Ann; McNally, Sorcha

    2017-09-01

To determine whether the routine use of spot compression mammography is now obsolete in the assessment of screen-detected masses, asymmetries and architectural distortion since the availability of digital breast tomosynthesis. We introduced breast tomosynthesis in the workup of screen-detected abnormalities in our screening center in January 2015. During an initial learning period with tomosynthesis, standard spot compression views were also performed. Three consultant breast radiologists retrospectively reviewed all screening mammograms recalled for assessment over the first 6-month period, assessing whether the spot compression views provided any additional diagnostic information not already apparent on tomosynthesis. All cases were also reviewed for any additional lesions detected by tomosynthesis but not detected on routine 2-view screening mammography. 548 women screened with standard 2-view digital screening mammography were recalled for assessment in the selected period, and a total of 565 lesions were assessed. 341 lesions were assessed by both tomosynthesis and routine spot compression mammography. The spot compression view was considered more helpful than tomosynthesis in only one patient; this was because the breast was inadequately positioned for tomosynthesis and the area in question was not adequately imaged. Apart from this technical error, there was no asymmetry, distortion or mass for which spot compression provided more diagnostic information than tomosynthesis alone. We detected three additional cancers on tomosynthesis that were not detected by routine screening mammography. From our initial experience with tomosynthesis we conclude that spot compression mammography is now obsolete in the assessment of screen-detected masses, asymmetries and distortions where tomosynthesis is available. © 2017 Wiley Periodicals, Inc.

  1. Mixed-integer nonlinear approach for the optimal scheduling of a head-dependent hydro chain

    Energy Technology Data Exchange (ETDEWEB)

    Catalao, J.P.S.; Pousinho, H.M.I. [Department of Electromechanical Engineering, University of Beira Interior, R. Fonte do Lameiro, 6201-001 Covilha (Portugal); Mendes, V.M.F. [Department of Electrical Engineering and Automation, Instituto Superior de Engenharia de Lisboa, R. Conselheiro Emidio Navarro, 1950-062 Lisbon (Portugal)

    2010-08-15

This paper addresses the problem of short-term hydro scheduling (STHS), particularly for a head-dependent hydro chain. We propose a novel mixed-integer nonlinear programming (MINLP) approach, considering hydroelectric power generation as a nonlinear function of water discharge and of the head. As a new contribution to earlier studies, we model the on-off behavior of the hydro plants using integer variables, in order to avoid water discharges in forbidden areas. Thus, an enhanced STHS is provided due to the more realistic modeling presented in this paper. Our approach has been applied successfully to solve a test case based on one of the Portuguese cascaded hydro systems with a negligible computational time requirement. (author)
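The head dependence that makes this an MINLP can be seen in a toy version: binary on/off variables per period, power equal to efficiency × discharge × head, and a head that falls as the reservoir drains. All numbers below are invented for illustration, and brute-force enumeration stands in for a real MINLP solver:

```python
from itertools import product

# Hypothetical single-plant instance over T periods: decide on/off status.
# When on, the plant discharges q and earns price * eta * q * head(volume);
# head depends nonlinearly on stored volume -- the MINLP coupling.
T, q, eta = 4, 10.0, 0.9
volume0, min_volume = 100.0, 50.0
price = [1.0, 3.0, 2.0, 4.0]          # electricity prices per period (made up)

def head(volume):                      # head grows with stored volume
    return 0.5 * volume ** 0.5

best = None
for status in product([0, 1], repeat=T):   # integer (binary) variables
    vol, revenue, feasible = volume0, 0.0, True
    for t, on in enumerate(status):
        if on:
            vol -= q
            if vol < min_volume:           # forbidden region: reservoir bound
                feasible = False
                break
            revenue += price[t] * eta * q * head(vol)
    if feasible and (best is None or revenue > best[0]):
        best = (revenue, status)

print(best[1])  # expected: (1, 1, 1, 1)
```

Even this tiny instance shows the trade-off the paper models: discharging early earns revenue now but lowers the head, and hence the power, of every later discharge.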

  2. Conference on Commutative rings, integer-valued polynomials and polynomial functions

    CERN Document Server

    Frisch, Sophie; Glaz, Sarah; Commutative Algebra : Recent Advances in Commutative Rings, Integer-Valued Polynomials, and Polynomial Functions

    2014-01-01

This volume presents a multi-dimensional collection of articles highlighting recent developments in commutative algebra. It also includes an extensive bibliography and lists a substantial number of open problems that point to future directions of research in the represented subfields. The contributions cover areas in commutative algebra that have flourished in the last few decades and are not yet well represented in book form. Highlighted topics and research methods include Noetherian and non-Noetherian ring theory as well as integer-valued polynomials and functions. Specific topics include: homological dimensions of Prüfer-like rings; quasi-complete rings; total graphs of rings; properties of prime ideals over various rings; bases for integer-valued polynomials; Boolean subrings; the portable property of domains; probabilistic topics in Intn(D); closure operations in Zariski-Riemann spaces of valuation domains; stability of do...

  3. Factors associated with body image distortion in Korean adolescents

    Directory of Open Access Journals (Sweden)

    Hyun MY

    2014-05-01

Full Text Available Mi-Yeul Hyun (College of Nursing, Jeju National University, Jeju, Korea), Young-Eun Jung, Moon-Doo Kim, Young-Sook Kwak (Department of Psychiatry, School of Medicine, Jeju National University, Jeju, Korea), Sung-Chul Hong (Department of Preventive Medicine, School of Medicine, Jeju National University, Jeju, Korea), Won-Myong Bahk (Department of Psychiatry, Yeouido St Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea), Bo-Hyun Yoon (Department of Psychiatry, Naju National Hospital, Naju, Korea), Hye Won Yoon, Bora Yoo (School of Medicine, Jeju National University, Jeju, Korea). Purpose: Body image incorporates cognitive and affective components as well as behaviors related to one's own body perception. This study evaluated the occurrence of body image distortion and its correlates in Korean adolescents. Methods: In a school-based cross-sectional survey, a total of 2,117 adolescents were recruited. They completed self-report questionnaires on body image distortion, eating attitudes and behaviors (Eating Attitude Test-26), and related factors. Results: Body image distortion was found in 51.8% of adolescents. Univariate analyses showed that boys and older adolescents had higher rates of body image distortion. In the multivariate analyses, body image distortion was associated with high risk for eating disorders (odds ratio [OR] = 1.69; 95% confidence interval [CI] 1.11-2.58; P=0.015) and with being overweight (OR = 33.27; 95% CI 15.51-71.35; P<0.001) or obese (OR = 9.37; 95% CI 5.06-17.34; P<0.001). Conclusion: These results suggest that body image distortion is relatively common in Korean adolescents, which has implications for adolescents at risk of developing eating disorders. Keywords: body image distortion, high risk for eating disorders, Korean adolescent

  4. Elasticity of fractal materials using the continuum model with non-integer dimensional space

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-01-01

Using a generalization of vector calculus for spaces with non-integer dimension, we consider the elastic properties of fractal materials, which are described by continuum models with non-integer dimensional space. A generalization of the elasticity equations to non-integer dimensional space is suggested, together with its solutions for the equilibrium case of fractal materials. Elasticity problems are solved for a fractal hollow ball and a cylindrical fractal elastic pipe with inside and outside pressures, for a rotating cylindrical fractal pipe, and for gradient elasticity and thermoelasticity of fractal materials.

  5. Kyphoplasty for severe osteoporotic vertebral compression fractures

    International Nuclear Information System (INIS)

    Bao Zhaohua; Wang Genlin; Yang Huilin; Meng Bin; Chen Kangwu; Jiang Weimin

    2010-01-01

Objective: To evaluate the clinical efficacy of kyphoplasty for severe osteoporotic vertebral compression fractures. Methods: Forty-five patients with severe osteoporotic compression fractures were treated by kyphoplasty from Jan 2005 to Jan 2009. The compression rate of the fractured vertebral bodies was more than 75%. Unilateral or bilateral balloon kyphoplasty was selected according to the morphology of the compressed vertebral bodies. The anterior vertebral height was measured on a standing lateral radiograph preoperatively, postoperatively (one day after operation) and at final follow-up. A visual analog scale (VAS) and the Oswestry disability index (ODI) were used to evaluate pain status and functional activity. Results: The mean follow-up was 21.7 months (range, 18 to 48 months). The anterior vertebral body height of the fractured vertebra was restored from (18.7 ± 3.1)% preoperatively to (51.4 ± 2.3)% postoperatively, and was (50.2 ± 2.7)% at follow-up. There was a significant improvement between preoperative and postoperative values (P<0.05). The VAS was 8.1 ± 1.4 preoperatively, 2.6 ± 0.9 postoperatively and 2.1 ± 0.5 at final follow-up; the ODI was 91.1 ± 2.3 preoperatively, 30.7 ± 7.1 postoperatively and 26.1 ± 5.1 at follow-up. There was a statistically significant improvement in the VAS and ODI in the postoperative assessment compared with the preoperative assessment (P<0.05). Asymptomatic cement leakage occurred in three cases. A new vertebral fracture occurred in one case. Conclusion: The study suggests that balloon kyphoplasty is a safe and effective procedure in the treatment of severe osteoporotic vertebral compression fractures. (authors)

  6. Specimen aspect ratio and progressive field strain development of sandstone under uniaxial compression by three-dimensional digital image correlation

    Directory of Open Access Journals (Sweden)

    H. Munoz

    2017-08-01

    Full Text Available The complete stress–strain characteristics of sandstone specimens were investigated in a series of quasi-static monotonic uniaxial compression tests. Strain patterns development during pre- and post-peak behaviours in specimens with different aspect ratios was also examined. Peak stress, post-peak portion of stress–strain, brittleness, characteristics of progressive localisation and field strain patterns development were affected at different extents by specimen aspect ratio. Strain patterns of the rocks were obtained by applying three-dimensional (3D digital image correlation (DIC technique. Unlike conventional strain measurement using strain gauges attached to specimen, 3D DIC allowed not only measuring large strains, but more importantly, mapping the development of field strain throughout the compression test, i.e. in pre- and post-peak regimes. Field strain development in the surface of rock specimen suggests that strain starts localising progressively and develops at a lower rate in pre-peak regime. However, in post-peak regime, strains increase at different rates as local deformations take place at different extents in the vicinity and outside the localised zone. The extent of localised strains together with the rate of strain localisation is associated with the increase in rate of strength degradation. Strain localisation and local inelastic unloading outside the localised zone both feature post-peak regime.

  7. Distorted Fingerprint Verification System

    Directory of Open Access Journals (Sweden)

    Divya KARTHIKAESHWARAN

    2011-01-01

Full Text Available Fingerprint verification is one of the most reliable personal identification methods. Fingerprint matching is affected by non-linear distortion introduced into the fingerprint impression during the image acquisition process. This non-linear deformation changes both the position and orientation of minutiae. The proposed system operates in three stages: alignment-based fingerprint matching, fuzzy clustering and a classifier framework. First, an enhanced input fingerprint image is aligned with the template fingerprint image and a matching score is computed. To improve the performance of the system, fuzzy clustering based on distance and density is used to cluster the feature set obtained from the fingerprint matcher. Finally, a classifier framework has been developed, and a cost-sensitive classifier was found to produce better results. The system has been evaluated on a fingerprint database, and the experimental results show that it produces a verification rate of 96%. This system plays an important role in forensic and civilian applications.

  8. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    Science.gov (United States)

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.

  9. A note on number fields having reciprocal integer generators | Zaïmi ...

    African Journals Online (AJOL)

    We prove that a totally complex algebraic number field K; having a conjugate which is not closed under complex conjugation, can be generated by a reciprocal integer, when the Galois group of its normal closure is contained in the hyperoctahedral group Bdeg(K)/2. Keywords: Reciprocal integers, unit primitive elements, ...

  10. Predecessor queries in dynamic integer sets

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    1997-01-01

We consider the problem of maintaining a set of n integers in the range 0..2^w − 1 under the operations of insertion, deletion, predecessor queries, minimum queries and maximum queries on a unit cost RAM with word size w bits. Let f (n) be an arbitrary nondecreasing smooth function satisfying n...
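The interface studied here can be fixed with a toy sorted-array substitute. This is not the paper's data structure (whose whole point is beating comparison-based bounds on a word RAM); it only illustrates the five operations:

```python
import bisect

class IntegerSet:
    """Sorted-array sketch of the interface: insert, delete,
    predecessor, minimum and maximum over integers in [0, 2**w - 1]."""
    def __init__(self):
        self._xs = []
    def insert(self, x):
        i = bisect.bisect_left(self._xs, x)
        if i == len(self._xs) or self._xs[i] != x:
            self._xs.insert(i, x)
    def delete(self, x):
        i = bisect.bisect_left(self._xs, x)
        if i < len(self._xs) and self._xs[i] == x:
            self._xs.pop(i)
    def predecessor(self, x):
        # Largest element strictly smaller than x, or None.
        i = bisect.bisect_left(self._xs, x)
        return self._xs[i - 1] if i > 0 else None
    def minimum(self):
        return self._xs[0] if self._xs else None
    def maximum(self):
        return self._xs[-1] if self._xs else None

s = IntegerSet()
for v in (5, 9, 2, 14):
    s.insert(v)
print(s.predecessor(9))   # 5
```

The sorted array costs O(n) per update; the paper's contribution is trading off the predecessor query time against the update time on a unit-cost RAM.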

  11. Tension–compression asymmetry in an extruded Mg alloy AM30: Temperature and strain rate effects

    Energy Technology Data Exchange (ETDEWEB)

    Zachariah, Z. [Department of Materials Engineering, Indian Institute of Science, Bangalore 560012 (India); Tatiparti, Sankara Sarma V.; Mishra, S.K.; Ramakrishnan, N. [General Motors Technical Center, ITPL, Whitefield, Bangalore 560066 (India); Ramamurty, U., E-mail: ramu@materials.iisc.ernet.in [Department of Materials Engineering, Indian Institute of Science, Bangalore 560012 (India)

    2013-06-10

The effect of strain rate, ε̇, and temperature, T, on the tension–compression asymmetry (TCA) in a dilute, wrought Mg alloy, AM30, over a temperature range that covers both twin-accommodated deformation (below 250 °C in compression) and dislocation-mediated plasticity (above 250 °C) has been investigated. For this purpose, uniaxial tension and compression tests were conducted at T ranging from 25 to 400 °C with ε̇ varying between 10⁻² and 10 s⁻¹. In most cases, the stress–strain responses in tension and compression are distinctly different, with the compression responses 'concaving upward' due to {101̄2} tensile twinning at lower plastic strains, followed by slip and strain hardening at higher levels of deformation, for T below 250 °C. This results in significant levels of TCA at T<250 °C, which reduce substantially at high temperatures. At T=150 and 250 °C, high ε̇ leads to high TCA, in particular at T=250 °C and ε̇=10 s⁻¹, suggesting that twin-mediated plastic deformation takes precedence at high rates of loading even at sufficiently high T. TCA becomes negligible at T=350 °C; however, at T=400 °C, TCA increases again with ε̇. Microscopy of the deformed samples, carried out using electron back-scattered diffraction (EBSD), suggests that at T>250 °C dynamic recrystallization sets in, accompanied by a reduction in the twinned fraction, which contributes to the decrease of the TCA.

  12. Assessment of compressive failure process of cortical bone materials using damage-based model.

    Science.gov (United States)

    Ng, Theng Pin; R Koloor, S S; Djuansjah, J R P; Abdul Kadir, M R

    2017-02-01

The main failure factors of cortical bone are aging or osteoporosis, accidents and high-energy trauma, or physiological activities. However, the mechanism of damage evolution coupled with a yield criterion remains one of the unclear subjects in the failure analysis of cortical bone materials. Therefore, this study attempts to assess the structural response and progressive failure process of cortical bone using a brittle damaged plasticity model. For this reason, several compressive tests were performed on cortical bone specimens made of bovine femur in order to obtain the structural response and mechanical properties of the material. A complementary finite element (FE) model of the sample and test was prepared to simulate the elastic-to-damage behavior of the cortical bone using the brittle damaged plasticity model. The FE model was validated by comparing the predicted and measured structural response, expressed as load versus compressive displacement, in simulation and experiment. The FE results indicated that compressive damage initiated and propagated at the central region, where the maximum equivalent plastic strain is computed, coinciding with the degradation of the structural compressive stiffness followed by a large amount of strain energy dissipation. The compressive damage rate, a function of the damage parameter and the plastic strain, was examined for different rates. The results show that choosing a rate similar to the initial slope of the damage parameter in the experiment gives a better prediction of compressive failure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Medical Image Compression Based on Region of Interest, With Application to Colon CT Images

    National Research Council Canada - National Science Library

    Gokturk, Salih

    2001-01-01

    ...., in diagnostically important regions. This paper discusses a hybrid model of lossless compression in the region of interest, with high-rate, motion-compensated, lossy compression in other regions...

  14. Vector calculus in non-integer dimensional space and its applications to fractal media

    Science.gov (United States)

    Tarasov, Vasily E.

    2015-02-01

We suggest a generalization of vector calculus for the case of non-integer dimensional space. The first- and second-order operations, such as the gradient, divergence, and the scalar and vector Laplace operators, are defined for non-integer dimensional space. For simplification we consider scalar and vector fields that are independent of angles. We formulate a generalization of vector calculus for rotationally covariant scalar and vector functions. This generalization allows us to describe fractal media and materials in the framework of continuum models with non-integer dimensional space. As examples of application of the suggested calculus, we consider the elasticity of fractal materials (a fractal hollow ball and a fractal cylindrical pipe with pressure inside and outside), the steady distribution of heat in fractal media, and the electric field of a fractal charged cylinder. We solve the corresponding equations for non-integer dimensional space models.

  15. An overview of solution methods for multi-objective mixed integer linear programming programs

    DEFF Research Database (Denmark)

    Andersen, Kim Allan; Stidsen, Thomas Riis

Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. to find the complete set of non-dominated solutions. We give an overview of existing methods, among them interactive methods, the two-phases method and enumeration methods. In particular we discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...

  16. Effect of Fiber Orientation on Dynamic Compressive Properties of an Ultra-High Performance Concrete

    Science.gov (United States)

    2017-08-01

    transient stress wave (Chen and Song 2011). A schematic of a modern SHPB is shown in Figure 2.3. On this SHPB, a compressed gas cannon is used to launch...1991. Compressive behaviour of concrete at high strain rates. Materials and Structures 24(6):425-450. Buzug, T. M. 2008. Computed tomography: From...SFRC. Journal of Materials Science 48(10):3745-3759. Empelmann, M., M. Teutsch, and G. Steven. 2008. Improvement of the post fracture behaviour of

  17. Effects of Visual Feedback Distortion on Gait Adaptation: Comparison of Implicit Visual Distortion Versus Conscious Modulation on Retention of Motor Learning.

    Science.gov (United States)

    Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho

    2015-09-01

    Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether the implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than that of the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.

  18. Does peroperative external pneumatic leg muscle compression prevent post-operative venous thrombosis in neurosurgery?

    Science.gov (United States)

    Bynke, O; Hillman, J; Lassvik, C

    1987-01-01

Post-operative deep venous thrombosis (DVT) is a frequent and potentially life-threatening complication in neurosurgery. In this field of surgery, with its special demands for exact haemostasis, prophylaxis against deep venous thrombosis with anticoagulant drugs has been utilized only reluctantly. Post-operative external pneumatic compression (EPC) has been shown to be effective, although several practical considerations limit its clinical applicability. In the present study, per-operative EPC was evaluated and found to provide good protection against DVT in patients at increased risk of this complication. The method has the advantage of being effective, safe, inexpensive and readily practicable.

  19. A new chest compression depth feedback algorithm for high-quality CPR based on smartphone.

    Science.gov (United States)

    Song, Yeongtak; Oh, Jaehoon; Chee, Youngjoon

    2015-01-01

Although many smartphone application (app) programs provide education and guidance for basic life support, they do not commonly provide feedback on chest compression depth (CCD) and rate, and validation of their accuracy has not been reported to date. This study was a feasibility assessment of the smartphone as a CCD feedback device. We propose a new real-time CCD estimation algorithm for the smartphone and evaluate its accuracy. Using double integration of the acceleration signal obtained from the smartphone's accelerometer, we estimated the CCD in real time, exploiting the periodicity of the compressions to remove the bias error of the accelerometer. To evaluate the accuracy, we used a potentiometer as the reference depth measurement. The evaluation experiments included three levels of CCD (insufficient, adequate, and excessive) and four grasping orientations with various compression directions. The error for each compression was defined as the difference between the reference measurement and the estimated depth. When chest compressions of adequate depth were performed on a patient lying on a flat floor, the mean (standard deviation) of the errors was 1.43 (1.00) mm; on an oblique floor, it was 3.13 (1.88) mm. The error of the CCD estimation is tolerable for a smartphone-based CCD feedback app targeting compressions of more than 51 mm, the depth recommended by the 2010 American Heart Association guidelines.
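The double-integration-with-bias-removal idea can be sketched numerically. Everything below (sampling rate, compression waveform, bias value, the mean-subtraction trick) is an invented illustration of the principle, not the authors' implementation:

```python
import numpy as np

# Simulate six 2 Hz chest-compression cycles of 50 mm peak-to-peak depth,
# observed as acceleration with a constant sensor bias added.
fs = 100.0                                   # sampling rate, Hz (assumed)
t = np.arange(0, 3.0, 1 / fs)
amp, omega = 0.025, 2 * np.pi * 2.0          # 25 mm amplitude, 2 Hz
accel = amp * omega**2 * np.cos(omega * t) + 0.05   # true accel + bias

# Periodicity-based bias removal: over whole cycles the mean acceleration
# and mean velocity of a periodic motion are zero, so subtracting the
# empirical means cancels the bias that would otherwise make the double
# integral drift quadratically.
accel = accel - accel.mean()
vel = np.cumsum(accel) / fs
vel -= vel.mean()
pos = np.cumsum(vel) / fs

depth_mm = (pos.max() - pos.min()) * 1000.0
print(f"estimated depth: {depth_mm:.1f} mm")  # should be close to 50 mm
```

Without the two mean subtractions, the 0.05 m/s² bias alone would accumulate to several centimetres of spurious displacement over these three seconds.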

  20. A compressed sensing based method with support refinement for impulse noise cancelation in DSL

    KAUST Repository

    Quadeer, Ahmed Abdul

    2013-06-01

    This paper presents a compressed sensing based method to suppress impulse noise in digital subscriber line (DSL). The proposed algorithm exploits the sparse nature of the impulse noise and utilizes the carriers, already available in all practical DSL systems, for its estimation and cancelation. Specifically, compressed sensing is used for a coarse estimate of the impulse position, an a priori information based maximum aposteriori probability (MAP) metric for its refinement, followed by least squares (LS) or minimum mean square error (MMSE) estimation for estimating the impulse amplitudes. Simulation results show that the proposed scheme achieves higher rate as compared to other known sparse estimation algorithms in literature. The paper also demonstrates the superior performance of the proposed scheme compared to the ITU-T G992.3 standard that utilizes RS-coding for impulse noise refinement in DSL signals. © 2013 IEEE.
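The recovery pipeline (coarse sparse position estimate, support refinement, then least-squares amplitudes) can be approximated with plain orthogonal matching pursuit, which folds the three steps into a greedy loop. This is a stand-in sketch, not the paper's algorithm: a random matrix replaces the DFT rows of the unused DSL carriers, and OMP replaces the MAP refinement.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 120, 3              # frame length, observations, impulses

A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in measurement matrix
e = np.zeros(n)                                 # sparse impulse-noise vector
support = rng.choice(n, size=k, replace=False)
e[support] = np.array([12.0, -9.0, 10.0])       # impulse amplitudes
y = A @ e                                       # seen on the unused carriers

def omp(A, y, k):
    """Greedy support estimation + least-squares amplitude estimation."""
    resid, supp = y.copy(), []
    for _ in range(k):
        supp.append(int(np.argmax(np.abs(A.T @ resid))))   # coarse position
        amp, *_ = np.linalg.lstsq(A[:, supp], y, rcond=None)
        resid = y - A[:, supp] @ amp               # refine on the residual
    x = np.zeros(A.shape[1])
    x[supp] = amp
    return x

e_hat = omp(A, y, k)
rel_err = np.linalg.norm(e_hat - e) / np.linalg.norm(e)
print(f"relative reconstruction error: {rel_err:.2e}")
```

With the support recovered exactly, the final least-squares step reproduces the impulse amplitudes to machine precision in this noiseless setting.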

  1. Image Compression Based On Wavelet, Polynomial and Quadtree

    Directory of Open Access Journals (Sweden)

    Bushra A. SULTAN

    2011-01-01

Full Text Available In this paper a simple and fast image compression scheme is proposed. It is based on using a wavelet transform to decompose the image signal and then using polynomial approximation to prune the smooth component of the image band. The proposed coding scheme combines several stages: the error produced by the polynomial approximation, together with the detail sub-band data, is coded using both quantization and quadtree spatial coding. As the last stage of the encoding process, shift encoding is used as a simple and efficient entropy encoder to compress the outcome of the previous stages. The test results indicate that the proposed system can achieve promising compression performance while preserving the image quality level.
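A toy version of the first two stages can make the pipeline concrete. The Haar filter and the first-order polynomial below are assumptions for illustration (the paper's actual wavelet and polynomial order may differ); what would go on to quantization and quadtree coding is the polynomial residual plus the detail bands:

```python
import numpy as np

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth toy image

# One-level 2D Haar transform: averages and differences of 2x2 blocks.
a = img[0::2, 0::2]; b = img[0::2, 1::2]
c = img[1::2, 0::2]; d = img[1::2, 1::2]
LL = (a + b + c + d) / 4                             # smooth component
LH, HL, HH = (a - b + c - d) / 4, (a + b - c - d) / 4, (a - b - c + d) / 4

# Approximate LL with a first-order polynomial p(x, y) = k0 + k1*x + k2*y.
yy, xx = np.mgrid[0:LL.shape[0], 0:LL.shape[1]]
G = np.stack([np.ones(LL.size), xx.ravel(), yy.ravel()], axis=1)
coef, *_ = np.linalg.lstsq(G, LL.ravel(), rcond=None)
residual = LL.ravel() - G @ coef     # coded along with LH/HL/HH

print(np.max(np.abs(residual)) < 1e-9)   # a planar image fits exactly
```

On real images the residual is small but nonzero wherever the smooth band deviates from the polynomial model, which is exactly what the quantizer and quadtree coder are there to exploit.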

  2. Linear Independence of q-Logarithms over the Eisenstein Integers

    Directory of Open Access Journals (Sweden)

    Peter Bundschuh

    2010-01-01

Full Text Available For fixed complex q with |q|>1, the q-logarithm L_q is the meromorphic continuation of the series ∑_{n>0} z^n/(q^n − 1), |z|<|q|, into the whole complex plane. If K is an algebraic number field, one may ask whether 1, L_q(1), L_q(c) are linearly independent over K for q, c ∈ K satisfying |q|>1, c ≠ q, q^2, q^3, …. In 2004, Tachiya showed that this is true in the subcase K=ℚ, q∈ℤ, c=−1, and the present authors extended this result to arbitrary integers q from an imaginary quadratic number field K, and provided a quantitative version. In this paper, the earlier method, in particular its arithmetical part, is further developed to answer the above question in the affirmative if K is the Eisenstein number field ℚ(√−3), q an integer from K, and c a primitive third root of unity. Under these conditions, the linear independence holds also for 1, L_q(c), L_q(c^{−1}), and both results are quantitative.

  3. Application of a non-integer Bessel uniform approximation to inelastic molecular collisions

    International Nuclear Information System (INIS)

    Connor, J.N.L.; Mayne, H.R.

    1979-01-01

    A non-integer Bessel uniform approximation has been used to calculate transition probabilities for collinear atom-oscillator collisions. The collision systems used are a harmonic oscillator interacting via a Lennard-Jones potential and a Morse oscillator interacting via an exponential potential. Both classically allowed and classically forbidden transitions have been treated. The order of the Bessel function is chosen by a physical argument that makes use of information contained in the final-action initial-angle plot. Limitations of this procedure are discussed. It is shown that the non-integer Bessel approximation is accurate for elastic 0 → 0 collisions at high collision energies, where the integer Bessel approximation is inaccurate or inapplicable. (author)

  4. Hierarchical Hidden Markov Models for Multivariate Integer-Valued Time-Series

    DEFF Research Database (Denmark)

    Catania, Leopoldo; Di Mari, Roberto

    2018-01-01

We propose a new flexible dynamic model for multivariate nonnegative integer-valued time-series. Observations are assumed to depend on the realization of two additional unobserved integer-valued stochastic variables which control for the time- and cross-dependence of the data. An Expectation-Maximization algorithm for maximum likelihood estimation of the model's parameters is derived. We provide conditional and unconditional (cross-)moments implied by the model, as well as the limiting distribution of the series. A Monte Carlo experiment investigates the finite sample properties of our estimation...

  5. Integer factoring and modular square roots

    Czech Academy of Sciences Publication Activity Database

    Jeřábek, Emil

    2016-01-01

    Roč. 82, č. 2 (2016), s. 380-394 ISSN 0022-0000 R&D Projects: GA AV ČR IAA100190902; GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : integer factoring * quadratic residue * PPA Subject RIV: BA - General Mathematics Impact factor: 1.678, year: 2016 http://www.sciencedirect.com/science/article/pii/S0022000015000768
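The classical connection between the two problems in the record's title can be made concrete: an oracle returning square roots modulo n yields a factoring procedure, because two essentially different square roots of the same residue expose a factor through a gcd. A minimal worked instance (tiny numbers chosen purely for illustration):

```python
from math import gcd

n = 77                      # = 7 * 11, kept small for illustration
x, y = 2, 9                 # 2**2 ≡ 9**2 ≡ 4 (mod 77), yet 9 ≢ ±2 (mod 77)

# x^2 ≡ y^2 (mod n) means n divides (x - y)(x + y); since n divides
# neither factor on its own, gcd(y - x, n) is a nontrivial divisor.
assert (x * x - y * y) % n == 0
p = gcd(y - x, n)
q = n // p
print(p, q)                 # 7 11
```

Running this in reverse (finding such a pair x, y given only n) is exactly as hard as factoring, which is the equivalence at the heart of results like the one above.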

  6. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    Science.gov (United States)

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting a selective data compression. To achieve such an aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem into binary integer programs, which provide an energy-optimal solution under the given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In the environment having a stationary sink from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that the selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
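The shape of the optimization can be sketched with a brute-force stand-in for the binary integer program: per node, choose "send raw" or one of two compression algorithms, minimizing total energy subject to a latency cap. All node counts, energy and latency numbers below are invented for illustration; a real instance would go to an ILP solver rather than enumeration:

```python
from itertools import product

nodes = ["n1", "n2", "n3"]
# option -> (tx_energy, cpu_energy, latency), hypothetical units
options = {
    "raw":  (10.0, 0.0, 1.0),   # no compression: cheap CPU, costly radio
    "lz":   (4.0, 2.0, 3.0),    # strong compression: slow but radio-frugal
    "huff": (6.0, 1.0, 2.0),    # light compression: middle ground
}
LATENCY_CAP = 7.0

best = None
for choice in product(options, repeat=len(nodes)):
    energy = sum(options[c][0] + options[c][1] for c in choice)
    latency = sum(options[c][2] for c in choice)
    if latency <= LATENCY_CAP and (best is None or energy < best[0]):
        best = (energy, choice)

print(best)   # minimum-energy assignment meeting the latency constraint
```

Enumeration is exponential in the number of nodes; the point of formulating the selection as a binary integer program, as the paper does, is to hand exactly this structure to an exact solver.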

  7. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

Cardiopulmonary resuscitation (CPR) is an emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010, demanding better performance of chest compression practice, especially in compression depth and rate. The current study explored the relationships among quality indexes of chest compression and sought to identify the key points in chest compression training and practice. In total, 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor: the indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females, whereas the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other, and the self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it may be better to change the practitioner before fatigue sets in, especially for female or weaker practitioners. In training, more attention should be paid to the control of the compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  8. Implementation and Application of PSF-Based EPI Distortion Correction to High Field Animal Imaging

    Directory of Open Access Journals (Sweden)

    Dominik Paul

    2009-01-01

    Full Text Available The purpose of this work is to demonstrate the functionality and performance of a PSF-based geometric distortion correction for high-field functional animal EPI. The EPI method was extended to measure the PSF and a postprocessing chain was implemented in Matlab for offline distortion correction. The correction procedure was applied to phantom and in vivo imaging of mice and rats at 9.4T using different SE-EPI and DWI-EPI protocols. Results show the significant improvement in image quality for single- and multishot EPI. Using a reduced FOV in the PSF encoding direction clearly reduced the acquisition time for PSF data by an acceleration factor of 2 or 4, without affecting the correction quality.

  9. A multiple objective mixed integer linear programming model for power generation expansion planning

    Energy Technology Data Exchange (ETDEWEB)

    Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)

    2004-03-01

    Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature and the trade-offs among the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values that usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process (Author)
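The core of such a model, modular (integer) capacity additions evaluated against conflicting objectives, can be miniaturized as a brute-force Pareto enumeration. All capacities, costs and emission factors below are invented, and DSM is omitted; a real instance would be a MILP handed to a solver:

```python
from itertools import product

# name -> (capacity in MW per module, cost per module, CO2 per module)
techs = {
    "coal": (100, 50, 90),
    "gas":  (50, 40, 40),
    "wind": (25, 35, 0),
}
DEMAND = 200   # peak demand to be covered, MW

# Enumerate integer module counts and keep feasible (cost, co2) plans.
plans = []
for n_coal, n_gas, n_wind in product(range(3), range(5), range(9)):
    cap = 100 * n_coal + 50 * n_gas + 25 * n_wind
    if cap < DEMAND:
        continue
    cost = 50 * n_coal + 40 * n_gas + 35 * n_wind
    co2 = 90 * n_coal + 40 * n_gas
    plans.append((cost, co2, (n_coal, n_gas, n_wind)))

# Non-dominated filter: no other plan is at least as good on both
# objectives and strictly better on one.
pareto = [p for p in plans
          if not any(q[0] <= p[0] and q[1] <= p[1] and q[:2] != p[:2]
                     for q in plans)]
print(sorted(set(p[:2] for p in pareto)))
```

The resulting front runs from the cheapest, most emitting plan to the zero-emission, most expensive one; the decision maker then picks a compromise, which is exactly the role of the interactive multi-objective machinery in the paper.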

  10. Obstacles and Affordances for Integer Reasoning: An Analysis of Children's Thinking and the History of Mathematics

    Science.gov (United States)

    Bishop, Jessica Pierson; Lamb, Lisa L.; Philipp, Randolph A.; Whitacre, Ian; Schappelle, Bonnie P.; Lewis, Melinda L.

    2014-01-01

    We identify and document 3 cognitive obstacles, 3 cognitive affordances, and 1 type of integer understanding that can function as either an obstacle or affordance for learners while they extend their numeric domains from whole numbers to include negative integers. In particular, we highlight 2 key subsets of integer reasoning: understanding or…

  11. Compressive sensing based ptychography image encryption

    Science.gov (United States)

    Rawat, Nitin

    2015-09-01

A compressive sensing (CS) based ptychography scheme combined with optical image encryption is proposed. The diffraction pattern is recorded through the ptychography technique and further compressed by non-uniform sampling via the CS framework. The system requires much less encrypted data and provides high security. The diffraction pattern, as well as the reduced set of measurements of the encrypted samples, serves as a secret key, which makes intruder attacks more difficult. Furthermore, CS shows that a few linearly projected random samples carry adequate information for decryption with a dramatic volume reduction. Experimental results validate the feasibility and effectiveness of the proposed technique compared with existing techniques. The retrieved images do not reveal any information about the original image. In addition, the proposed system remains robust even with partial encryption and under brute-force attacks.

  12. Effect of the rate of chest compression familiarised in previous training on the depth of chest compression during metronome-guided cardiopulmonary resuscitation: a randomised crossover trial

    OpenAIRE

    Bae, Jinkun; Chung, Tae Nyoung; Je, Sang Mo

    2016-01-01

    Objectives To assess how the quality of metronome-guided cardiopulmonary resuscitation (CPR) was affected by the chest compression rate familiarised by training before the performance and to determine a possible mechanism for any effect shown. Design Prospective crossover trial of a simulated, one-person, chest-compression-only CPR. Setting Participants were recruited from a medical school and two paramedic schools of South Korea. Participants 42 senior students of a medical school and two pa...

  13. Review and comparison of geometric distortion correction schemes in MR images used in stereotactic radiosurgery applications

    Science.gov (United States)

    Pappas, E. P.; Dellios, D.; Seimenis, I.; Moutsatsos, A.; Georgiou, E.; Karaiskos, P.

    2017-11-01

    In Stereotactic Radiosurgery (SRS), MR-images are widely used for target localization and delineation in order to take advantage of the superior soft tissue contrast they exhibit. However, spatial dose delivery accuracy may be deteriorated due to geometric distortions which are partly attributed to static magnetic field inhomogeneity and patient/object-induced chemical shift and susceptibility related artifacts, known as sequence-dependent distortions. Several post-imaging sequence-dependent distortion correction schemes have been proposed which mainly employ the reversal of read gradient polarity. The scope of this work is to review, evaluate and compare the efficacy of two proposed correction approaches. A specially designed phantom which incorporates 947 control points (CPs) for distortion detection was utilized. The phantom was MR scanned at 1.5T using the head coil and the clinically employed pulse sequence for SRS treatment planning. An additional scan was performed with identical imaging parameters except for reversal of read gradient polarity. In-house MATLAB routines were developed for implementation of the signal integration and average-image distortion correction techniques. The mean CP locations of the two MR scans were regarded as the reference CP distribution. Residual distortion was assessed by comparing the corrected CP locations with corresponding reference positions. Mean absolute distortion on frequency encoding direction was reduced from 0.34mm (original images) to 0.15mm and 0.14mm following application of signal integration and average-image methods, respectively. However, a maximum residual distortion of 0.7mm was still observed for both techniques. The signal integration method relies on the accuracy of edge detection and requires 3-4 hours of post-imaging computational time. The average-image technique is a more efficient (processing time of the order of seconds) and easier to implement method to improve geometric accuracy in such

  14. THE PHENOMENON OF HALF-INTEGER SPIN, QUATERNIONS, AND PAULI MATRICES

    Directory of Open Access Journals (Sweden)

    FERNANDO R. GONZÁLEZ DÍAZ

    2017-01-01

Full Text Available In this paper, the demonstration of the phenomenon of half-integer spin that Paul A. M. Dirac gave with a pair of scissors, an elastic cord and a chair is revisited. Four examples in which the same phenomenon appears are described, and the algebraic structure of the quaternions is related to one of them. A mathematical proof of the phenomenon using known topological and algebraic results is explained. The basic results of the algebraic structure of the quaternions H are described, and an intrinsic relationship between the half-integer spin phenomenon and the Pauli matrices is established.
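The quaternion–Pauli relationship mentioned above is easy to check numerically. The identification used below, quaternion units as −i times the Pauli matrices, is the standard one (assumed here, since the record does not spell it out); it turns the defining relations i² = j² = k² = ijk = −1 into 2×2 matrix identities:

```python
import numpy as np

# Pauli matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Standard representation of the quaternion units i, j, k.
qi, qj, qk = -1j * s1, -1j * s2, -1j * s3

assert np.allclose(qi @ qi, -I2)          # i^2 = -1
assert np.allclose(qj @ qj, -I2)          # j^2 = -1
assert np.allclose(qk @ qk, -I2)          # k^2 = -1
assert np.allclose(qi @ qj, qk)           # ij = k
assert np.allclose(qi @ qj @ qk, -I2)     # ijk = -1
print("quaternion relations hold")
```

These are exactly the relations that make SU(2), generated by the Pauli matrices, the double cover of the rotation group, which is the algebraic content behind Dirac's scissors demonstration.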

  15. Study on conversion relationships of compressive strength indexes for recycled lightweight aggregate concrete

    Science.gov (United States)

    Zhang, Xiang-gang; Yang, Jian-hui; Kuang, Xiao-mei

    2017-01-01

In order to study the cube compressive strength and axial compressive strength of recycled lightweight aggregate concrete (RLAC), and the conversion relationship between the two, 15 standard cube test specimens and 15 standard prism test specimens were produced with the replacement rate of recycled lightweight coarse aggregate as the varied parameter. The compressive strength of the test specimens was measured, and the influence of the replacement rate of recycled lightweight coarse aggregate on the compressive strength of RLAC was analyzed; using statistical regression, the conversion relationship between the cube compressive strength and the axial compressive strength of RLAC was obtained. It is shown that the compressive strength of RLAC is lower than that of ordinary concrete; that the compressive strength of RLAC gradually decreases as the replacement rate of recycled lightweight coarse aggregate increases; and that the conversion relationship between the axial compressive strength and the cube compressive strength of RLAC differs from that of ordinary concrete. Based on the experimental data, a conversion formula between the compressive strength indexes of RLAC was established. It is suggested that the replacement rate of recycled lightweight aggregate be controlled within 25%.

  16. Enhancement of VUV emission from a coaxial xenon excimer ultraviolet lamp driven by distorted bipolar square voltages

    Energy Technology Data Exchange (ETDEWEB)

    Jou, S.Y.; Hung, C.T.; Chiu, Y.M.; Wu, J.S. [Department of Mechanical Engineering, National Chiao Tung University, Hsinchu (China); Wei, B.Y. [High-Efficiency Gas Discharge Lamps Group, Material and Chemical Research Laboratories, Hsinchu (China)

    2011-12-15

Enhancement of vacuum UV emission (172 nm VUV) from a coaxial xenon excimer UV lamp (EUV) driven by distorted 50 kHz bipolar square voltages, as compared to that by sinusoidal voltages, is investigated numerically in this paper. A self-consistent radial one-dimensional fluid model, taking into consideration non-local electron energy balance, is employed to simulate the discharge physics and chemistry. The discharge cycle is divided into two three-period portions, each comprising the pre-discharge, the discharge (where the 172 nm VUV emission is most intense) and the post-discharge periods. The results show that the efficiency of VUV emission using the distorted bipolar square voltages is much greater than when using sinusoidal voltages; this is attributed to two major mechanisms. The first is the much larger rate of change of the voltage in bipolar square voltages, in which only the electrons can efficiently absorb the power in a very short period of time. Energetic electrons then generate a higher concentration of metastable (and also excited dimer) xenon that is distributed more uniformly across the gap, for a longer period of time during the discharge process. The second is the comparably smaller amount of 'wasted' power deposition by Xe{sup +}{sub 2} in the post-discharge period, as driven by distorted bipolar square voltages, because of the nearly vanishing gap voltage caused by the shielding effect resulting from accumulated charges on both dielectric surfaces (copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  17. STUDY OF SOLUTION REPRESENTATION LANGUAGE INFLUENCE ON EFFICIENCY OF INTEGER SEQUENCES PREDICTION

    Directory of Open Access Journals (Sweden)

    A. S. Potapov

    2015-01-01

    Full Text Available Methods based on genetic programming for solving the problem of integer sequence extrapolation are studied in this paper. In order to test the hypothesis that the expressiveness of the program representation language influences prediction effectiveness, a genetic programming method based on several restricted languages for recurrent sequences was developed. On a single-sequence benchmark, the implemented method using the more complete language showed results significantly better than those of a current method from the literature based on artificial neural networks. Analysis of the experimental comparison of the method under different languages showed that extending a language increases the difficulty of finding regularities that are already predictable in a simpler language, although it makes new classes of sequences accessible for prediction. This effect can be reduced, but not eliminated completely, by extending the language with constructions that make solutions more compact. The research carried out leads to the conclusion that the choice of an adequate solution representation language alone is not sufficient for a full solution of the integer sequence prediction problem (and, all the more, of the universal prediction problem). However, practically applicable methods can be obtained through the use of genetic programming.

  18. Mixed integer (0-1) fractional programming for decision support in paper production industry

    NARCIS (Netherlands)

    Claassen, G.D.H.

    2014-01-01

    This paper presents an effective and efficient method for solving a special class of mixed integer fractional programming (FP) problems. We take a classical reformulation approach for continuous FP as a starting point and extend it for solving a more general class of mixed integer (0–1) fractional
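
    The classical reformulation idea the abstract refers to can be illustrated with the parametric (Dinkelbach-type) scheme for a 0-1 linear fractional objective. This is a minimal sketch, not the paper's method: the inner maximization is done by brute force over the binary variables, and the cost vectors are made up for the example.

    ```python
    from itertools import product

    def dinkelbach_01(c, d, tol=1e-9, max_iter=100):
        """Maximize (c.x)/(d.x) over x in {0,1}^n, restricted to d.x > 0.

        Parametric scheme: given lambda_k, find x_k maximizing
        c.x - lambda_k * d.x (brute force here, for illustration);
        stop when that maximum is ~0, i.e. lambda_k is the optimal ratio.
        """
        n = len(c)
        lam = 0.0
        best_x = None
        for _ in range(max_iter):
            best_val, best_x = float("-inf"), None
            for x in product((0, 1), repeat=n):
                if sum(di * xi for di, xi in zip(d, x)) <= 0:
                    continue  # keep the denominator positive
                val = sum((ci - lam * di) * xi for ci, di, xi in zip(c, d, x))
                if val > best_val:
                    best_val, best_x = val, x
            num = sum(ci * xi for ci, xi in zip(c, best_x))
            den = sum(di * xi for di, xi in zip(d, best_x))
            if abs(num - lam * den) < tol:   # F(lambda) = 0 -> optimal ratio
                return lam, best_x
            lam = num / den
        return lam, best_x
    ```

    For c = (3, 1, 4), d = (2, 1, 1) the scheme converges in a few iterations to the ratio 4, attained by selecting only the third variable.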

  19. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    Energy Technology Data Exchange (ETDEWEB)

    Binh, Do Quang [University of Technical Education Ho Chi Minh City (Viet Nam); Huy, Ngo Quang [University of Industry Ho Chi Minh City (Viet Nam); Hai, Nguyen Hoang [Centre for Research and Development of Radiation Technology, Ho Chi Minh City (Viet Nam)

    2014-12-15

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.
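
    The mixed binary/integer chromosome with a weighted-sum fitness can be sketched as below. This is a toy illustration, not the authors' reactor code: the two objectives are stand-ins for the effective multiplication factor and the power peaking factor, and all parameters are invented for the example.

    ```python
    import random

    def ga_weighted_sum(n_bits=8, n_ints=4, int_lo=0, int_hi=9,
                        pop_size=30, gens=60, w=(0.7, 0.3), seed=1):
        """Toy GA over a mixed chromosome: a binary part and an integer part.

        The objectives are stand-ins (NOT reactor physics): f1 rewards set
        bits (think: maximize k_eff), f2 penalizes spread among the integer
        genes (think: minimize power peaking); fitness = w1*f1 - w2*f2.
        """
        rng = random.Random(seed)

        def fitness(ind):
            bits, ints = ind
            f1 = sum(bits) / n_bits                            # to maximize
            f2 = (max(ints) - min(ints)) / (int_hi - int_lo)   # to minimize
            return w[0] * f1 - w[1] * f2

        def random_ind():
            return ([rng.randint(0, 1) for _ in range(n_bits)],
                    [rng.randint(int_lo, int_hi) for _ in range(n_ints)])

        def crossover(a, b):   # one-point crossover on each chromosome type
            cb, ci = rng.randrange(1, n_bits), rng.randrange(1, n_ints)
            return (a[0][:cb] + b[0][cb:], a[1][:ci] + b[1][ci:])

        def mutate(ind):       # bit flip on binary part, reset on integer part
            bits, ints = ind[0][:], ind[1][:]
            bits[rng.randrange(n_bits)] ^= 1
            ints[rng.randrange(n_ints)] = rng.randint(int_lo, int_hi)
            return (bits, ints)

        pop = [random_ind() for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(gens):
            nxt = [best]                    # elitism keeps search monotone
            while len(nxt) < pop_size:
                p1 = max(rng.sample(pop, 2), key=fitness)
                p2 = max(rng.sample(pop, 2), key=fitness)
                nxt.append(mutate(crossover(p1, p2)))
            pop = nxt
            best = max(pop, key=fitness)
        return best, fitness(best)
    ```

    The two genetic operators act on their own chromosome type, mirroring the record's description; a real implementation would also adapt the weights during the search.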

  20. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    International Nuclear Information System (INIS)

    Binh, Do Quang; Huy, Ngo Quang; Hai, Nguyen Hoang

    2014-01-01

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.

  1. Distortion and residual stresses in structures reinforced with titanium straps for improved damage tolerance

    International Nuclear Information System (INIS)

    Liljedahl, C.D.M.; Fitzpatrick, M.E.; Edwards, L.

    2008-01-01

    Distortion and residual stresses induced during the manufacturing process of bonded crack retarders have been investigated. Titanium alloy straps were adhesively bonded to an aluminium alloy SENT specimen to promote fatigue crack growth retardation. The effect of three different strap dimensions was investigated. The spring-back of a component when released from the autoclave and the residual stresses are important factors to take into account when designing a selective reinforcement, as they may alter the local aerodynamic characteristics and reduce the crack bridging effect of the strap. The principal problem with residual stresses is that their tensile nature in the primary aluminium structure has a negative impact on the crack initiation and crack propagation behaviour in the aluminium. The residual stresses were measured with neutron diffraction and the distortion of the specimens was measured with a contour measurement machine. The bonding process was simulated with a three-dimensional FE model. The residual stresses were found to be tensile close to the strap and slightly compressive on the un-bonded side. Both the distortion and the residual stresses increased with the thickness and the width of the strap. Very good agreement was found between the FE simulation and both the measured stresses and the measured distortion.

  2. Assessing Fan Flutter Stability in Presence of Inlet Distortion Using One-Way and Two-Way Coupled Methods

    Science.gov (United States)

    Herrick, Gregory P.

    2014-01-01

    Concerns regarding noise, propulsive efficiency, and fuel burn are inspiring aircraft designs wherein the propulsive turbomachines are partially (or fully) embedded within the airframe; such designs present serious concerns with regard to aerodynamic and aeromechanic performance of the compression system in response to inlet distortion. Previously, a preliminary design of a forward-swept high-speed fan exhibited flutter concerns in clean-inlet flows, and the present author then studied this fan further in the presence of off-design distorted in-flows. Continuing this research, a three-dimensional, unsteady, Navier-Stokes computational fluid dynamics code is again applied to analyze and corroborate fan performance with clean inlet flow and now with a simplified, sinusoidal distortion of total pressure at the aerodynamic interface plane. This code, already validated in its application to assess aerodynamic damping of vibrating blades at various flow conditions using a one-way coupled energy-exchange approach, is modified to include a two-way coupled time-marching aeroelastic simulation capability. The two coupling methods are compared in their evaluation of flutter stability in the presence of distorted in-flows.

  3. Split diversity in constrained conservation prioritization using integer linear programming.

    Science.gov (United States)

    Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt

    2015-01-01

    Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Despite such optimization problems falling into the area of NP-hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. As examples, we show the results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
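
    The PD objective itself is easy to state. The sketch below computes PD on a tiny, made-up rooted tree (the tree, taxa and branch lengths are invented for illustration) and selects the best subset of a given size by brute force; the paper instead encodes this selection as an integer linear program to handle realistic instance sizes and extra constraints.

    ```python
    from itertools import combinations

    # Toy rooted tree: child -> (parent, branch length). Root is "R".
    TREE = {
        "A": ("u", 1.0), "B": ("u", 2.0),
        "C": ("v", 3.0), "D": ("v", 1.0),
        "u": ("R", 4.0), "v": ("R", 2.0),
    }
    LEAVES = ["A", "B", "C", "D"]

    def pd(taxa):
        """Phylogenetic diversity: total length of the edges lying on some
        root-to-taxon path (each edge identified by its child endpoint)."""
        edges = set()
        for t in taxa:
            while t in TREE:
                edges.add(t)
                t = TREE[t][0]
        return sum(TREE[e][1] for e in edges)

    def best_subset(k):
        """Brute-force stand-in for the ILP: pick k taxa maximizing PD."""
        return max(combinations(LEAVES, k), key=pd)
    ```

    On this tree the best pair is {B, C} with PD = 11, since it covers both deep edges; an ILP formulation reproduces exactly this objective with 0-1 variables per taxon and per edge.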

  4. Invertebrate post-segregation distorters: a new embryo-killing gene.

    Directory of Open Access Journals (Sweden)

    Steven P Sinkins

    2011-07-01

    Full Text Available Cytoplasmic incompatibility induced by inherited intracellular bacteria of arthropods, and Medea elements found in flour beetles, are both forms of post-segregation distortion involving the killing of embryos in order to increase the ratio of progeny that inherit them. The recently described peel-zeel element of Caenorhabditis elegans also uses this mechanism; like Medea, the genes responsible are in the nuclear genome, but it shares a paternal mode of action with the bacteria. The peel-1 gene has now been shown to encode a potent toxin that is delivered by sperm and rescued by zygotic transcription of the linked zeel-1. The predominance of self-fertilization in C. elegans has produced an unusual distribution pattern for a selfish genetic element; further population and functional studies will shed light on its evolution. The element might also have potential for use in disease control.

  5. A theory of post-stall transients in axial compression systems. I - Development of equations

    Science.gov (United States)

    Moore, F. K.; Greitzer, E. M.

    1985-01-01

    An approximate theory is presented for post-stall transients in multistage axial compression systems. The theory leads to a set of three simultaneous nonlinear third-order partial differential equations for pressure rise, and average and disturbed values of flow coefficient, as functions of time and angle around the compressor. By a Galerkin procedure, angular dependence is averaged, and the equations become first order in time. These final equations are capable of describing the growth and possible decay of a rotating-stall cell during a compressor mass-flow transient. It is shown how rotating-stall-like and surgelike motions are coupled through these equations, and also how the instantaneous compressor pumping characteristic changes during the transient stall process.
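
    The surge-like behaviour of the averaged equations can be illustrated with the closely related two-state Greitzer lumped model. This is a schematic sketch, not the paper's three-equation system: the cubic compressor characteristic, square-root throttle law and all parameter values are illustrative assumptions.

    ```python
    import math

    def simulate_surge(B=1.8, gamma=0.6, dt=0.005, steps=4000):
        """Forward-Euler integration of a schematic Greitzer lumped model:
            dphi/dxi = B * (psi_c(phi) - psi)
            dpsi/dxi = (phi - phi_T(psi)) / B
        phi: annulus-averaged flow coefficient, psi: plenum pressure rise.
        """
        def psi_c(phi):               # illustrative cubic characteristic
            return 1.5 + 1.5 * (phi - 1.0) - 0.5 * (phi - 1.0) ** 3

        def phi_T(psi):               # throttle: phi = gamma * sqrt(psi)
            return gamma * math.copysign(math.sqrt(abs(psi)), psi)

        phi, psi = 1.0, 1.0
        traj = []
        for _ in range(steps):
            dphi = B * (psi_c(phi) - psi)
            dpsi = (phi - phi_T(psi)) / B
            phi += dt * dphi
            psi += dt * dpsi
            traj.append((phi, psi))
        return traj
    ```

    With the throttle set on the positively sloped part of the characteristic, the equilibrium is unstable and the trajectory settles into a bounded surge limit cycle, the kind of transient the full model couples with rotating-stall dynamics.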

  6. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second step is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the earth surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, while the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained corroborate the benefits of the proposed methodology.
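
    The two on-board degradation steps can be sketched very simply. This is a minimal illustration, assuming block-mean averaging for both the spatial and the spectral degradation (the record does not specify the operators); the downsampling factors directly fix the compression ratio.

    ```python
    import numpy as np

    def degrade_for_downlink(hsi, spatial=4, spectral=8):
        """Produce the two products of the fusion-based scheme:
        a spatially degraded HSI (block mean over spatial x spatial tiles)
        and a spectrally degraded MSI (mean over groups of `spectral` bands).
        Assumes all dimensions divide evenly; hsi shape: (rows, cols, bands).
        """
        r, c, b = hsi.shape
        low_res_hsi = hsi.reshape(r // spatial, spatial,
                                  c // spatial, spatial, b).mean(axis=(1, 3))
        msi = hsi.reshape(r, c, b // spectral, spectral).mean(axis=3)
        return low_res_hsi, msi
    ```

    For an 8x8x16 cube with the defaults, the two downlinked products hold 192 values instead of 1024, a fixed ratio known in advance; the ground segment then runs the (more complex) fusion to reconstruct the full cube.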

  7. An Improved Method for Solving Multiobjective Integer Linear Fractional Programming Problem

    Directory of Open Access Journals (Sweden)

    Meriem Ait Mehdi

    2014-01-01

    Full Text Available We describe an improvement of Chergui and Moulaï's method (2008) that generates the whole efficient set of a multiobjective integer linear fractional program based on the branch-and-cut concept. The general step of this method consists in optimizing (maximizing, without loss of generality) one of the fractional objective functions over a subset of the original continuous feasible set; then, if necessary, a branching process is carried out until an integer feasible solution is obtained. At this stage, an efficient cut is built from the criteria's growth directions in order to discard a part of the feasible domain containing only nonefficient solutions. Our contribution concerns firstly the optimization process, where a linear program that we define later is solved at each step rather than a fractional linear program. Secondly, local ideal and nadir points are used as bounds to prune some branches leading to nonefficient solutions. The computational experiments show that the new method outperforms the old one in all the treated instances.

  8. An Integer Programming Model for Multi-Echelon Supply Chain Decision Problem Considering Inventories

    Science.gov (United States)

    Harahap, Amin; Mawengkang, Herman; Siswadi; Effendi, Syahril

    2018-01-01

    In this paper we address a problem that is of significance to industry, namely the optimal decision of a multi-echelon supply chain and the associated inventory systems. By using the guaranteed service approach to model the multi-echelon inventory system, we develop a mixed integer programming model to simultaneously optimize the transportation, inventory and network structure of a multi-echelon supply chain. To solve the model we develop a direct search approach using a strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points.

  9. Design of problem-specific evolutionary algorithm/mixed-integer programming hybrids: two-stage stochastic integer programming applied to chemical batch scheduling

    Science.gov (United States)

    Urselmann, Maren; Emmerich, Michael T. M.; Till, Jochen; Sand, Guido; Engell, Sebastian

    2007-07-01

    Engineering optimization often deals with large, mixed-integer search spaces with a rigid structure due to the presence of a large number of constraints. Metaheuristics, such as evolutionary algorithms (EAs), are frequently suggested as solution algorithms in such cases. In order to exploit the full potential of these algorithms, it is important to choose an adequate representation of the search space and to integrate expert knowledge into the stochastic search operators, without adding unnecessary bias to the search. Moreover, hybridisation with mathematical programming techniques such as mixed-integer programming (MIP) based on a problem decomposition can be considered for improving algorithmic performance. In order to design problem-specific EAs it is desirable to have a set of design guidelines that specify properties of search operators and representations. Recently, a set of guidelines has been proposed that gives rise to so-called metric-based EAs (MBEAs). Extended by the minimal-moves mutation, they allow for a generalization of EAs with self-adaptive mutation strength in discrete search spaces. In this article, a problem-specific EA for a process engineering task is designed, following the MBEA guidelines and minimal-moves mutation. On the background of the application, the usefulness of the design framework is discussed, and further extensions and corrections are proposed. As a case study, a two-stage stochastic programming problem in chemical batch process scheduling is considered. The algorithm design problem can be viewed as the choice of a hierarchical decision structure, where on different layers of the decision process symmetries and similarities can be exploited for the design of minimal moves. After a discussion of the design approach and its instantiation for the case study, the resulting problem-specific EA/MIP is compared to a straightforward application of a canonical EA/MIP and to a monolithic mathematical programming algorithm. In view of the

  10. Holographic measurement of distortion during laser melting: Additive distortion from overlapping pulses

    Science.gov (United States)

    Haglund, Peter; Frostevarg, Jan; Powell, John; Eriksson, Ingemar; Kaplan, Alexander F. H.

    2018-03-01

    Laser-material interactions such as welding, heat treatment and thermal bending generate thermal gradients which give rise to thermal stresses and strains that often result in a permanent distortion of the heated object. This paper investigates the thermal distortion response which results from pulsed laser surface melting of a stainless steel sheet. Pulsed holography has been used to accurately monitor, in real time, the out-of-plane distortion of stainless steel samples melted on one face by both single and multiple laser pulses. It has been shown that surface melting by additional laser pulses increases the out-of-plane distortion of the sample without significantly increasing the melt depth. The distortion differences between the primary pulse and subsequent pulses have also been analysed for fully and partially overlapping laser pulses.

  11. Bivium as a Mixed Integer Programming Problem

    DEFF Research Database (Denmark)

    Borghoff, Julia; Knudsen, Lars Ramkilde; Stolpe, Mathias

    2009-01-01

    over $GF(2)$ into a combinatorial optimization problem. We convert the Boolean equation system into an equation system over $\mathbb{R}$ and formulate the problem of finding a $0$-$1$-valued solution for the system as a mixed-integer programming problem. This enables us to make use of several
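
    The core conversion step, turning a Boolean XOR constraint into linear constraints over 0-1 variables, is standard and can be checked exhaustively. This sketch does not reproduce the Bivium system itself; it only shows one textbook linearization of z = x XOR y.

    ```python
    from itertools import product

    def xor_feasible(x, y, z):
        """Standard linearization of z = x XOR y for 0-1 variables: the four
        inequalities below are jointly satisfiable iff z equals x ^ y."""
        return (z <= x + y and
                z >= x - y and
                z >= y - x and
                z <= 2 - x - y)

    # Enumerate: for each (x, y), the linear system admits exactly one z.
    table = {(x, y): [z for z in (0, 1) if xor_feasible(x, y, z)]
             for x, y in product((0, 1), repeat=2)}
    ```

    Chaining such constraints (plus products linearized the same way) converts an entire stream-cipher equation system over GF(2) into a mixed-integer program that an off-the-shelf solver can attack.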

  12. An n -material thresholding method for improving integerness of solutions in topology optimization

    International Nuclear Information System (INIS)

    Watts, Seth; Engineering); Tortorelli, Daniel A.; Engineering)

    2016-01-01

    It is common in solving topology optimization problems to replace an integer-valued characteristic function design field with the material volume fraction field, a real-valued approximation of the design field that permits "fictitious" mixtures of materials during intermediate iterations in the optimization process. This is reasonable so long as one can interpolate properties for such materials and so long as the final design is integer-valued. For this purpose, we present a method for smoothly thresholding the volume fractions of an arbitrary number of material phases which specify the design. This method is trivial for two-material design problems, for example, the canonical topology design problem of specifying the presence or absence of a single material within a domain, but it becomes more complex when three or more materials are used, as often occurs in material design problems. We take advantage of the similarity in properties between the volume fractions and the barycentric coordinates on a simplex to derive a thresholding method which is applicable to an arbitrary number of materials. As we show in a sensitivity analysis, this method has smooth derivatives, allowing it to be used in gradient-based optimization algorithms. Finally, we present results which show synergistic effects when used with Solid Isotropic Material with Penalty and Rational Approximation of Material Properties material interpolation functions, popular methods of ensuring integerness of solutions.
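
    A minimal sketch of the idea, not the authors' exact thresholding function: power normalization is one smooth map on the simplex that shares the key properties the abstract names, i.e. it preserves the sum-to-one (barycentric) structure, is differentiable for positive fractions, and pushes the point toward a simplex vertex as the exponent grows.

    ```python
    def sharpen(volfracs, p=4.0):
        """Smoothly push a point on the material simplex toward a vertex.

        Power normalization: v_i -> v_i**p / sum_j v_j**p. The result still
        sums to one, the dominant phase stays dominant, and the map is
        differentiable for positive volume fractions (usable with gradients).
        """
        powered = [v ** p for v in volfracs]
        s = sum(powered)
        return [w / s for w in powered]
    ```

    For example, the three-material point (0.5, 0.3, 0.2) maps to roughly (0.87, 0.11, 0.02): the mixture is driven toward the pure first material while remaining a valid point on the simplex.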

  13. System performance enhancement with pre-distorted OOFDM signal waveforms in DM/DD systems.

    Science.gov (United States)

    Sánchez, C; Ortega, B; Capmany, J

    2014-03-24

    In this work we propose a pre-distortion technique for the mitigation of the nonlinear distortion present in directly modulated/detected OOFDM systems and explore the system performance achieved under varying system parameters. Simulation results show that the proposed pre-distortion technique efficiently mitigates the nonlinear distortion, achieving transmission information rates of around 40 Gbit/s and 18.5 Gbit/s over 40 km and 100 km of single-mode fiber links, respectively, under optimum operating conditions. Moreover, the proposed pre-distortion technique can potentially provide higher system performance than that obtained with nonlinear equalization at the receiver.
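
    The principle of pre-distortion can be shown on a toy memoryless nonlinearity. The cubic channel model below is a stand-in, not the paper's directly-modulated-laser model: the pre-distorter numerically inverts the nonlinearity so that the cascade pre-distorter-then-channel is approximately the identity.

    ```python
    def channel(x, a3=0.1):
        """Stand-in memoryless modulator nonlinearity (illustrative only)."""
        return x + a3 * x ** 3

    def predistort(target, a3=0.1, iters=20):
        """Invert the cubic by Newton's method so channel(predistort(t)) ~ t.

        The cubic is strictly increasing (derivative >= 1), so Newton
        iteration from x = target converges quickly for moderate inputs.
        """
        x = target
        for _ in range(iters):
            f = x + a3 * x ** 3 - target
            x -= f / (1.0 + 3.0 * a3 * x * x)
        return x
    ```

    In an OOFDM transmitter the same idea is applied to the drive waveform before modulation, so the received subcarriers see an effectively linear channel.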

  14. Optimal Chest Compression Rate and Compression to Ventilation Ratio in Delivery Room Resuscitation: Evidence from Newborn Piglets and Neonatal Manikins

    OpenAIRE

    Solevåg, Anne Lee; Schmölzer, Georg M.

    2017-01-01

    Cardiopulmonary resuscitation (CPR) duration until return of spontaneous circulation (ROSC) influences survival and neurologic outcomes after delivery room (DR) CPR. High quality chest compressions (CC) improve cerebral and myocardial perfusion. Improved myocardial perfusion increases the likelihood of a faster ROSC. Thus, optimizing CC quality may improve outcomes both by preserving cerebral blood flow during CPR and by reducing the recovery time. CC quality is determined by rate, CC to vent...

  15. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution over different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has bad quality. A key-frames selection algorithm is flexible under changes in a video, but with such methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received video, without significant loss of content between the original and received video, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. Video transmission was also investigated with and without SEDIM (Sequential Distortion Minimization Method). The experimental results showed that the average PSNR (Peak Signal to Noise Ratio) of video transmission using SEDIM increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
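
    PSNR figures like those quoted in this record are computed from the mean squared error between frames. A minimal sketch, assuming 8-bit samples (peak value 255):

    ```python
    import math

    def psnr(original, received, peak=255.0):
        """Peak signal-to-noise ratio in dB between two equal-length frames:
        PSNR = 10 * log10(peak^2 / MSE); identical frames give infinity."""
        mse = sum((a - b) ** 2
                  for a, b in zip(original, received)) / len(original)
        if mse == 0:
            return float("inf")
        return 10.0 * math.log10(peak * peak / mse)
    ```

    For instance, a frame that differs from the original by exactly one grey level at every pixel has MSE = 1 and hence PSNR = 10 log10(255^2) ≈ 48.13 dB, which is why values near 48 dB indicate near-transparent quality.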

  16. Distortion, Messianism, and Apocalyptic Time in The Satanic Verses

    Directory of Open Access Journals (Sweden)

    Clara Eisinger

    2013-05-01

    Full Text Available Salman Rushdie’s novel The Satanic Verses presents its readers with a striking perspective on apocalypse. Taking place in the context of a modernist, migrant worldview, this apocalypse works to unsettle its participating characters by teaching them how to create a world in which they might someday belong. Though often defined as destructive, the apocalypse as I define it involves a reaching for or gesture towards the impossible, which the Verses achieves through massive temporal distortion. Linear time finds itself subverted; characters’ narratives speed up or slow down, forcing them to question their various adventures in 1980s London. Rushdie’s protagonist Saladin Chamcha re-grasps and reinvents his world; his other protagonist, Gibreel Farishta, does not. For one man, apocalypse becomes a means of empowerment; for another, it develops into a black hole. Unlike real black holes, however, Rushdie’s apocalypse does not kill all who venture into it, but instead stretches its hardiest entrants both emotionally and intellectually before dropping them into a new universe. Apocalypse and the post-apocalyptic are not therefore to be feared but to be reached for: worthy achievements for those individuals who can survive the risk, the compression, and the disorientation to emerge in a ‘post’ that is not a wasteland but a realm of ceaseless energetic creation—a realm which allows migrants to construct for themselves better lives in the 21st century world.

  17. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
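
    Idea (i) of the algorithm, encoding SNPs in a linkage-disequilibrium block as differences against a reference SNP, can be sketched as follows. This is a simplified illustration of the principle, not the SNPack file format: genotypes are plain 0/1/2 lists and the block reference is just the first SNP.

    ```python
    def compress_block(snps):
        """Encode an LD block of genotype vectors (lists of 0/1/2) as a
        reference SNP plus, per SNP, the (position, value) pairs where it
        differs from the reference. High LD means few differences."""
        ref = snps[0]
        diffs = [[(i, v) for i, (v, r) in enumerate(zip(snp, ref)) if v != r]
                 for snp in snps]
        return ref, diffs

    def decompress_block(ref, diffs):
        """Exact inverse: reapply each difference list to the reference."""
        out = []
        for d in diffs:
            snp = ref[:]
            for i, v in d:
                snp[i] = v
            out.append(snp)
        return out
    ```

    Because decompression only patches a copy of the reference, retrieval of any single SNP is fast, which is the property the record emphasizes for large association studies.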

  18. Wireless Sensor Networks Data Processing Summary Based on Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Caiyun Huang

    2014-07-01

    Full Text Available As a newly proposed theory, compressive sensing (CS) is commonly used in the signal processing area. This paper investigates the applications of CS in wireless sensor networks (WSNs). First, the development and research status of compressive sensing technology and wireless sensor networks are described; then a detailed investigation of CS-based WSN research is conducted from the aspects of data fusion, signal acquisition, signal routing transmission, and signal reconstruction. At the end of the paper, we conclude our survey and point out possible future research directions.
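
    The reconstruction side of CS can be illustrated with orthogonal matching pursuit, one standard greedy recovery algorithm (the survey itself covers many; this toy sketch and its 1-sparse example are only illustrative).

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily add the column of A most
        correlated with the residual, then re-fit the selected columns by
        least squares; after k steps, return the sparse estimate."""
        support = []
        residual = y.astype(float)
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            if j not in support:
                support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x
    ```

    With unit-norm measurement columns, a 1-sparse signal is recovered exactly from far fewer measurements than unknowns, which is the property WSN schemes exploit to cut radio traffic.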

  19. Evaluation of Salivary Flow Rate, pH and Buffer in Pre, Post & Post Menopausal Women on HRT.

    Science.gov (United States)

    D R, Mahesh; G, Komali; K, Jayanthi; D, Dinesh; T V, Saikavitha; Dinesh, Preeti

    2014-02-01

    Climacteric is considered to be a natural phase of life which, by definition, is the period starting from the decline in ovarian activity until after the end of ovarian function. It is accompanied by various health consequences that include changes in saliva too. This study was carried out to evaluate the salivary flow rate, pH and buffering capacity in pre-menopausal women, post-menopausal women and post-menopausal women on HRT. (1) To evaluate the salivary flow rate, the pH of resting and stimulated saliva, and the buffer capacity of stimulated saliva in pre-menopausal women, post-menopausal women and post-menopausal women on Hormone Replacement Therapy (HRT). (2) To compare the above salivary findings between pre-menopausal women, post-menopausal women and post-menopausal women on HRT. The study was carried out on 60 patients, divided into three groups of 20: Group 1: pre-menopausal women (control), Group 2: post-menopausal women (case), Group 3: post-menopausal women on HRT (case). The control group consisted of 20 women volunteers having regular ovulatory menstrual cycles with no known systemic illness and no deleterious habits; Group 2 consisted of 20 post-menopausal women and Group 3 consisted of 20 post-menopausal women on HRT. After clearing the mouth by swallowing, stimulated saliva was collected after chewing paraffin for 10 min into a glass centrifuge tube graded in 0.1 mL increments up to 10 mL; in rare cases the collection time was reduced or extended (5-15 min). The salivary flow rate was determined in mL/min. Immediately after collection, pH was determined by dipping pH test paper directly into the sample of oral fluid, and salivary buffer capacity was determined using a saliva-check buffer kit (GC Corporation). The data obtained were statistically evaluated using the chi-square test, Fisher's exact test and ANOVA analysis.
In our study we found the salivary flow rate significantly lower in the post-menopausal women in comparison with the menstruating women and also

  20. Winding numbers in homotopy theory from integers to reals

    International Nuclear Information System (INIS)

    Mekhfi, M.

    1993-07-01

    In Homotopy Theory (HT) we define paths on a given topological space. Closed paths prove to be construction elements of a group (the fundamental group) Π 1 and carry charges, the winding numbers. The charges are integers as they indicate how many times closed paths encircle a given hole (or set of holes). Open paths as they are defined in (HT) do not possess any groups structure and as such they are less useful in topology. In the present paper we enlarge the concept of a path in such a way that both types of paths do possess a group structure. In this broad sense we have two fundamental groups the Π i = Z group and the SO(2) group of rotations but the latter has the global property that there is no periodicity in the rotation angle. There is also two charge operators W and W λ whose eigenvalues are either integers or reals depending respectively on the paths being closed or open. Also the SO(2) group and the real charge operator W λ are not independently defined but directly related respectively to the Π i group and to the integer charge operator W. Thus well defined links can be established between seemingly different groups and charges. (author). 3 refs, 1 fig