WorldWideScience

Sample records for decomposition meets compressed

  1. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
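
    A minimal sketch of the "lossy plus residual coding" principle described above (not the authors' codec): a truncated SVD serves as the lossy layer, and uniform quantization of the residual bounds the maximum absolute error by a user-chosen eps. The toy signal, rank, and error bound are illustrative assumptions.

    ```python
    # Lossy layer (truncated SVD) + residual layer (uniform quantization):
    # quantizing the residual with step 2*eps guarantees |error| <= eps.
    import numpy as np

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((32, 1000)).cumsum(axis=1)  # toy multichannel signal

    r = 8                                                  # rank of the lossy layer
    U, s, Vt = np.linalg.svd(eeg, full_matrices=False)
    lossy = (U[:, :r] * s[:r]) @ Vt[:r]

    eps = 0.05                                             # specified max abs error
    residual = eeg - lossy
    q = np.round(residual / (2 * eps)).astype(np.int32)    # integers for entropy coding
    reconstructed = lossy + q * (2 * eps)

    assert np.max(np.abs(eeg - reconstructed)) <= eps + 1e-12
    print("max abs error:", np.max(np.abs(eeg - reconstructed)))
    ```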

  2. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition steps, and exhibits a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.

  3. Compression of magnetohydrodynamic simulation data using singular value decomposition

    International Nuclear Information System (INIS)

    Castillo Negrete, D. del; Hirshman, S.P.; Spong, D.A.; D'Azevedo, E.F.

    2007-01-01

    Numerical calculations of magnetic and flow fields in magnetohydrodynamic (MHD) simulations can result in extensive data sets. Particle-based calculations in these MHD fields, needed to provide closure relations for the MHD equations, will require communication of this data to multiple processors and rapid interpolation at numerous particle orbit positions. To facilitate this analysis it is advantageous to compress the data using singular value decomposition (SVD, or principal orthogonal decomposition, POD) methods. As an example of the compression technique, SVD is applied to magnetic field data arising from a dynamic nonlinear MHD code. The performance of the SVD compression algorithm is analyzed by calculating Poincaré plots for electron orbits in a three-dimensional magnetic field and comparing the results with uncompressed data.
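
    To make the SVD compression idea concrete, here is a hedged NumPy sketch (not the paper's implementation): a snapshot matrix of field values is truncated to r singular triplets, and the storage saving and relative error are reported. The toy field and rank are assumptions.

    ```python
    # Truncated-SVD compression of a space x time snapshot matrix.
    import numpy as np

    nx, nt = 4096, 200                        # grid points x time snapshots (toy sizes)
    t = np.linspace(0, 1, nt)
    x = np.linspace(0, 1, nx)[:, None]
    field = np.sin(2*np.pi*(x - t)) + 0.1*np.sin(20*np.pi*x*t)  # smooth + fine structure

    U, s, Vt = np.linalg.svd(field, full_matrices=False)
    r = 10                                    # retained singular triplets
    approx = (U[:, :r] * s[:r]) @ Vt[:r]

    stored = r * (nx + nt + 1)                # floats kept after truncation
    ratio = field.size / stored
    err = np.linalg.norm(field - approx) / np.linalg.norm(field)
    print(f"compression ratio ~{ratio:.1f}, relative error {err:.2e}")
    ```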

  4. Dynamic mode decomposition for compressive system identification

    Science.gov (United States)

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.
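
    For readers unfamiliar with the baseline technique, a minimal exact-DMD sketch on synthetic rank-3 snapshot data follows; the compressive and actuated extensions unified by the paper are not reproduced here, and the toy dynamics are an assumption.

    ```python
    # Exact DMD: fit a low-order linear operator from snapshot pairs and
    # recover its eigenvalues/modes. The data are built from 3 known modes.
    import numpy as np

    rng = np.random.default_rng(2)
    n, m, r = 64, 100, 3
    lam = np.array([0.99*np.exp(0.3j), 0.99*np.exp(-0.3j), 0.9])   # true eigenvalues
    Phi_true = rng.standard_normal((n, r)) + 1j*rng.standard_normal((n, r))
    X = np.stack([Phi_true @ lam**k for k in range(m)], axis=1)

    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].conj().T
    Atilde = Ur.conj().T @ X2 @ Vr / sr        # projected low-order operator
    evals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vr / sr @ W                   # exact DMD modes

    print("true eigenvalues:", np.round(np.sort_complex(lam), 4))
    print("DMD eigenvalues: ", np.round(np.sort_complex(evals), 4))
    ```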

  5. VELOCITY FIELD OF COMPRESSIBLE MAGNETOHYDRODYNAMIC TURBULENCE: WAVELET DECOMPOSITION AND MODE SCALINGS

    International Nuclear Information System (INIS)

    Kowal, Grzegorz; Lazarian, A.

    2010-01-01

    We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes, which presents an extension of the Cho and Lazarian decomposition approach based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to the Fourier transforms, which are done in the mean field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.
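
    The Helmholtz decomposition step has a compact Fourier-space form for periodic fields. The sketch below is an illustration rather than the authors' wavelet-based pipeline: it projects a toy 2-D velocity field onto its compressive (curl-free) part and checks that the solenoidal remainder is divergence-free.

    ```python
    # Fourier-space Helmholtz decomposition: the compressive part of u-hat is
    # parallel to the wavevector, k (k . u-hat) / |k|^2.
    import numpy as np

    n = 128
    k = np.fft.fftfreq(n) * n                      # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                 # avoid division by zero at k = 0

    rng = np.random.default_rng(3)
    u, v = rng.standard_normal((n, n)), rng.standard_normal((n, n))
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)

    div = (kx * uh + ky * vh) / k2                 # (k . u-hat)/|k|^2
    uc, vc = np.fft.ifft2(kx * div).real, np.fft.ifft2(ky * div).real
    us, vs = u - uc, v - vc                        # solenoidal remainder

    # Check: the solenoidal part should be (numerically) divergence-free.
    div_s = np.fft.ifft2(1j*kx*np.fft.fft2(us) + 1j*ky*np.fft.fft2(vs)).real
    print("max |div of solenoidal part|:", np.abs(div_s).max())
    ```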

  6. Compression of Multispectral Images with Comparatively Few Bands Using Posttransform Tucker Decomposition

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

    Up to now, data compression for multispectral charge-coupled device (CCD) images with comparatively few bands (MSCFBs) has been done independently on each spectral channel. This compression codec is called a "monospectral compressor." The monospectral compressor has no stage for removing spectral redundancy. To fill this gap, we propose an efficient compression approach for MSCFBs. In our approach, a one-dimensional discrete cosine transform (1D-DCT) is performed on the spectral dimension to exploit the spectral information, and the posttransform (PT) in the 2D-DWT domain is performed on each spectral band to exploit the spatial information. A deep coupling approach between the PT and Tucker decomposition (TD) is proposed to remove the residual spectral redundancy between bands and the residual spatial redundancy of each band. Experimental results on a multispectral CCD camera data set show that the proposed compression algorithm obtains better compression performance and significantly outperforms traditional TD-based compression algorithms in the 2D-DWT and 3D-DCT domains.
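
    A hedged sketch of the two decorrelation stages named above, a 1D-DCT across the spectral dimension followed by a 2-D wavelet transform per band; the paper's posttransform/Tucker coupling is not reproduced, and the toy cube is an assumption.

    ```python
    # Stage 1: spectral 1D-DCT concentrates inter-band energy.
    # Stage 2: 2-D DWT on each DCT "eigenband" exploits spatial correlation.
    import numpy as np
    from scipy.fft import dct
    import pywt

    rng = np.random.default_rng(4)
    bands, h, w = 4, 64, 64                        # few-band multispectral cube (toy)
    base = rng.standard_normal((h, w))
    cube = np.stack([base + 0.05 * (i + 1) * rng.standard_normal((h, w))
                     for i in range(bands)])      # strongly correlated bands

    spec = dct(cube, axis=0, norm="ortho")         # decorrelate along the spectral axis
    coeffs = [pywt.wavedec2(spec[i], "db2", level=3) for i in range(bands)]

    energy = [float(np.sum(spec[i]**2)) for i in range(bands)]
    print("energy per spectral DCT band:", np.round(energy, 1))
    ```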

  7. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. The bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, in particular the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and that the IMF-based compression method provides a higher compression ratio while retaining the bearing defect signatures.
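
    A rough sketch of selecting the EEMD noise level by a relative root-mean-square-error criterion, loosely following the idea above. It assumes the third-party PyEMD package (pip install EMD-signal); the `trials` and `noise_width` argument names belong to that library, not the paper, and the selection rule here is a simplification.

    ```python
    # Sweep candidate noise widths and score each EEMD run by relative RMSE.
    import numpy as np
    from PyEMD import EEMD   # assumption: third-party package "EMD-signal"

    rng = np.random.default_rng(5)
    t = np.linspace(0, 1, 2000)
    signal = (np.sin(2*np.pi*50*t) + 0.5*np.sin(2*np.pi*7*t)
              + 0.1*rng.standard_normal(t.size))

    best = None
    for noise_w in (0.05, 0.1, 0.2, 0.4):
        eemd = EEMD(trials=50, noise_width=noise_w)
        imfs = eemd.eemd(signal, t)
        recon = imfs.sum(axis=0)                  # sum of IMFs should track the signal
        rrmse = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
        if best is None or rrmse < best[1]:
            best = (noise_w, rrmse)
        print(f"noise width {noise_w}: relative RMSE {rrmse:.3e}")
    print("selected noise width:", best[0])
    ```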

  8. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Science.gov (United States)

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD executes efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an average compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing the compression performance forward into an unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.

  9. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square-error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet, whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269, produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast and the figure of merit, in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can relate compression outcomes and feature-preservation characteristics to the choice of wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
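
    As a small illustration of evaluating a tap-4 wavelet for compression, the sketch below applies Daubechies db2 (a tap-4 kernel), keeps only the largest coefficients, and reports PSNR; the neural-network kernel search itself is not reproduced, and the toy image and retention rate are assumptions.

    ```python
    # Lossy wavelet compression proxy: hard-threshold to keep the top 5% of
    # db2 coefficients and measure the reconstruction PSNR.
    import numpy as np
    import pywt

    rng = np.random.default_rng(6)
    img = rng.standard_normal((256, 256)).cumsum(0).cumsum(1)   # smooth toy "image"

    coeffs = pywt.wavedec2(img, "db2", level=4)
    arr, slices = pywt.coeffs_to_array(coeffs)

    keep = 0.05                                                 # fraction of coeffs kept
    thresh = np.quantile(np.abs(arr), 1 - keep)
    arr_c = np.where(np.abs(arr) >= thresh, arr, 0.0)

    rec = pywt.waverec2(pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2"),
                        "db2")
    rec = rec[:img.shape[0], :img.shape[1]]                     # trim possible padding
    mse = np.mean((img - rec) ** 2)
    psnr = 10 * np.log10(np.ptp(img) ** 2 / mse)
    print(f"kept {keep:.0%} of coefficients, PSNR {psnr:.1f} dB")
    ```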

  10. Development of a ReaxFF reactive force field for ammonium nitrate and application to shock compression and thermal decomposition.

    Science.gov (United States)

    Shan, Tzu-Ray; van Duin, Adri C T; Thompson, Aidan P

    2014-02-27

    We have developed a new ReaxFF reactive force field parametrization for ammonium nitrate. Starting with an existing nitramine/TATB ReaxFF parametrization, we optimized it to reproduce electronic structure calculations for dissociation barriers, heats of formation, and crystal structure properties of ammonium nitrate phases. We have used it to predict the isothermal pressure-volume curve and the unreacted principal Hugoniot states. The predicted isothermal pressure-volume curve for phase IV solid ammonium nitrate agreed with electronic structure calculations and experimental data within 10% error for the considered range of compression. The predicted unreacted principal Hugoniot states were approximately 17% stiffer than experimental measurements. We then simulated thermal decomposition during heating to 2500 K. Thermal decomposition pathways agreed with experimental findings.

  11. Multi-dimensional medical images compressed and filtered with wavelets

    International Nuclear Information System (INIS)

    Boyen, H.; Reeth, F. van; Flerackers, E.

    2002-01-01

    Full text: Using standard wavelet decomposition methods, multi-dimensional medical images can be compressed and filtered by repeating the wavelet algorithm on 1D signals in an extra loop per extra dimension. In the non-standard decomposition for multi-dimensional images, the areas that must be zero-filled in the case of band- or notch-filters are more complex than geometric areas such as rectangles or cubes. Adding an additional dimension in this algorithm up to 4D (e.g., a 3D beating heart) increases the geometric complexity of those areas even more. The aim of our study was to calculate the boundaries of the resulting complex geometric areas, so that the faster non-standard decomposition can be used to compress and filter multi-dimensional medical images. Because many 3D medical images taken by PET or SPECT cameras have only a few layers in the Z-dimension, and compressing images in a dimension with few voxels is usually not worthwhile, we provide a solution in which one can choose which dimensions will be compressed or filtered. With the proposal of non-standard decomposition on Daubechies' wavelets D2 to D20 by Steven Gollmer in 1992, 1D data can be compressed and filtered. Each additional level works only on the smoothed data, so the transformation time halves per extra level. Zero-filling a well-defined area after the wavelet transform and then performing the inverse transform accomplishes the filtering. To compress and filter up to 4D images with the faster non-standard wavelet decomposition method, we have investigated a new method for calculating the boundaries of the areas that must be zero-filled in the case of filtering. This is especially true for band- and notch filtering. Contrary to the standard decomposition method, the areas are no longer rectangles in 2D or cubes in 3D or a row of cubes in 4D: they are rectangles expanded with a half-sized rectangle in the other direction for 2D, and cubes expanded with half cubes in one and quarter cubes in the other direction for 3D.

  12. A Parallel Non-Overlapping Domain-Decomposition Algorithm for Compressible Fluid Flow Problems on Triangulated Domains

    Science.gov (United States)

    Barth, Timothy J.; Chan, Tony F.; Tang, Wei-Pai

    1998-01-01

    This paper considers an algebraic preconditioning algorithm for hyperbolic-elliptic fluid flow problems. The algorithm is based on a parallel non-overlapping Schur complement domain-decomposition technique for triangulated domains; a small worked example follows this abstract. In the Schur complement technique, the triangulation is first partitioned into a number of non-overlapping subdomains and interfaces. This suggests a reordering of triangulation vertices which separates subdomain and interface solution unknowns. The reordering induces a natural 2 x 2 block partitioning of the discretization matrix. Exact LU factorization of this block system yields a Schur complement matrix which couples subdomains and the interface together. The remaining sections of this paper present a family of approximate techniques for both constructing and applying the Schur complement as a domain-decomposition preconditioner. The approximate Schur complement serves as an algebraic coarse space operator, thus avoiding the known difficulties associated with the direct formation of a coarse space discretization. In developing Schur complement approximations, particular attention has been given to improving sequential and parallel efficiency of implementations without significantly degrading the quality of the preconditioner. A computer code based on these developments has been tested on the IBM SP2 using the MPI message passing protocol. A number of 2-D calculations are presented for both scalar advection-diffusion equations as well as the Euler equations governing compressible fluid flow to demonstrate performance of the preconditioning algorithm.
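
    The core elimination step is easy to show on a small dense system. The following sketch, with arbitrary toy blocks, reorders unknowns into subdomain (I) and interface (B) sets and solves via the Schur complement; the paper's approximate, parallel variants are not reproduced.

    ```python
    # Block elimination: S = A_BB - A_BI A_II^{-1} A_IB couples the interface.
    import numpy as np

    rng = np.random.default_rng(7)
    nI, nB = 40, 8
    A_II = np.eye(nI) * 4 + rng.standard_normal((nI, nI)) * 0.1
    A_IB = rng.standard_normal((nI, nB)) * 0.1
    A_BI = rng.standard_normal((nB, nI)) * 0.1
    A_BB = np.eye(nB) * 4 + rng.standard_normal((nB, nB)) * 0.1
    A = np.block([[A_II, A_IB], [A_BI, A_BB]])
    b = rng.standard_normal(nI + nB)

    S = A_BB - A_BI @ np.linalg.solve(A_II, A_IB)     # Schur complement
    g = b[nI:] - A_BI @ np.linalg.solve(A_II, b[:nI])
    xB = np.linalg.solve(S, g)                        # interface solve
    xI = np.linalg.solve(A_II, b[:nI] - A_IB @ xB)    # independent subdomain solve
    print("residual:", np.linalg.norm(A @ np.concatenate([xI, xB]) - b))
    ```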

  13. Energetic materials under high pressures and temperatures: stability, polymorphism and decomposition of RDX

    International Nuclear Information System (INIS)

    Dreger, Z A

    2012-01-01

    Recent progress in understanding the response of the energetic crystal cyclotrimethylene trinitramine (RDX) to high pressures and temperatures is summarized. Optical spectroscopy and imaging studies under static compression and high temperatures have provided new insight into the phase diagram, polymorphism and decomposition mechanisms at pressures and temperatures relevant to those under shock compression. These results have been used to aid the understanding of processes under shock compression, including the shock-induced phase transition and the identification of the crystal phase at decomposition. This work demonstrates that studies under static compression and high temperatures provide an important complementary route for elucidating the physical and chemical processes in shocked energetic crystals.

  14. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  15. Multiresolution signal decomposition transforms, subbands, and wavelets

    CERN Document Server

    Akansu, Ali N; Haddad, Paul R

    2001-01-01

    The uniqueness of this book is that it covers such important aspects of modern signal processing as block transforms from subband filter banks and wavelet transforms from a common unifying standpoint, thus demonstrating the commonality among these decomposition techniques. In addition, it covers such "hot" areas as signal compression and coding, including particular decomposition techniques and tables listing coefficients of subband and wavelet filters and other important properties. The field of this book (Electrical Engineering/Computer Science) is currently booming.

  16. Tensor decompositions for the analysis of atomic resolution electron energy loss spectra

    Energy Technology Data Exchange (ETDEWEB)

    Spiegelberg, Jakob; Rusz, Ján [Department of Physics and Astronomy, Uppsala University, Box 516, S-751 20 Uppsala (Sweden); Pelckmans, Kristiaan [Department of Information Technology, Uppsala University, Box 337, S-751 05 Uppsala (Sweden)

    2017-04-15

    A selection of tensor decomposition techniques is presented for the detection of weak signals in electron energy loss spectroscopy (EELS) data. The focus of the analysis lies on the correct representation of the simulated spatial structure. An analysis scheme for EEL spectra combining two-dimensional and n-way decomposition methods is proposed. In particular, the performance of robust principal component analysis (ROBPCA), Tucker Decompositions using orthogonality constraints (Multilinear Singular Value Decomposition (MLSVD)) and Tucker decomposition without imposed constraints, canonical polyadic decomposition (CPD) and block term decompositions (BTD) on synthetic as well as experimental data is examined. - Highlights: • A scheme for compression and analysis of EELS or EDX data is proposed. • Several tensor decomposition techniques are presented for BSS on hyperspectral data. • Robust PCA and MLSVD are discussed for denoising of raw data.

  17. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Node localization is a core problem in wireless sensor networks. It can be solved with powerful beacons, which are equipped with global positioning system devices and therefore know their own locations. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Next, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is only approximately sparse, a weighted centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and lower requirements for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it shows strong localization performance under low SNR; thus, the proposed scheme is robust.

  18. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

    We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear-weights fusion rule is proposed to obtain the high-resolution image. Experiments are conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.
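
    The recovery step that such schemes build on can be sketched with a few lines of orthogonal matching pursuit (OMP); the dictionary-learning and fusion-rule stages are not reproduced, and the Gaussian measurement matrix and sparsity level are assumptions.

    ```python
    # OMP: greedily pick the atom most correlated with the residual, then
    # re-fit the coefficients on the selected support by least squares.
    import numpy as np

    rng = np.random.default_rng(8)
    n, m, k = 256, 64, 5                      # signal dim, measurements, sparsity
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = Phi @ x

    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef

    x_hat = np.zeros(n); x_hat[support] = coef
    print("support recovered:", sorted(support) == sorted(np.flatnonzero(x).tolist()))
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```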

  19. Intelligent transportation systems data compression using wavelet decomposition technique.

    Science.gov (United States)

    2009-12-01

    Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing....

  20. Optimization and Assessment of Wavelet Packet Decompositions with Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Schell Thomas

    2003-01-01

    In image compression, the wavelet transformation is a state-of-the-art component. Recently, wavelet packet decomposition has received considerable interest. A popular approach for wavelet packet decomposition is the near-best-basis algorithm using nonadditive cost functions. In contrast to additive cost functions, the wavelet packet decomposition of the near-best-basis algorithm is only suboptimal. We apply methods from the field of evolutionary computation (EC) to test the quality of the near-best-basis results. We observe a phenomenon: the results of the near-best-basis algorithm are inferior in terms of cost-function optimization but are superior in terms of rate/distortion performance compared to EC methods.

  1. Still Image Compression Algorithm Based on Directional Filter Banks

    OpenAIRE

    Chunling Yang; Duanwu Cao; Li Ma

    2010-01-01

    Hybrid wavelet and directional filter banks (HWD) is an effective multi-scale geometrical analysis method. Compared to the wavelet transform, it can better capture the directional information of images. But the ringing artifact, which is caused by coefficient quantization in the transform domain, is the biggest drawback of image compression algorithms in the HWD domain. In this paper, by studying the relationship between directional decomposition and the ringing artifact, an improved decomposition ...

  2. X-Ray Thomson Scattering Without the Chihara Decomposition

    Science.gov (United States)

    Magyar, Rudolph; Baczewski, Andrew; Shulenburger, Luke; Hansen, Stephanie B.; Desjarlais, Michael P.; Sandia National Laboratories Collaboration

    X-Ray Thomson Scattering is an important experimental technique used in dynamic compression experiments to measure the properties of warm dense matter. The fundamental property probed in these experiments is the electronic dynamic structure factor that is typically modeled using an empirical three-term decomposition (Chihara, J. Phys. F, 1987). One of the crucial assumptions of this decomposition is that the system's electrons can be either classified as bound to ions or free. This decomposition may not be accurate for materials in the warm dense regime. We present unambiguous first principles calculations of the dynamic structure factor independent of the Chihara decomposition that can be used to benchmark these assumptions. Results are generated using a finite-temperature real-time time-dependent density functional theory applied for the first time in these conditions. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94AL85000.

  3. Image compression using the W-transform

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, W.D. Jr. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1995-12-31

    The authors present the W-transform for a multiresolution signal decomposition. One of the differences between the wavelet transform and the W-transform is that the W-transform leads to a nonorthogonal signal decomposition. Another difference between the two is the manner in which the W-transform handles the endpoints (boundaries) of the signal. This approach does not restrict the length of the signal to be a power of two. Furthermore, it does not call for the extension of the signal; thus, the W-transform is a convenient tool for image compression. The authors present the basic theory behind the W-transform and include experimental simulations to demonstrate its capabilities.

  4. 22nd International Conference on Domain Decomposition Methods

    CERN Document Server

    Gander, Martin; Halpern, Laurence; Krause, Rolf; Pavarino, Luca

    2016-01-01

    These are the proceedings of the 22nd International Conference on Domain Decomposition Methods, which was held in Lugano, Switzerland. With 172 participants from over 24 countries, this conference continued a long-standing tradition of internationally oriented meetings on Domain Decomposition Methods. The book features a well-balanced mix of established and new topics, such as the manifold theory of Schwarz Methods, Isogeometric Analysis, Discontinuous Galerkin Methods, exploitation of modern HPC architectures, and industrial applications. As the conference program reflects, the growing capabilities in terms of theory and available hardware allow increasingly complex non-linear and multi-physics simulations, confirming the tremendous potential and flexibility of the domain decomposition concept.

  5. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  6. Multilevel index decomposition analysis: Approaches and application

    International Nuclear Information System (INIS)

    Xu, X.Y.; Ang, B.W.

    2014-01-01

    With the growing interest in using the technique of index decomposition analysis (IDA) in energy and energy-related emission studies, such as to analyze the impacts of activity structure change or to track economy-wide energy efficiency trends, the conventional single-level IDA may not be able to meet certain needs in policy analysis. In this paper, some limitations of single-level IDA studies which can be addressed through applying multilevel decomposition analysis are discussed; a worked single-level example is sketched below. We then introduce and compare two multilevel decomposition procedures, which are referred to as the multilevel-parallel (M-P) model and the multilevel-hierarchical (M-H) model. The former uses a similar decomposition procedure as in the single-level IDA, while the latter uses a stepwise decomposition procedure. Since the stepwise decomposition procedure is new in the IDA literature, the applicability of the popular IDA methods in the M-H model is discussed and cases where modifications are needed are explained. Numerical examples and application studies using the energy consumption data of the US and China are presented. - Highlights: • We discuss the limitations of single-level decomposition in IDA applied to energy studies. • We introduce two multilevel decomposition models, study their features and discuss how they can address the limitations. • To extend from single-level to multilevel analysis, necessary modifications to some popular IDA methods are discussed. • We further discuss the practical significance of the multilevel models and present examples and cases to illustrate
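
    A worked toy example of a single-level additive LMDI-I decomposition, in the spirit of the IDA methods discussed above (the two-sector data are invented): the activity, structure and intensity effects sum exactly to the observed change in total energy use.

    ```python
    # LMDI-I: weights are logarithmic means of sectoral energy use; the three
    # effects add up exactly to E1_total - E0_total.
    import numpy as np

    def logmean(a, b):
        return a if a == b else (a - b) / np.log(a / b)

    Q0, Q1 = 100.0, 130.0                                    # total activity
    S0, S1 = np.array([0.6, 0.4]), np.array([0.5, 0.5])      # sector shares
    I0, I1 = np.array([2.0, 1.0]), np.array([1.8, 0.9])      # sector intensities

    E0, E1 = Q0 * S0 * I0, Q1 * S1 * I1                      # sectoral energy use
    w = np.array([logmean(e1, e0) for e0, e1 in zip(E0, E1)])

    d_act = np.sum(w * np.log(Q1 / Q0))                      # activity effect
    d_str = np.sum(w * np.log(S1 / S0))                      # structure effect
    d_int = np.sum(w * np.log(I1 / I0))                      # intensity effect
    print("sum of effects:", d_act + d_str + d_int)
    print("observed change:", E1.sum() - E0.sum())           # identical by construction
    ```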

  7. Wavelet/scalar quantization compression standard for fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.

    1996-06-12

    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
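
    A hedged sketch of the wavelet/scalar-quantization idea, not the FBI WSQ specification: a biorthogonal subband decomposition followed by uniform scalar quantization, with coarser steps in the finer subbands. The filter choice, step schedule, and toy image are assumptions.

    ```python
    # Subband decomposition + per-subband uniform scalar quantization.
    import numpy as np
    import pywt

    rng = np.random.default_rng(9)
    img = rng.standard_normal((512, 512)).cumsum(0).cumsum(1)    # stand-in image

    coeffs = pywt.wavedec2(img, "bior4.4", level=5)              # 9/7-type filters
    quantized = [np.round(coeffs[0] / 1.0) * 1.0]                # fine step for approx
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        step = 2.0 * lvl                                         # coarser at finer scales
        quantized.append(tuple(np.round(c / step) * step for c in (cH, cV, cD)))

    rec = pywt.waverec2(quantized, "bior4.4")
    nonzero = sum(int(np.count_nonzero(q)) for q in
                  [quantized[0]] + [c for t in quantized[1:] for c in t])
    print("nonzero coefficients kept:", nonzero, "of", img.size)
    print("RMSE:", np.sqrt(np.mean((img - rec[:512, :512]) ** 2)))
    ```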

  8. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, i.e., statistics computed following passive-tracer trajectories. The numerical method combined an Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. Lagrangian probability density functions (p.d.f.'s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part from the compressing part of the flow, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No.2009CB724101.

  9. 46 CFR 112.50-7 - Compressed air starting.

    Science.gov (United States)

    2010-10-01

    46 CFR 112.50-7 (Shipping, 2010-10-01) - Compressed air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...

  10. A review of lossless audio compression standards and algorithms

    Science.gov (United States)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and higher storage demand. This paper analyses various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; other prediction methods are nevertheless compared for verification. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.
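
    Since LPC is the focus above, here is a brief sketch of the core step: fitting order-p predictor coefficients from the autocorrelation via a Toeplitz solve and inspecting the prediction residual that a lossless coder would entropy-code. The AR toy signal and order are assumptions.

    ```python
    # LPC normal equations R a = r, solved with SciPy's Toeplitz solver.
    import numpy as np
    from scipy.linalg import solve_toeplitz

    rng = np.random.default_rng(10)
    n, p = 4096, 8
    x = np.zeros(n)                               # toy "audio": an AR-like signal
    for i in range(2, n):
        x[i] = 1.6 * x[i-1] - 0.8 * x[i-2] + 0.1 * rng.standard_normal()

    r = np.correlate(x, x, mode="full")[n-1 : n-1+p+1]   # autocorrelation lags 0..p
    a = solve_toeplitz(r[:p], r[1:p+1])                  # LPC coefficients

    pred = np.zeros(n)
    for i in range(p, n):
        pred[i] = a @ x[i-p:i][::-1]              # predict from the past p samples
    residual = x - pred
    print("residual/signal energy:", residual[p:].var() / x[p:].var())
    ```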

  11. The FBI compression standard for digitized fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C.M.; Bradley, J.N. [Los Alamos National Lab., NM (United States); Onyshczak, R.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1996-10-01

    The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.

  12. Non-US data compression and coding research. FASAC Technical Assessment Report

    Energy Technology Data Exchange (ETDEWEB)

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  13. Multiple Shooting and Time Domain Decomposition Methods

    CERN Document Server

    Geiger, Michael; Körkel, Stefan; Rannacher, Rolf

    2015-01-01

    This book offers a comprehensive collection of the most advanced numerical techniques for the efficient and effective solution of simulation and optimization problems governed by systems of time-dependent differential equations. The contributions present various approaches to time domain decomposition, focusing on multiple shooting and parareal algorithms.  The range of topics covers theoretical analysis of the methods, as well as their algorithmic formulation and guidelines for practical implementation. Selected examples show that the discussed approaches are mandatory for the solution of challenging practical problems. The practicability and efficiency of the presented methods is illustrated by several case studies from fluid dynamics, data compression, image processing and computational biology, giving rise to possible new research topics.  This volume, resulting from the workshop Multiple Shooting and Time Domain Decomposition Methods, held in Heidelberg in May 2013, will be of great interest to applied...

  14. The wavelet/scalar quantization compression standard for digital fingerprint images

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1994-04-01

    A new digital image compression standard has been adopted by the US Federal Bureau of Investigation for use on digitized gray-scale fingerprint images. The algorithm is based on adaptive uniform scalar quantization of a discrete wavelet transform image decomposition and is referred to as the wavelet/scalar quantization standard. The standard produces archival quality images at compression ratios of around 20:1 and will allow the FBI to replace their current database of paper fingerprint cards with digital imagery.

  15. Mixed raster content segmentation, compression, transmission

    CERN Document Server

    Pavlidis, George

    2017-01-01

    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  16. Modelling for Fuel Optimal Control of a Variable Compression Engine

    OpenAIRE

    Nilsson, Ylva

    2007-01-01

    Variable compression engines are a means to meet the demand for lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with a fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both the ignition angle and the compression ratio can be adjusted. The central question is thus for what combination of compression ra...

  17. Projection decomposition algorithm for dual-energy computed tomography via deep neural network.

    Science.gov (United States)

    Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei

    2018-03-15

    Dual-energy computed tomography (DECT) has been widely used to improve the identification of substances from different spectral information. Decomposition of mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimation of the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, substantially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net with a two-layer structure fits the nonlinear transform between the energy projection and the basis material thicknesses. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN costs only 0.4 s to generate a decomposition solution at a 360 × 512 size scale, which is about 200 times faster than competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. Thus, the deep learning paradigm provides a promising approach to solving the nonlinear problem in DECT.
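
    A structural PyTorch sketch of the two-sub-net idea: a compressing sub-net standing in for the stacked autoencoder, followed by a small decomposing sub-net mapping the code to two basis-material thicknesses. Layer sizes and widths are illustrative assumptions, not the paper's architecture.

    ```python
    import torch
    import torch.nn as nn

    class ProjectionDecompositionNet(nn.Module):
        def __init__(self, n_energies=2, code_dim=16, hidden=64):
            super().__init__()
            # Compressing sub-net: compact representation of spectral measurements.
            self.compress = nn.Sequential(
                nn.Linear(n_energies, hidden), nn.ReLU(),
                nn.Linear(hidden, code_dim), nn.ReLU(),
            )
            # Decomposing sub-net: two layers fitting the nonlinear map from the
            # code to the two basis-material thicknesses.
            self.decompose = nn.Sequential(
                nn.Linear(code_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),
            )

        def forward(self, projections):
            return self.decompose(self.compress(projections))

    net = ProjectionDecompositionNet()
    dual_energy = torch.rand(8, 2)            # batch of dual-energy projection pairs
    thickness = net(dual_energy)              # predicted basis-material thicknesses
    print(thickness.shape)                    # torch.Size([8, 2])
    ```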

  18. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, building on existing experimental results and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the inner correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  19. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, building on existing experimental results and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including the CSF characteristics, to the inner correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  20. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
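
    A serial NumPy sketch of a truncated higher-order SVD (HOSVD), the kind of Tucker compression described above; the paper's contribution, the distributed-memory implementation, is not reproduced, and the tensor sizes and ranks are toy assumptions.

    ```python
    # Truncated HOSVD: leading left singular vectors of each mode unfolding
    # give the factor matrices; contracting them with the tensor gives the core.
    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    rng = np.random.default_rng(11)
    shape, ranks = (40, 50, 60), (5, 5, 5)
    core = rng.standard_normal(ranks)          # toy low-multilinear-rank tensor
    U_true = [np.linalg.qr(rng.standard_normal((s, r)))[0] for s, r in zip(shape, ranks)]
    T = np.einsum("abc,ia,jb,kc->ijk", core, *U_true) + 1e-3*rng.standard_normal(shape)

    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    G = np.einsum("ijk,ia,jb,kc->abc", T, *U)              # core tensor
    T_hat = np.einsum("abc,ia,jb,kc->ijk", G, *U)          # reconstruction

    stored = G.size + sum(u.size for u in U)
    print("compression ratio:", T.size / stored)
    print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))
    ```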

  1. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble-sort algorithm. For objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.

  2. SVD application in image and data compression - Some case studies in oceanography (Developed in MATLAB)

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, T.V.R.; Rao, M.M.M.; SuryaPrakash, S.; Chandramouli, P.; Murthy, K.S.R.

    An integrated, user-friendly, interactive multiple Ocean Application Package has been developed utilizing the well-known statistical technique called Singular Value Decomposition (SVD) to achieve image and data compression in the MATLAB environment...

  3. An Image Compression Scheme in Wireless Multimedia Sensor Networks Based on NMF

    Directory of Open Access Journals (Sweden)

    Shikang Kong

    2017-02-01

    With the goal of addressing the issue of image compression in wireless multimedia sensor networks with high recovered quality and low energy consumption, an image compression and transmission scheme based on non-negative matrix factorization (NMF) is proposed in this paper. First, the NMF algorithm theory is studied. Then, a collaborative mechanism of image capture, blocking, compression and transmission is completed. Camera nodes capture images and send them to ordinary nodes, which use an NMF algorithm for image compression. The cluster head node receives the compressed images from the ordinary nodes and transmits them to the station, which performs image restoration. Simulation results show that, compared with the JPEG2000 and singular value decomposition (SVD) compression schemes, the proposed scheme yields higher-quality recovered images and lower total node energy consumption. It reduces the energy-consumption burden and prolongs the life of the whole network system, which has great significance for practical applications of WMSNs.
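
    A minimal NMF sketch with Lee-Seung multiplicative updates under the Frobenius loss, illustrating what an ordinary node would compute: the factors W and H replace the image block X in transmission. The block, rank, and iteration count are assumptions, not the paper's settings.

    ```python
    # Multiplicative-update NMF: X ~ W @ H with W, H >= 0.
    import numpy as np

    rng = np.random.default_rng(12)
    X = np.abs(rng.standard_normal((64, 64))).cumsum(0)  # nonnegative toy block
    X /= X.max()

    r, eps = 8, 1e-9
    W = np.abs(rng.standard_normal((64, r)))
    H = np.abs(rng.standard_normal((r, 64)))
    for _ in range(200):
        H *= (W.T @ X) / (W.T @ W @ H + eps)             # update H, keep W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)             # update W, keep H fixed

    err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print(f"rank-{r} NMF: {X.size} values -> {W.size + H.size}, rel. error {err:.3f}")
    ```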

  4. 2nd International MATHEON Conference on Compressed Sensing and its Applications

    CERN Document Server

    Caire, Giuseppe; Calderbank, Robert; März, Maximilian; Kutyniok, Gitta; Mathar, Rudolf

    2017-01-01

    This contributed volume contains articles written by the plenary and invited speakers from the second international MATHEON Workshop 2015 that focus on applications of compressed sensing. Article authors address their techniques for solving the problems of compressed sensing, as well as connections to related areas like detecting community-like structures in graphs, cubatures on Grassmannians, and randomized tensor train singular value decompositions. Some of the novel applications covered include dimensionality reduction, information theory, random matrices, sparse approximation, and sparse recovery. This book is aimed at both graduate students and researchers in the areas of applied mathematics, computer science, and engineering, as well as other applied scientists exploring the potential applications of the novel methodology of compressed sensing. An introduction to the subject of compressed sensing is also provided for researchers interested in the field who are not as familiar with it.

  5. A stable penalty method for the compressible Navier-Stokes equations: II: One-dimensional domain decomposition schemes

    DEFF Research Database (Denmark)

    Hesthaven, Jan

    1997-01-01

    This paper presents asymptotically stable schemes for patching of nonoverlapping subdomains when approximating the compressible Navier-Stokes equations given on conservation form. The scheme is a natural extension of a previously proposed scheme for enforcing open boundary conditions, and as a result the patching of subdomains is local in space. The scheme is studied in detail for Burgers's equation and developed for the compressible Navier-Stokes equations in general curvilinear coordinates. The versatility of the proposed scheme for the compressible Navier-Stokes equations is illustrated...

  6. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity, thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all requirements. The objective of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, one that would also be applicable to other software packages with similar data compression needs.
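
    A minimal harness of the kind such an evaluation implies is sketched below, comparing general-purpose codecs from the Python standard library on a raw byte stream by ratio and speed; the test buffer and the candidate set are stand-ins, not the codecs considered in the paper.

    # Sketch: comparing general-purpose codecs on raw bytes by ratio and speed.
    import bz2, lzma, time, zlib
    import numpy as np

    data = np.arange(2**18, dtype=np.int32).tobytes()    # stand-in for a CDF byte stream

    for name, codec in [("zlib", zlib), ("bz2", bz2), ("lzma", lzma)]:
        t0 = time.perf_counter()
        packed = codec.compress(data)
        elapsed = time.perf_counter() - t0
        assert codec.decompress(packed) == data          # lossless round trip
        print(f"{name}: ratio {len(data) / len(packed):.1f}, {elapsed:.3f} s")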

  7. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    Science.gov (United States)

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information in the available data for particular purposes. Although approaches in different fields to address these two questions may differ significantly, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of large spatio-temporal datasets. The original MEEMD uses ensemble empirical mode decomposition to decompose the time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the expression using principal component analysis/empirical orthogonal function analysis for spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principles behind the fast MEEMD, which decomposes principal components instead of the original grid-wise time series to speed up the computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data at a compression rate of one to two orders of magnitude; and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
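
    The compression half of that idea can be sketched in a few lines: project a space-time field onto its leading empirical orthogonal functions and keep only those modes. The field below is synthetic and the mode count is an illustrative assumption.

    # Sketch: lossy PCA/EOF compression of a space-time field (illustrative sizes).
    import numpy as np

    rng = np.random.default_rng(1)
    field = rng.standard_normal((500, 2000))       # 500 time steps x 2000 grid points

    mean = field.mean(axis=0)
    U, s, Vt = np.linalg.svd(field - mean, full_matrices=False)

    k = 20                                         # keep the k leading EOF modes
    pcs = U[:, :k] * s[:k]                         # principal-component time series
    eofs = Vt[:k]                                  # spatial patterns
    approx = pcs @ eofs + mean                     # lossy reconstruction

    stored = pcs.size + eofs.size + mean.size
    print(f"compression ratio ~{field.size / stored:.0f}")
    # The fast MEEMD then runs EMD on the k PC time series rather than on
    # all 2000 grid-point series, which is where the speed-up comes from.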

  8. Subband directional vector quantization in radiological image compression

    Science.gov (United States)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images with directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. The method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For aliasing-free and boundary-error-free decomposition/reconstruction of the image, we use an ideal band-pass filter bank implemented in the discrete cosine transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

  9. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. [Los Alamos National Lab., NM (United States); Hopper, T. [Federal Bureau of Investigation, Washington, DC (United States)

    1993-05-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.
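
    A toy version of the decompose/quantize/encode pipeline is sketched below with PyWavelets, using an ordinary dyadic decomposition and a single uniform quantization step; the actual standard prescribes a specific 64-subband structure, per-subband bin widths and Huffman tables that are not reproduced here.

    # Sketch: wavelet subband decomposition + uniform scalar quantization.
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    img = rng.random((128, 128))                     # stand-in for a fingerprint image

    coeffs = pywt.wavedec2(img, 'bior4.4', level=4)  # subband decomposition
    step = 0.05                                      # uniform quantizer (one step for all subbands here)
    quantized = [np.round(coeffs[0] / step)] + [
        tuple(np.round(band / step) for band in level) for level in coeffs[1:]
    ]
    # The integer indices in `quantized` are what a Huffman coder would entropy-code.

    dequantized = [quantized[0] * step] + [
        tuple(band * step for band in level) for level in quantized[1:]
    ]
    restored = pywt.waverec2(dequantized, 'bior4.4')
    print("max abs error:", float(np.abs(restored - img).max()))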

  10. Effects of image compression and degradation on an automatic diabetic retinopathy screening algorithm

    Science.gov (United States)

    Agurto, C.; Barriga, S.; Murray, V.; Pattichis, M.; Soliz, P.

    2010-03-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness among adult Americans. Automatic methods for detection of the disease have been developed in recent years, most of them addressing the segmentation of bright and red lesions. In this paper we present an automatic DR screening system that does not approach the problem through lesion segmentation. The algorithm separates non-diseased retinal images from those with pathology based on textural features obtained using multiscale Amplitude Modulation-Frequency Modulation (AM-FM) decompositions. The decomposition is represented as features that are the inputs to a classifier. The algorithm achieves 0.88 area under the ROC curve (AROC) for a set of 280 images from the MESSIDOR database. The algorithm is then used to analyze the effects of image compression and degradation, which will be present in most actual clinical or screening environments. Results show that the algorithm is insensitive to illumination variations, but high rates of compression and large blurring effects degrade its performance.

  11. Optimal design of compressed air energy storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, F. W.; Sharma, A.; Ragsdell, K. M.

    1979-01-01

    Compressed air energy storage (CAES) power systems are currently being considered by various electric utilities for load-leveling applications. Models of CAES systems which employ natural underground aquifer formations are developed, and an optimal design methodology which demonstrates their economic viability is presented. The approach is based upon a decomposition of the CAES plant and utility grid system into three partially-decoupled subsystems. Numerical results are given for a plant employing the Media, Illinois Galesville aquifer formation.

  12. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here come from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.

  13. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup service has become a hot topic in storage applications. Given the large number of backup users, reducing the massive data load is a key problem for the system designer, and data compression provides a good solution. Traditional data compression applications adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, de-duplication is used to eliminate inter-file redundant data, and neither alone achieves the compression efficiency that backup service software needs. This paper proposes a novel hybrid compression approach with two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file compression. Several compression algorithms were adopted to measure the compression ratio and CPU time, and the suitability of different algorithms in particular situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.

  14. SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-12-01

    The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.

  15. Decomposing changes in life expectancy: Compression versus shifting mortality

    Directory of Open Access Journals (Sweden)

    Marie-Pier Bergeron-Boucher

    2015-09-01

    Full Text Available Background: In most developed countries, mortality reductions in the first half of the 20th century were highly associated with changes in lifespan disparities. In the second half of the 20th century, changes in mortality are best described by a shift in the mortality schedule, with lifespan variability remaining nearly constant. These successive mortality dynamics are known as compression and shifting mortality, respectively. Objective: To understand the effect of compression and shifting dynamics on mortality changes, we quantify the gains in life expectancy due to changes in lifespan variability and changes in the mortality schedule, respectively. Methods: We introduce a decomposition method using newly developed parametric expressions of the force of mortality that include the modal age at death as one of their parameters. Our approach allows us to differentiate between the two underlying processes in mortality and their dynamics. Results: An application of our methodology to the mortality of Swedish females shows that, since the mid-1960s, shifts in the mortality schedule were responsible for more than 70% of the increase in life expectancy. Conclusions: The decomposition method allows differentiation between both underlying mortality processes and their respective impact on life expectancy, and also determines when and how one process has replaced the other.

  16. Real-Time Mobile Device-Assisted Chest Compression During Cardiopulmonary Resuscitation.

    Science.gov (United States)

    Sarma, Satyam; Bucuti, Hakiza; Chitnis, Anurag; Klacman, Alex; Dantu, Ram

    2017-07-15

    Prompt administration of high-quality cardiopulmonary resuscitation (CPR) is a key determinant of survival from cardiac arrest. Strategies to improve CPR quality at the point of care could improve resuscitation outcomes. We tested whether a low-cost and scalable mobile phone- or smart watch-based solution could provide accurate measures of compression depth and rate during simulated CPR. Fifty health care providers (58% intensive care unit nurses) performed simulated CPR on a calibrated training manikin (Resusci Anne, Laerdal) while wearing both devices. Subjects received real-time audiovisual feedback from each device sequentially. The primary outcome was accuracy of compression depth and rate compared with the calibrated training manikin. The secondary outcome was improvement in CPR quality, defined as meeting both the guideline-recommended compression depth (5 to 6 cm) and rate (100 to 120/minute). Compared with the training manikin, typical error for compression depth was small for both devices, and the proportion of sessions meeting guideline recommendations did not differ significantly with mobile device feedback (60% vs 50%; p = 0.3). Sessions that did not meet guideline recommendations failed primarily because of inadequate compression depth (46 ± 2 mm). In conclusion, mobile device application-guided CPR can accurately track compression depth and rate during simulation in a practice environment in accordance with resuscitation guidelines. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-09-01

    Full Text Available The traditional approaches to condition monitoring of roller bearings almost always operate under Shannon sampling theorem conditions, leading to a big-data problem. Compressed sensing (CS) theory provides a new solution to the big-data problem. However, vibration signals are insufficiently sparse and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. It is therefore of great significance to promote sparsity when applying CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method, the tunable Q-factor wavelet transform, is utilized in this work to decompose the analyzed signals into transient impact components and high-oscillation components. The former are sparser than the raw signals, with noise eliminated, whereas the latter retain the noise. Thus, the decomposed transient impact components replace the original signals for analysis. CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed once the components at the frequencies of interest are detected, and the fault diagnosis can be achieved during the reconstruction procedure. The application cases show that CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples.
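
    The recovery step that CS theory contributes can be illustrated generically: sample a sparse signal with a random matrix and recover it by orthogonal matching pursuit. This is a minimal stand-in for the paper's pipeline; the sizes, the sensing matrix, and the use of OMP are all assumptions for illustration.

    # Sketch: compressed sampling + sparse recovery (generic CS, not the paper's pipeline).
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(3)
    n, m, k = 512, 128, 10                 # signal length, compressed samples, sparsity

    x = np.zeros(n)                        # sparse stand-in for transient impact components
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = Phi @ x                                      # compressed measurements

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
    omp.fit(Phi, y)                                  # recover without full Nyquist sampling
    print("relative recovery error:",
          np.linalg.norm(omp.coef_ - x) / np.linalg.norm(x))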

  18. The FBI wavelet/scalar quantization standard for gray-scale fingerprint image compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M. (Los Alamos National Lab., NM (United States)); Hopper, T. (Federal Bureau of Investigation, Washington, DC (United States))

    1993-01-01

    The FBI has recently adopted a standard for the compression of digitized 8-bit gray-scale fingerprint images. The standard is based on scalar quantization of a 64-subband discrete wavelet transform decomposition of the images, followed by Huffman coding. Novel features of the algorithm include the use of symmetric boundary conditions for transforming finite-length signals and a subband decomposition tailored for fingerprint images scanned at 500 dpi. The standard is intended for use in conjunction with ANSI/NBS-CLS 1-1993, American National Standard Data Format for the Interchange of Fingerprint Information, and the FBI's Integrated Automated Fingerprint Identification System.

  19. COMPOSITE POLYMERIC ADDITIVES DESIGNATED FOR CONCRETE MIXES BASED ON POLYACRYLATES, PRODUCTS OF THERMAL DECOMPOSITION OF POLYAMIDE-6 AND LOW-MOLECULAR POLYETHYLENE

    Directory of Open Access Journals (Sweden)

    Polyakov Vyacheslav Sergeevich

    2012-07-01

    The optimal composite additive, which extends the stiffening time of the cement grout and improves the water resistance and compressive strength of the concrete, is a composition of polyacrylates and polymethacrylates, products of thermal decomposition of polyamide-6, and low-molecular polyethylene in a weight ratio of 1:1:0.5.

  20. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power, the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of the choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included.

  1. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring settings. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, is set to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors are finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data and with 60 normal and 30 diagnostic ECG records from the PTB Diagnostic ECG database (ptbdb), all sampled at 1 kHz. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22% and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13% and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published work on quality-controlled ECG compression.
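
    The error-control loop described above can be sketched directly: build a matrix of aligned beats, take its PCA via the SVD, and keep the smallest number of components that meets a PRD limit. The synthetic beats are an assumption; the 5% threshold is borrowed from the EC setting in the abstract.

    # Sketch: PCA compression of an ECG beat matrix with an error-control criterion.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 1, 200)
    beats = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal((500, 200))  # aligned beats

    mean = beats.mean(axis=0)
    U, s, Vt = np.linalg.svd(beats - mean, full_matrices=False)

    for k in range(1, len(s) + 1):                 # smallest k satisfying the EC criterion
        approx = (U[:, :k] * s[:k]) @ Vt[:k] + mean
        prd = 100 * np.linalg.norm(beats - approx) / np.linalg.norm(beats)
        if prd < 5.0:
            break

    stored = U[:, :k].size + k + Vt[:k].size + mean.size
    print(f"k={k} components, PRD={prd:.2f}%, CR~{beats.size / stored:.1f}")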

  2. Revisiting the Fundamentals and Capabilities of the Stack Compression Test

    DEFF Research Database (Denmark)

    Alves, L.M.; Nielsen, Chris Valentin; Martin, P.A.F.

    2011-01-01

    The performance of the stack compression test is assessed by comparing the flow curves obtained from its utilisation with those determined by means of compressive testing carried out on solid cylinder specimens of the same material. Results show that mechanical testing of materials by means of the stack compression test is capable of meeting the increasing demand for accurate and reliable flow curves for sheet metals.

  3. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video, which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, whereby noise-like regions in the bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as the 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, wavelet coefficients in the discrete wavelet transformed video are quantized into a bit-plane structure, and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested, which are the integration of 3-D SPIHT video coding with BPCS steganography, and of Motion-JPEG2000 with BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for a twelve-bit representation of wavelet coefficients with no noticeable degradation in video quality.
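
    The bit-plane substitution at the heart of BPCS can be shown in isolation: replace one bit-plane of quantized coefficients with payload bits and read it back. The complexity segmentation that decides which regions are safe to overwrite is omitted, and the arrays below are stand-ins.

    # Sketch: replacing one bit-plane with secret data (BPCS without segmentation).
    import numpy as np

    rng = np.random.default_rng(5)
    coeffs = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # stand-in quantized coefficients
    secret = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)     # payload bits

    plane = 0                                         # least significant bit-plane
    mask = np.uint8(~(1 << plane) & 0xFF)
    stego = (coeffs & mask) | (secret << plane)       # overwrite the chosen plane

    extracted = (stego >> plane) & 1
    assert np.array_equal(extracted, secret)          # payload survives exactly
    print("max coefficient change:",
          int(np.abs(stego.astype(int) - coeffs.astype(int)).max()))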

  4. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. It is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the catalytic ozone decomposition reaction is discussed, based on detailed spectroscopic investigations of the catalytic surface showing the existence of peroxide and superoxide surface intermediates.

  5. A Robust Color Image Watermarking Scheme Using Entropy and QR Decomposition

    Directory of Open Access Journals (Sweden)

    L. Laur

    2015-12-01

    Full Text Available The Internet has affected our everyday life drastically. Expansive volumes of information are exchanged over the Internet constantly, which causes numerous security concerns. Issues like content identification, document and image security, audience measurement, ownership, and copyright can be addressed by digital watermarking. In this work, a robust and imperceptible non-blind color image watermarking algorithm is proposed, which benefits from the fact that the watermark can be hidden in different color channels, making the technique more robust to attacks. The method uses algorithms such as entropy, the discrete wavelet transform, the chirp z-transform, orthogonal-triangular (QR) decomposition and singular value decomposition in order to embed the watermark in a color image. Many experiments are performed using well-known signal processing attacks such as histogram equalization, added noise and compression. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.

  6. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih

    2017-09-14

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
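
    As a point of reference for the building block named above, here is a serial NumPy sketch of one-sided Jacobi SVD for a single small matrix; the batched GPU kernels and randomized low-rank machinery of the paper are well beyond this illustration.

    # Sketch: one-sided (Hestenes) Jacobi SVD; rotate column pairs until orthogonal.
    import numpy as np

    def jacobi_svd(A, tol=1e-12, max_sweeps=30):
        U = np.array(A, dtype=float)
        n = U.shape[1]
        V = np.eye(n)
        for _ in range(max_sweeps):
            converged = True
            for p in range(n - 1):
                for q in range(p + 1, n):
                    alpha = U[:, p] @ U[:, p]
                    beta = U[:, q] @ U[:, q]
                    gamma = U[:, p] @ U[:, q]
                    if abs(gamma) <= tol * np.sqrt(alpha * beta):
                        continue                      # pair already orthogonal
                    converged = False
                    zeta = (beta - alpha) / (2.0 * gamma)
                    sgn = 1.0 if zeta >= 0 else -1.0
                    t = sgn / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                    c = 1.0 / np.sqrt(1.0 + t * t)
                    R = np.array([[c, c * t], [-c * t, c]])
                    U[:, [p, q]] = U[:, [p, q]] @ R   # rotate the data columns
                    V[:, [p, q]] = V[:, [p, q]] @ R   # accumulate right singular vectors
            if converged:
                break
        sigma = np.linalg.norm(U, axis=0)
        return U / sigma, sigma, V                    # A ≈ U @ np.diag(sigma) @ V.T

    A = np.random.default_rng(6).standard_normal((8, 5))
    U, s, V = jacobi_svd(A)
    print("reconstruction error:", np.linalg.norm((U * s) @ V.T - A))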

  7. Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression

    KAUST Repository

    Halim Boukaram, Wajih; Turkiyyah, George; Ltaief, Hatem; Keyes, David E.

    2017-01-01

    We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.

  8. Full-frame compression of discrete wavelet and cosine transforms

    Science.gov (United States)

    Lo, Shih-Chung B.; Li, Huai; Krasner, Brian; Freedman, Matthew T.; Mun, Seong K.

    1995-04-01

    At the foreground of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on the full-frame discrete wavelet transform (FFDWT) and full-frame discrete cosine transform (FFDCT) for medical image compression. Prior to coding, it is important to evaluate the global entropy in the decomposed space, because maximum compression efficiency is achieved at minimum entropy. In this study, each image was split into an image of the top three most significant bits (3MSB) and a remapped image of the remaining least significant bits (RLSB). The 3MSB image was compressed by an error-free contour coding at an average of 0.1 bit/pixel. The RLSB image was transformed to either the multi-channel wavelet or the cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and used for the study. Our results indicate that the coding scheme in the FFDCT domain performed better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that the decomposition efficiency in the DCT domain for relatively smooth images is higher than that in the DWT domain; both schemes, however, worked equally well for low-resolution digital images. We also found that the image characteristics of the 'Lena' image commonly used in the compression literature are very different from those of radiological images, so the compression outcome for radiological images cannot be extrapolated from compression results based on 'Lena'.
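
    The entropy evaluation that drives the comparison can be sketched as follows: transform an image, quantize the coefficients uniformly, and measure the first-order entropy of the resulting symbols in each domain. The toy image, quantization step and wavelet are assumptions, not the paper's settings.

    # Sketch: first-order entropy of quantized DCT vs DWT coefficients.
    import numpy as np
    import pywt
    from scipy.fft import dctn

    def entropy_bits(values, step):
        symbols = np.round(values / step).astype(int).ravel()
        _, counts = np.unique(symbols, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())        # bits per coefficient

    x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
    img = np.sin(6 * x + 4 * y ** 2)                 # smooth stand-in image

    dct_coeffs = dctn(img, norm='ortho')             # full-frame DCT
    dwt_arr, _ = pywt.coeffs_to_array(pywt.wavedec2(img, 'db4', level=3))

    print(f"DCT entropy: {entropy_bits(dct_coeffs, 0.01):.2f} bits/coef")
    print(f"DWT entropy: {entropy_bits(dwt_arr, 0.01):.2f} bits/coef")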

  9. A practical material decomposition method for x-ray dual spectral computed tomography.

    Science.gov (United States)

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximate, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurements. The method first derives the desired consistent rawdata sets from the measured inconsistent ones, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated with simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.

  10. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast and modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  11. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods do not achieve a ratio below 1.72 bits/base.
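
    The baseline idea behind such bit-code schemes, a fixed 2-bit code per base, fits in a few lines; DNABIT Compress's additional codes for exact and reverse repeats are not reproduced here.

    # Sketch: fixed 2-bit code per base, packing 4 bases per byte (repeat codes not shown).
    CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
    BASE = {v: k for k, v in CODE.items()}

    def pack(seq):
        out = bytearray()
        for i in range(0, len(seq), 4):
            chunk = seq[i:i + 4]
            byte = 0
            for base in chunk:
                byte = (byte << 2) | CODE[base]
            byte <<= 2 * (4 - len(chunk))          # left-align a short tail
            out.append(byte)
        return bytes(out)

    def unpack(data, n):
        bases = []
        for byte in data:
            for shift in (6, 4, 2, 0):
                bases.append(BASE[(byte >> shift) & 0b11])
        return ''.join(bases[:n])

    seq = "ACGTACGGTTAC"
    assert unpack(pack(seq), len(seq)) == seq      # 2 bits/base instead of 8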

  12. History of the APS Topical Group on Shock Compression of Condensed Matter

    International Nuclear Information System (INIS)

    Forbes, J W

    2001-01-01

    In order to provide broader scientific recognition and to advance the science of shock-compressed condensed matter, a group of American Physical Society (APS) members worked within the Society to make this field an active part of the APS. Individual papers were presented at APS meetings starting in the 1940s, and shock wave sessions were organized starting with the 1967 Pasadena meeting. Shock wave topical conferences began in 1979 in Pullman, WA. Signatures were obtained on a petition in 1984 from a balanced cross-section of the shock wave community to form an APS Topical Group (TG). The APS Council officially accepted the formation of the Shock Compression of Condensed Matter (SCCM) TG at its October 1984 meeting. This action firmly aligned the shock wave field with a major physical science organization. Most early topical conferences were sanctioned by the APS, while those held after 1992 were official APS meetings. The topical group organizes a shock wave topical conference in odd-numbered years while participating in shock wave/high pressure sessions at APS general meetings in even-numbered years.

  13. Coresident sensor fusion and compression using the wavelet transform

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.

    1996-03-11

    Imagery from coresident sensor platforms, such as unmanned aerial vehicles, can be combined using multiresolution decomposition of the sensor images by means of the two-dimensional wavelet transform. The wavelet approach uses the combination of spatial/spectral information at multiple scales to create a fused image, and can be applied in either an ad hoc or a model-based fashion. We compare results from commercial "fusion" software and the ad hoc wavelet approach. Results show the wavelet approach outperforms the commercial algorithms and also supports efficient compression of the fused image.

  14. Managing Soil Biota-Mediated Decomposition and Nutrient Mineralization in Sustainable Agroecosystems

    Directory of Open Access Journals (Sweden)

    Joann K. Whalen

    2014-01-01

    Full Text Available Transformation of organic residues into plant-available nutrients occurs through decomposition and mineralization and is mediated by saprophytic microorganisms and fauna. Of particular interest is the recycling of the essential plant elements—N, P, and S—contained in organic residues. If organic residues can supply sufficient nutrients during crop growth, a reduction in fertilizer use is possible. The challenge is synchronizing nutrient release from organic residues with crop nutrient demands throughout the growing season. This paper presents a conceptual model describing the pattern of nutrient release from organic residues in relation to crop nutrient uptake. Next, it explores experimental approaches to measure the physical, chemical, and biological barriers to decomposition and nutrient mineralization. Methods are proposed to determine the rates of decomposition and nutrient release from organic residues. Practically, this information can be used by agricultural producers to determine if plant-available nutrient supply is sufficient to meet crop demands at key growth stages or whether additional fertilizer is needed. Finally, agronomic practices that control the rate of soil biota-mediated decomposition and mineralization, as well as those that facilitate uptake of plant-available nutrients, are identified. Increasing reliance on soil biological activity could benefit crop nutrition and health in sustainable agroecosystems.

  15. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX) at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam's space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse-shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been developed.

  16. Exact Theory of Compressible Fluid Turbulence

    Science.gov (United States)

    Drivas, Theodore; Eyink, Gregory

    2017-11-01

    We obtain exact results for compressible turbulence with any equation of state, using coarse-graining/filtering. We find two mechanisms of turbulent kinetic energy dissipation: scale-local energy cascade and "pressure-work defect", or pressure-work at viscous scales exceeding that in the inertial range. Planar shocks in an ideal gas dissipate all kinetic energy by the pressure-work defect, but the effect is omitted by standard LES modeling of pressure-dilatation. We also obtain a novel inverse cascade of thermodynamic entropy, injected by microscopic entropy production, cascaded upscale, and removed by large-scale cooling. This nonlinear process is missed by the Kovasznay linear mode decomposition, which treats entropy as a passive scalar. For small Mach number we recover the incompressible "negentropy cascade" predicted by Obukhov. We derive exact Kolmogorov 4/5th-type laws for energy and entropy cascades, constraining scaling exponents of velocity, density, and internal energy to sub-Kolmogorov values. Although the precise exponents and detailed physics are Mach-dependent, our exact results hold at all Mach numbers. Flow realizations at infinite Reynolds number are "dissipative weak solutions" of the compressible Euler equations, similarly as Onsager proposed for incompressible turbulence.
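
    For orientation (and not as a result of this work), the classical incompressible prototype of the "4/5th-type laws" mentioned above is Kolmogorov's 4/5 law for the third-order longitudinal structure function:

        \langle \delta u_L(r)^3 \rangle = -\tfrac{4}{5}\, \varepsilon\, r,

    where \delta u_L(r) is the longitudinal velocity increment across a separation r in the inertial range and \varepsilon is the mean kinetic energy dissipation rate per unit mass.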

  17. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominates the memory cost and should be the focus of any solution to the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a strategy of 'divide and rule': problems are divided into different sub-domains to be dealt with separately, and rules are established to make sure the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method, while the tally data decomposition algorithms greatly reduce memory size.

  18. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze trabecular architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of radiographic femur bone images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as the multiquadric radial basis function and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent the architectural variations of femur bone radiographic images. As the strength of bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  19. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods do not achieve a ratio below 1.72 bits/base. PMID:21383923

  20. Thermal decomposition of biphenyl (1963); Decomposition thermique du biphenyle (1963)

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1962-06-15

    The rates of formation of the decomposition products of biphenyl (hydrogen, methane, ethane, ethylene, as well as the triphenyls) have been measured in the vapour and liquid phases at 460 °C. The study of the decomposition products of biphenyl at different temperatures between 400 and 460 °C has provided values of the activation energies of the reactions yielding the main products of pyrolysis in the vapour phase. Product and activation energy: hydrogen, 73 ± 2 kcal/mole; benzene, 76 ± 2 kcal/mole; meta-triphenyl, 53 ± 2 kcal/mole; biphenyl decomposition, 64 ± 2 kcal/mole. The rate of disappearance of biphenyl is only very approximately first order. These results show the major role played at the start of the decomposition by organic impurities which are not detectable by conventional physico-chemical analysis methods and whose presence noticeably accelerates the decomposition rate. It was possible to eliminate these impurities by zone-melting carried out until the initial gradient of the formation curves for the products became constant. The composition of the high-molecular-weight products (over 250) was deduced from the mean molecular weight and the assay of aromatic C-H bonds by infrared spectrophotometry. As a result, the existence in the tars of hydrogenated tetra-, penta- and hexaphenyls has been demonstrated. (author)

  1. Simulation of two-phase flows by domain decomposition

    International Nuclear Information System (INIS)

    Dao, T.H.

    2013-01-01

    This thesis deals with numerical simulations of compressible fluid flows by implicit finite volume methods. First, we studied and implemented an implicit version of the Roe scheme for compressible single-phase and two-phase flows. Thanks to the Newton method for solving nonlinear systems, our schemes are conservative. Unfortunately, the resolution of nonlinear systems is very expensive, so it is essential to use an efficient algorithm to solve them. For large matrices, we often use iterative methods whose convergence depends on the spectrum. We studied the spectrum of the linear system and proposed a strategy, called Scaling, to improve the condition number of the matrix. Combined with the classical ILU preconditioner, our strategy significantly reduced the number of GMRES iterations for local systems and the computation time. We also show some satisfactory results for low Mach-number flows using the implicit centered scheme. We then studied and implemented a domain decomposition method for compressible fluid flows. We proposed a new interface variable which makes the Schur complement method easy to build and allows us to treat diffusion terms. Using the GMRES iterative solver rather than Richardson iteration for the interface system also provides better performance compared to other methods. We can decompose the computational domain into any number of sub-domains, and the Scaling strategy applied to the interface system improved the condition number of the matrix and reduced the number of GMRES iterations. In comparison with classical distributed computing, we showed that our method is more robust and efficient. (author)
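
    One plausible reading of such a Scaling strategy, symmetric diagonal scaling applied before a Krylov solve, is sketched below with SciPy; the thesis' exact transformation and test matrices are not reproduced.

    # Sketch: diagonal scaling to improve conditioning before GMRES (illustrative matrix).
    import numpy as np
    from scipy.sparse import diags, random as sprandom
    from scipy.sparse.linalg import gmres

    rng = np.random.default_rng(7)
    n = 400
    A = sprandom(n, n, density=0.01, random_state=7, format='csr')
    A = A + diags(np.linspace(1.0, 1e4, n))        # badly scaled diagonal
    b = rng.standard_normal(n)

    d = 1.0 / np.sqrt(A.diagonal())                # symmetric diagonal (Jacobi-like) scaling
    D = diags(d)
    As, bs = D @ A @ D, D @ b                      # solve (D A D) y = D b, then x = D y

    counts = {"plain": 0, "scaled": 0}
    def counter(key):
        def cb(res_norm):
            counts[key] += 1
        return cb

    x, _ = gmres(A, b, rtol=1e-8, callback=counter("plain"), callback_type='pr_norm')
    y, _ = gmres(As, bs, rtol=1e-8, callback=counter("scaled"), callback_type='pr_norm')
    x_scaled = d * y

    print("GMRES iterations:", counts)
    print("residual (scaled solve):", np.linalg.norm(A @ x_scaled - b))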

  2. Metronome Use for Coordination of Breaths and Cardiac Compressions Delivered by Minimally-Trained Caregivers During Two-Person CPR

    Science.gov (United States)

    Hurst, Victor, IV; West, Sarah; Austin, Paul; Branson, Richard; Beck, George

    2005-01-01

    Astronaut crew medical officers (CMOs) aboard the International Space Station (ISS) receive 40 hours of medical training over 18 months before each mission, including two-person cardiopulmonary resuscitation (2CPR) as recommended by the American Heart Association (AHA). Recent studies have concluded that the use of metronomic tones improves the coordination of 2CPR by trained clinicians; 2CPR performance data for minimally-trained caregivers have been limited. The goal of this study was to determine whether use of a metronome by minimally-trained caregivers (CMO analogues) would improve 2CPR performance. Twenty pairs of minimally-trained caregivers certified in 2CPR via AHA guidelines performed 2CPR for 4 minutes on an instrumented manikin using 3 interventions: 1) standard 2CPR without a metronome [NONE], 2) standard 2CPR plus a metronome coordinating the compression rate only [MET], 3) standard 2CPR plus a metronome coordinating both the compression rate and the ventilation rate [BOTH]. Caregivers were evaluated for their ability to meet the AHA guideline of 32 breaths and 240 compressions in 4 minutes. All (100%) caregivers using the BOTH intervention provided the required number of ventilation breaths, compared with 10% of NONE caregivers and 0% of MET caregivers. For compressions, 97.5% of the BOTH caregivers were not successful in meeting the AHA compression guideline; however, an average of 238 of the desired 240 compressions were completed. None of the caregivers were successful in meeting the compression guideline using the NONE and MET interventions. This study demonstrates that the use of metronomic tones by minimally-trained caregivers to coordinate both compressions and breaths improves 2CPR performance. Meeting the breath guideline is important to minimize air entering the stomach, thus decreasing the likelihood of gastric aspiration. These results suggest that manifesting a metronome for the ISS may augment the performance of 2CPR on orbit and thus may improve resuscitation outcomes.
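
    The arithmetic behind that guideline is simple: 240 compressions and 32 breaths in 4 minutes is 16 cycles of 15 compressions followed by 2 breaths, so a metronome only has to emit tones on that schedule, as in the sketch below; the burst compression rate and per-breath time are illustrative assumptions, not values from the study.

    # Sketch: 16 cycles x (15 compressions + 2 breaths) = 240 compressions + 32 breaths.
    COMPRESSION_RATE = 100       # tones per minute during a compression burst (assumed)
    BREATH_TIME = 2.0            # seconds allotted per ventilation tone (assumed)

    t, events = 0.0, []
    for cycle in range(16):
        for _ in range(15):
            events.append((round(t, 2), "compress"))
            t += 60.0 / COMPRESSION_RATE
        for _ in range(2):
            events.append((round(t, 2), "breath"))
            t += BREATH_TIME

    n_comp = sum(1 for _, kind in events if kind == "compress")
    n_breath = len(events) - n_comp
    print(n_comp, "compressions,", n_breath, "breaths in", round(t, 1), "s")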

  3. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines, while most new reciprocating compression is, and will be, large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run down to 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system; in the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. Thermal efficiency is 10% to 15% lower than for slow-speed equipment, with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and slow-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  4. Edge compression techniques for visualization of dense directed graphs.

    Science.gov (United States)

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules', or groups of nodes, such that the new edges imply aggregate connectivity. We only consider techniques that offer lossless compression, that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all have the same goal, to compress the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming, which enables us to explore the parameter space of the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.
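
    The first and simplest of those compressions can be written down directly: group the nodes of a directed graph whose (out-neighbor, in-neighbor) sets are identical, so one module-level edge stands in for a bundle of node-level edges. The toy graph below is an assumption for illustration.

    # Sketch: lossless grouping of nodes with identical neighbor sets.
    from collections import defaultdict

    edges = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y"), ("x", "z"), ("y", "z")]

    outs, ins = defaultdict(set), defaultdict(set)
    for u, v in edges:
        outs[u].add(v)
        ins[v].add(u)

    nodes = {n for edge in edges for n in edge}
    modules = defaultdict(list)                  # connectivity signature -> member nodes
    for n in nodes:
        modules[(frozenset(outs[n]), frozenset(ins[n]))].append(n)

    for (out_sig, in_sig), members in modules.items():
        if len(members) > 1:                     # identical connectivity: a lossless module
            print(sorted(members), "-> out:", sorted(out_sig), "in:", sorted(in_sig))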

  5. Data compression techniques and the ACR-NEMA digital interface communications standard

    International Nuclear Information System (INIS)

    Zielonka, J.S.; Blume, H.; Hill, D.; Horil, S.C.; Lodwick, G.S.; Moore, J.; Murphy, L.L.; Wake, R.; Wallace, G.

    1987-01-01

    Data compression offers the possibility of achieving high effective information transfer rates between devices and of efficiently utilizing digital storage devices to meet department-wide archiving needs. Accordingly, the ACR-NEMA Digital Imaging and Communications Standards Committee established a Working Group to develop a means of incorporating the optimal use of a wide variety of current compression techniques while remaining compatible with the standard. The proposed method allows the use of public domain techniques, predetermined methods between devices already aware of the selected algorithm, and the ability for the originating device to specify algorithms and parameters prior to transmitting compressed data. Because of the latter capability, the technique has the potential to support many compression algorithms not yet developed or in common use. Both lossless and lossy methods can be implemented. In addition to describing the overall structure of this proposal, several examples using current compression algorithms are given.

  6. A convergent overlapping domain decomposition method for total variation minimization

    KAUST Repository

    Fornasier, Massimo

    2010-06-22

    In this paper we are concerned with the analysis of convergent sequential and parallel overlapping domain decomposition methods for the minimization of functionals formed by a discrepancy term with respect to the data and a total variation constraint. To our knowledge, this is the first successful attempt of addressing such a strategy for the nonlinear, nonadditive, and nonsmooth problem of total variation minimization. We provide several numerical experiments, showing the successful application of the algorithm for the restoration of 1D signals and 2D images in interpolation/inpainting problems, respectively, and in a compressed sensing problem, for recovering piecewise constant medical-type images from partial Fourier ensembles. © 2010 Springer-Verlag.

  7. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source codes of our algorithm are freely available for academic and non-commercial use. They can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
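
    The 2-bit encoding the abstract mentions packs each base A, C, G, T into two bits, four bases per byte. The following Python sketch shows one such packing; the particular base-to-bit mapping and padding rule are illustrative assumptions, not HiRGC's published layout.

```python
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}

def pack_2bit(seq):
    """Pack an ACGT string into bytes, four bases per byte."""
    out = bytearray()
    acc, nbits = 0, 0
    for base in seq:
        acc = (acc << 2) | CODE[base]
        nbits += 2
        if nbits == 8:          # a full byte is ready
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:                   # pad the final partial byte with zero bits
        out.append(acc << (8 - nbits))
    return bytes(out)

print(pack_2bit("ACGTACG").hex())  # '1b18': 7 bases stored in 2 bytes
```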

  8. On low-rank updates to the singular value and Tucker decompositions

    Energy Technology Data Exchange (ETDEWEB)

    O' Hara, M J

    2009-10-06

    The singular value decomposition is widely used in signal processing and data mining. Since the data often arrives in a stream, the problem of updating matrix decompositions under low-rank modification has been widely studied. Brand developed a technique in 2006 that has many advantages. However, the technique does not directly approximate the updated matrix, but rather its previous low-rank approximation added to the new update, which needs justification. Further, the technique is still too slow for large information processing problems. We show that the technique minimizes the change in error per update, so if the error is small initially it remains small. We show that an updating algorithm for large sparse matrices should be sub-linear in the matrix dimension in order to be practical for large problems, and demonstrate a simple modification to the original technique that meets the requirements.
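
    A rank-one update of the kind discussed (often attributed to Brand) can be written compactly with numpy: instead of recomputing the full SVD, only a small (r+1) x (r+1) core is rediagonalized. The sketch below is a hedged illustration; the numerical guards and the absence of truncation are assumptions, not the report's exact algorithm.

```python
import numpy as np

def svd_rank_one_update(U, s, V, a, b):
    """Given A ~= U @ diag(s) @ V.T, return an SVD of A + outer(a, b)."""
    m = U.T @ a; p = a - U @ m; ra = np.linalg.norm(p)
    n = V.T @ b; q = b - V @ n; rb = np.linalg.norm(q)
    P = p / ra if ra > 1e-12 else np.zeros_like(p)
    Q = q / rb if rb > 1e-12 else np.zeros_like(q)
    r = s.size
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K += np.outer(np.append(m, ra), np.append(n, rb))
    Uk, sk, Vkt = np.linalg.svd(K)   # small (r+1) x (r+1) problem
    return (np.hstack([U, P[:, None]]) @ Uk, sk,
            np.hstack([V, Q[:, None]]) @ Vkt.T)

A = np.random.rand(6, 4)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
a, b = np.random.rand(6), np.random.rand(4)
U2, s2, V2 = svd_rank_one_update(U, s, Vt.T, a, b)
print(np.allclose(U2 @ np.diag(s2) @ V2.T, A + np.outer(a, b)))  # True
```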

  9. SVD compression for magnetic resonance fingerprinting in the time domain.

    Science.gov (United States)

    McGivney, Debra F; Pierre, Eric; Ma, Dan; Jiang, Yun; Saybasili, Haris; Gulani, Vikas; Griswold, Mark A

    2014-12-01

    Magnetic resonance (MR) fingerprinting is a technique for acquiring and processing MR data that simultaneously provides quantitative maps of different tissue parameters through a pattern recognition algorithm. A predefined dictionary models the possible signal evolutions simulated using the Bloch equations with different combinations of various MR parameters, and pattern recognition is completed by computing the inner product between the observed signal and each of the predicted signals within the dictionary. Though this matching algorithm has been shown to accurately predict the MR parameters of interest, a more efficient method of obtaining the quantitative images is desirable. We propose to compress the dictionary using the singular value decomposition, which provides a low-rank approximation. By compressing the size of the dictionary in the time domain, we are able to speed up the pattern recognition algorithm by a factor of 3.4 to 4.8, without sacrificing the high signal-to-noise ratio of the original scheme presented previously.
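
    The essence of the method fits in a few lines of numpy: project both dictionary and measured signal onto the top-k left singular vectors of the dictionary, then match with inner products in the k-dimensional space. The toy relaxation dictionary below stands in for the paper's Bloch-simulated one, and the rank k is an illustrative assumption.

```python
import numpy as np

t = np.linspace(0.05, 5.0, 1000)[:, None]   # time points (s)
T1 = np.linspace(0.1, 3.0, 500)[None, :]    # candidate T1 values (s)
D = 1.0 - 2.0 * np.exp(-t / T1)             # toy inversion-recovery dictionary

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 10                                      # retained rank (assumption)
Uk = U[:, :k]                               # temporal compression basis
D_c = Uk.T @ D                              # compressed dictionary: k x 500

y = D[:, 123]                               # a (noiseless) observed signal
y_c = Uk.T @ y                              # compress once, match in k dims
scores = (D_c.T @ y_c) / np.linalg.norm(D_c, axis=0)
print(int(np.argmax(scores)))               # -> 123, the matching entry
```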

  10. Compressed natural gas for vehicles and how we can develop and meet the market

    International Nuclear Information System (INIS)

    Pinkerton, W.E.

    1992-01-01

    This paper reports that state and federal legislation have mandated the use of clean-burning fuels. Clean fuels include compressed natural gas (CNG), ethanol, methanol, liquefied petroleum gas (LPG), electricity, and reformulated gasoline. The Clean Air Act Amendments of 1990 have created support for the rapid utilization of compressed natural gas (CNG). In response, diverse occupations related to this industry are emerging. A coordinated infrastructure is vital to the successful promotion of clean fuels and synchronized endorsement of the law.

  11. The increase of compressive strength of natural polymer modified concrete with Moringa oleifera

    Science.gov (United States)

    Susilorini, Rr. M. I. Retno; Santosa, Budi; Rejeki, V. G. Sri; Riangsari, M. F. Devita; Hananta, Yan's. Dianaga

    2017-03-01

    Polymer modified concrete is one of several concrete technology innovations developed to meet the need for strong and durable concrete. Previous research found that Moringa oleifera can be applied as a natural polymer modifier in mortars, where it significantly increases compressive strength. In this research, Moringa oleifera seeds were ground and added into the concrete mix for natural polymer modified concrete, based on the optimum composition from previous research. The research investigated the increase in compressive strength of polymer modified concrete with Moringa oleifera as a natural polymer modifier. There were three compositions of natural polymer modified concrete with Moringa oleifera, following the optimum compositions of the previous research. Several cylindrical specimens of 10 cm x 20 cm were produced and tested for compressive strength at ages of 7, 14, and 28 days. The research reached the following conclusions: (1) natural polymer modified concrete with Moringa oleifera, with and without skin, has higher compressive strength than natural polymer modified mortar with Moringa oleifera and also control specimens; (2) the best performance of natural polymer modified concrete with Moringa oleifera without skin is achieved by specimens containing Moringa oleifera at 0.2% of cement weight; and (3) the compressive strength increase of natural polymer modified concrete with Moringa oleifera without skin is about 168.11-221.29% compared to control specimens.

  12. Compressed sensing for energy-efficient wireless telemonitoring of noninvasive fetal ECG via block sparse Bayesian learning.

    Science.gov (United States)

    Zhang, Zhilin; Jung, Tzyy-Ping; Makeig, Scott; Rao, Bhaskar D

    2013-02-01

    Fetal ECG (FECG) telemonitoring is an important branch of telemedicine. The design of a telemonitoring system via a wireless body area network with low energy consumption for ambulatory use is highly desirable. As an emerging technique, compressed sensing (CS) shows great promise in compressing/reconstructing data with low energy consumption. However, due to some specific characteristics of raw FECG recordings such as nonsparsity and strong noise contamination, current CS algorithms generally fail in this application. This paper proposes to use the block sparse Bayesian learning framework to compress/reconstruct nonsparse raw FECG recordings. Experimental results show that the framework can reconstruct the raw recordings with high quality. In particular, the reconstruction does not destroy the interdependence relation among the multichannel recordings. This ensures that the independent component analysis decomposition of the reconstructed recordings has high fidelity. Furthermore, the framework allows the use of a sparse binary sensing matrix with much fewer nonzero entries to compress recordings. Notably, each column of the matrix can contain only two nonzero entries. This shows that the framework, compared to other algorithms such as current CS algorithms and wavelet algorithms, can greatly reduce CPU code execution in the data compression stage.
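
    Such a sensing matrix is cheap to build and to apply. The sketch below constructs a binary matrix with exactly two unit entries per column, as the abstract describes; the matrix dimensions and the random placement rule are illustrative assumptions.

```python
import numpy as np

def sparse_binary_matrix(m, n, rng):
    """m x n binary sensing matrix with exactly two ones per column."""
    Phi = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=2, replace=False)  # two distinct rows
        Phi[rows, j] = 1.0
    return Phi

rng = np.random.default_rng(1)
Phi = sparse_binary_matrix(128, 512, rng)   # compress 512 samples to 128
x = rng.standard_normal(512)                # one channel of a raw recording
y = Phi @ x                                 # encoding needs only additions
print(int(Phi.sum()), y.shape)              # 1024 nonzeros, (128,) output
```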

  13. Decompositions of manifolds

    CERN Document Server

    Daverman, Robert J

    2007-01-01

    Decomposition theory studies decompositions, or partitions, of manifolds into simple pieces, usually cell-like sets. Since its inception in 1929, the subject has become an important tool in geometric topology. The main goal of the book is to help students interested in geometric topology to bridge the gap between entry-level graduate courses and research at the frontier, as well as to demonstrate interrelations of decomposition theory with other parts of geometric topology. With numerous exercises and problems, many of them quite challenging, the book continues to be strongly recommended to everyone.

  14. From aerodynamics towards aeroacoustics: a novel natural velocity decomposition for the Navier-Stokes equations

    CERN Document Server

    Morino, Luigi

    2015-01-01

    A novel formulation for the analysis of viscous incompressible and compressible aerodynamics/aeroacoustics fields is presented. The paper is primarily of a theoretical nature, and presents the transition path from aerodynamics towards aeroacoustics. The basis of the paper is a variant of the so-called natural velocity decomposition, as v = ∇φ + w, where w is obtained from its own governing equation and not from the vorticity. With the novel decomposition, the governing equation for w and the generalized Bernoulli theorem for viscous fields assume a very elegant form. Another improvement pertains to the so-called material covariant components of w: For inviscid incompressible flows, they remain constant in time; minor modifications occur when we deal with viscous flows. In addition, interesting simplifications of the formulation are presented for almost-potential flows, namely for flows that are irrotational everywhere except for thin vortex layers, such as boundary layers and wakes. It is shown that, if th...

  15. Thermal decomposition of pyrite

    International Nuclear Information System (INIS)

    Music, S.; Ristic, M.; Popovic, S.

    1992-01-01

    Thermal decomposition of natural pyrite (cubic FeS2) has been investigated using X-ray diffraction and 57Fe Moessbauer spectroscopy. X-ray diffraction analysis of pyrite ore from different sources showed the presence of associated minerals, such as quartz, szomolnokite, stilbite or stellerite, micas and hematite. Hematite, maghemite and pyrrhotite were detected as thermal decomposition products of natural pyrite. The phase composition of the thermal decomposition products depends on the temperature, the time of heating and the starting size of the pyrite crystals. Hematite is the end product of the thermal decomposition of natural pyrite. (author) 24 refs.; 6 figs.; 2 tabs

  16. Danburite decomposition by sulfuric acid

    International Nuclear Information System (INIS)

    Mirsaidov, U.; Mamatov, E.D.; Ashurov, N.A.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by sulfuric acid. The process of decomposition of danburite concentrate by sulfuric acid was studied. The chemical nature of the decomposition process of the boron-containing ore was determined. The influence of temperature on the rate of extraction of boron and iron oxides was defined. The dependence of the decomposition of boron and iron oxides on process duration, dosage of H2SO4, acid concentration and size of danburite particles was determined. The kinetics of danburite decomposition by sulfuric acid was studied as well. The apparent activation energy of the process of danburite decomposition by sulfuric acid was calculated. The flowsheet of danburite processing by sulfuric acid was elaborated.

  17. Compressed-sensing wavenumber-scanning interferometry

    Science.gov (United States)

    Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli

    2018-01-01

    The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and the eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode hops, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using the randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI determines the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.
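
    The core step, recovering a sparse depth spectrum from nonuniformly spaced wavenumber samples through an underdetermined inverse-Fourier system, can be sketched with a simple l1 solver. The grid sizes, the ISTA solver, and its parameters below are illustrative assumptions, not the published CS-DRWSI implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 80                               # depth bins, wavenumber samples
k = np.sort(rng.uniform(0.0, 1.0, M))        # nonuniform wavenumber positions
A = np.exp(2j * np.pi * np.outer(k, np.arange(N)))  # M x N inverse-Fourier map

x_true = np.zeros(N); x_true[[40, 140]] = [1.0, 0.6]  # two reflecting depths
y = A @ x_true                               # simulated interference data

x = np.zeros(N, dtype=complex)
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
tau = 1.0 / L                                # soft-threshold level per step
for _ in range(400):                         # ISTA iterations
    g = x + (A.conj().T @ (y - A @ x)) / L   # gradient step on the data fit
    mag = np.abs(g)
    x = np.maximum(1.0 - tau / np.maximum(mag, 1e-12), 0.0) * g
print(np.sort(np.argsort(np.abs(x))[-2:]))   # -> [ 40 140]
```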

  18. Azimuthal decomposition of optical modes

    CSIR Research Space (South Africa)

    Dudley, Angela L

    2012-07-01

    This presentation analyses the azimuthal decomposition of optical modes. Decomposition of azimuthal modes needs two steps, namely generation and decomposition. An azimuthally-varying phase (bounded by a ring-slit) placed in the spatial frequency...

  19. Compressive strength of concrete and mortar containing fly ash

    Science.gov (United States)

    Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.

    1997-01-01

    The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only concrete over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs.

  20. Thermal decomposition of lutetium propionate

    DEFF Research Database (Denmark)

    Grivel, Jean-Claude

    2010-01-01

    The thermal decomposition of lutetium(III) propionate monohydrate (Lu(C2H5CO2)3·H2O) in argon was studied by means of thermogravimetry, differential thermal analysis, IR-spectroscopy and X-ray diffraction. Dehydration takes place around 90 °C. It is followed by the decomposition of the anhydrous...... °C. Full conversion to Lu2O3 is achieved at about 1000 °C. Whereas the temperatures and solid reaction products of the first two decomposition steps are similar to those previously reported for the thermal decomposition of lanthanum(III) propionate monohydrate, the final decomposition...... of the oxycarbonate to the rare-earth oxide proceeds in a different way, which is here reminiscent of the thermal decomposition path of Lu(C3H5O2)·2CO(NH2)2·2H2O...

  1. Efficient Simulation of Compressible, Viscous Fluids using Multi-rate Time Integration

    Science.gov (United States)

    Mikida, Cory; Kloeckner, Andreas; Bodony, Daniel

    2017-11-01

    In the numerical simulation of problems of compressible, viscous fluids with single-rate time integrators, the global timestep used is limited to that of the finest mesh point or fastest physical process. This talk discusses the application of multi-rate Adams-Bashforth (MRAB) integrators to an overset mesh framework to solve compressible viscous fluid problems of varying scale with improved efficiency, with emphasis on the strategy of timescale separation and the application of the resulting numerical method to two sample problems: subsonic viscous flow over a cylinder and a viscous jet in crossflow. The results presented indicate the numerical efficacy of MRAB integrators, outline a number of outstanding code challenges, demonstrate the expected reduction in time enabled by MRAB, and emphasize the need for proper load balancing through spatial decomposition in order for parallel runs to achieve the predicted time-saving benefit. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002374.

  2. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    Science.gov (United States)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from the global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to accurately describe the actual atmospheric circulation dynamics. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of those three-pattern circulations, and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realized the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.

  3. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  4. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...

  5. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

    Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands, so each object present in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and space-borne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution for these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explore the spectral cross-correlation between different bands and propose an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the inter-band correlation matrix of the hyperspectral images; then a wavelet-based algorithm is applied to each subspace; finally, the PCA method is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
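
    The first step, splitting the band axis into correlated subspaces, is easy to prototype. The sketch below cuts a new subspace wherever the correlation between neighbouring bands drops below a threshold; the toy cube, the threshold, and the greedy rule are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
cube = rng.standard_normal((64, 64, 100)).cumsum(axis=2)  # toy cube: rows x cols x bands

bands = cube.reshape(-1, cube.shape[2])        # pixels x bands
C = np.corrcoef(bands, rowvar=False)           # band-to-band correlation matrix

groups, start = [], 0
for b in range(1, bands.shape[1]):
    if C[b - 1, b] < 0.95:                     # correlation break: new subspace
        groups.append(range(start, b))
        start = b
groups.append(range(start, bands.shape[1]))
print([len(g) for g in groups])                # subspace sizes along the band axis
```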

  6. Particle Engineering of Excipients for Direct Compression: Understanding the Role of Material Properties.

    Science.gov (United States)

    Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian

    2015-01-01

    Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency have motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow, and thus there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing, coupled with effective application of the principles of material science and particle engineering, has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, partly due to the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation, and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.

  7. Global decomposition experiment shows soil animal impacts on decomposition are climate-dependent

    Czech Academy of Sciences Publication Activity Database

    Wall, D.H.; Bradford, M.A.; John, M.G.St.; Trofymow, J.A.; Behan-Pelletier, V.; Bignell, D.E.; Dangerfield, J.M.; Parton, W.J.; Rusek, Josef; Voigt, W.; Wolters, V.; Gardel, H.Z.; Ayuke, F. O.; Bashford, R.; Beljakova, O.I.; Bohlen, P.J.; Brauman, A.; Flemming, S.; Henschel, J.R.; Johnson, D.L.; Jones, T.H.; Kovářová, Marcela; Kranabetter, J.M.; Kutny, L.; Lin, K.-Ch.; Maryati, M.; Masse, D.; Pokarzhevskii, A.; Rahman, H.; Sabará, M.G.; Salamon, J.-A.; Swift, M.J.; Varela, A.; Vasconcelos, H.L.; White, D.; Zou, X.

    2008-01-01

    Vol. 14, No. 11 (2008), p. 2661-2677 ISSN 1354-1013 Institutional research plan: CEZ:AV0Z60660521; CEZ:AV0Z60050516 Keywords: climate decomposition index * decomposition * litter Subject RIV: EH - Ecology, Behaviour Impact factor: 5.876, year: 2008

  8. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...

  9. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  10. Cellular decomposition in vikalloys

    International Nuclear Information System (INIS)

    Belyatskaya, I.S.; Vintajkin, E.Z.; Georgieva, I.Ya.; Golikov, V.A.; Udovenko, V.A.

    1981-01-01

    Austenite decomposition in Fe-Co-V and Fe-Co-V-Ni alloys at 475-600 deg C is investigated. The cellular decomposition in ternary alloys results in the formation of bcc (ordered) and fcc structures, and in quaternary alloys of bcc (ordered) and 12R structures. The cellular 12R structure results from the emergence of stacking faults in the fcc lattice with irregular spacing in four layers. The cellular decomposition results in a high-dispersion structure and magnetic properties approaching the level of well-known vikalloys.

  11. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.

  12. Multiresolution signal decomposition schemes

    NARCIS (Netherlands)

    J. Goutsias (John); H.J.A.M. Heijmans (Henk)

    1998-01-01

    [PNA-R9810] Interest in multiresolution techniques for signal processing and analysis is increasing steadily. An important instance of such a technique is the so-called pyramid decomposition scheme. This report proposes a general axiomatic pyramid decomposition scheme for signal analysis

  13. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT). FINAL REPORT

    International Nuclear Information System (INIS)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-01-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is, and will continue to be, large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run as low as 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than that of slow-speed equipment, with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  14. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....

  15. Thermal decomposition of beryllium perchlorate tetrahydrate

    International Nuclear Information System (INIS)

    Berezkina, L.G.; Borisova, S.I.; Tamm, N.S.; Novoselova, A.V.

    1975-01-01

    Thermal decomposition of Be(ClO4)2·4H2O was studied by the differential flow technique in a helium stream. The kinetics was followed by an exchange reaction of the perchloric acid appearing on decomposition with potassium carbonate. The rate of CO2 liberation in this process was recorded by a heat conductivity detector. The exchange reaction yielding CO2 is quantitative; it is not the limiting step and it does not distort the kinetics of the perchlorate decomposition process. The solid products of decomposition were studied by infrared and NMR spectroscopy, roentgenography, thermography and chemical analysis. The mechanism suggested for the decomposition involves intermediate formation of a hydroxyperchlorate: Be(ClO4)2·4H2O → Be(OH)ClO4 + HClO4 + 3H2O; Be(OH)ClO4 → BeO + HClO4. Decomposition is accompanied by melting of the sample. The mechanism of decomposition is hydrolytic. At room temperature the hydroxyperchlorate is a thick syrup-like compound that crystallizes after long storage.

  16. JANNAF 18th Propulsion Systems Hazards Subcommittee Meeting. Volume 1

    Science.gov (United States)

    Cocchiaro, James E. (Editor); Gannaway, Mary T. (Editor)

    1999-01-01

    This volume, the first of two, is a compilation of 18 unclassified/unlimited-distribution technical papers presented at the Joint Army-Navy-NASA-Air Force (JANNAF) 18th Propulsion Systems Hazards Subcommittee (PSHS) meeting, held jointly with the 36th Combustion Subcommittee (CS) and 24th Airbreathing Propulsion Subcommittee (APS) meetings. The meeting was held 18-21 October 1999 at NASA Kennedy Space Center and The DoubleTree Oceanfront Hotel, Cocoa Beach, Florida. Topics covered at the PSHS meeting include: shaped charge jet and kinetic energy penetrator impact vulnerability of gun propellants; thermal decomposition and cookoff behavior of energetic materials; violent reaction; detonation phenomena of solid energetic materials subjected to shock and impact stimuli; and hazard classification, insensitive munitions, and propulsion systems safety.

  17. Establishing physical criteria to stop the lossy compression of digital medical imaging

    International Nuclear Information System (INIS)

    Perez Diaz, M

    2008-01-01

    Full text: A key difficulty in storing and/or transmitting digital medical images obtained from modern technologies is the size in bytes they occupy. One way to address this is the implementation of compression algorithms (codecs), with or without losses. The latter in particular allow significant reductions in image size, but if they are not applied on solid scientific criteria, useful diagnostic information can be lost. This talk describes and assesses the quality of images obtained after the application of current compression codecs, based on the analysis of physical parameters such as spatial resolution, random noise, contrast and the image generation devices. It is an open problem for medical physics and image processing, directed toward establishing objective criteria for stopping lossy compression, based on traditional univariate and bivariate metrics such as the mean square error introduced by each compression rate, peak signal-to-noise ratio and contrast ratio, and more modern metrics such as the Structural Similarity Index, distance measures, the singular value decomposition of the image matrix, and correlation and spectral measurements. It also reviews physical approaches for predicting image quality using mathematical observers such as the Hotelling observer and the channelized Hotelling observer with Gabor functions or Laguerre-Gauss polynomials. Finally, the correlation of these objective methods with the subjective assessment of image quality from ROC analysis based on diagnostic performance curves is analyzed. (author)

  18. Does increasing pressure always accelerate the condensed material decay initiated through bimolecular reactions? A case of the thermal decomposition of TKX-50 at high pressures.

    Science.gov (United States)

    Lu, Zhipeng; Zeng, Qun; Xue, Xianggui; Zhang, Zengming; Nie, Fude; Zhang, Chaoyang

    2017-08-30

    Performance and behavior under high-temperature, high-pressure conditions are fundamental for many materials. In the present work we study the pressure effect on the thermal decomposition of a new energetic ionic salt (EIS), TKX-50, by confining samples in a diamond anvil cell, using Raman spectroscopy measurements and ab initio simulations. As a result, we find a quadratic increase in the decomposition temperature (Td) of TKX-50 with increasing pressure (P) (Td = 6.28P² + 12.94P + 493.33, with Td in K and P in GPa, and R² = 0.995), and that the decomposition under various pressures is initiated by an intermolecular H-transfer reaction (a bimolecular reaction). Surprisingly, this finding is contrary to a general observation about the pressure effect on the decomposition of common energetic materials (EMs) composed of neutral molecules: increasing pressure will impede the decomposition if it starts from a bimolecular reaction. Our results also demonstrate that increasing pressure impedes the H-transfer via the enhanced long-range electrostatic repulsion of the H(+δ)...H(+δ) contacts of neighboring NH3OH+ ions, with blue shifts of the intermolecular H-bonds. The subsequent decomposition of the H-transferred intermediates is also suppressed, because the decomposition shifts from a bimolecular reaction to a unimolecular one, which is generally prevented by compression. These two factors are the basic reason why the decomposition of TKX-50 is retarded with increasing pressure. Our finding therefore breaks through the previously proposed concept that, for condensed materials, increasing pressure will accelerate thermal decomposition initiated by bimolecular reactions, and it reveals a distinct mechanism of the pressure effect on thermal decomposition. That is to say, increasing pressure does not always promote the decay of condensed materials initiated through bimolecular reactions. Moreover, such a mechanism may be applicable to other EISs due to the similar intermolecular

  19. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the size of the data, the better the transmission speed and the greater the time savings. In such communication, we always want to transmit data efficiently and noise-free. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple and single compression, which will help to identify better compression output and to develop compression algorithms.
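
    The single-versus-multiple comparison can be reproduced in spirit with standard library codecs. The sketch below compresses a text-type payload once, then compresses the compressed output again; the codecs and payload are illustrative choices, not the paper's test set.

```python
import bz2
import zlib

data = b"business data processing repeats itself " * 2000
once = zlib.compress(data, 9)       # single compression
twice = zlib.compress(once, 9)      # second pass over high-entropy input
mixed = bz2.compress(once)          # second pass with a different codec
# Recompressing already-compressed data typically gains little or even
# grows the payload, which is the kind of comparison the paper tabulates.
print(len(data), len(once), len(twice), len(mixed))
```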

  20. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    Science.gov (United States)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to noise attacks or malicious tampering. To give the speech perceptual hash algorithm strong robustness and high efficiency, this paper puts forward a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm analyzes the perceptual features acquired from the wavelet packet decomposition of each speech component. LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating hash values through feature matrix quantization using the mid-value. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms. It is able to resist attacks from common background noise. Also, the algorithm is highly efficient in terms of computation, and is able to meet the real-time requirements of speech communication and complete the speech authentication quickly.

  1. Some nonlinear space decomposition algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Tai, Xue-Cheng; Espedal, M. [Univ. of Bergen (Norway)]

    1996-12-31

    Convergence of a space decomposition method is proved for a general convex programming problem. The space decomposition refers to methods that decompose a space into sums of subspaces, which could be a domain decomposition or a multigrid method for partial differential equations. Two algorithms are proposed. Both can be used for linear as well as nonlinear elliptic problems and they reduce to the standard additive and multiplicative Schwarz methods for linear elliptic problems. Two "hybrid" algorithms are also presented. They converge faster than the additive one and have better parallelism than the multiplicative method. Numerical tests with a two-level domain decomposition for linear, nonlinear and interface elliptic problems are presented for the proposed algorithms.
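
    For a linear elliptic model problem, the additive variant these algorithms reduce to can be sketched in a few lines: restrict the residual to each overlapping subdomain, solve locally, and combine the damped corrections. The grid, overlap, and damping below are illustrative assumptions.

```python
import numpy as np

n = 99                                           # interior grid points
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2       # 1D Poisson stiffness matrix
f = np.ones(n)                                   # right-hand side

dom1, dom2 = np.arange(0, 60), np.arange(40, n)  # overlapping subdomains
u = np.zeros(n)
for _ in range(50):                              # damped additive Schwarz sweeps
    r = f - A @ u
    du = np.zeros(n)
    for d in (dom1, dom2):
        du[d] += np.linalg.solve(A[np.ix_(d, d)], r[d])  # local subdomain solve
    u += 0.5 * du                                # damping keeps the sum stable
print(float(np.max(np.abs(A @ u - f))))          # residual shrinks with sweeps
```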

  2. Decomposition of Multi-player Games

    Science.gov (United States)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  3. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...

  4. Inertia and compressibility effects on density waves and Ledinegg phenomena in two-phase flow systems

    International Nuclear Information System (INIS)

    Ruspini, L.C.

    2012-01-01

    Highlights: ► The stability influence of piping fluid inertia on two-phase instabilities is studied. ► Inlet inertia stabilizes the system while outlet inertia destabilizes it. ► High-order mode oscillations are found and analyzed. ► The effect of compressible volumes in the system is studied. ► Inlet compressibility destabilizes the system while outlet compressibility stabilizes it. - Abstract: The most common kinds of static and dynamic two-phase flow instabilities, namely Ledinegg and density wave oscillations, are studied. A new model to study two-phase flow instabilities, taking into account general parameters from real systems, is proposed. The stability influence of external parameters such as the fluid inertia and the presence of compressible gases in the system is analyzed. High-order oscillation modes are found to be related to the fluid inertia of the external piping. The occurrence of high-order modes in experimental works is analyzed with focus on the results presented in this work. Moreover, both inertia and compressibility are proven to have a high impact on the stability limits of the systems. The study is carried out by modeling the boiling channel using a one-dimensional equilibrium model. An incompressible transient model describes the evolution of the flow and pressure in the non-heated regions, and an ideal gas model is used to simulate the compressible volumes in the system. The use of wavelet decomposition analysis is proven to be an efficient tool in the stability analysis of oscillations with several frequencies.

  5. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.

  6. Thermophysical properties of liquid carbon dioxide under shock compressions: quantum molecular dynamic simulations.

    Science.gov (United States)

    Wang, Cong; Zhang, Ping

    2010-10-07

    Quantum molecular dynamics simulations were used to calculate the equation of state and the electrical and optical properties of liquid carbon dioxide along the Hugoniot at shock pressures up to 74 GPa. The principal Hugoniot derived from the calculated equation of state is in good agreement with experimental results. Molecular dissociation and recombination are investigated through pair correlation functions, and decomposition of carbon dioxide is found to occur between 40 and 50 GPa along the Hugoniot, where a nonmetal-metal transition is observed. In addition, the optical properties of shock-compressed carbon dioxide are also theoretically predicted along the Hugoniot.

  7. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  8. Decomposition of diesel oil by various microorganisms

    Energy Technology Data Exchange (ETDEWEB)

    Suess, A; Netzsch-Lehner, A

    1969-01-01

    Previous experiments demonstrated the decomposition of diesel oil in different soils. In this experiment the decomposition of 14C-n-hexadecane-labelled diesel oil by specific microorganisms was studied. The results were as follows: (1) In the experimental soils the microorganisms Mycoccus ruber, Mycobacterium luteum and Trichoderma hamatum are responsible for the diesel oil decomposition. (2) Adding microorganisms to the soil increased the decomposition rate only at the beginning of the experiments. (3) Maximum decomposition of diesel oil was reached 2-3 weeks after incubation.

  9. Operability test procedure for 241-U compressed air system and heat pump

    International Nuclear Information System (INIS)

    Freeman, R.D.

    1994-01-01

    The 241-U-701 compressed air system supplies instrument quality compressed air to Tank Farm 241-U. The supply piping to the 241-U Tank Farm is not included in the modification. Modifications to the 241-U-701 compressed air system include installation of a 15 HP Reciprocating Air Compressor, Ingersoll-Rand Model 10T3NLM-E15; an air dryer, Hankinson, Model DH-45; and miscellaneous system equipment and piping (valves, filters, etc.) to meet the design. A newly installed heat pump allows the compressor to operate within an enclosed relatively dust free atmosphere and keeps the compressor room within a standard acceptable temperature range, which makes possible efficient compressor operation, reduces maintenance, and maximizes compressor operating life. This document is an Operability Test Procedure (OTP) which will further verify (in addition to the Acceptance Test Procedure) that the 241-U-701 compressed air system and heat pump operate within their intended design parameters. The activities defined in this OTP will be performed to ensure the performance of the new compressed air system will be adequate, reliable and efficient. Completion of this OTP and sign off of the OTP Acceptance of Test Results is necessary for turnover of the compressed air system from Engineering to Operations

  10. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach.

    Science.gov (United States)

    Elgendi, Mohamed; Al-Ali, Abdulla; Mohamed, Amr; Ward, Rabab

    2018-01-16

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.
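
    The two figures of merit quoted here are easy to compute. The sketch below evaluates the compression ratio CR and the percentage root-mean-square difference PRD for a synthetic placeholder signal; the bit counts and waveform are illustrative assumptions, not data from the paper's databases.

```python
import numpy as np

def cr(original_bits, compressed_bits):
    """Compression ratio: original size over compressed size."""
    return original_bits / compressed_bits

def prd(x, x_rec):
    """Percentage root-mean-square difference between signal and recon."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

t = np.linspace(0, 10, 3600)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.sin(2 * np.pi * 12 * t)
recon = ecg + 0.01 * np.random.default_rng(4).standard_normal(t.size)
print(cr(3600 * 11, 6600), round(prd(ecg, recon), 2))  # -> 6.0 and PRD ~ 1.4
```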

  11. Improving Remote Health Monitoring: A Low-Complexity ECG Compression Approach

    Directory of Open Access Journals (Sweden)

    Mohamed Elgendi

    2018-01-01

    Recent advances in mobile technology have created a shift towards using battery-driven devices in remote monitoring settings and smart homes. Clinicians are carrying out diagnostic and screening procedures based on the electrocardiogram (ECG) signals collected remotely for outpatients who need continuous monitoring. High-speed transmission and analysis of large recorded ECG signals are essential, especially with the increased use of battery-powered devices. Exploring low-power alternative compression methodologies that have high efficiency and that enable ECG signal collection, transmission, and analysis in a smart home or remote location is required. Compression algorithms based on adaptive linear predictors and decimation by a factor B/K are evaluated based on compression ratio (CR), percentage root-mean-square difference (PRD), and heartbeat detection accuracy of the reconstructed ECG signal. With two databases (153 subjects), the new algorithm demonstrates the highest compression performance (CR = 6 and PRD = 1.88) and overall detection accuracy (99.90% sensitivity, 99.56% positive predictivity) over both databases. The proposed algorithm presents an advantage for the real-time transmission of ECG signals using a faster and more efficient method, which meets the growing demand for more efficient remote health monitoring.

  12. Multilinear operators for higher-order decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara Gibson

    2006-04-01

    We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer-products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
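
    The Kruskal operator, as defined here, is the sum of the outer products of corresponding columns of N matrices. A numpy sketch for the three-factor case follows, together with a matricized check; the shapes and the unfolding order are illustrative assumptions.

```python
import numpy as np

def kruskal(A, B, C):
    """Sum of outer products of corresponding columns of A, B, C:
    T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(5)
A, B, C = (rng.standard_normal((d, 4)) for d in (3, 5, 2))  # rank-4 factors
T = kruskal(A, B, C)

# Matricized check: flattening the last two modes (k fastest) turns the
# Kruskal operator into A times a Khatri-Rao-style product of B and C.
KR = np.einsum('jr,kr->jkr', B, C).reshape(-1, 4)
print(np.allclose(T.reshape(3, -1), A @ KR.T))  # True
```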

  13. Economic and technical feasibility study of compressed air storage

    Energy Technology Data Exchange (ETDEWEB)

    1976-03-01

    The results of a study of the economic and technical feasibility of compressed air energy storage (CAES) are presented. The study, which concentrated primarily on the application of underground air storage with combustion turbines, consisted of two phases. In the first phase a general assessment of the technical alternatives, economic characteristics and institutional constraints associated with underground storage of compressed air for utility peaking application was carried out. The goal of this assessment was to identify potential barrier problems and to define the incentive for the implementation of compressed air storage. In the second phase, the general conclusions of the assessment were tested by carrying out the conceptual design of a CAES plant at two specific sites, and a program of further work indicated by the assessment study was formulated. The conceptual designs of a CAES plant employing storage in an aquifer and of a plant employing storage in a conventionally excavated cavern with a water leg to maintain constant pressure are shown. Recommendations for further work, as well as directions of future turbo-machinery development, are made. It is concluded that compressed air storage is technically feasible for off-peak energy storage, and, depending on site conditions, CAES plants may be favored over simple-cycle turbine plants to meet peak demands. (LCL)

  14. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA™. The GPU implementation on an NVIDIA™ GeForce™ GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times over a software implementation running on a 3.47 GHz single core Intel™ Xeon™ processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.
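
    As a hedged illustration of the predictive-decorrelation idea behind such compressors (this is not the FL algorithm, whose adaptive filter is more sophisticated), the sketch below predicts each band from its predecessor with a least-squares gain and emits integer residuals for a downstream entropy coder; all names and sizes are ours.

```python
import numpy as np

def spectral_residuals(cube):
    """Per-band first-order prediction along the spectral axis.

    cube: (bands, rows, cols) integer hyperspectral cube. Each band is
    predicted as alpha * previous_band; the integer residuals are what an
    entropy coder would compress. Given the stored alphas and band 0,
    the mapping is exactly invertible.
    """
    bands = cube.shape[0]
    residuals = np.empty_like(cube)
    alphas = np.zeros(bands)
    residuals[0] = cube[0]                      # first band stored as-is
    for b in range(1, bands):
        prev, cur = cube[b - 1].ravel(), cube[b].ravel()
        alpha = prev.dot(cur) / prev.dot(prev)  # least-squares gain
        alphas[b] = alpha
        pred = np.rint(alpha * cube[b - 1]).astype(cube.dtype)
        residuals[b] = cube[b] - pred
    return residuals, alphas

rng = np.random.default_rng(0)
base = rng.integers(0, 1024, size=(64, 64))
cube = np.stack([base + rng.integers(-8, 8, size=(64, 64)) for _ in range(32)])
res, _ = spectral_residuals(cube)
print("std per band, raw vs residual:", cube[1:].std(), res[1:].std())
```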

  15. Decomposition of tetrachloroethylene by ionizing radiation

    International Nuclear Information System (INIS)

    Hakoda, T.; Hirota, K.; Hashimoto, S.

    1998-01-01

    The decomposition of tetrachloroethylene and other chloroethenes by ionizing radiation was examined to obtain information on the treatment of industrial off-gas. Model gases, air containing chloroethenes, were confined in batch reactors and irradiated with an electron beam and gamma rays. The G-values of decomposition decreased in the order tetrachloro- > trichloro- > trans-dichloro- > cis-dichloro- > monochloroethylene under electron beam irradiation, and tetrachloro-, trichloro-, trans-dichloro- > cis-dichloro- > monochloroethylene under gamma ray irradiation. For tetrachloro-, trichloro- and trans-dichloroethylene, the G-values of decomposition under EB irradiation increased with the number of chlorine atoms per molecule, while those under gamma ray irradiation remained almost constant. The G-value of decomposition for tetrachloroethylene under EB irradiation was the largest among all the chloroethenes. In order to examine the effect of the initial concentration on the G-value of decomposition, air containing 300 to 1,800 ppm of tetrachloroethylene was irradiated with the electron beam and gamma rays. The G-values of decomposition under both types of irradiation increased with the initial concentration, and those under electron beam irradiation were two times larger than those under gamma ray irradiation.

  16. Decomposition of Sodium Tetraphenylborate

    International Nuclear Information System (INIS)

    Barnes, M.J.

    1998-01-01

    The chemical decomposition of aqueous alkaline solutions of sodium tetraphenylborate (NaTPB) has been investigated. The focus of the investigation is the determination of additives and/or variables which influence NaTPB decomposition. This document describes work aimed at providing a better understanding of the relationship of copper (II), solution temperature, and solution pH to NaTPB stability.

  17. Thermal decomposition of γ-irradiated lead nitrate

    International Nuclear Information System (INIS)

    Nair, S.M.K.; Kumar, T.S.S.

    1990-01-01

    The thermal decomposition of unirradiated and γ-irradiated lead nitrate was studied by the gas evolution method. The decomposition proceeds through initial gas evolution, a short induction period, an acceleratory stage and a decay stage. The acceleratory and decay stages follow the Avrami-Erofeev equation. Irradiation enhances the decomposition but does not affect the shape of the decomposition curve. (author) 10 refs.; 7 figs.; 2 tabs
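
    For reference, the Avrami-Erofeev (KJMA) rate law invoked for the acceleratory and decay stages is commonly written as below, with conversion fraction α, rate constant k, and exponent n fitted to each stage (standard form, not reproduced from the paper itself):

```latex
% Avrami-Erofeev (KJMA) rate law, standard form:
\alpha(t) \;=\; 1 - \exp\!\left[-(kt)^{n}\right],
\qquad
\ln\!\left[-\ln\!\left(1-\alpha\right)\right] \;=\; n\ln k + n\ln t .
% The linearized form on the right is how n and k are usually extracted
% from gas-evolution data for each stage.
```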

  18. Decomposing Nekrasov decomposition

    International Nuclear Information System (INIS)

    Morozov, A.; Zenkevich, Y.

    2016-01-01

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  19. Decomposing Nekrasov decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, A. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); Institute for Information Transmission Problems,19-1 Bolshoy Karetniy, Moscow, 127051 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Zenkevich, Y. [ITEP,25 Bolshaya Cheremushkinskaya, Moscow, 117218 (Russian Federation); National Research Nuclear University MEPhI,31 Kashirskoe highway, Moscow, 115409 (Russian Federation); Institute for Nuclear Research of Russian Academy of Sciences,6a Prospekt 60-letiya Oktyabrya, Moscow, 117312 (Russian Federation)

    2016-02-16

    AGT relations imply that the four-point conformal block admits a decomposition into a sum over pairs of Young diagrams of essentially rational Nekrasov functions — this is immediately seen when conformal block is represented in the form of a matrix model. However, the q-deformation of the same block has a deeper decomposition — into a sum over a quadruple of Young diagrams of a product of four topological vertices. We analyze the interplay between these two decompositions, their properties and their generalization to multi-point conformal blocks. In the latter case we explain how Dotsenko-Fateev all-with-all (star) pair “interaction” is reduced to the quiver model nearest-neighbor (chain) one. We give new identities for q-Selberg averages of pairs of generalized Macdonald polynomials. We also translate the slicing invariance of refined topological strings into the language of conformal blocks and interpret it as abelianization of generalized Macdonald polynomials.

  20. Freeman-Durden Decomposition with Oriented Dihedral Scattering

    Directory of Open Access Journals (Sweden)

    Yan Jian

    2014-10-01

    Full Text Available In this paper, for the case when the azimuth direction of a polarimetric Synthetic Aperture Radar (SAR) differs from the planting direction of crops, the double bounce of the incident electromagnetic waves from the terrain surface to the growing crops is investigated and compared with the normal double bounce. An oriented dihedral scattering model is developed to explain the investigated double bounce and is introduced into the Freeman-Durden decomposition. The decomposition algorithm corresponding to the improved decomposition is then proposed. Airborne polarimetric SAR data for agricultural land covering two flight tracks are chosen to validate the algorithm; the decomposition results show that for agricultural vegetated land, the improved Freeman-Durden decomposition has the advantage of increasing the decomposition coherency among the polarimetric SAR data along the different flight tracks.

  1. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar Deposit of Tajikistan by hydrochloric acid. The interaction of boron-containing ores of the Ak-Arkhar Deposit with mineral acids, including hydrochloric acid, was studied. The optimal conditions for the extraction of valuable components from the danburite were determined, and the chemical composition of the danburite of the Ak-Arkhar Deposit was established. The kinetics of the decomposition of calcined danburite by hydrochloric acid was studied, and the apparent activation energy of the process was calculated.

  2. LMDI decomposition approach: A guide for implementation

    International Nuclear Information System (INIS)

    Ang, B.W.

    2015-01-01

    Since it was first used by researchers to analyze industrial electricity consumption in the early 1980s, index decomposition analysis (IDA) has been widely adopted in energy and emission studies. Lately its use as the analytical component of accounting frameworks for tracking economy-wide energy efficiency trends has attracted considerable attention and interest among policy makers. The last comprehensive literature review of IDA was reported in 2000 which is some years back. After giving an update and presenting the key trends in the last 15 years, this study focuses on the implementation issues of the logarithmic mean Divisia index (LMDI) decomposition methods in view of their dominance in IDA in recent years. Eight LMDI models are presented and their origin, decomposition formulae, and strengths and weaknesses are summarized. Guidelines on the choice among these models are provided to assist users in implementation. - Highlights: • Guidelines for implementing LMDI decomposition approach are provided. • Eight LMDI decomposition models are summarized and compared. • The development of the LMDI decomposition approach is presented. • The latest developments of index decomposition analysis are briefly reviewed.
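
    For a two-factor identity V_i = a_i · b_i, the additive LMDI-I formulas are compact enough to state in code. A minimal sketch with invented sector data; the perfect-decomposition property, one of the strengths surveyed here, is visible in the final check.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

def lmdi_additive(a0, b0, aT, bT):
    """Additive LMDI-I for V_i = a_i * b_i, summed over sectors i.

    Returns the effects attributed to changes in a and in b; they sum
    exactly to V^T - V^0 (no residual term).
    """
    v0, vT = a0 * b0, aT * bT
    w = logmean(vT, v0)
    return (w * np.log(aT / a0)).sum(), (w * np.log(bT / b0)).sum()

# Example: V = activity * intensity for three sectors (invented numbers).
a0, b0 = np.array([100., 50., 30.]), np.array([2.0, 1.5, 3.0])
aT, bT = np.array([120., 55., 25.]), np.array([1.8, 1.4, 3.2])
da, db = lmdi_additive(a0, b0, aT, bT)
print(da + db, (aT * bT - a0 * b0).sum())  # identical up to rounding
```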

  3. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may eventually exceed it. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
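
    As a baseline illustration of why DNA-specific compressors outperform general-purpose ones (SeqCompress itself goes further, driving an arithmetic coder with a statistical model), the four-letter alphabet already packs losslessly at 2 bits per base. A minimal sketch with hypothetical helper names:

```python
from typing import Tuple

CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
BASE = 'ACGT'

def pack(seq: str) -> Tuple[bytes, int]:
    """Pack an ACGT string at 2 bits per base (baseline, no modeling)."""
    out, acc, nbits = bytearray(), 0, 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))  # left-justify the final partial byte
    return bytes(out), len(seq)

def unpack(data: bytes, n: int) -> str:
    """Inverse of pack()."""
    chars = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            chars.append(BASE[(byte >> shift) & 3])
    return ''.join(chars[:n])

seq = "ACGTACGTGGCCTTAA"
packed, n = pack(seq)
assert unpack(packed, n) == seq
print(len(seq), "bases ->", len(packed), "bytes")
```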

  4. FDG decomposition products

    International Nuclear Information System (INIS)

    Macasek, F.; Buriova, E.

    2004-01-01

    In this presentation the authors report the results of an analysis of the decomposition products of [18F]fluorodeoxyglucose. It is concluded that liquid chromatography-mass spectrometry with electrospray ionisation is a suitable tool for the quantitative analysis of the FDG radiopharmaceutical, i.e. assay of the basic components (FDG, glucose), impurities (Kryptofix) and decomposition products (gluconic and glucuronic acids etc.); 2-[18F]fluoro-deoxyglucose (FDG) is sufficiently stable and resistant towards autoradiolysis; the content of radiochemical impurities (2-[18F]fluoro-gluconic and 2-[18F]fluoro-glucuronic acids) in expired FDG did not exceed 1%.

  5. Simplified Eigen-structure decomposition solver for the simulation of two-phase flow systems

    International Nuclear Information System (INIS)

    Kumbaro, Anela

    2012-01-01

    This paper discusses the development of a new solver for a system of first-order non-linear differential equations that model the dynamics of compressible two-phase flow. The solver presents a lower-complexity alternative to Roe-type solvers because it makes use of only partial eigen-structure information while maintaining accuracy: the outcome is hence a good complexity-tractability trade-off, relevant in a large number of situations in two-phase flow numerical simulation. A number of numerical and physical benchmarks are presented to assess the solver. Comparison between the computational results from the simplified eigen-structure decomposition solver and the conventional Roe-type solver gives insight into the issues of accuracy, robustness and efficiency. (authors)

  6. Efficient non-linear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations

    KAUST Repository

    Carlberg, Kevin

    2010-10-28

    A Petrov-Galerkin projection method is proposed for reducing the dimension of a discrete non-linear static or dynamic computational model in view of enabling its processing in real time. The right reduced-order basis is chosen to be invariant and is constructed using the Proper Orthogonal Decomposition method. The left reduced-order basis is selected to minimize the two-norm of the residual arising at each Newton iteration. Thus, this basis is iteration-dependent, enables capturing of non-linearities, and leads to the globally convergent Gauss-Newton method. To avoid the significant computational cost of assembling the reduced-order operators, the residual and action of the Jacobian on the right reduced-order basis are each approximated by the product of an invariant, large-scale matrix, and an iteration-dependent, smaller one. The invariant matrix is computed using a data compression procedure that meets proposed consistency requirements. The iteration-dependent matrix is computed to enable the least-squares reconstruction of some entries of the approximated quantities. The results obtained for the solution of a turbulent flow problem and several non-linear structural dynamics problems highlight the merit of the proposed consistency requirements. They also demonstrate the potential of this method to significantly reduce the computational cost associated with high-dimensional non-linear models while retaining their accuracy. © 2010 John Wiley & Sons, Ltd.
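
    A dense-algebra caricature of the projection described above: the right basis V is fixed, and each Gauss-Newton step minimizes the two-norm of the full-order residual over the reduced coordinates, which is where the iteration-dependent left basis comes from. A minimal sketch (the toy full-order model and the POD stand-in are ours; the paper's hyper-reduction of the operators is omitted):

```python
import numpy as np

def lspg_solve(residual, jacobian, V, q0, tol=1e-10, max_iter=50):
    """Least-squares Petrov-Galerkin: minimize ||r(V q)|| by Gauss-Newton.

    residual, jacobian: full-order r(x) and J(x) = dr/dx.
    V: right reduced-order basis (n x k). The iteration-dependent left
    basis is implicit in the least-squares solve with J(x) V.
    """
    q = q0.copy()
    for _ in range(max_iter):
        x = V @ q
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        Jv = jacobian(x) @ V                     # action of the Jacobian on V
        dq, *_ = np.linalg.lstsq(Jv, -r, rcond=None)
        q += dq
    return q

# Tiny full-order model: nonlinear system A x + x**3 - b = 0.
n, k = 50, 5
rng = np.random.default_rng(1)
A = np.eye(n) * 3 + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
residual = lambda x: A @ x + x**3 - b
jacobian = lambda x: A + np.diag(3 * x**2)

V, _ = np.linalg.qr(rng.standard_normal((n, k)))  # stand-in for a POD basis
q = lspg_solve(residual, jacobian, V, np.zeros(k))
print("reduced residual norm:", np.linalg.norm(residual(V @ q)))
```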

  7. Efficient non-linear model reduction via a least-squares Petrov-Galerkin projection and compressive tensor approximations

    KAUST Repository

    Carlberg, Kevin; Bou-Mosleh, Charbel; Farhat, Charbel

    2010-01-01

    A Petrov-Galerkin projection method is proposed for reducing the dimension of a discrete non-linear static or dynamic computational model in view of enabling its processing in real time. The right reduced-order basis is chosen to be invariant and is constructed using the Proper Orthogonal Decomposition method. The left reduced-order basis is selected to minimize the two-norm of the residual arising at each Newton iteration. Thus, this basis is iteration-dependent, enables capturing of non-linearities, and leads to the globally convergent Gauss-Newton method. To avoid the significant computational cost of assembling the reduced-order operators, the residual and action of the Jacobian on the right reduced-order basis are each approximated by the product of an invariant, large-scale matrix, and an iteration-dependent, smaller one. The invariant matrix is computed using a data compression procedure that meets proposed consistency requirements. The iteration-dependent matrix is computed to enable the least-squares reconstruction of some entries of the approximated quantities. The results obtained for the solution of a turbulent flow problem and several non-linear structural dynamics problems highlight the merit of the proposed consistency requirements. They also demonstrate the potential of this method to significantly reduce the computational cost associated with high-dimensional non-linear models while retaining their accuracy. © 2010 John Wiley & Sons, Ltd.

  8. Dynamic compressive response of wrought and additive manufactured 304L stainless steels

    Directory of Open Access Journals (Sweden)

    Nishida Erik

    2015-01-01

    Full Text Available Additive manufacturing (AM) technology has been developed to fabricate metal components, including complex prototypes, small lot production, precision repair or feature addition, and tooling. However, the mechanical response of AM materials is a concern when meeting requirements for specific applications. Differences between AM and wrought materials might be expected, due to possible differences in porosity (voids), grain size, and residual stress levels. When AM materials are designed for impact applications, the dynamic mechanical properties in both compression and tension need to be fully characterized and understood for reliable designs. In this study, a 304L stainless steel was manufactured with AM technology. For comparison purposes, both the AM and wrought 304L stainless steels were dynamically characterized in compression using Kolsky bar techniques. The dynamic compressive stress-strain curves were obtained and the strain rate effects were determined for both the AM and wrought 304L stainless steels. A comprehensive comparison of the dynamic compressive response between the AM and wrought 304L stainless steels was performed. SAND2015-0993 C.

  9. Management intensity alters decomposition via biological pathways

    Science.gov (United States)

    Wickings, Kyle; Grandy, A. Stuart; Reed, Sasha; Cleveland, Cory

    2011-01-01

    Current conceptual models predict that changes in plant litter chemistry during decomposition are primarily regulated by both initial litter chemistry and the stage (or extent) of mass loss. Far less is known about how variations in decomposer community structure (e.g., resulting from different ecosystem management types) could influence litter chemistry during decomposition. Given the recent agricultural intensification occurring globally and the importance of litter chemistry in regulating soil organic matter storage, our objectives were to determine the potential effects of agricultural management on plant litter chemistry and decomposition rates, and to investigate possible links between ecosystem management, litter chemistry and decomposition, and decomposer community composition and activity. We measured decomposition rates, changes in litter chemistry, extracellular enzyme activity, microarthropod communities, and bacterial versus fungal relative abundance in replicated conventional-till, no-till, and old field agricultural sites for both corn and grass litter. After one growing season, litter decomposition under conventional-till was 20% greater than in old field communities. However, decomposition rates in no-till were not significantly different from those in old field or conventional-till sites. After decomposition, grass residue in both conventional- and no-till systems was enriched in total polysaccharides relative to initial litter, while grass litter decomposed in old fields was enriched in nitrogen-bearing compounds and lipids. These differences corresponded with differences in decomposer communities, which also exhibited strong responses to both litter and management type. Overall, our results indicate that agricultural intensification can increase litter decomposition rates, alter decomposer communities, and influence litter chemistry in ways that could have important and long-term effects on soil organic matter dynamics. We suggest that future

  10. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    /topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at the rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-drive modeling, and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.

  11. Photochemical decomposition of catecholamines

    International Nuclear Information System (INIS)

    Mol, N.J. de; Henegouwen, G.M.J.B. van; Gerritsma, K.W.

    1979-01-01

    During photochemical decomposition (λ = 254 nm), adrenaline, isoprenaline and noradrenaline in aqueous solution were converted to the corresponding aminochromes in yields of 65, 56 and 35%, respectively. In determining this conversion, the photochemical instability of the aminochromes was taken into account. Irradiations were performed in solutions dilute enough that neglect of the inner filter effect is permissible. Furthermore, quantum yields for the decomposition of the aminochromes in aqueous solution are given. (Author)

  12. Investigating hydrogel dosimeter decomposition by chemical methods

    International Nuclear Information System (INIS)

    Jordan, Kevin

    2015-01-01

    The chemical oxidative decomposition of leucocrystal violet micelle hydrogel dosimeters was investigated using the reaction of ferrous ions with hydrogen peroxide or sodium bicarbonate with hydrogen peroxide. The second reaction is more effective at dye decomposition in gelatin hydrogels. Additional chemical analysis is required to determine the decomposition products

  13. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, were used to test this algorithm. The normalized mean-square error (NMSE) on the difference image, defined as the difference between the original and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images were measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
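
    A small helper for the global quality measure used throughout; the exact normalization in the dissertation may differ, and this sketch normalizes the energy of the difference image by that of the original:

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error between an image and its reconstruction:
    energy of the difference image over energy of the original."""
    orig = np.asarray(original, dtype=float)
    diff = orig - np.asarray(reconstructed, dtype=float)
    return float(np.sum(diff ** 2) / np.sum(orig ** 2))

img = np.random.default_rng(0).random((512, 512))
print(nmse(img, img + 0.01))  # a small perturbation gives a small NMSE
```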

  14. Three-dimensional decomposition models for carbon productivity

    International Nuclear Information System (INIS)

    Meng, Ming; Niu, Dongxiao

    2012-01-01

    This paper presents decomposition models for the change in carbon productivity, which is considered a key indicator that reflects the contributions to the control of greenhouse gases. Carbon productivity differential was used to indicate the beginning of decomposition. After integrating the differential equation and designing the Log Mean Divisia Index equations, a three-dimensional absolute decomposition model for carbon productivity was derived. Using this model, the absolute change of carbon productivity was decomposed into a summation of the absolute quantitative influences of each industrial sector, for each influence factor (technological innovation and industrial structure adjustment) in each year. Furthermore, the relative decomposition model was built using a similar process. Finally, these models were applied to demonstrate the decomposition process in China. The decomposition results reveal several important conclusions: (a) technological innovation plays a far more important role than industrial structure adjustment; (b) industry and export trade exhibit great influence; (c) assigning the responsibility for CO 2 emission control to local governments, optimizing the structure of exports, and eliminating backward industrial capacity are highly essential to further increase China's carbon productivity. -- Highlights: ► Using the change of carbon productivity to measure a country's contribution. ► Absolute and relative decomposition models for carbon productivity are built. ► The change is decomposed to the quantitative influence of three-dimension. ► Decomposition results can be used for improving a country's carbon productivity.

  15. Thermic decomposition of biphenyl; Decomposition thermique du biphenyle

    Energy Technology Data Exchange (ETDEWEB)

    Lutz, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1966-03-01

    Liquid and vapour phase pyrolysis of very pure biphenyl, obtained by methods described in the text, was carried out at 400 °C in sealed ampoules, the fraction transformed always being less than 0.1 per cent. The main products were hydrogen, benzene, terphenyls, and a deposit of polyphenyls strongly adhering to the walls. Small quantities of the lower aliphatic hydrocarbons were also found. The variation of the yields of these products with a) the pyrolysis time, b) the state (gas or liquid) of the biphenyl, and c) the pressure of the vapour was measured. Varying the area and nature of the walls showed that, in the absence of a liquid phase, the pyrolytic decomposition takes place in the adsorbed layer, and that metallic walls promote the reaction more actively than do those of glass (pyrex or silica). A mechanism is proposed to explain the results pertaining to this decomposition in the adsorbed phase. The adsorption seems to obey a Langmuir isotherm, and the chemical act which determines the overall rate of decomposition is unimolecular. (author)

  16. Primary decomposition of torsion R[X]-modules

    Directory of Open Access Journals (Sweden)

    William A. Adkins

    1994-01-01

    Full Text Available This paper is concerned with studying hereditary properties of primary decompositions of torsion R[X]-modules M which are torsion free as R-modules. Specifically, if an R[X]-submodule of M is pure as an R-submodule, then the primary decomposition of M determines a primary decomposition of the submodule. This is a generalization of the classical fact from linear algebra that a diagonalizable linear transformation on a vector space restricts to a diagonalizable linear transformation of any invariant subspace. Additionally, primary decompositions are considered under direct sums and tensor product.
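
    In the classical special case alluded to (assuming R = k is a field, so k[X] is a PID and M is a finite-dimensional k-space with X acting as an operator T), the primary decomposition reads as follows; the paper's hereditary statement restricts it to pure submodules:

```latex
% With \operatorname{ann}(M) = (p_1^{e_1} \cdots p_r^{e_r}):
M \;=\; \bigoplus_{i=1}^{r} M_{p_i},
\qquad
M_{p_i} \;=\; \{\, m \in M : p_i^{e_i} m = 0 \,\} \;=\; \ker p_i(T)^{e_i}.
% Hereditary property from the abstract: for an R[X]-submodule N that is
% pure as an R-submodule,  N = \bigoplus_i \,(N \cap M_{p_i}).
```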

  17. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    Science.gov (United States)

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
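
    Accumulated degree days are just a running sum of daily mean temperatures (commonly floored at a 0 °C base); a TBS regression yields an ADD estimate, and scanning this running sum converts it into a day count. A minimal sketch with invented temperatures:

```python
def accumulated_degree_days(daily_mean_temps_c, base=0.0):
    """Running sum of daily mean temperatures above `base` (in degrees C).

    Scanning the returned series for a TBS-derived ADD estimate converts
    it into a postmortem interval in days.
    """
    total, series = 0.0, []
    for t in daily_mean_temps_c:
        total += max(t - base, 0.0)   # days below base contribute nothing
        series.append(total)
    return series

temps = [12.0, 15.5, 9.0, -2.0, 4.5]
print(accumulated_degree_days(temps))  # [12.0, 27.5, 36.5, 36.5, 41.0]
```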

  18. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    Science.gov (United States)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose organic matter they release greenhouse gasses such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  19. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    Full Text Available This paper deals with the compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, large size, noise occurrence, and special processing algorithms. They belong to the class of scientific images, whose processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and searching for the optical transients of GRB (gamma ray bursts). This paper discusses an approach based on an analysis of the statistical properties of the image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the prediction coefficients used. Finally, a comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC), based on adaptive median regression.
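
    A compact sketch of the first method, KLT with uniform quantization, on non-overlapping blocks; the block size, retained-coefficient count, and quantization step below are illustrative, and the paper's entropy-coding stage is not reproduced.

```python
import numpy as np

def klt_compress(img, block=8, keep=16, step=8.0):
    """KLT on non-overlapping blocks with uniform quantization.

    The transform basis is estimated from the image itself (eigenvectors
    of the block covariance); `keep` coefficients are retained per block
    and quantized with step `step`. Returns the reconstruction.
    """
    h, w = (d - d % block for d in img.shape)
    x = img[:h, :w].reshape(h // block, block, w // block, block)
    vecs = x.transpose(0, 2, 1, 3).reshape(-1, block * block).astype(float)

    mean = vecs.mean(axis=0)
    cov = np.cov(vecs - mean, rowvar=False)
    _, basis = np.linalg.eigh(cov)              # columns are eigenvectors
    basis = basis[:, ::-1][:, :keep]            # top-`keep` components

    coeff = (vecs - mean) @ basis
    coeff = np.round(coeff / step) * step       # uniform quantization
    rec = coeff @ basis.T + mean
    out = rec.reshape(h // block, w // block, block, block)
    return out.transpose(0, 2, 1, 3).reshape(h, w)

img = np.random.default_rng(2).random((64, 64)) * 255
rec = klt_compress(img)
print("RMSE:", np.sqrt(np.mean((img - rec) ** 2)))
```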

  20. FPGA Implementation of Real-Time Compressive Sensing with Partial Fourier Dictionary

    Directory of Open Access Journals (Sweden)

    Yinghui Quan

    2016-01-01

    Full Text Available This paper presents a novel real-time compressive sensing (CS) reconstruction which employs a high-density field-programmable gate array (FPGA) for hardware acceleration. Traditionally, CS can be implemented using a high-level computer language on a personal computer (PC) or on multicore platforms, such as graphics processing units (GPUs) and digital signal processors (DSPs). However, reconstruction algorithms are computationally demanding, and software implementations of these algorithms are extremely slow and power consuming. In this paper, the orthogonal matching pursuit (OMP) algorithm is refined to solve the sparse decomposition optimization for a partial Fourier dictionary, which is widely adopted in radar imaging and detection applications. OMP reconstruction can be divided into two main stages: an optimization stage which finds the most closely correlated vectors, and a least-squares problem. For a large-scale dictionary, the implementation of the correlation is time consuming since it often requires a large number of matrix multiplications. Solving the least-squares problem likewise needs a scalable matrix decomposition operation. To solve these problems efficiently, the correlation optimization is implemented by the fast Fourier transform (FFT) and the large-scale least-squares problem is implemented by the Conjugate Gradient (CG) technique, respectively. The proposed method is verified by an FPGA (Xilinx Virtex-7 XC7VX690T) realization, revealing its effectiveness in real-time applications.
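
    The two refinements are easy to mirror in NumPy: for a partial Fourier dictionary A = F[rows, :], the correlation A^H r collapses to a single inverse FFT. A minimal sketch (the small least-squares solve is done directly here, where the hardware maps it to CG; sizes and seeds are illustrative):

```python
import numpy as np

def omp_partial_fourier(y, rows, N, sparsity):
    """OMP with a partial-Fourier dictionary A = F[rows, :] (F: N-point DFT).

    The correlation step A^H r is done with one inverse FFT instead of an
    explicit matrix product; the small least-squares problem is solved
    directly (the paper maps it onto Conjugate Gradient in hardware).
    """
    F = np.exp(-2j * np.pi * np.outer(rows, np.arange(N)) / N)  # A itself
    support, x = [], np.zeros(N, complex)
    r = y.copy()
    for _ in range(sparsity):
        z = np.zeros(N, complex)
        z[rows] = r
        corr = N * np.fft.ifft(z)              # equals A^H r
        corr[support] = 0                      # never reselect an atom
        support.append(int(np.argmax(np.abs(corr))))
        As = F[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        r = y - As @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(3)
N, m, k = 256, 64, 3
rows = np.sort(rng.choice(N, m, replace=False))
true = np.zeros(N, complex)
true[rng.choice(N, k, replace=False)] = rng.standard_normal(k) + 5
y = np.exp(-2j * np.pi * np.outer(rows, np.arange(N)) / N) @ true
x = omp_partial_fourier(y, rows, N, k)
print("support recovered:", np.sort(np.nonzero(np.abs(x) > 1e-6)[0]))
```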

  1. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the problems of low compression ratio and high communication energy consumption in wireless-network microseismic monitoring, this paper proposes a subsection (segmented) compression algorithm based on the characteristics of microseismic signals and compressed sensing (CS) theory, applied during transmission. The algorithm segments the collected data according to the number of nonzero elements; reducing the number of nonzero-element combinations within each segment improves the accuracy of signal reconstruction, while the compressive sensing framework yields a high compression ratio. Experimental results show that, using the quantum chaos immune clone refactoring (Q-CSDR) reconstruction algorithm, for signals with a sparsity above 40 and compression ratios greater than 0.4, the mean square error is less than 0.01, prolonging the network life by a factor of two.

  2. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...

  3. A Simulation-based Randomized Controlled Study of Factors Influencing Chest Compression Depth

    Directory of Open Access Journals (Sweden)

    Kelsey P. Mayrand

    2015-12-01

    Full Text Available Introduction: Current resuscitation guidelines emphasize a systems approach with a strong emphasis on quality cardiopulmonary resuscitation (CPR. Despite the American Heart Association (AHA emphasis on quality CPR for over 10 years, resuscitation teams do not consistently meet recommended CPR standards. The objective is to assess the impact on chest compression depth of factors including bed height, step stool utilization, position of the rescuer’s arms and shoulders relative to the point of chest compression, and rescuer characteristics including height, weight, and gender. Methods: Fifty-six eligible subjects, including physician assistant students and first-year emergency medicine residents, were enrolled and randomized to intervention (bed lowered and step stool readily available and control (bed raised and step stool accessible, but concealed groups. We instructed all subjects to complete all interventions on a high-fidelity mannequin per AHA guidelines. Secondary end points included subject arm angle, height, weight group, and gender. Results: Using an intention to treat analysis, the mean compression depths for the intervention and control groups were not significantly different. Subjects positioning their arms at a 90-degree angle relative to the sagittal plane of the mannequin’s chest achieved a mean compression depth significantly greater than those compressing at an angle less than 90 degrees. There was a significant correlation between using a step stool and achieving the correct shoulder position. Subject height, weight group, and gender were all independently associated with compression depth. Conclusion: Rescuer arm position relative to the patient’s chest and step stool utilization during CPR are modifiable factors facilitating improved chest compression depth.

  4. An investigation on thermal decomposition of DNTF-CMDB propellants

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Wei; Wang, Jiangning; Ren, Xiaoning; Zhang, Laying; Zhou, Yanshui [Xi' an Modern Chemistry Research Institute, Xi' an 710065 (China)

    2007-12-15

    The thermal decomposition of DNTF-CMDB propellants was investigated by pressure differential scanning calorimetry (PDSC) and thermogravimetry (TG). The results show that there is only one decomposition peak on DSC curves, because the decomposition peak of DNTF cannot be separated from that of the NC/NG binder. The decomposition of DNTF can be obviously accelerated by the decomposition products of the NC/NG binder. The kinetic parameters of thermal decompositions for four DNTF-CMDB propellants at 6 MPa were obtained by the Kissinger method. It is found that the reaction rate decreases with increasing content of DNTF. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
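
    The Kissinger method extracts the activation energy from the shift of the DSC peak temperature with heating rate via ln(β/Tp²) = ln(AR/Ea) − Ea/(R·Tp). A minimal sketch with hypothetical peak temperatures (not the paper's data):

```python
import numpy as np

R = 8.314  # J/(mol K)

def kissinger(betas, peak_temps):
    """Fit ln(beta / Tp^2) against 1/Tp; the slope is -Ea/R.

    betas: heating rates (K/min); peak_temps: DSC peak temperatures (K).
    Returns the activation energy Ea in kJ/mol and the pre-exponential
    factor A (in the same time units as beta).
    """
    betas, Tp = np.asarray(betas, float), np.asarray(peak_temps, float)
    slope, intercept = np.polyfit(1.0 / Tp, np.log(betas / Tp**2), 1)
    Ea = -slope * R
    A = np.exp(intercept) * Ea / R
    return Ea / 1000.0, A

# Hypothetical peak temperatures at four heating rates (illustration only).
print(kissinger([2, 5, 10, 20], [468.0, 477.5, 485.0, 493.0]))
```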

  5. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression seemed significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  6. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Directory of Open Access Journals (Sweden)

    Khairi Nor Asilah

    2017-01-01

    Full Text Available An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in a table. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. Besides that, it also produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  7. Study on data compression algorithm and its implementation in portable electronic device for Internet of Things applications

    Science.gov (United States)

    Asilah Khairi, Nor; Bahari Jambek, Asral

    2017-11-01

    An Internet of Things (IoT) device is usually powered by a small battery, which does not last long. As a result, saving energy in IoT devices has become an important issue. Since radio communication is the primary cause of power consumption, some researchers have proposed compression algorithms with the purpose of overcoming this particular problem. Several data compression algorithms from previous reference papers are discussed in this paper. The descriptions of the compression algorithms in the reference papers were collected and summarized in a table. From the analysis, the MAS compression algorithm was selected as the project prototype due to its high potential for meeting the project requirements. Besides that, it also produced better performance regarding energy saving, memory usage, and data transmission efficiency. This method is also suitable for implementation in WSNs. The MAS compression algorithm will be prototyped and applied in portable electronic devices for Internet of Things applications.

  8. Thermal decomposition process of silver behenate

    International Nuclear Information System (INIS)

    Liu Xianhao; Lu Shuxia; Zhang Jingchang; Cao Weiliang

    2006-01-01

    The thermal decomposition processes of silver behenate have been studied by infrared spectroscopy (IR), X-ray diffraction (XRD), combined thermogravimetry-differential thermal analysis-mass spectrometry (TG-DTA-MS), transmission electron microscopy (TEM) and UV-vis spectroscopy. The TG-DTA and the higher temperature IR and XRD measurements indicated that complicated structural changes took place while heating silver behenate, but there were two distinct thermal transitions. During the first transition at 138 °C, the alkyl chains of silver behenate were transformed from an ordered into a disordered state. During the second transition at about 231 °C, a structural change took place for silver behenate, which was the decomposition of silver behenate. The major products of the thermal decomposition of silver behenate were metallic silver and behenic acid. Upon heating up to 500 °C, the final product of the thermal decomposition was metallic silver. The combined TG-MS analysis showed that the gas products of the thermal decomposition of silver behenate were carbon dioxide, water, hydrogen, acetylene and some small molecule alkenes. TEM and UV-vis spectroscopy were used to investigate the process of the formation and growth of metallic silver nanoparticles

  9. Transport in aluminized RDX under shock compression explored using molecular dynamics simulations

    International Nuclear Information System (INIS)

    Losada, M; Chaudhuri, S

    2014-01-01

    Shock response of energetic materials is controlled by a combination of mechanical response, thermal, transport, and chemical properties. How these properties interplay in condensed-phase energetic materials is of fundamental interest for improving predictive capabilities. Due to the unknown nature of the chemistry during the evolution and growth of high-temperature regions within the energetic material (so-called hot spots), the connection between reactive and unreactive equations of state contains a high degree of empiricism. The chemistry in materials with a high degree of heterogeneity, such as aluminized high explosives (HE), is of particular interest. In order to identify shock compression states and transport properties in high-pressure/high-temperature (HP-HT) conditions, we use molecular dynamics (MD) simulations in conjunction with the multi-scale shock technique (MSST). Mean square displacement calculations enabled us to track the diffusivity of stable gas products. Among the decomposition products, H2O and CO2 are found to be the dominant diffusing species under compression conditions. Heat transport and diffusion rates in decomposed RDX are compared, and the comparison shows that around 2000 K transport can be a major contribution during propagation of the reaction front.
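
    The diffusivities mentioned are obtained from mean square displacements through the Einstein relation, D = slope(MSD)/6 in three dimensions. A minimal single-origin sketch on a synthetic random walk (production analyses average over multiple time origins and use the actual MD trajectory):

```python
import numpy as np

def diffusion_coefficient(positions, dt):
    """Einstein relation: D = slope of MSD(t) divided by 6 in 3D.

    positions: (frames, atoms, 3) unwrapped trajectory; dt: frame spacing.
    A single time origin is used here for brevity.
    """
    disp = positions - positions[0]                  # displacement from t=0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)       # average over atoms
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[1:], msd[1:], 1)[0]
    return slope / 6.0

# Synthetic random walk as a stand-in for product-molecule trajectories.
rng = np.random.default_rng(4)
steps = rng.standard_normal((2000, 100, 3)) * 0.05
traj = np.cumsum(steps, axis=0)
print("D ~", diffusion_coefficient(traj, dt=0.001))
```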

  10. Association of engineering geologists 32nd annual meeting

    International Nuclear Information System (INIS)

    Anon.

    1989-01-01

    This book contains the proceedings of the 32nd Annual Meeting of the Association of Engineering Geologists. Included are the following articles: Engineering geology, a tool in petroleum exploration ventures; The soil headspace survey method as an indicator of soil and groundwater contamination by petroleum products; and Determination of compressive strength of coal for pillar design.

  11. Local Fractional Adomian Decomposition and Function Decomposition Methods for Laplace Equation within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Sheng-Ping Yan

    2014-01-01

    Full Text Available We perform a comparison between the local fractional Adomian decomposition and local fractional function decomposition methods applied to the Laplace equation. The operators are taken in the local sense. The results illustrate the significant features of the two methods which are both very effective and straightforward for solving the differential equations with local fractional derivative.

  12. Constructive quantum Shannon decomposition from Cartan involutions

    Energy Technology Data Exchange (ETDEWEB)

    Drury, Byron; Love, Peter [Department of Physics, 370 Lancaster Ave., Haverford College, Haverford, PA 19041 (United States)], E-mail: plove@haverford.edu

    2008-10-03

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions.

  13. Constructive quantum Shannon decomposition from Cartan involutions

    International Nuclear Information System (INIS)

    Drury, Byron; Love, Peter

    2008-01-01

    The work presented here extends upon the best known universal quantum circuit, the quantum Shannon decomposition proposed by Shende et al (2006 IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 25 1000). We obtain the basis of the circuit's design in a pair of Cartan decompositions. This insight gives a simple constructive factoring algorithm in terms of the Cartan involutions corresponding to these decompositions

  14. Guidelines for clinical studies with compression devices in patients with venous disorders of the lower limb.

    Science.gov (United States)

    Rabe, E; Partsch, H; Jünger, M; Abel, M; Achhammer, I; Becker, F; Cornu-Thenard, A; Flour, M; Hutchinson, J; Issberner, K; Moffatt, Ch; Pannier, F

    2008-04-01

    The scientific quality of published clinical trials is generally poor in studies where compression devices have been assessed in the management of venous disease. The authors' aim was to establish a set of guidelines which could be used in the design of future clinical trials of compression treatments for venous diseases. Consensus conference leading to a consensus statement. The authors formed an expert consensus group known as the International Compression Club (ICC). This group obtained published medical literature in the field of compression treatment of venous disease by searching medical literature databases. The literature was studied by the group, which attended a consensus meeting. A draft document was circulated to ICC members and revised until agreement between contributors was reached. The authors have prepared a set of guidelines which should be given consideration when conducting studies to assess the efficacy of compression in venous disease. The form of compression therapy, including the comparators used in the clinical study, must be clearly characterised. In future studies the characteristics of the material provided by the manufacturer should be described, including in vivo data on the pressure and stiffness of the final compression system. The pressure exerted on the distal lower leg should be stated in mmHg and the method of pressure determination must be quoted.

  15. In situ study of glasses decomposition layer

    International Nuclear Information System (INIS)

    Zarembowitch-Deruelle, O.

    1997-01-01

    The aim of this work is to understand the mechanisms involved in the decomposition of glasses by water and the consequences for the morphology of the decomposition layer, in particular in the case of a nuclear glass: the R7T7. Because the chemical composition of this glass is very complicated, it is difficult to isolate the influence of the different elements on the decomposition kinetics and on the resulting morphology, since several atoms behave similarly. Glasses with simplified compositions (only 5 elements) were therefore synthesized. The morphological and structural characteristics of these glasses are given. They were then decomposed by water. The leaching curves do not reflect the decomposition kinetics but rather the solubility of the different elements at each moment. The three steps of the leaching are: 1) de-alkalinization 2) lattice rearrangement 3) heavy elements solubilization. Two decomposition layer types have also been revealed, depending on the heavy-element content of the glass. (O.M.)

  16. Decomposition studies of group 6 hexacarbonyl complexes. Pt. 2. Modelling of the decomposition process

    Energy Technology Data Exchange (ETDEWEB)

    Usoltsev, Ilya; Eichler, Robert; Tuerler, Andreas [Paul Scherrer Institut (PSI), Villigen (Switzerland); Bern Univ. (Switzerland)

    2016-11-01

    The decomposition behavior of group 6 metal hexacarbonyl complexes (M(CO)6) in a tubular flow reactor is simulated. A microscopic Monte-Carlo based model is presented for assessing the first bond dissociation enthalpy of M(CO)6 complexes. The suggested approach superimposes a microscopic model of gas adsorption chromatography with a first-order heterogeneous decomposition model. The experimental data on the decomposition of Mo(CO)6 and W(CO)6 are successfully simulated by introducing available thermodynamic data. Thermodynamic data predicted by relativistic density functional theory are used in our model to deduce the most probable experimental behavior of the corresponding Sg carbonyl complex. Thus, the design of a chemical experiment with Sg(CO)6 is suggested, which is sensitive enough to benchmark our theoretical understanding of the bond stability in carbonyl compounds of the heaviest elements.
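
    A minimal sketch of the superposition described, under invented parameters: exponential wall-residence times derived from an assumed adsorption enthalpy compete with a first-order Arrhenius decomposition channel, and the Monte Carlo estimate is the fraction of molecules surviving the column. All numerical values below are illustrative, not fitted to Mo/W/Sg data.

```python
import numpy as np

R = 8.314  # J/(mol K)

def survival_fraction(T, n_sites, n_mols, dHads, k0, Ea, seed=0):
    """Monte Carlo: gas chromatography with first-order wall decomposition.

    Each molecule makes `n_sites` wall encounters; each adsorption lasts an
    exponential residence time with mean tau0 * exp(-dHads / (R T))
    (dHads < 0), and decomposition during a stay of length t occurs with
    probability 1 - exp(-k t), where k = k0 * exp(-Ea / (R T)).
    """
    rng = np.random.default_rng(seed)
    tau0 = 1e-13                                  # lattice vibration period (s)
    tau = tau0 * np.exp(-dHads / (R * T))         # mean residence time
    k = k0 * np.exp(-Ea / (R * T))                # decomposition rate constant
    stays = rng.exponential(tau, size=(n_mols, n_sites))
    p_survive = np.exp(-k * stays).prod(axis=1)   # survive every encounter
    return (rng.random(n_mols) < p_survive).mean()

for T in (300, 350, 400):
    print(T, "K ->", survival_fraction(T, 1000, 2000, -50e3, 1e13, 70e3))
```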

  17. Spatial domain decomposition for neutron transport problems

    International Nuclear Information System (INIS)

    Yavuz, M.; Larsen, E.W.

    1989-01-01

    A spatial Domain Decomposition method is proposed for modifying the Source Iteration (SI) and Diffusion Synthetic Acceleration (DSA) algorithms for solving discrete ordinates problems. The method, which consists of subdividing the spatial domain of the problem and performing the transport sweeps independently on each subdomain, has the advantage of being parallelizable because the calculations in each subdomain can be performed on separate processors. In this paper we describe the details of this spatial decomposition and study, by numerical experimentation, the effect of this decomposition on the SI and DSA algorithms. Our results show that the spatial decomposition has little effect on the convergence rates until the subdomains become optically thin (less than about a mean free path in thickness)

  18. Advanced compressed hydrogen fuel storage systems

    International Nuclear Information System (INIS)

    Jeary, B.

    2000-01-01

    Dynetek was established in 1991 by a group of private investors, and since that time efforts have been focused on designing, improving, manufacturing and marketing advanced compressed fuel storage systems. The primary market for Dynetek fuel systems has been natural gas; however, as the automotive industry investigates the possibility of using hydrogen as the fuel source in alternative energy vehicles, there is a growing demand for on-board hydrogen storage. Dynetek is striving to meet the needs of the industry by working towards a fuel storage system that will be efficient, economical, lightweight and eventually capable of storing enough hydrogen to match the driving range of current gasoline-fueled vehicles.

  19. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    Science.gov (United States)

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.

  20. Microbiological decomposition of bagasse after radiation pasteurization

    International Nuclear Information System (INIS)

    Ito, Hitoshi; Ishigaki, Isao

    1987-01-01

    Microbiological decomposition of bagasse was studied for upgrading to animal feeds after radiation pasteurization. Solid-state culture media of bagasse were prepared with the addition of some inorganic salts as a nitrogen source and, after irradiation, were inoculated with fungi for cultivation. In this study, many kinds of cellulolytic fungi such as Pleurotus ostreatus, P. flavellatus, Verticillium sp., Coprinus cinereus, Lentinus edodes, Aspergillus niger, Trichoderma koningi and T. viride were used to compare the decomposition of crude fibers. In alkali-nontreated bagasse, P. ostreatus, P. flavellatus, C. cinereus and Verticillium sp. could decompose 25 to 34 % of the crude fibers after one month of cultivation, whereas other fungi such as A. niger, T. koningi, T. viride and L. edodes decomposed below 10 %. By contrast, alkali treatment enhanced the decomposition of crude fiber by A. niger, T. koningi and T. viride to 29 to 47 %, on a par with the Pleurotus species or C. cinereus. Other mushroom species such as L. edodes showed little decomposition ability even after alkali treatment. Radiation treatment with 10 kGy did not enhance the decomposition of bagasse compared with steam treatment, whereas higher radiation doses slightly enhanced the decomposition of crude fibers by microorganisms. (author)

  2. Self-decomposition of radiochemicals. Principles, control, observations and effects

    International Nuclear Information System (INIS)

    Evans, E.A.

    1976-01-01

    The aim of the booklet is to remind the established user of radiochemicals of the problems of self-decomposition and to inform those investigators who are new to the applications of radiotracers. The section headings are: introduction; radionuclides; mechanisms of decomposition; effects of temperature; control of decomposition; observations of self-decomposition (sections for compounds labelled with (a) carbon-14, (b) tritium, (c) phosphorus-32, (d) sulphur-35, (e) gamma- or X-ray emitting radionuclides, decomposition of labelled macromolecules); effects of impurities in radiotracer investigations; stability of labelled compounds during radiotracer studies. (U.K.)

  3. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    Science.gov (United States)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on-board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here, goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher-level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and, for the first one that is true, the corresponding decomposition is executed in order to achieve the higher-level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest-level decompositions include servo control loops and finite state machines for generating control signals and sequencing I/O. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react
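
    The scheme above maps naturally onto a small interpreter. The following is a hypothetical minimal sketch of such an executor; the class and field names are inventions for illustration, not the flight software:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

State = dict  # mutable blackboard of system/environment state

@dataclass
class Decomposition:
    gate: Callable[[State], bool]      # gating condition
    subgoals: List["Goal"]             # executed sequentially here

@dataclass
class Goal:
    name: str
    activation: Callable[[State], bool]                 # global condition
    decompositions: List[Decomposition] = field(default_factory=list)
    action: Optional[Callable[[State], bool]] = None    # primitive behaviour

    def execute(self, state: State) -> bool:
        """Return success/failure, propagated up the hierarchy."""
        if not self.activation(state):
            return False
        if self.action is not None:          # leaf goal: act directly
            return self.action(state)
        for dec in self.decompositions:      # first true gate wins
            if dec.gate(state):
                return all(g.execute(state) for g in dec.subgoals)
        return False

# Toy usage: decompose "restore power" into orient-then-charge.
state: State = {"sun_pointed": False, "battery": 0.2}
orient = Goal("orient", lambda s: True,
              action=lambda s: s.update(sun_pointed=True) or True)
charge = Goal("charge", lambda s: s["sun_pointed"],
              action=lambda s: s.update(battery=1.0) or True)
power = Goal("power", lambda s: s["battery"] < 0.9,
             decompositions=[Decomposition(lambda s: True, [orient, charge])])
print(power.execute(state), state)
```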

  4. BETTER FINGERPRINT IMAGE COMPRESSION AT LOWER BIT-RATES: AN APPROACH USING MULTIWAVELETS WITH OPTIMISED PREFILTER COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    N R Rema

    2017-08-01

    In this paper, a multiwavelet-based fingerprint compression technique using the set partitioning in hierarchical trees (SPIHT) algorithm with optimised prefilter coefficients is proposed. While wavelet-based progressive compression techniques give a blurred image at lower bit rates due to the lack of high-frequency information, multiwavelets can be used efficiently to represent high-frequency information. The SA4 (symmetric-antisymmetric) multiwavelet, when combined with SPIHT, reduces the number of nodes during initialization to one quarter of that of SPIHT with a scalar wavelet. This reduction in nodes leads to an improvement in PSNR at lower bit rates. The PSNR can be further improved by optimizing the prefilter coefficients; in this work a genetic algorithm (GA) is used for the optimization. Using the proposed technique, there is a considerable improvement in PSNR at lower bit rates compared to existing techniques in the literature. An overall average improvement of 4.23 dB and 2.52 dB for bit rates between 0.01 and 1 has been achieved for the images in the databases FVC 2000 DB1 and FVC 2002 DB3, respectively. The quality of the reconstructed image is better even at higher compression ratios such as 80:1 and 100:1, and the level of decomposition required for a multiwavelet is lower than for a wavelet.

  5. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime to maximize the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee compression errors within the user-specified error bounds. Most importantly, our optimization improves the compression factor effectively, by up to 49% for hard-to-compress data sets, with similar compression/decompression time cost.
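
    The XOR-leading-zero idea is easy to demonstrate in isolation. A minimal generic sketch, not the authors' optimized implementation: when two consecutive values are close, the XOR of their raw bit patterns begins with a long run of zeros, and a suitable shifting offset lengthens that run:

```python
import struct

def bits(x: float) -> int:
    """Raw 64-bit pattern of an IEEE-754 double."""
    return struct.unpack(">Q", struct.pack(">d", x))[0]

def xor_leading_zeros(a: float, b: float) -> int:
    diff = bits(a) ^ bits(b)
    return 64 if diff == 0 else 64 - diff.bit_length()

# Close consecutive samples share many leading bits ...
print(xor_leading_zeros(3.14159, 3.14160))   # long zero run: cheap to encode
# ... while a sign flip destroys the run entirely.
print(xor_leading_zeros(3.14159, -2.71828))  # 0 leading zeros
```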

  6. Magic Coset Decompositions

    CERN Document Server

    Cacciatori, Sergio L; Marrani, Alessio

    2013-01-01

    By exploiting a "mixed" non-symmetric Freudenthal-Rozenfeld-Tits magic square, two types of coset decompositions are analyzed for the non-compact special Kähler symmetric rank-3 coset E7(-25)/[(E6(-78) x U(1))/Z_3], occurring in supergravity as the vector multiplets' scalar manifold in N=2, D=4 exceptional Maxwell-Einstein theory. The first decomposition exhibits maximal manifest covariance, whereas the second (triality-symmetric) one is of Iwasawa type, with maximal SO(8) covariance. Generalizations to conformal non-compact, real forms of non-degenerate, simple groups "of type E7" are presented for both classes of coset parametrizations, and relations to rank-3 simple Euclidean Jordan algebras and normed trialities over division algebras are also discussed.

  7. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min⁻¹ in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min⁻¹ vs. 312(13) compressions at 160 min⁻¹, P<0.001) and compression duty-cycle (43(6)% at 80 min⁻¹ vs. 50(7)% at 160 min⁻¹, P<0.001). This was at the cost of a significant reduction in compression depth (39.5(10) mm at 80 min⁻¹ vs. 34.5(11) mm at 160 min⁻¹, P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min⁻¹ vs. 40 s at 160 min⁻¹, P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min⁻¹ vs. 70% at 160 min⁻¹). Rates above 120 min⁻¹ had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min⁻¹ for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  8. JANNAF 17th Propulsion Systems Hazards Subcommittee Meeting. Volume 1

    Science.gov (United States)

    Cocchiaro, James E. (Editor); Gannaway, Mary T. (Editor); Rognan, Melanie (Editor)

    1998-01-01

    Volume 1, the first of two volumes, is a compilation of 16 unclassified/unlimited technical papers presented at the 17th meeting of the Joint Army-Navy-NASA-Air Force (JANNAF) Propulsion Systems Hazards Subcommittee (PSHS), held jointly with the 35th Combustion Subcommittee (CS) and Airbreathing Propulsion Subcommittee (APS). The meeting was held on 7-11 December 1998 at Raytheon Systems Company and the Marriott Hotel, Tucson, AZ. Topics covered include projectile and shaped-charge jet impact vulnerability of munitions; thermal decomposition and cookoff behavior of energetic materials; damage and hot-spot initiation mechanisms in energetic materials; detonation phenomena of solid energetic materials; and hazard classification, insensitive munitions, and propulsion systems safety.

  9. Kinetics of thermal decomposition of aluminium hydride: I-non-isothermal decomposition under vacuum and in inert atmosphere (argon)

    International Nuclear Information System (INIS)

    Ismail, I.M.K.; Hawkins, T.

    2005-01-01

    Recently, interest in aluminium hydride (alane) as a rocket propulsion ingredient has been renewed due to improvements in its manufacturing process and an increase in thermal stability. When alane is added to solid propellant formulations, rocket performance is enhanced and the specific impulse increases. Preliminary work was performed at AFRL on the characterization and evaluation of two alane samples. Decomposition kinetics were determined from gravimetric TGA data and volumetric vacuum thermal stability (VTS) results. Chemical analysis showed the samples contained 88.30% (by weight) aluminium and 9.96% hydrogen. The average density, as measured by helium pycnometry, was 1.486 g/cc. Scanning electron microscopy showed that the particles were mostly composed of sharp-edged crystallographic polyhedra such as simple cubes, cubic octahedra and hexagonal prisms. Thermogravimetric analysis was utilized to investigate the decomposition kinetics of alane in an argon atmosphere and to shed light on the mechanism of alane decomposition. Two kinetic models were successfully developed and used to propose a mechanism for the complete decomposition of alane and to predict its shelf-life during storage. Alane decomposes in two steps. The slower (rate-determining) step is solely controlled by solid-state nucleation of aluminium crystals; the faster step is due to growth of the crystals. Thus, during decomposition, hydrogen gas is liberated and the initial polyhedral AlH₃ crystals yield a final mix of amorphous aluminium and aluminium crystals. After establishing the kinetic model, prediction calculations indicated that alane can be stored in an inert atmosphere at temperatures below 10 °C for long periods of time (e.g., 15 years) without significant decomposition. After 15 years of storage, the kinetic model predicts ∼0.1% decomposition, but storage at higher temperatures (e.g. 30 °C) is not recommended.

  10. Real-time lossless data compression techniques for long-pulse operation

    International Nuclear Information System (INIS)

    Jesus Vega, J.; Sanchez, E.; Portas, A.; Pereira, A.; Ruiz, M.

    2006-01-01

    Data logging and data distribution will be two main tasks connected with data handling in ITER. Data logging refers to the recovery and ultimate storage of all data, independent of the data source. Control and physics data distribution is related, on the one hand, to on-line data broadcasting for immediate data availability for both data analysis and data visualization. On the other hand, delayed analyses require off-line data access. Due to the large data volume expected, data compression will be mandatory in order to save storage and bandwidth. On-line data distribution in a long-pulse environment requires a deterministic approach to ensure a proper response time for data availability. An essential feature for all the above purposes, however, is to apply compression techniques that ensure the recovery of the initial signals without spectral distortion when the compacted data are expanded (lossless techniques). Delta compression methods are independent of the analogue characteristics of waveforms, and a variety of implementations have been applied to the databases of several fusion devices such as Alcator, JET and TJ-II, among others. Delta compression is carried out in a two-step algorithm. The first step consists of a delta calculation, i.e. the computation of the differences between the digital codes of adjacent signal samples. The resultant deltas are then encoded according to constant- or variable-length bit allocation. Several encoding forms can be considered for the second step, and they have to satisfy a prefix-code property. However, in order to meet the requirement of on-line data distribution, the encoding forms have to be defined prior to data capture. This article reviews different lossless data compression techniques based on delta compression. In addition, the concept of cyclic delta transformation is introduced. Furthermore, comparative results concerning compression rates on different
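
    A minimal sketch of the two-step scheme, with zig-zag mapping plus a byte-oriented varint standing in for the constant/variable-length encodings compared in the article (the encoding is fixed before capture, as on-line distribution requires):

```python
def zigzag(d: int) -> int:            # map signed delta to unsigned code
    return (d << 1) ^ (d >> 63)

def unzigzag(u: int) -> int:
    return (u >> 1) ^ -(u & 1)

def compress(samples: list[int]) -> bytes:
    out, prev = bytearray(), 0
    for s in samples:
        u = zigzag(s - prev)          # step 1: delta calculation
        prev = s
        while u >= 0x80:              # step 2: prefix-free varint encoding
            out.append(0x80 | (u & 0x7F))
            u >>= 7
        out.append(u)
    return bytes(out)

def decompress(data: bytes) -> list[int]:
    samples, prev, u, shift = [], 0, 0, 0
    for byte in data:
        u |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:           # last byte of this delta
            prev += unzigzag(u)
            samples.append(prev)
            u, shift = 0, 0
    return samples

signal = [1000, 1002, 1001, 1005, 1004, 900, 1004]
enc = compress(signal)
assert decompress(enc) == signal      # lossless round trip
print(f"{len(enc)} bytes for {len(signal)} samples")
```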

  11. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.

  13. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on the mathematical modelling of the molecular and supramolecular structures of explosives and of the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires a complex strategy based on the methods and concepts of chemical physics, solid-state physics and theoretical chemistry, rather than an empirical approach.

  14. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates of 80-120/min and >120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used, with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates of 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates; rates >145/min were predicted to result in still shallower compressions. The mean compression depth for rates of 80-120/min was 4.5 cm (SE 0.06), compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm). Higher compression rates were thus associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Aridity and decomposition processes in complex landscapes

    Science.gov (United States)

    Ossola, Alessandro; Nyman, Petter

    2015-04-01

    Decomposition of organic matter is a key biogeochemical process contributing to nutrient cycles, carbon fluxes and soil development. The activity of decomposers depends on microclimate, with temperature and rainfall being major drivers. In complex terrain, the fine-scale variation in microclimate (and hence water availability) that results from slope orientation is caused by differences in incoming radiation and surface temperature. Aridity, measured as the long-term balance between net radiation and rainfall, is a metric that can be used to represent variations in water availability within the landscape. Since aridity metrics can be obtained at fine spatial scales, they could theoretically be used to investigate how decomposition processes vary across complex landscapes. In this study, four research sites were selected in tall open sclerophyll forest along an aridity gradient (Budyko dryness index ranging from 1.56 to 2.22), where microclimate, litter moisture and soil moisture were monitored continuously for one year. Litter bags were packed to estimate decomposition rates (k) using leaves of a tree species not present in the study area (Eucalyptus globulus) in order to avoid home-field advantage effects. Litter mass loss was measured to assess the activity of macro-decomposers (6 mm litter bag mesh size), meso-decomposers (1 mm mesh), microbes above ground (0.2 mm mesh) and microbes below ground (2 cm depth, 0.2 mm mesh). Four replicates of each set of bags were installed at each site, and bags were collected at 1, 2, 4, 7 and 12 months after installation. We first tested whether differences in microclimate due to slope orientation have significant effects on decomposition processes. The dryness index was then related to decomposition rates to evaluate whether small-scale variation in decomposition can be predicted using readily available information on rainfall and radiation. Decomposition rates (k), calculated by fitting single-pool negative exponential models, generally
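
    For reference, the rate constant of the single-pool negative exponential model, M(t) = M(0)·exp(−kt), can be recovered from litter-bag mass losses by a one-parameter least-squares fit; the numbers below are made-up illustrative data, not the study's measurements:

```python
import numpy as np

t = np.array([1, 2, 4, 7, 12]) / 12.0                 # collection times, years
mass_frac = np.array([0.93, 0.86, 0.74, 0.62, 0.45])  # remaining mass fraction
y = np.log(mass_frac)                                 # ln(M/M0) = -k t
k = -np.sum(t * y) / np.sum(t * t)                    # slope of line through origin
print(f"k = {k:.2f} per year")
```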

  16. Early stage litter decomposition across biomes

    Science.gov (United States)

    Ika Djukic; Sebastian Kepfer-Rojas; Inger Kappel Schmidt; Klaus Steenberg Larsen; Claus Beier; Björn Berg; Kris Verheyen; Adriano Caliman; Alain Paquette; Alba Gutiérrez-Girón; Alberto Humber; Alejandro Valdecantos; Alessandro Petraglia; Heather Alexander; Algirdas Augustaitis; Amélie Saillard; Ana Carolina Ruiz Fernández; Ana I. Sousa; Ana I. Lillebø; Anderson da Rocha Gripp; André-Jean Francez; Andrea Fischer; Andreas Bohner; Andrey Malyshev; Andrijana Andrić; Andy Smith; Angela Stanisci; Anikó Seres; Anja Schmidt; Anna Avila; Anne Probst; Annie Ouin; Anzar A. Khuroo; Arne Verstraeten; Arely N. Palabral-Aguilera; Artur Stefanski; Aurora Gaxiola; Bart Muys; Bernard Bosman; Bernd Ahrends; Bill Parker; Birgit Sattler; Bo Yang; Bohdan Juráni; Brigitta Erschbamer; Carmen Eugenia Rodriguez Ortiz; Casper T. Christiansen; E. Carol Adair; Céline Meredieu; Cendrine Mony; Charles A. Nock; Chi-Ling Chen; Chiao-Ping Wang; Christel Baum; Christian Rixen; Christine Delire; Christophe Piscart; Christopher Andrews; Corinna Rebmann; Cristina Branquinho; Dana Polyanskaya; David Fuentes Delgado; Dirk Wundram; Diyaa Radeideh; Eduardo Ordóñez-Regil; Edward Crawford; Elena Preda; Elena Tropina; Elli Groner; Eric Lucot; Erzsébet Hornung; Esperança Gacia; Esther Lévesque; Evanilde Benedito; Evgeny A. Davydov; Evy Ampoorter; Fabio Padilha Bolzan; Felipe Varela; Ferdinand Kristöfel; Fernando T. Maestre; Florence Maunoury-Danger; Florian Hofhansl; Florian Kitz; Flurin Sutter; Francisco Cuesta; Francisco de Almeida Lobo; Franco Leandro de Souza; Frank Berninger; Franz Zehetner; Georg Wohlfahrt; George Vourlitis; Geovana Carreño-Rocabado; Gina Arena; Gisele Daiane Pinha; Grizelle González; Guylaine Canut; Hanna Lee; Hans Verbeeck; Harald Auge; Harald Pauli; Hassan Bismarck Nacro; Héctor A. Bahamonde; Heike Feldhaar; Heinke Jäger; Helena C. Serrano; Hélène Verheyden; Helge Bruelheide; Henning Meesenburg; Hermann Jungkunst; Hervé Jactel; Hideaki Shibata; Hiroko Kurokawa; Hugo López Rosas; Hugo L. Rojas Villalobos; Ian Yesilonis; Inara Melece; Inge Van Halder; Inmaculada García Quirós; Isaac Makelele; Issaka Senou; István Fekete; Ivan Mihal; Ivika Ostonen; Jana Borovská; Javier Roales; Jawad Shoqeir; Jean-Christophe Lata; Jean-Paul Theurillat; Jean-Luc Probst; Jess Zimmerman; Jeyanny Vijayanathan; Jianwu Tang; Jill Thompson; Jiří Doležal; Joan-Albert Sanchez-Cabeza; Joël Merlet; Joh Henschel; Johan Neirynck; Johannes Knops; John Loehr; Jonathan von Oppen; Jónína Sigríður Þorláksdóttir; Jörg Löffler; José-Gilberto Cardoso-Mohedano; José-Luis Benito-Alonso; Jose Marcelo Torezan; Joseph C. Morina; Juan J. Jiménez; Juan Dario Quinde; Juha Alatalo; Julia Seeber; Jutta Stadler; Kaie Kriiska; Kalifa Coulibaly; Karibu Fukuzawa; Katalin Szlavecz; Katarína Gerhátová; Kate Lajtha; Kathrin Käppeler; Katie A. Jennings; Katja Tielbörger; Kazuhiko Hoshizaki; Ken Green; Lambiénou Yé; Laryssa Helena Ribeiro Pazianoto; Laura Dienstbach; Laura Williams; Laura Yahdjian; Laurel M. Brigham; Liesbeth van den Brink; Lindsey Rustad; et al.

    2018-01-01

    Through litter decomposition, enormous amounts of carbon are emitted to the atmosphere. Numerous large-scale decomposition experiments have been conducted focusing on this fundamental soil process in order to understand the controls on the terrestrial carbon transfer to the atmosphere. However, previous studies were mostly based on site-specific litter and methodologies...

  17. Optimization and kinetics decomposition of monazite using NaOH

    International Nuclear Information System (INIS)

    MV Purwani; Suyanti; Deddy Husnurrofiq

    2015-01-01

    Decomposition of monazite with NaOH has been carried out at high temperature in a furnace. The parameters studied were the NaOH/monazite ratio, temperature and decomposition time. From the decomposition experiments on 100 grams of monazite with NaOH, it can be concluded that the greater the NaOH/monazite ratio, the greater the conversion. In the temperature range studied (400-700°C), the reaction rate constant, and hence the decomposition, increases with temperature. The optimum NaOH/monazite ratio was 1.5 and the optimum time 3 hours. The relation between the NaOH/monazite ratio (x) and the conversion (y) follows the polynomial equation y = 0.1579x² − 0.2855x + 0.8301. The decomposition reaction of monazite with NaOH is second order; the relationship between temperature (T) and the reaction rate constant (k) is k = 448.541·e^(−1006.8/T), or ln k = −1006.8/T + 6.106, with frequency factor A = 448.541 and activation energy E = 8.371 kJ/mol. (author)
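
    The quoted kinetic numbers are internally consistent, as a quick check using only the values reported above shows:

```python
import math

R = 8.314                      # gas constant, J/(mol K)
A = math.exp(6.106)            # frequency factor from the intercept of ln k
E = 1006.8 * R                 # activation energy from the slope, J/mol
print(f"A = {A:.1f}   (reported: 448.541)")
print(f"E = {E / 1000:.3f} kJ/mol   (reported: 8.371)")
for T in (673.0, 973.0):       # 400 and 700 deg C
    k = A * math.exp(-1006.8 / T)
    print(f"T = {T:.0f} K -> k = {k:.1f}")
```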

  18. Combinatorial geometry domain decomposition strategies for Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, G.; Zhang, B.; Deng, L.; Mo, Z.; Liu, Z.; Shangguan, D.; Ma, Y.; Li, S.; Hu, Z. [Institute of Applied Physics and Computational Mathematics, Beijing, 100094 (China)

    2013-07-01

    Analysis and modeling of nuclear reactors can lead to memory overload for a single-core processor when it comes to refined modeling. A method to solve this problem is called 'domain decomposition'. In the current work, domain decomposition algorithms for a combinatorial geometry Monte Carlo transport code are developed on the JCOGIN (J Combinatorial Geometry Monte Carlo transport INfrastructure). Tree-based decomposition and asynchronous communication of particle information between domains are described in the paper. The combination of domain decomposition and domain replication (particle parallelism) is demonstrated and compared with that of the MERCURY code. A full-core reactor model is simulated to verify the domain decomposition algorithms using the Monte Carlo particle transport code JMCT (J Monte Carlo Transport Code), which is being developed on the JCOGIN infrastructure. In addition, the influence of the domain decomposition algorithms on tally variances is discussed. (authors)

  20. Decompositional equivalence: A fundamental symmetry underlying quantum theory

    OpenAIRE

    Fields, Chris

    2014-01-01

    Decompositional equivalence is the principle that there is no preferred decomposition of the universe into subsystems. It is shown here, by using simple thought experiments, that quantum theory follows from decompositional equivalence together with Landauer's principle. This demonstration raises within physics a question previously left to psychology: how do human - or any - observers agree about what constitutes a "system of interest"?

  1. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    Science.gov (United States)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
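
    To make the adaptive-filtering idea concrete, here is a generic sketch, not the JPL 'Fast Lossless' algorithm itself: a normalized-LMS linear predictor tracks each band, the decoder can mirror the weight updates exactly, and only the small integer residuals need entropy coding. The synthetic band data are invented:

```python
import numpy as np

def residuals(samples: np.ndarray, order: int = 3, mu: float = 0.5,
              eps: float = 1e-6) -> np.ndarray:
    """Integer prediction residuals from an adaptive (NLMS) linear predictor."""
    w = np.zeros(order)
    hist = np.zeros(order)
    res = np.empty(len(samples), dtype=np.int64)
    for n, x in enumerate(samples):
        pred = w @ hist
        res[n] = int(x - round(pred))                      # what the coder sees
        w += mu * (x - pred) * hist / (eps + hist @ hist)  # normalized LMS update
        hist = np.roll(hist, 1)
        hist[0] = x
    return res

rng = np.random.default_rng(0)
band = (1000 + 50 * np.sin(np.arange(500) / 8)
        + rng.integers(-2, 3, 500)).astype(np.int64)       # fake spectral band
r = residuals(band)
print("sample std:", band.std().round(1), "| residual std:", r.std().round(1))
```

    After a short adaptation transient the residuals are typically far more compressible than the raw samples, which is what the entropy-coding stage exploits.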

  2. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape
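
    For the adiabatic case, the flux-conserving scaling is elementary; the worked relations below assume a simple cylindrical compression and are illustrative rather than drawn from the paper:

```latex
\Phi = B_1 \pi r_1^2 = B_2 \pi r_2^2
\quad\Rightarrow\quad
B_2 = B_1 \left(\tfrac{r_1}{r_2}\right)^{2},
\qquad
\frac{W_2}{W_1} = \frac{B_2^{2}\, r_2^{2}}{B_1^{2}\, r_1^{2}}
                = \left(\tfrac{r_1}{r_2}\right)^{2},
```

    where W ∝ (B²/2μ₀)·πr² is the field energy per unit length: halving the radius quadruples both B and the stored field energy, the difference being supplied by the compressor's work.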

  3. Compression models and complexity criteria for the description and inference of music structure

    OpenAIRE

    Guichaoua, Corentin

    2017-01-01

    A very broad definition of music structure is to consider what distinguishes music from random noise as part of its structure. In this thesis, we take interest in the macroscopic aspects of music structure, especially the decomposition of musical pieces into autonomous segments (typically, sections) and their characterisation as the result of the grouping process of jointly compressible units. An important assumption of this work is to establish a link between the inference of music structure...

  5. Generalized Fisher index or Siegel-Shapley decomposition?

    International Nuclear Information System (INIS)

    De Boer, Paul

    2009-01-01

    It is generally believed that index decomposition analysis (IDA) and input-output structural decomposition analysis (SDA) [Rose, A., Casler, S., Input-output structural decomposition analysis: a critical appraisal, Economic Systems Research 1996; 8; 33-62; Dietzenbacher, E., Los, B., Structural decomposition techniques: sense and sensitivity. Economic Systems Research 1998;10; 307-323] are different approaches in energy studies; see for instance Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763]. In this paper it is shown that the generalized Fisher approach, introduced in IDA by Ang et al. [Ang, B.W., Liu, F.L., Chung, H.S., A generalized Fisher index approach to energy decomposition analysis. Energy Economics 2004; 26; 757-763] for the decomposition of an aggregate change in a variable in r = 2, 3 or 4 factors is equivalent to SDA. They base their formulae on the very complicated generic formula that Shapley [Shapley, L., A value for n-person games. In: Kuhn H.W., Tucker A.W. (Eds), Contributions to the theory of games, vol. 2. Princeton University: Princeton; 1953. p. 307-317] derived for his value of n-person games, and mention that Siegel [Siegel, I.H., The generalized 'ideal' index-number formula. Journal of the American Statistical Association 1945; 40; 520-523] gave their formulae using a different route. In this paper tables are given from which the formulae of the generalized Fisher approach can easily be derived for the cases of r = 2, 3 or 4 factors. It is shown that these tables can easily be extended to cover the cases of r = 5 and r = 6 factors. (author)
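
    For r = 2 the attribution is easy to verify numerically. A minimal sketch of the Siegel-Shapley decomposition of a two-factor aggregate V = x·y, which coincides with the generalized Fisher decomposition for r = 2; the numbers are made up:

```python
def shapley_two_factor(x0: float, y0: float, x1: float, y1: float):
    """Exact, residual-free attribution of V1 - V0 for V = x * y."""
    dx_effect = (x1 - x0) * (y0 + y1) / 2   # average over both orderings
    dy_effect = (y1 - y0) * (x0 + x1) / 2
    return dx_effect, dy_effect

x0, y0 = 100.0, 2.0   # e.g. activity level and energy intensity, period 0
x1, y1 = 130.0, 1.6   # period 1
cx, cy = shapley_two_factor(x0, y0, x1, y1)
# contributions (about 54 and -46) sum to the total change (about 8):
print(cx, cy, cx + cy, x1 * y1 - x0 * y0)
```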

  6. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  7. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
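
    The essentials of DCT-based compression of local brightness changes can be shown on a single 8x8 block; a toy sketch with a uniform quantizer standing in for the JPEG tables (the random block is a stand-in for image data):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.normal(128.0, 20.0, (8, 8))      # fake 8x8 image block

coeffs = dctn(block, norm="ortho")           # local brightness changes -> DCT
step = 20.0                                  # quantization step (coarseness)
quantized = np.round(coeffs / step)          # this is what gets entropy coded
recon = idctn(quantized * step, norm="ortho")

kept = np.count_nonzero(quantized)
print(f"nonzero coefficients: {kept}/64, "
      f"max pixel error: {np.abs(recon - block).max():.1f}")
```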

  8. Use of multiple singular value decompositions to analyze complex intracellular calcium ion signals

    KAUST Repository

    Martinez, Josue G.

    2009-12-01

    We compare calcium ion (Ca(2+)) signaling between two exposures; the data are present as movies or, more prosaically, time series of images. This paper describes novel uses of singular value decompositions (SVD) and weighted versions of them (WSVD) to extract the signals from such movies, in a way that is semi-automatic and tuned closely to the actual data and their many complexities. These complexities include the following. First, the images themselves are of no interest: all interest focuses on the behavior of individual cells across time, so the cells need to be segmented in an automated manner. Second, the cells themselves have 100+ pixels, which form 100+ curves measured over time, so data compression is required to extract the features of these curves. Third, some of the pixels in some of the cells are subject to image saturation due to bit-depth limits, and this saturation needs to be accounted for if one is to normalize the images in a reasonably unbiased manner. Finally, the Ca(2+) signals have oscillations or waves that vary with time, and these signals need to be extracted. Thus, our aim is to show how to use multiple weighted and standard singular value decompositions to detect, extract and clarify the Ca(2+) signals.
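
    The core SVD step can be sketched compactly: stack one segmented cell's pixel traces into a time-by-pixel matrix and let the leading singular triplet provide the compressed temporal signal. The synthetic "movie" below is invented for illustration (the sign of a singular vector is arbitrary, hence the absolute correlation):

```python
import numpy as np

rng = np.random.default_rng(1)
T, P = 300, 120                              # frames, pixels in one cell
t = np.arange(T)
true_signal = np.sin(2 * np.pi * t / 50)     # oscillatory Ca(2+)-like wave
loading = rng.uniform(0.5, 1.5, P)           # per-pixel brightness weights
movie = np.outer(true_signal, loading) + 0.3 * rng.normal(size=(T, P))

X = movie - movie.mean(axis=0)               # center each pixel trace
U, s, Vt = np.linalg.svd(X, full_matrices=False)
extracted = U[:, 0] * s[0]                   # dominant temporal component

corr = np.corrcoef(extracted, true_signal)[0, 1]
print(f"rank-1 energy share: {s[0]**2 / (s**2).sum():.0%}, |corr| = {abs(corr):.2f}")
```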

  9. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    Science.gov (United States)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large amounts of carbon dioxide are released to the atmosphere during organic matter decomposition. Yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). In 32% and 46% of all sites, the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
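
    One standard reactivity continuum formulation (a gamma distribution of initial reactivities, after Boudreau and Ruddick) makes the contrast with a discrete exponential pool explicit; the parameter values here are illustrative, not fitted to the LIDET data:

```python
import numpy as np

alpha, beta = 1.2, 0.8            # gamma shape and scale parameters
t = np.linspace(0.0, 10.0, 6)     # years
m_rc = (beta / (beta + t)) ** alpha     # reactivity continuum: m(t)/m(0)
m_exp = np.exp(-(alpha / beta) * t)     # single pool with same initial rate
for ti, a, b in zip(t, m_rc, m_exp):
    print(f"t = {ti:4.1f}  continuum = {a:.3f}  single-pool = {b:.3f}")
```

    The continuum curve decays quickly early on but retains mass at long times that the single-pool model loses, which is the behaviour the RC parameters are meant to capture across stages of decomposition.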

  10. Kinetic study of lithium-cadmium ternary amalgam decomposition

    International Nuclear Information System (INIS)

    Cordova, M.H.; Andrade, C.E.

    1992-01-01

    The effect of metals which form a stable lithium phase in binary alloys on the formation of intermetallic species in ternary amalgams, and their effect on thermal decomposition in contact with water, is analyzed. Cd is selected as the ternary metal, based on general experimental selection criteria. Binary Cd(Hg) amalgams are prepared by direct Cd-Hg contact, whereas Li is introduced by electrolysis of aqueous LiOH using a liquid Cd(Hg) cathodic well. The decomposition kinetics of LiCd(Hg) in contact with 0.6 M LiOH are studied as a function of ageing and temperature, and the results are compared with the decomposition of the binary amalgam Li(Hg). The decomposition rate is constant during one hour for both binary and ternary systems. Ageing does not affect the binary systems but increases the decomposition activation energy of the ternary systems. A reaction mechanism in which an intermetallic species participates in the activated complex is proposed and a kinetic law is suggested. (author)

  11. Crop residue decomposition in Minnesota biochar amended plots

    OpenAIRE

    S. L. Weyers; K. A. Spokas

    2014-01-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with ...

  12. Single image super-resolution based on compressive sensing and improved TV minimization sparse recovery

    Science.gov (United States)

    Vishnukumar, S.; Wilscy, M.

    2017-12-01

    In this paper, we propose a single-image Super-Resolution (SR) method based on Compressive Sensing (CS) and Improved Total Variation (TV) Minimization Sparse Recovery. In the CS framework, the low-resolution (LR) image is treated as a compressed version of the high-resolution (HR) image. Dictionary training and sparse recovery are the two phases of the method. The K-Singular Value Decomposition (K-SVD) method is used for dictionary training, and the dictionary represents HR image patches in a sparse manner. Here, only the interpolated version of the LR image is used for training, thereby exploiting the structural self-similarity inherent in the LR image. In the sparse recovery phase, the sparse representation coefficients of the LR image patches with respect to the trained dictionary are derived using the Improved TV Minimization method. The HR image can then be reconstructed as a linear combination of the dictionary atoms weighted by the sparse coefficients. The experimental results show that the proposed method gives better results, quantitatively as well as qualitatively, on both natural and remote sensing images. The reconstructed images have better visual quality since edges and other sharp details are preserved.
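
    As a stand-in for the Improved TV Minimization step, generic l1 sparse recovery shows the shape of the computation. The sketch below uses ISTA with a random dictionary; in the method above, a trained K-SVD dictionary would replace D and the TV-based solver would replace the soft-thresholding loop:

```python
import numpy as np

rng = np.random.default_rng(0)
n_atoms, dim, sparsity = 100, 40, 4
D = rng.normal(size=(dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms

alpha_true = np.zeros(n_atoms)
alpha_true[rng.choice(n_atoms, sparsity, replace=False)] = rng.normal(size=sparsity)
y = D @ alpha_true                              # observed (LR-derived) patch

lam = 0.05
L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant of the gradient
alpha = np.zeros(n_atoms)
for _ in range(500):                            # ISTA iterations
    grad = D.T @ (D @ alpha - y)
    z = alpha - grad / L
    alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("patch reconstruction error:", np.linalg.norm(D @ alpha - y).round(4))
```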

  13. Excimer laser decomposition of silicone

    International Nuclear Information System (INIS)

    Laude, L.D.; Cochrane, C.; Dicara, Cl.; Dupas-Bruzek, C.; Kolev, K.

    2003-01-01

    Excimer laser irradiation of silicone foils is shown in this work to induce decomposition, ablation and activation of such materials. Thin (100 μm) laminated silicone foils are irradiated at 248 nm as a function of impacting laser fluence and number of pulsed irradiations at 1 s intervals. Above a threshold fluence of 0.7 J/cm², the material starts decomposing. At higher fluences, this decomposition develops and gives rise to (i) swelling of the irradiated surface and then (ii) emission of matter (ablation) at a rate that is not proportional to the number of pulses. Taking into consideration the polymer structure and the foil lamination process, these results help define the phenomenology of silicone ablation. The polymer decomposition yields two parts: one that is organic and volatile, and another that is inorganic and remains, forming an ever-thickening screen against light penetration as the number of light pulses increases. A mathematical model is developed that successfully accounts for this physical screening effect.
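
    The screening effect can be caricatured with a toy model. The record does not give the authors' equations, so everything below (attenuation coefficient, residue yield, ablation law) is an assumed illustration of how a growing inorganic screen makes the ablated depth sub-linear in the pulse count:

```python
import math

F0 = 2.0               # incident fluence, J/cm2 (assumed)
F_TH = 0.7             # ablation threshold from the abstract, J/cm2
MU = 8.0               # screen attenuation coefficient, 1/um (assumed)
RESIDUE_PER_UM = 0.05  # um of inorganic screen left per um ablated (assumed)

screen = depth = 0.0
for pulse in range(1, 201):
    f = F0 * math.exp(-MU * screen)      # Beer-Lambert attenuation by the screen
    if f <= F_TH:
        break                            # the screen now blocks ablation
    dz = 0.1 * math.log(f / F_TH)        # blow-off ablation law, um (assumed)
    depth += dz
    screen += RESIDUE_PER_UM * dz
    if pulse in (1, 10, 50, 200):
        print(f"pulse {pulse:3d}: depth = {depth:.2f} um, transmitted = {f:.2f} J/cm2")
```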

  14. 1.7. Acid decomposition of kaolin clays of Ziddi Deposit. 1.7.1. The hydrochloric acid decomposition of kaolin clays and siallites

    International Nuclear Information System (INIS)

    Mirsaidov, U.M.; Mirzoev, D.Kh.; Boboev, Kh.E.

    2016-01-01

    This chapter of the book is devoted to the hydrochloric acid decomposition of kaolin clays and siallites. The chemical composition of the kaolin clays and siallites was determined. The influence of temperature, process duration and acid concentration on the hydrochloric acid decomposition of kaolin clays and siallites was studied, and the optimal decomposition conditions were determined.

  15. Domain decomposition parallel computing for transient two-phase flow of nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [KAERI, Daejeon (Korea, Republic of); Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-05-15

    KAERI (Korea Atomic Energy Research Institute) has been developing a multi-dimensional two-phase flow code named CUPID for multi-physics and multi-scale thermal-hydraulics analysis of light water reactors (LWRs). The CUPID code has been validated against a set of conceptual problems and experimental data. In this work, the CUPID code has been parallelized based on the domain decomposition method with the Message Passing Interface (MPI) library. For domain decomposition, the CUPID code provides both manual and automatic methods, the latter using the METIS library. For effective memory management, the compressed sparse row (CSR) format is adopted, a standard representation for sparse asymmetric matrices that stores only the non-zero values and their positions (row and column). By performing verification on the fundamental problem set, the parallelization of CUPID has been successfully confirmed. Since the scalability of a parallel simulation is generally known to be better for fine mesh systems, three different scales of mesh system are considered: 40000 meshes for the coarse mesh system, 320000 meshes for the mid-size mesh system, and 2560000 meshes for the fine mesh system. In the given geometry, both single- and two-phase calculations were conducted. In addition, two types of preconditioner for the matrix solver were compared: diagonal and incomplete LU. To further enhance parallel performance, hybrid OpenMP/MPI parallel computing for the pressure solver was examined; the scalability of the hybrid calculation was indeed enhanced for multi-core parallel computation.
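
    The CSR layout itself is tiny to demonstrate: three arrays replace the dense matrix, and a matrix-vector product (the kernel inside a pressure solver) touches only the stored non-zeros. A generic illustration, not CUPID code:

```python
import numpy as np

A = np.array([[4.0, 0.0, 0.0, 1.0],
              [0.0, 3.0, 2.0, 0.0],
              [0.0, 0.0, 5.0, 0.0]])

values, col_idx, row_ptr = [], [], [0]   # the three CSR arrays
for row in A:
    for j, v in enumerate(row):
        if v != 0.0:
            values.append(v)
            col_idx.append(j)
    row_ptr.append(len(values))          # where each row's non-zeros end

def csr_matvec(x):
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):              # y = A @ x using CSR arrays only
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(csr_matvec(x), A @ x)
print(values, col_idx, row_ptr)
```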

  16. Multi hollow needle to plate plasmachemical reactor for pollutant decomposition

    International Nuclear Information System (INIS)

    Pekarek, S.; Kriha, V.; Viden, I.; Pospisil, M.

    2001-01-01

    A modification of the classical multipin-to-plate plasmachemical reactor for pollutant decomposition is proposed in this paper. In this modified reactor, the mixture of air and pollutant flows through the needles, in contrast to the classical reactor, where the mixture flows around the pins, or through the channel as well as through the hollow needles. We compare the toluene decomposition efficiency of (a) a reactor with the main stream of the mixture through the channel around the needles and a small flow rate through the needles and (b) the modified reactor. It was found that for similar flow rates and similar energy deposition, the decomposition efficiency for toluene was more than six times higher in the modified reactor. The new modified reactor was also experimentally tested for the decomposition of volatile hydrocarbons in the gasoline distillation range. An average VOC decomposition efficiency of about 25% was reached. However, significant differences in the decomposition of the various hydrocarbon types were observed. The best results were obtained for the decomposition of olefins (reaching 90%) and methyl tert-butyl ether (about 50%). Moreover, the number of carbon atoms in the molecule affects the degree of VOC decomposition. (author)

  17. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine (DICOM) format to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression of emergent CT findings in 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a level of -200 HU and a width of 1,500 HU, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.
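
    The windowing step is straightforward to reproduce: with level −200 HU and width 1,500 HU, Hounsfield values are clipped to [−950, 550] and rescaled to 8-bit display values before the video codec sees them. A sketch (the sample HU values are arbitrary):

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float = -200.0,
                 width: float = 1500.0) -> np.ndarray:
    lo, hi = level - width / 2.0, level + width / 2.0   # -950 .. +550 HU
    scaled = (np.clip(hu, lo, hi) - lo) / (hi - lo)
    return np.round(scaled * 255.0).astype(np.uint8)    # 8-bit gray levels

hu = np.array([-1000.0, -950.0, -200.0, 0.0, 550.0, 2000.0])  # air .. dense bone
print(apply_window(hu))   # air saturates to 0, dense bone to 255
```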

  18. Advanced Oxidation: Oxalate Decomposition Testing With Ozone

    International Nuclear Information System (INIS)

    Ketusky, E.; Subramanian, K.

    2012-01-01

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground liquid radioactive waste tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. These include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered well demonstrated. In addition, as AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration) after nearing

  19. ADVANCED OXIDATION: OXALATE DECOMPOSITION TESTING WITH OZONE

    Energy Technology Data Exchange (ETDEWEB)

    Ketusky, E.; Subramanian, K.

    2012-02-29

    At the Savannah River Site (SRS), oxalic acid is currently considered the preferred agent for chemically cleaning the large underground Liquid Radioactive Waste Tanks. It is applied only in the final stages of emptying a tank, when generally less than 5,000 kg of waste solids remain and slurrying-based removal methods are no longer effective. The use of oxalic acid is preferred because of its combined dissolution and chelating properties, as well as the fact that corrosion of the carbon steel tank walls can be controlled. Although oxalic acid is the preferred agent, there are significant potential downstream impacts. Impacts include: (1) degraded evaporator operation; (2) resultant oxalate precipitates taking away critically needed operating volume; and (3) eventual creation of significant volumes of additional feed to salt processing. As an alternative to dealing with the downstream impacts, oxalate decomposition using variations of the ozone-based Advanced Oxidation Process (AOP) was investigated. In general, AOPs use ozone or peroxide and a catalyst to create hydroxyl radicals. Hydroxyl radicals have among the highest oxidation potentials and are commonly used to decompose organics. Although oxalate is considered among the most difficult organics to decompose, the ability of hydroxyl radicals to decompose oxalate is considered well demonstrated. In addition, because AOPs are considered 'green', their use enables any net chemical additions to the waste to be minimized. In order to test the ability to decompose the oxalate and determine the decomposition rates, a test rig was designed in which 10 vol% ozone would be educted into a spent oxalic acid decomposition loop, with the loop maintained at 70 °C and recirculated at 40 L/min. Each of the spent oxalic acid streams would be created from three oxalic acid strikes of an F-area simulant (i.e., Purex = high Fe/Al concentration) and an H-area simulant (i.e., H-area modified Purex = high Al/Fe concentration

  20. A handbook of decomposition methods in analytical chemistry

    International Nuclear Information System (INIS)

    Bok, R.

    1984-01-01

    Decomposition methods for metals, alloys, fluxes, slags, calcine, inorganic salts, oxides, nitrides, carbides, borides, sulfides, ores, minerals, rocks, concentrates, glasses, ceramics, organic substances, polymers, and phyto- and biological materials are described from the viewpoint of sample preparation for analysis. The methods are systematized according to the decomposition principle: thermal (including electrically heated and irradiation-based methods) and dissolution, with or without accompanying chemical reactions. Special equipment for the different decomposition methods is described. The bibliography contains 3420 references

  1. Decomposition with thermoeconomic isolation applied to the optimal synthesis/design and operation of an advanced tactical aircraft system

    International Nuclear Information System (INIS)

    Rancruel, Diego F.; Spakovsky, Michael R. von

    2006-01-01

    A decomposition methodology based on the concept of 'thermoeconomic isolation' and applied to the synthesis/design and operational optimization of an advanced tactical fighter aircraft is the focus of this paper. The total system is composed of six sub-systems, of which five participate in the optimization with 493 degrees of freedom. They are the propulsion sub-system (PS), the environmental control sub-system (ECS), the fuel loop sub-system (FLS), the vapor compression and polyalphaolefin (PAO) loops sub-system (VC/PAOS), and the airframe sub-system (AFS). The sixth sub-system comprises the expendable and permanent payloads as well as the equipment group. For each of the first five, detailed thermodynamic, geometric, physical, and aerodynamic models at both design and off-design conditions were formulated and implemented. The most promising set of aircraft sub-system and system configurations was then determined based on both an energy integration and an aerodynamic performance analysis at each stage of the mission (including the transient ones). Conceptual, time, and physical decomposition were subsequently applied to the synthesis/design and operational optimization of these aircraft configurations as well as to the highly dynamic process of heat generation and dissipation internal to the sub-systems. The physical decomposition strategy used (Iterative Local-Global Optimization, ILGO) is the first to closely approach the theoretical condition of 'thermoeconomic isolation' when applied to highly complex, highly dynamic non-linear systems. Developed at our Center for Energy Systems Research, it has been effectively applied to a number of complex stationary and transportation applications

  2. Decomposition with thermoeconomic isolation applied to the optimal synthesis/design and operation of an advanced tactical aircraft system

    Energy Technology Data Exchange (ETDEWEB)

    Rancruel, Diego F. [Center for Energy Systems Research, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24060 (United States); Spakovsky, Michael R. von [Center for Energy Systems Research, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24060 (United States)]. E-mail: vonspako@vt.edu

    2006-12-15

    A decomposition methodology based on the concept of 'thermoeconomic isolation' and applied to the synthesis/design and operational optimization of an advanced tactical fighter aircraft is the focus of this paper. The total system is composed of six sub-systems, of which five participate in the optimization with 493 degrees of freedom. They are the propulsion sub-system (PS), the environmental control sub-system (ECS), the fuel loop sub-system (FLS), the vapor compression and polyalphaolefin (PAO) loops sub-system (VC/PAOS), and the airframe sub-system (AFS). The sixth sub-system comprises the expendable and permanent payloads as well as the equipment group. For each of the first five, detailed thermodynamic, geometric, physical, and aerodynamic models at both design and off-design conditions were formulated and implemented. The most promising set of aircraft sub-system and system configurations was then determined based on both an energy integration and an aerodynamic performance analysis at each stage of the mission (including the transient ones). Conceptual, time, and physical decomposition were subsequently applied to the synthesis/design and operational optimization of these aircraft configurations as well as to the highly dynamic process of heat generation and dissipation internal to the sub-systems. The physical decomposition strategy used (Iterative Local-Global Optimization, ILGO) is the first to closely approach the theoretical condition of 'thermoeconomic isolation' when applied to highly complex, highly dynamic non-linear systems. Developed at our Center for Energy Systems Research, it has been effectively applied to a number of complex stationary and transportation applications.

  3. Decomposition of silicon carbide at high pressures and temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Daviau, Kierstin; Lee, Kanani K. M.

    2017-11-01

    We measure the onset of decomposition of silicon carbide, SiC, to silicon and carbon (e.g., diamond) at high pressures and high temperatures in a laser-heated diamond-anvil cell. We identify decomposition through x-ray diffraction and multiwavelength imaging radiometry coupled with electron microscopy analyses on quenched samples. We find that B3 SiC (also known as 3C or zinc blende SiC) decomposes at high pressures and high temperatures, following a phase boundary with a negative slope. The high-pressure decomposition temperatures measured are considerably lower than those at ambient, with our measurements indicating that SiC begins to decompose at ~ 2000 K at 60 GPa as compared to ~ 2800 K at ambient pressure. Once B3 SiC transitions to the high-pressure B1 (rocksalt) structure, we no longer observe decomposition, despite heating to temperatures in excess of ~ 3200 K. The temperature of decomposition and the nature of the decomposition phase boundary appear to be strongly influenced by the pressure-induced phase transitions to higher-density structures in SiC, silicon, and carbon. The decomposition of SiC at high pressure and temperature has implications for the stability of naturally forming moissanite on Earth and in carbon-rich exoplanets.

  4. Radiation decomposition of alcohols and chloro phenols in micellar systems

    International Nuclear Information System (INIS)

    Moreno A, J.

    1998-01-01

    The effect of surfactants on the radiation decomposition yield of alcohols and chlorophenols has been studied with gamma doses of 2, 3, and 5 kGy. These compounds were used as typical pollutants in wastewater, and the effects of water solubility, chemical structure, and the nature of the surfactant, anionic or cationic, were studied. The results show that an anionic surfactant such as sodium dodecylsulfate (SDS) improves the radiation decomposition yield of ortho-chlorophenol, while a cationic surfactant such as cetyl trimethylammonium chloride (CTAC) improves the radiation decomposition yield of butyl alcohol. A similar behavior is expected for alcohols with water solubility close to those studied. Surfactant concentrations below the critical micellar concentration (CMC) inhibited radiation decomposition for both types of alcohols; the radiation decomposition yield increased, however, when surfactant concentrations exceeded the CMC. Decomposition was more pronounced for aromatic alcohols than for linear ones. In a mixture of alcohols and chlorophenols in aqueous solution, the radiation decomposition yield decreased with increasing surfactant concentration. Nevertheless, there were competitive reactions between the alcohols, surfactant dimers, hydroxyl radicals, and other reactive species formed in water radiolysis, producing a positive catalytic effect on the decomposition of the alcohols. Chemical structure and the number of carbons were not important factors in the radiation decomposition. When an alcohol such as ortho-chlorophenol contained an additional chlorine atom, the decomposition of this compound was almost constant. In conclusion, the micellar effect depends on both the nature of the surfactant (anionic or cationic) and the chemical structure of the alcohols. The results of this study are useful for wastewater treatment plants based on the oxidizing effect of the hydroxyl radical, as in advanced oxidation processes or in combined treatments such as

  5. International magnetic pulse compression workshop: (Proceedings)

    Energy Technology Data Exchange (ETDEWEB)

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    A few individuals have tried to broaden the understanding of specific and salient pulsed-power topics. One such attempt is this documentation of a workshop on magnetic switching as it applies primarily to pulse compression (power transformation), affording a truly international perspective by its participants under the initiative and leadership of Hugh Kirbie and Mark Newton of the Lawrence Livermore National Laboratory (LLNL) and supported by other interested organizations. During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card--its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting; rather, we commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

  6. International magnetic pulse compression workshop: [Proceedings

    International Nuclear Information System (INIS)

    Kirbie, H.C.; Newton, M.A.; Siemens, P.D.

    1991-04-01

    A few individuals have tried to broaden the understanding of specific and salient pulsed-power topics. One such attempt is this documentation of a workshop on magnetic switching as it applies primarily to pulse compression (power transformation), affording a truly international perspective by its participants under the initiative and leadership of Hugh Kirbie and Mark Newton of the Lawrence Livermore National Laboratory (LLNL) and supported by other interested organizations. During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card--its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting; rather, we commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants

  7. Note on Symplectic SVD-Like Decomposition

    Directory of Open Access Journals (Sweden)

    AGOUJIL Said

    2016-02-01

    The aim of this study was to introduce a constructive method to compute a symplectic singular value decomposition (SVD)-like decomposition of a 2n-by-m rectangular real matrix A, based on symplectic reflectors. This approach uses a canonical Schur form of a skew-symmetric matrix and allows us to compute eigenvalues of structured matrices such as the Hamiltonian matrix JAA^T.
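
    The structural claim at the end can be checked numerically with generic tools: JAA^T is Hamiltonian (i.e., (JH)^T = JH), and because AA^T is positive semidefinite its eigenvalues lie on the imaginary axis. A minimal NumPy sketch of that check (not the paper's symplectic-reflector construction):

        import numpy as np

        n, m = 3, 5
        rng = np.random.default_rng(0)
        A = rng.standard_normal((2 * n, m))

        # Symplectic unit J = [[0, I], [-I, 0]]
        I = np.eye(n)
        J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

        H = J @ A @ A.T                       # Hamiltonian: J @ H = -A @ A.T is symmetric
        assert np.allclose((J @ H).T, J @ H)

        eig = np.linalg.eigvals(H)
        print("max |Re(lambda)| =", np.abs(eig.real).max())   # ~0: purely imaginary spectrum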

  8. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    Science.gov (United States)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal a large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requiring nitrogen limitation of decomposition. Second, we compare global observationally based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally based estimates of annual
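
    The core of such a protocol is comparing simulated and observed mass-remaining curves. A minimal sketch under a single-pool exponential model, with invented decay constants chosen to mimic the 'model decomposes too fast' bias reported above:

        import numpy as np

        years = np.arange(0, 11)
        k_obs, k_model = 0.35, 0.60        # hypothetical decay constants (1/yr)
        obs = np.exp(-k_obs * years)       # fraction of initial litter mass remaining
        model = np.exp(-k_model * years)

        rmse = np.sqrt(np.mean((model - obs) ** 2))
        print(f"mass remaining at 10 yr: obs {obs[-1]:.2f} vs model {model[-1]:.2f}; RMSE {rmse:.3f}")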

  9. The decomposition of estuarine macrophytes under different ...

    African Journals Online (AJOL)

    The aim of this study was to determine the decomposition characteristics of the most dominant submerged macrophyte and macroalgal species in the Great Brak Estuary. Laboratory experiments were conducted to determine the effect of different temperature regimes on the rate of decomposition of 3 macrophyte species ...

  10. Decomposition and flame structure of hydrazinium nitroformate

    NARCIS (Netherlands)

    Louwers, J.; Parr, T.; Hanson-Parr, D.

    1999-01-01

    The decomposition of hydrazinium nitroformate (HNF) was studied in a hot quartz cell and by dropping small amounts of HNF on a hot plate. The species formed during the decomposition were identified by ultraviolet-visible absorption experiments. These experiments reveal that HONO is formed first. The

  11. Spectral decomposition of tent maps using symmetry considerations

    International Nuclear Information System (INIS)

    Ordonez, G.E.; Driebe, D.J.

    1996-01-01

    The spectral decomposition of the Frobenius-Perron operator of maps composed of many tents is determined from symmetry considerations. The eigenstates involve Euler as well as Bernoulli polynomials. The authors have introduced new techniques, based on symmetry considerations, enabling the construction of spectral decompositions in a much simpler way than previous construction algorithms. Here we utilize these techniques to construct the spectral decomposition for one-dimensional maps of the unit interval composed of many tents. The construction uses knowledge of the spectral decomposition of the r-adic map, which involves Bernoulli polynomials and their duals. It will be seen that the spectral decomposition of the tent maps involves both Bernoulli polynomials and Euler polynomials, along with the appropriate dual states
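
    For reference, the Frobenius-Perron operator U of a piecewise-linear map T acts on densities by summing over preimages; in the standard (map-independent) form,

        \[ (U\rho)(x) \;=\; \sum_{y \in T^{-1}(x)} \frac{\rho(y)}{|T'(y)|} , \]

    and a spectral decomposition expands this action as \( U\rho = \sum_j \lambda_j \, \phi_j \, \langle \tilde{\psi}_j | \rho \rangle \), with the polynomial eigenstates \( \phi_j \) and dual states \( \tilde{\psi}_j \) of the kind described above.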

  12. Decomposition of forest products buried in landfills

    International Nuclear Information System (INIS)

    Wang, Xiaoming; Padgett, Jennifer M.; Powell, John S.; Barlaz, Morton A.

    2013-01-01

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g^-1 dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than
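
    The HOD summarizes carbohydrate loss relative to lignin, which is treated as recalcitrant. A rough sketch of how such an index can be computed from measured compositions (hypothetical numbers, and one plausible formulation; the paper's exact normalization may differ):

        # Compositions as mass fractions of dry material (cellulose, hemicellulose, lignin)
        c0, h0, l0 = 0.45, 0.20, 0.25      # at burial
        ct, ht, lt = 0.10, 0.05, 0.25      # at excavation

        # Holocellulose decomposition index: fractional loss of (C + H) relative to
        # lignin, assuming lignin mass is conserved during decomposition
        hod = (1.0 - ((ct + ht) / lt) / ((c0 + h0) / l0)) * 100.0
        print(f"HOD = {hod:.0f}%")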

  13. Decomposition of forest products buried in landfills

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Xiaoming, E-mail: xwang25@ncsu.edu [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Padgett, Jennifer M. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States); Powell, John S. [Department of Chemical and Biomolecular Engineering, Campus Box 7905, North Carolina State University, Raleigh, NC 27695-7905 (United States); Barlaz, Morton A. [Department of Civil, Construction, and Environmental Engineering, Campus Box 7908, North Carolina State University, Raleigh, NC 27695-7908 (United States)

    2013-11-15

    Highlights: • This study tracked chemical changes of wood and paper in landfills. • A decomposition index was developed to quantify carbohydrate biodegradation. • Newsprint biodegradation as measured here is greater than previous reports. • The field results correlate well with previous laboratory measurements. - Abstract: The objective of this study was to investigate the decomposition of selected wood and paper products in landfills. The decomposition of these products under anaerobic landfill conditions results in the generation of biogenic carbon dioxide and methane, while the un-decomposed portion represents a biogenic carbon sink. Information on the decomposition of these municipal waste components is used to estimate national methane emissions inventories, for attribution of carbon storage credits, and to assess the life-cycle greenhouse gas impacts of wood and paper products. Hardwood (HW), softwood (SW), plywood (PW), oriented strand board (OSB), particleboard (PB), medium-density fiberboard (MDF), newsprint (NP), corrugated container (CC) and copy paper (CP) were buried in landfills operated with leachate recirculation, and were excavated after approximately 1.5 and 2.5 yr. Samples were analyzed for cellulose (C), hemicellulose (H), lignin (L), volatile solids (VS), and organic carbon (OC). A holocellulose decomposition index (HOD) and carbon storage factor (CSF) were calculated to evaluate the extent of solids decomposition and carbon storage. Samples of OSB made from HW exhibited cellulose plus hemicellulose (C + H) loss of up to 38%, while loss for the other wood types was 0–10% in most samples. The C + H loss was up to 81%, 95% and 96% for NP, CP and CC, respectively. The CSFs for wood and paper samples ranged from 0.34 to 0.47 and 0.02 to 0.27 g OC g^-1 dry material, respectively. These results, in general, correlated well with an earlier laboratory-scale study, though NP and CC decomposition measured in this study were higher than

  14. Measurement of lower leg compression in vivo: recommendations for the performance of measurements of interface pressure and stiffness: consensus statement.

    Science.gov (United States)

    Partsch, Hugo; Clark, Michael; Bassez, Sophie; Benigni, Jean-Patrick; Becker, Francis; Blazek, Vladimir; Caprini, Joseph; Cornu-Thénard, André; Hafner, Jürg; Flour, Mieke; Jünger, Michael; Moffatt, Christine; Neumann, Martino

    2006-02-01

    Interface pressure and stiffness, which characterize the elastic properties of the material, are the parameters determining the dosage of compression treatment and should therefore be measured in future clinical trials. The aim of this article is to provide recommendations regarding the use of suitable methods for this indication. The article was formulated based on the results of an international consensus meeting between a group of medical experts and representatives from industry held in January 2005 in Vienna, Austria. Proposals are made concerning methods for measuring the interface pressure and for assessing the stiffness of a compression device in an individual patient. In vivo measurement of interface pressure is encouraged when clinical and experimental outcomes of compression treatment are to be evaluated.

  15. Parallel processing for pitch splitting decomposition

    Science.gov (United States)

    Barnes, Levi; Li, Yong; Wadkins, David; Biederman, Steve; Miloslavsky, Alex; Cork, Chris

    2009-10-01

    Decomposition of an input pattern in preparation for a double patterning process is an inherently global problem in which the influence of a local decomposition decision can be felt across an entire pattern. In spite of this, a large portion of the work can be massively distributed. Here, we discuss the advantages of geometric distribution for polygon operations with limited range of influence. Further, we have found that even the naturally global "coloring" step can, in large part, be handled in a geometrically local manner. In some practical cases, up to 70% of the work can be distributed geometrically. We also describe the methods for partitioning the problem into local pieces and present scaling data up to 100 CPUs. These techniques reduce DPT decomposition runtime by orders of magnitude.
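
    Geometric distribution for operations with a limited range of influence can be sketched as binning shapes into tiles padded by a halo of that range, so each tile can then be processed independently. A toy illustration in Python (hypothetical tile and halo sizes; not the authors' implementation):

        from collections import defaultdict

        def tile_of(x, y, tile):
            return (x // tile, y // tile)

        def distribute(polys, tile=1000, halo=50):
            """polys: list of (xmin, ymin, xmax, ymax) boxes. Returns tile -> polygon ids."""
            buckets = defaultdict(list)
            for i, (x0, y0, x1, y1) in enumerate(polys):
                # Enumerate every tile overlapped by the halo-expanded bounding box,
                # so each tile sees all geometry within the range of influence.
                tx0, ty0 = tile_of(x0 - halo, y0 - halo, tile)
                tx1, ty1 = tile_of(x1 + halo, y1 + halo, tile)
                for tx in range(tx0, tx1 + 1):
                    for ty in range(ty0, ty1 + 1):
                        buckets[(tx, ty)].append(i)
            return buckets

        polys = [(0, 0, 120, 80), (990, 0, 1010, 40), (2050, 2050, 2100, 2100)]
        print(dict(distribute(polys)))   # tiles near a boundary receive both neighbors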

  16. Nutrient Dynamics and Litter Decomposition in Leucaena ...

    African Journals Online (AJOL)

    Nutrient contents and rate of litter decomposition were investigated in Leucaena leucocephala plantation in the University of Agriculture, Abeokuta, Ogun State, Nigeria. Litter bag technique was used to study the pattern and rate of litter decomposition and nutrient release of Leucaena leucocephala. Fifty grams of oven-dried ...

  17. Climate fails to predict wood decomposition at regional scales

    Science.gov (United States)

    Mark A. Bradford; Robert J. Warren; Petr Baldrian; Thomas W. Crowther; Daniel S. Maynard; Emily E. Oldfield; William R. Wieder; Stephen A. Wood; Joshua R. King

    2014-01-01

    Decomposition of organic matter strongly influences ecosystem carbon storage [1]. In Earth-system models, climate is a predominant control on the decomposition rates of organic matter [2-5]. This assumption is based on the mean response of decomposition to climate, yet there is a growing appreciation in other areas of global change science that projections based on...

  18. Formation of volatile decomposition products by self-radiolysis of tritiated thymidine

    International Nuclear Information System (INIS)

    Shiba, Kazuhiro; Mori, Hirofumi

    1997-01-01

    In order to estimate the internal exposure dose in an experiment using tritiated thymidine, the rate of volatile 3H-decomposition of several tritiated thymidine samples was measured. The decomposition rate of (methyl-3H)thymidine in water was over 80% in less than one year after initial analysis. (Methyl-3H)thymidine was decomposed into volatile and non-volatile 3H-decomposition products. The ratio of volatile 3H-decomposition products increased with the rate of decomposition of (methyl-3H)thymidine. The volatile 3H-decomposition products consisted of two components, of which the main component was tritiated water. The internal exposure dose caused by inhalation of such volatile 3H-decomposition products of (methyl-3H)thymidine was assumed to be several μSv. (author)

  19. Are litter decomposition and fire linked through plant species traits?

    Science.gov (United States)

    Cornelissen, Johannes H C; Grootemaat, Saskia; Verheijen, Lieneke M; Cornwell, William K; van Bodegom, Peter M; van der Wal, René; Aerts, Rien

    2017-11-01

    Biological decomposition and wildfire are connected carbon release pathways for dead plant material: slower litter decomposition leads to fuel accumulation. Are decomposition and surface fires also connected through plant community composition, via the species' traits? Our central concept involves two axes of trait variation related to decomposition and fire. The 'plant economics spectrum' (PES) links biochemistry traits to the litter decomposability of different fine organs. The 'size and shape spectrum' (SSS) includes litter particle size and shape and their consequent effect on fuel bed structure, ventilation and flammability. Our literature synthesis revealed that PES-driven decomposability is largely decoupled from predominantly SSS-driven surface litter flammability across species; this finding needs empirical testing in various environmental settings. Under certain conditions, carbon release will be dominated by decomposition, while under other conditions litter fuel will accumulate and fire may dominate carbon release. Ecosystem-level feedbacks between decomposition and fire, for example via litter amounts, litter decomposition stage, community-level biotic interactions and altered environment, will influence the trait-driven effects on decomposition and fire. Yet, our conceptual framework, explicitly comparing the effects of two plant trait spectra on litter decomposition vs fire, provides a promising new research direction for better understanding and predicting Earth surface carbon dynamics. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  20. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  1. Thermal decomposition of lanthanide and actinide tetrafluorides

    International Nuclear Information System (INIS)

    Gibson, J.K.; Haire, R.G.

    1988-01-01

    The thermal stabilities of several lanthanide/actinide tetrafluorides have been studied using mass spectrometry to monitor the gaseous decomposition products, and powder X-ray diffraction (XRD) to identify solid products. The tetrafluorides TbF4, CmF4, and AmF4 have been found to thermally decompose to their respective solid trifluorides with accompanying release of fluorine, while cerium tetrafluoride has been found to be significantly more thermally stable and to congruently sublime as CeF4 prior to appreciable decomposition. The results of these studies are discussed in relation to other relevant experimental studies and the thermodynamics of the decomposition processes. 9 refs., 3 figs

  2. Sub-band/transform compression of video sequences

    Science.gov (United States)

    Sauer, Ken; Bauer, Peter

    1992-01-01

    The progress on compression of video sequences is discussed. The overall goal of the research was the development of data compression algorithms for high-definition television (HDTV) sequences, but most of our research is general enough to be applicable to much more general problems. We have concentrated on coding algorithms based on both sub-band and transform approaches. Two very fundamental issues arise in designing a sub-band coder. First, the form of the signal decomposition must be chosen to yield band-pass images with characteristics favorable to efficient coding. A second basic consideration, whether coding is to be done in two or three dimensions, is the form of the coders to be applied to each sub-band. Computational simplicity is of the essence. We review the first portion of the year, during which we improved and extended some of the previous grant period's results. The pyramid nonrectangular sub-band coder limited to intra-frame application is discussed. Perhaps the most critical component of the sub-band structure is the design of bandsplitting filters. We apply very simple recursive filters, which operate at alternating levels on rectangularly sampled and quincunx-sampled images. We will also cover the techniques we have studied for the coding of the resulting bandpass signals. We discuss adaptive three-dimensional coding which takes advantage of the detection algorithm developed last year. To this point, all the work on this project has been done without the benefit of motion compensation (MC). Motion compensation is included in many proposed codecs, but adds significant computational burden and hardware expense. We have sought to find a lower-cost alternative featuring a simple adaptation to motion in the form of the codec. In sequences of high spatial detail and zooming or panning, it appears that MC will likely be necessary for the proposed quality and bit rates.
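
    As a minimal illustration of the band-splitting idea (a one-level Haar split with perfect reconstruction, far simpler than the recursive quincunx filters described above):

        import numpy as np

        def haar_split(x):
            """One-level two-band split of an even-length 1-D signal."""
            lo = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (average) band
            hi = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (detail) band
            return lo, hi

        def haar_merge(lo, hi):
            x = np.empty(2 * lo.size)
            x[0::2] = (lo + hi) / np.sqrt(2)
            x[1::2] = (lo - hi) / np.sqrt(2)
            return x

        x = np.arange(8.0)
        lo, hi = haar_split(x)
        assert np.allclose(haar_merge(lo, hi), x)   # the split loses no information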

  3. Thermal decomposition of UO3·2H2O

    International Nuclear Information System (INIS)

    Flament, T.A.

    1998-01-01

    The first part of the report summarizes the literature data regarding the uranium trioxide-water system. In the second part, the experimental aspects are presented. An experimental program has been set up to determine the steps and species involved in the decomposition of uranium trioxide dihydrate. Particular attention has been paid to determining both the loss of free water (moisture in the fuel) and the loss of chemically bound water (decomposition of hydrates). The influence of water pressure on decomposition has been taken into account

  4. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  5. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in 1st Least Significant Bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., 2nd LSB to 8th Most Significant Bit (MSB), the proposed scheme has more capacity than Natural numbers based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect in terms of quality on pixel value when compared to most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
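
    As a flavor of such schemes, a Zeckendorf (Fibonacci) representation writes a pixel value as a sum of non-consecutive Fibonacci numbers, one 'virtual bit-plane' per term. A minimal sketch (illustrative only; not the 16-plane representation proposed in the paper):

        def zeckendorf_planes(v, n_planes=12):
            """Greedy Fibonacci (Zeckendorf) decomposition of a pixel value 0..255."""
            fib = [1, 2]
            while len(fib) < n_planes:
                fib.append(fib[-1] + fib[-2])
            bits = [0] * n_planes
            for i in range(n_planes - 1, -1, -1):   # take the largest term first
                if fib[i] <= v:
                    bits[i], v = 1, v - fib[i]
            return bits                             # bits[i] is the plane weighted by fib[i]

        planes = zeckendorf_planes(200)             # 200 = 144 + 55 + 1
        fib = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
        assert sum(b * f for b, f in zip(planes, fib)) == 200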

  6. Wood decomposition as influenced by invertebrates.

    Science.gov (United States)

    Ulyshen, Michael D

    2016-02-01

    The diversity and habitat requirements of invertebrates associated with dead wood have been the subjects of hundreds of studies in recent years but we still know very little about the ecological or economic importance of these organisms. The purpose of this review is to examine whether, how and to what extent invertebrates affect wood decomposition in terrestrial ecosystems. Three broad conclusions can be reached from the available literature. First, wood decomposition is largely driven by microbial activity but invertebrates also play a significant role in both temperate and tropical environments. Primary mechanisms include enzymatic digestion (involving both endogenous enzymes and those produced by endo- and ectosymbionts), substrate alteration (tunnelling and fragmentation), biotic interactions and nitrogen fertilization (i.e. promoting nitrogen fixation by endosymbiotic and free-living bacteria). Second, the effects of individual invertebrate taxa or functional groups can be accelerative or inhibitory but the cumulative effect of the entire community is generally to accelerate wood decomposition, at least during the early stages of the process (most studies are limited to the first 2-3 years). Although methodological differences and design limitations preclude meta-analysis, studies aimed at quantifying the contributions of invertebrates to wood decomposition commonly attribute 10-20% of wood loss to these organisms. Finally, some taxa appear to be particularly influential with respect to promoting wood decomposition. These include large wood-boring beetles (Coleoptera) and termites (Termitoidae), especially fungus-farming macrotermitines. The presence or absence of these species may be more consequential than species richness and the influence of invertebrates is likely to vary biogeographically. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.

  7. Radiolytic decomposition of 4-bromodiphenyl ether

    International Nuclear Information System (INIS)

    Tang Liang; Xu Gang; Wu Wenjing; Shi Wenyan; Liu Ning; Bai Yulei; Wu Minghong

    2010-01-01

    Polybrominated diphenyl ethers (PBDEs), which are widespread in the environment, are mainly removed by photochemical and anaerobic microbial degradation. In this paper, the decomposition of 4-bromodiphenyl ether (BDE-3), a PBDE homologue, is investigated by electron beam irradiation of its ethanol/water solution (reduction system) and acetonitrile/water solution (oxidation system). The radiolytic products were determined by GC coupled with an electron capture detector, and the reaction rate constant of the solvated electron (e_sol^-) in the reduction system was measured at 2.7 × 10^10 L·mol^-1·s^-1 by pulse radiolysis. The results show that the BDE-3 concentration strongly affects the decomposition ratio in alkaline solution, and that the reduction system has a higher BDE-3 decomposition rate than the oxidation system. This indicates that BDE-3 was reduced by effectively capturing e_sol^- during radiolysis. (authors)

  8. A Continuum Damage Mechanics Model to Predict Kink-Band Propagation Using Deformation Gradient Tensor Decomposition

    Science.gov (United States)

    Bergan, Andrew C.; Leone, Frank A., Jr.

    2016-01-01

    A new model is proposed that represents the kinematics of kink-band formation and propagation within the framework of a mesoscale continuum damage mechanics (CDM) model. The model uses the recently proposed deformation gradient decomposition approach to represent a kink band as a displacement jump via a cohesive interface that is embedded in an elastic bulk material. The model is capable of representing the combination of matrix failure in the frame of a misaligned fiber and instability due to shear nonlinearity. In contrast to conventional linear or bilinear strain softening laws used in most mesoscale CDM models for longitudinal compression, the constitutive response of the proposed model includes features predicted by detailed micromechanical models. These features include: 1) the rotational kinematics of the kink band, 2) an instability when the peak load is reached, and 3) a nonzero plateau stress under large strains.

  9. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  10. MA-core loaded untuned RF compression cavity for HIRFL-CSR

    International Nuclear Information System (INIS)

    Mei Lirong; Xu Zhe; Yuan Youjin; Jin Peng; Bian Zhibin; Zhao Hongwei; Xia Jiawen

    2012-01-01

    To meet the requirements of high energy density physics and plasma physics research at HIRFL-CSR, the goal of achieving a higher accelerating gap voltage was proposed. Therefore, a magnetic alloy (MA)-core loaded radio frequency (RF) cavity, which can provide a higher accelerating gap voltage than standard ferrite-loaded cavities, has been studied at IMP. In order to select the proper magnetic alloy material for the RF compression cavity, measurements of four different kinds of sample MA-cores were carried out. Testing of the small cores allowed the core composition to be selected for the desired performance. Theoretical calculation and simulation, which show reasonable consistency for the MA-core loaded cavity, indicate that the desired performance can be achieved. Finally, calculations indicate that about 1000 kW of power will be needed to reach the required 50 kV accelerating gap voltage.

  11. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
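
    A toy version of the idea, with hypothetical sizes, ±1 pseudorandom patterns standing in for the transmit/receive modulation, and a basic orthogonal matching pursuit for the sparse recovery:

        import numpy as np

        rng = np.random.default_rng(1)
        n_bins, n_meas, k = 128, 40, 3           # range bins, measurements, reflectors

        x = np.zeros(n_bins)                     # sparse range profile
        x[rng.choice(n_bins, size=k, replace=False)] = rng.uniform(0.5, 1.0, size=k)

        Phi = rng.choice([-1.0, 1.0], size=(n_meas, n_bins))   # pseudorandom binary patterns
        y = Phi @ x                              # far fewer measurements than range bins

        # Orthogonal matching pursuit: greedily pick the bin most correlated with
        # the residual, then refit all picked bins by least squares.
        support, r = [], y.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(Phi.T @ r))))
            sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            r = y - Phi[:, support] @ sol

        print("true bins:     ", sorted(np.flatnonzero(x)))
        print("recovered bins:", sorted(support))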

  12. Comparison of decomposition rates between autopsied and non-autopsied human remains.

    Science.gov (United States)

    Bates, Lennon N; Wescott, Daniel J

    2016-04-01

    Penetrating trauma has been cited as a significant factor in the rate of decomposition. Therefore, penetrating trauma may have an effect on estimations of time-since-death in medicolegal investigations and on research examining decomposition rates and processes when autopsied human bodies are used. The goal of this study was to determine if there are differences in the rate of decomposition between autopsied and non-autopsied human remains in the same environment. The purpose is to shed light on how large incisions, such as those from a thoracoabdominal autopsy, affect time-since-death estimations and research on the rate of decomposition that uses both autopsied and non-autopsied human remains. In this study, 59 non-autopsied and 24 autopsied bodies were studied. The number of accumulated degree days (ADD) required to reach each decomposition stage was then compared between autopsied and non-autopsied remains. Additionally, both types of bodies were examined for seasonal differences in decomposition rates. As temperature affects the rate of decomposition, this study also compared the internal body temperatures of autopsied and non-autopsied remains to see if differences between the two may be leading to differential decomposition. For this portion of the study, eight non-autopsied and five autopsied bodies were investigated. Internal temperature was collected once a day for two weeks. The results showed that differences in the decomposition rate between autopsied and non-autopsied remains were not statistically significant, though the average ADD needed to reach each stage of decomposition was slightly lower for autopsied bodies than non-autopsied bodies. There was also no significant difference between autopsied and non-autopsied bodies in the rate of decomposition by season or in internal temperature. Therefore, this study suggests that it is unnecessary to separate autopsied and non-autopsied remains when studying gross stages of human decomposition in Central Texas
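
    Accumulated degree days are conventionally computed by summing daily mean temperatures above a base (often 0 °C) from placement until the stage of interest. A minimal sketch with made-up temperatures:

        # Hypothetical daily mean temperatures (deg C) since placement
        daily_means = [22.5, 24.0, 19.8, 21.3, 25.1, 26.0]

        # ADD: sum of daily means above a 0 deg C base; colder days contribute nothing
        add = sum(max(t, 0.0) for t in daily_means)
        print(f"ADD after {len(daily_means)} days = {add:.1f} degree-days")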

  13. The platinum catalysed decomposition of hydrazine in acidic media

    International Nuclear Information System (INIS)

    Ananiev, A.V.; Tananaev, I.G.; Brossard, Ph.; Broudic, J.C.

    2000-01-01

    A kinetic study of hydrazine decomposition in solutions of HClO4, H2SO4 and HNO3 in the presence of a Pt/SiO2 catalyst has been undertaken. It was shown that the kinetics of the hydrazine catalytic decomposition in HClO4 and H2SO4 are identical. The process is determined by the heterogeneous catalytic auto-decomposition of N2H4 on the catalyst's surface. The platinum-catalysed hydrazine decomposition in nitric acid solutions is a complex process, including heterogeneous catalytic auto-decomposition of N2H4, reaction of hydrazine with catalytically generated nitrous acid, and the catalytic oxidation of hydrazine by nitric acid. The kinetic parameters of these reactions have been determined. The contribution of each reaction to the total process is determined by the liquid phase composition and by the temperature. (authors)

  14. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  15. Generalized decompositions of dynamic systems and vector Lyapunov functions

    Science.gov (United States)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  16. In situ XAS of the solvothermal decomposition of dithiocarbamate complexes

    NARCIS (Netherlands)

    Islam, H.-U.; Roffey, A.; Hollingsworth, N.; Catlow, R.; Wolthers, M.; de Leeuw, N.H.; Bras, W.; Sankar, G.; Hogarth, G.

    2012-01-01

    An in situ XAS study of the solvothermal decomposition of iron and nickel dithiocarbamate complexes was performed in order to gain understanding of the decomposition mechanisms. This work has given insight into the steps involved in the decomposition, showing variation in reaction pathways between

  17. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Keyes, David E.

    2016-01-01

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former
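
    The SVD route is direct: if A = WSV^T, then A = (WV^T)(VSV^T) is its polar decomposition, with WV^T orthogonal and VSV^T symmetric positive semidefinite. A minimal NumPy sketch (not the distributed QDWH implementation studied in the paper):

        import numpy as np

        def polar(A):
            """Polar decomposition A = U @ P via the SVD."""
            W, s, Vt = np.linalg.svd(A, full_matrices=False)
            U = W @ Vt                        # orthogonal factor
            P = Vt.T @ np.diag(s) @ Vt        # symmetric positive semidefinite factor
            return U, P

        A = np.random.default_rng(2).standard_normal((4, 4))
        U, P = polar(A)
        assert np.allclose(U @ P, A) and np.allclose(U.T @ U, np.eye(4))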

  18. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika; Amato, Nancy M.; Lu, Yanyan; Lien, Jyh-Ming

    2013-01-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  19. Fast approximate convex decomposition using relative concavity

    KAUST Repository

    Ghosh, Mukulika

    2013-02-01

    Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c + 1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31]. © 2012 Elsevier Ltd. All rights reserved.

  20. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  1. Climate history shapes contemporary leaf litter decomposition

    Science.gov (United States)

    Michael S. Strickland; Ashley D. Keiser; Mark A. Bradford

    2015-01-01

    Litter decomposition is mediated by multiple variables, of which climate is expected to be a dominant factor at global scales. However, like other organisms, traits of decomposers and their communities are shaped not just by the contemporary climate but also their climate history. Whether or not this affects decomposition rates is underexplored. Here we source...

  2. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  3. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    Science.gov (United States)

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  4. Decomposition of dioxin analogues and ablation study for carbon nanotube

    International Nuclear Information System (INIS)

    Yamauchi, Toshihiko

    2002-01-01

    Two application studies associated with the free electron laser are presented separately, under the titles 'Decomposition of Dioxin Analogues' and 'Ablation Study for Carbon Nanotube'. The decomposition of dioxin analogues by infrared (IR) laser irradiation involves both thermal destruction and multiple-photon dissociation; choosing a strongly absorbed laser wavelength is important for efficient decomposition. Thermal decomposition takes place under irradiation at low IR laser power. Based on a model of the thermal decomposition, it is proposed that adjacent water molecules assist the decomposition of dioxin analogues in addition to the thermal decomposition driven by direct laser absorption. The laser ablation study is performed with the aim of carbon nanotube synthesis. The vapor produced by ablation is weakly ionized at powers of several hundred megawatts. In an enclosed gas the plasma retains its internal energy about 8.5 times longer than in vacuum. Clusters were produced from the weakly ionized gas in the enclosed atmosphere; low laser power yields coarse particles, whereas high laser power yields fine particles. (J.P.N.)

  5. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Jae-Hyung Yoo; Eung-Ho Kim

    1999-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved in dilute nitric acid. This work was carried out as part of a study on the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for mutual separation of the actinide and lanthanide groups. In this study, therefore, experimental work on the photochemical decomposition of oxalate was carried out to prove its feasibility as a step in the partitioning process. The decomposition of oxalic acid in the presence of nitric acid was performed first in order to understand the mechanistic behaviour of oxalate destruction, and then the decomposition of neodymium oxalate, chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was examined. The decomposition rate of neodymium oxalate was found to be 0.003 mol/h at 0.5 M HNO₃ and room temperature when a mercury lamp was used as the light source. (author)

  6. Abstract decomposition theorem and applications

    CERN Document Server

    Grossberg, R; Grossberg, Rami; Lessmann, Olivier

    2005-01-01

    Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), excellent classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new).

  7. Forest products decomposition in municipal solid waste landfills

    International Nuclear Information System (INIS)

    Barlaz, Morton A.

    2006-01-01

    Cellulose and hemicellulose are present in paper and wood products and are the dominant biodegradable polymers in municipal waste. While their conversion to methane in landfills is well documented, there is little information on the rate and extent of decomposition of individual waste components, particularly under field conditions. Such information is important for the landfill carbon balance as methane is a greenhouse gas that may be recovered and converted to a CO₂-neutral source of energy, while non-degraded cellulose and hemicellulose are sequestered. This paper presents a critical review of research on the decomposition of cellulosic wastes in landfills and identifies additional work that is needed to quantify the ultimate extent of decomposition of individual waste components. Cellulose to lignin ratios as low as 0.01-0.02 have been measured for well decomposed refuse, with corresponding lignin concentrations of over 80% due to the depletion of cellulose and resulting enrichment of lignin. Only a few studies have even tried to address the decomposition of specific waste components at field-scale. Long-term controlled field experiments with supporting laboratory work will be required to measure the ultimate extent of decomposition of individual waste components

  8. The trait contribution to wood decomposition rates of 15 Neotropical tree species.

    Science.gov (United States)

    van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C

    2010-12-01

    The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. In particular, the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr⁻¹. We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R² = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R² = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R² = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merit further testing in other forest ecosystems.
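
    The reported models are ordinary least-squares fits of decomposition rate on species traits. The sketch below reproduces only the form of such a two-predictor fit (rate ~ dbh + lignin) on synthetic numbers; the coefficients and data are invented for illustration and are not the study's.

```python
# Two-predictor linear model of the form reported above, fit by ordinary
# least squares on made-up data (dbh and lignin values are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 15
dbh = rng.uniform(10, 60, n)           # tree diameter (cm), hypothetical
lignin = rng.uniform(15, 35, n)        # lignin concentration (%), hypothetical
k = 0.4 - 0.004 * dbh - 0.005 * lignin + rng.normal(0, 0.02, n)  # rate, yr^-1

X = np.column_stack([np.ones(n), dbh, lignin])
beta, *_ = np.linalg.lstsq(X, k, rcond=None)
k_hat = X @ beta
r2 = 1 - np.sum((k - k_hat) ** 2) / np.sum((k - k.mean()) ** 2)
print("coefficients:", np.round(beta, 4), " R^2:", round(r2, 2))
```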

  9. Thermoanalytical study of the decomposition of yttrium trifluoroacetate thin films

    International Nuclear Information System (INIS)

    Eloussifi, H.; Farjas, J.; Roura, P.; Ricart, S.; Puig, T.; Obradors, X.; Dammak, M.

    2013-01-01

    We present the use of thermal analysis techniques to study the decomposition of yttrium trifluoroacetate thin films. In situ analysis was done by means of thermogravimetry, differential thermal analysis, and evolved gas analysis. Solid residues at different stages and the final product have been characterized by X-ray diffraction and scanning electron microscopy. The thermal decomposition of yttrium trifluoroacetate thin films results in the formation of yttria and presents the same succession of intermediates as the powder's decomposition; however, yttria and all intermediates except YF₃ appear at significantly lower temperatures. We also observe a dependence on the water partial pressure that was not observed in the decomposition of yttrium trifluoroacetate powders. Finally, a dependence on the substrate chemical composition is discerned. - Highlights: • Thermal decomposition of yttrium trifluoroacetate films. • Very different behavior of films with respect to powders. • Decomposition is enhanced in films. • Application of thermal analysis to chemical solution deposition synthesis of films

  10. Joint Matrices Decompositions and Blind Source Separation

    Czech Academy of Sciences Publication Activity Database

    Chabriel, G.; Kleinsteuber, M.; Moreau, E.; Shen, H.; Tichavský, Petr; Yeredor, A.

    2014-01-01

    Vol. 31, No. 3 (2014), pp. 34-43 ISSN 1053-5888 R&D Projects: GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords: joint matrices decomposition * tensor decomposition * blind source separation Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 5.852, year: 2014 http://library.utia.cas.cz/separaty/2014/SI/tichavsky-0427607.pdf

  11. A review of plutonium oxalate decomposition reactions and effects of decomposition temperature on the surface area of the plutonium dioxide product

    International Nuclear Information System (INIS)

    Orr, R.M.; Sims, H.E.; Taylor, R.J.

    2015-01-01

    Plutonium (IV) and (III) ions in nitric acid solution readily form insoluble precipitates with oxalic acid. The plutonium oxalates are then easily thermally decomposed to form plutonium dioxide powder. This simple process forms the basis of current industrial conversion or ‘finishing’ processes that are used in commercial scale reprocessing plants. It is also widely used in analytical or laboratory scale operations and for waste residues treatment. However, the mechanisms of the thermal decompositions in both air and inert atmospheres have been the subject of various studies over several decades. The nature of intermediate phases is of fundamental interest whilst understanding the evolution of gases at different temperatures is relevant to process control. The thermal decomposition is also used to control a number of powder properties of the PuO₂ product that are important to either long term storage or mixed oxide fuel manufacturing. These properties are the surface area, residual carbon impurities and adsorbed volatile species whereas the morphology and particle size distribution are functions of the precipitation process. Available data and experience regarding the thermal and radiation-induced decompositions of plutonium oxalate to oxide are reviewed. The mechanisms of the thermal decompositions are considered with a particular focus on the likely redox chemistry involved. Also, whilst it is well known that the surface area is dependent on calcination temperature, there is a wide variation in the published data and so new correlations have been derived. Better understanding of plutonium (III) and (IV) oxalate decompositions will assist the development of more proliferation resistant actinide co-conversion processes that are needed for advanced reprocessing in future closed nuclear fuel cycles. - Highlights: • Critical review of plutonium oxalate decomposition reactions. • New analysis of relationship between SSA and calcination temperature. • New SEM

  12. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    Science.gov (United States)

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10³-10⁶ bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with the gravesoil human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers placed on the surface or buried that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  13. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app
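
    A minimal numerical illustration of the compressed-sensing claim above: a k-sparse signal is recovered from far fewer random measurements than its length via l₁ minimization, here by plain iterative soft thresholding (ISTA). All sizes and the regularisation weight are arbitrary choices.

```python
# Sparse recovery from undersampled measurements by ISTA, a standard
# l1 solver; sizes and lambda are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                    # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true                          # m << n observations

lam = 0.01
L = np.linalg.norm(A, 2) ** 2           # step size from the Lipschitz constant
x = np.zeros(n)
for _ in range(500):
    g = x + (A.T @ (y - A @ x)) / L     # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```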

  14. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
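
    For reference, a compact implementation of the LZ78 scheme the dictionaries are built on: each emitted pair points to a previously seen phrase and extends it by one character. This is a minimal sketch of the base algorithm only; the paper's dictionary data structures add much more on top.

```python
# Minimal LZ78 coder: output is a list of (phrase_index, next_char) pairs,
# where index 0 denotes the empty phrase.
def lz78_compress(text):
    dictionary, out, w = {}, [], ""
    for ch in text:
        if w + ch in dictionary:
            w += ch                              # keep extending the phrase
        else:
            out.append((dictionary.get(w, 0), ch))
            dictionary[w + ch] = len(dictionary) + 1
            w = ""
    if w:                                        # flush a pending phrase
        out.append((dictionary[w], ""))
    return out

def lz78_decompress(pairs):
    phrases, out = [""], []
    for idx, ch in pairs:
        phrase = phrases[idx] + ch               # rebuild the dictionary
        phrases.append(phrase)
        out.append(phrase)
    return "".join(out)

data = "abababababbbabba"
codes = lz78_compress(data)
assert lz78_decompress(codes) == data
print(codes)
```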

  15. Theoretical and experimental study: the size dependence of decomposition thermodynamics of nanomaterials

    International Nuclear Information System (INIS)

    Cui, Zixiang; Duan, Huijuan; Li, Wenjiao; Xue, Yongqiang

    2015-01-01

    In the processes of preparation and application of nanomaterials, the decomposition reactions of nanomaterials are often involved. However, there is a dramatic difference in decomposition thermodynamics between nanomaterials and their bulk counterparts, and the difference depends on the size of the particles that compose the nanomaterials. In this paper, the decomposition model of a nanoparticle was built, a theory of decomposition thermodynamics of nanomaterials was proposed, and relations for the size dependence of thermodynamic quantities of decomposition reactions were deduced. In experiment, taking the thermal decomposition of nano-Cu₂(OH)₂CO₃ with different particle sizes (radius range 8.95–27.4 nm) as a system, the reaction thermodynamic quantities were determined, and the regularities of size dependence of the quantities were summarized. These experimental regularities are consistent with the above thermodynamic relations. The results show that there is a significant effect of the size of the particles composing a nanomaterial on the decomposition thermodynamics. When all the decomposition products are gases, the differences in thermodynamic quantities of reaction between the nanomaterials and the bulk counterparts depend on the particle size; while when one of the decomposition products is a solid, the differences depend on both the initial particle size of the nanoparticle and the decomposition ratio. When the decomposition ratio is very small, these differences are only related to the initial particle size; and when the radius of the nanoparticles approaches or exceeds 10 nm, the reaction thermodynamic functions and the logarithm of the equilibrium constant are each linearly associated with the reciprocal of the radius. The thermodynamic theory can quantitatively describe the regularities of the size dependence of thermodynamic quantities for decomposition reactions of nanomaterials, and contribute to the researches and the
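
    The linear dependence on inverse radius reported above can be summarized in the following generic form; this is a hedged paraphrase of the stated regularity, with a and b standing for material-dependent constants, not the paper's fitted values.

```latex
% Hedged summary: for a decomposing nanoparticle of radius r (r of order
% 10 nm or larger), molar reaction quantities and ln K shift linearly in 1/r.
\Delta_r G_m(r) \approx \Delta_r G_m^{\mathrm{bulk}} + \frac{a}{r},
\qquad
\ln K(r) \approx \ln K^{\mathrm{bulk}} + \frac{b}{r}
```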

  16. Two Notes on Discrimination and Decomposition

    DEFF Research Database (Denmark)

    Nielsen, Helena Skyt

    1998-01-01

    1. It turns out that the Oaxaca-Blinder wage decomposition is inadequate when it comes to calculation of separate contributions for indicator variables. The contributions are not robust against a change of reference group. I extend the Oaxaca-Blinder decomposition to handle this problem. 2. The paper suggests how to use the logit model to decompose the gender difference in the probability of an occurrence. The technique is illustrated by an analysis of discrimination in child labor in rural Zambia.
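
    For context on note 1, the standard two-fold Oaxaca-Blinder decomposition of a mean outcome gap between groups 1 and 2 is the textbook form below (not the note's extension). The reference-group problem arises because, for indicator variables, re-coding which category is omitted changes how the gap splits between the two terms.

```latex
% Standard two-fold Oaxaca-Blinder decomposition (textbook form):
\bar{Y}_1 - \bar{Y}_2
  = \underbrace{(\bar{X}_1 - \bar{X}_2)'\hat{\beta}_1}_{\text{explained by characteristics}}
  + \underbrace{\bar{X}_2'\,(\hat{\beta}_1 - \hat{\beta}_2)}_{\text{unexplained component}}
```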

  17. Decomposition of atrazine traces in water by combination of non-thermal electrical discharge and adsorption on nanofiber membrane.

    Science.gov (United States)

    Vanraes, Patrick; Willems, Gert; Daels, Nele; Van Hulle, Stijn W H; De Clerck, Karen; Surmont, Pieter; Lynen, Frederic; Vandamme, Jeroen; Van Durme, Jim; Nikiforov, Anton; Leys, Christophe

    2015-04-01

    In recent decades, several types of persistent substances have been detected in the aquatic environment at very low concentrations. Unfortunately, conventional water treatment processes are not able to remove these micropollutants. As such, advanced treatment methods are required to meet both current and anticipated maximally allowed concentrations. Plasma discharge in contact with water is a promising new technology, since it produces a wide spectrum of oxidizing species. In this study, a new type of reactor is tested, in which decomposition by atmospheric pulsed dielectric barrier discharge (pDBD) plasma is combined with micropollutant adsorption on a nanofiber polyamide membrane. Atrazine is chosen as the model micropollutant with an initial concentration of 30 μg/L. While the H₂O₂ and O₃ production in the reactor is not influenced by the presence of the membrane, there is a significant increase in atrazine decomposition when the membrane is added. With the membrane, 85% atrazine removal can be obtained in comparison to only 61% removal without the membrane, under the same experimental parameters. The by-products of atrazine decomposition identified by HPLC-MS are deethylatrazine and ammelide. Formation of these by-products is more pronounced when the membrane is added. These results indicate the synergetic effect of plasma discharge and pollutant adsorption, which is attractive for future applications of water treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Decomposition Technology Development of Organic Component in a Decontamination Waste Solution

    International Nuclear Information System (INIS)

    Jung, Chong Hun; Oh, W. Z.; Won, H. J.; Choi, W. K.; Kim, G. N.; Moon, J. K.

    2007-11-01

    Through the project 'Decomposition Technology Development of Organic Component in a Decontamination Waste Solution', the following topics were studied: 1. Investigation of decontamination characteristics of the chemical decontamination process 2. Analysis of COD, ferrous ion concentration, and hydrogen peroxide concentration 3. Decomposition tests of hardly decomposable organic compounds 4. Improvement of the organic acid decomposition process by ultrasonic waves and UV light 5. Optimization of the decomposition process using a surrogate decontamination waste solution

  19. Compression Ignition Engines - revolutionary technology that has civilized frontiers all over the globe from the Industrial Revolution into the 21st Century

    Directory of Open Access Journals (Sweden)

    Stephen Anthony Ciatti

    2015-06-01

    The history, present and future of the compression ignition engine is a fascinating story that spans over 100 years, from the time of Rudolf Diesel to the highly regulated and computerized engines of the 21st Century. The development of these engines provided inexpensive, reliable and high-power-density machines to allow transportation, construction and farming to be more productive with less human effort than in any previous period of human history. The concept that fuels could be consumed efficiently and effectively with only the ignition of pressurized and heated air was a significant departure from the previous coal-burning architecture of the 1800s. Today, the compression ignition engine is undergoing yet another revolution. The equipment that provides transport, builds roads and infrastructure, and harvests the food we eat needs to meet more stringent requirements than ever before. How successfully 21st Century engineers are able to make compression ignition engine technology meet these demands will be of major influence in assisting developing nations (with over 50% of the world’s population) achieve the economic and environmental goals they seek.

  20. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young’s modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former compared with the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  1. Kinetics of the decomposition reaction of phosphorite concentrate

    Directory of Open Access Journals (Sweden)

    Huang Run

    2014-01-01

    Apatite is a raw material mainly used for phosphate fertilizer; part of it is also used industrially for yellow phosphorus, red phosphorus, and phosphoric acid. With the decline of high-grade lump phosphorite, an agglomeration process becomes necessary for phosphorite concentrate after beneficiation. The decomposition behavior and the phase transformation are of vital importance for the agglomeration process of phosphorite. In this study, the thermal kinetic analysis method was used to study the kinetics of the decomposition of phosphorite concentrate. The phosphorite concentrate was heated at various heating rates, and the phases in the heated samples were examined by X-ray diffraction. It was found that the main phases in the phosphorite are fluorapatite Ca₅(PO₄)₃F, quartz SiO₂, and dolomite CaMg(CO₃)₂. The endothermic DSC peak corresponding to the mass loss caused by the decomposition of dolomite spans 600°C to 850°C. The activation energy of the decomposition of dolomite, which increases with the extent of conversion, is about 71.6–123.6 kJ/mol. The mechanism equation for the decomposition of dolomite agrees with the Valensi equation and the G-B equation.
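
    The activation energies quoted above come from Arrhenius-type analysis. The sketch below shows the basic step, extracting E from the slope of ln k against 1/T; the rate constants are invented so that the fitted E lands near 100 kJ/mol, roughly inside the reported range, and are not the study's data.

```python
# Arrhenius fit: ln k = ln A - E/(R T), so E follows from the slope of
# ln k vs 1/T. Temperatures span the reported 600-800 °C window; the
# rate constants are hypothetical.
import numpy as np

R = 8.314                                            # J/(mol K)
T = np.array([873.0, 923.0, 973.0, 1023.0, 1073.0])  # K
k = np.array([1.0e-3, 2.1e-3, 4.0e-3, 7.2e-3, 1.2e-2])  # 1/s, invented

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
print(f"E = {-slope * R / 1000:.1f} kJ/mol,  A = {np.exp(intercept):.3g} 1/s")
```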

  2. Salt dependence of compression normal forces of quenched polyelectrolyte brushes

    Science.gov (United States)

    Hernandez-Zapata, Ernesto; Tamashiro, Mario N.; Pincus, Philip A.

    2001-03-01

    We obtained mean-field expressions for the compression normal forces between two identical opposing quenched polyelectrolyte brushes in the presence of monovalent salt. The brush elasticity is modeled using the entropy of ideal Gaussian chains, while the entropy of the microions and the electrostatic contribution to the grand potential is obtained by solving the non-linear Poisson-Boltzmann equation for the system in contact with a salt reservoir. For the polyelectrolyte brush we considered both a uniformly charged slab as well as an inhomogeneous charge profile obtained using a self-consistent field theory. Using the Derjaguin approximation, we related the planar-geometry results to the realistic two-crossed-cylinders experimental set up. Theoretical predictions are compared to experimental measurements (Marc Balastre's abstract, APS March 2001 Meeting) of the salt dependence of the compression normal forces between two quenched polyelectrolyte brushes formed by the adsorption of diblock copolymers poly(tert-butyl styrene)-sodium poly(styrene sulfonate) [PtBs/NaPSS] onto an octadecyltriethoxysilane (OTE) hydrophobically modified mica, as well as onto bare mica.
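
    The nonlinear Poisson-Boltzmann problem referred to above is, for a 1:1 salt, usually written in the standard dimensionless form below (textbook form, not a formula quoted from the abstract); ψ is the electrostatic potential in units of k_BT/e and κ⁻¹ the Debye length fixed by the reservoir salt concentration.

```latex
% Dimensionless nonlinear Poisson-Boltzmann equation for a 1:1 salt:
% l_B is the Bjerrum length, n_s the reservoir salt number density.
\nabla^{2}\psi = \kappa^{2}\sinh\psi,
\qquad
\kappa^{2} = 8\pi\,\ell_B\,n_s
```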

  3. The Effect of Body Mass on Outdoor Adult Human Decomposition.

    Science.gov (United States)

    Roberts, Lindsey G; Spencer, Jessica R; Dabbs, Gretchen R

    2017-09-01

    Forensic taphonomy explores factors impacting human decomposition. This study investigated the effect of body mass on the rate and pattern of adult human decomposition. Nine males and three females aged 49-95 years ranging in mass from 73 to 159 kg who were donated to the Complex for Forensic Anthropology Research between December 2012 and September 2015 were included in this study. Kelvin accumulated degree days (KADD) were used to assess the thermal energy required for subjects to reach several total body score (TBS) thresholds: early decomposition (TBS ≥6.0), TBS ≥12.5, advanced decomposition (TBS ≥19.0), TBS ≥23.0, and skeletonization (TBS ≥27.0). Results indicate no significant correlation between body mass and KADD at any TBS threshold. Body mass accounted for up to 24.0% of variation in decomposition rate depending on stage, and minor differences in decomposition pattern were observed. Body mass likely has a minimal impact on postmortem interval estimation. © 2017 American Academy of Forensic Sciences.
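
    Accumulated degree days are the bookkeeping device behind KADD: a running sum of daily mean temperatures, here expressed in kelvin so every daily contribution is positive. A minimal sketch with invented daily means; the study's exact convention may differ.

```python
# Kelvin accumulated degree days: running sum of daily mean temperatures
# converted to kelvin (one common convention; daily means are hypothetical).
def accumulated_degree_days(daily_mean_celsius):
    """Cumulative kelvin-degree-days for a sequence of daily means."""
    total, out = 0.0, []
    for t in daily_mean_celsius:
        total += t + 273.15              # each daily mean in kelvin
        out.append(total)
    return out

temps = [22.0, 24.5, 19.8, 21.2, 25.0]   # hypothetical daily means (°C)
print(accumulated_degree_days(temps))
```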

  4. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
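
    The matrix step the paper reduces tensor ID to can be sketched as follows: compress the matrix with a Gaussian random projection, choose k skeleton columns by pivoted QR on the sketch, and express all columns in that basis. Sizes, oversampling and the test matrix below are arbitrary choices, not the paper's setup.

```python
# Randomized interpolative decomposition of a matrix: A ~= A[:, cols] @ X.
import numpy as np
from scipy.linalg import lstsq, qr

def interpolative_decomposition(A, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    sketch = rng.normal(size=(k + oversample, A.shape[0])) @ A  # random projection
    _, _, piv = qr(sketch, pivoting=True, mode='economic')      # pivoted QR
    cols = piv[:k]                           # skeleton column indices
    X, *_ = lstsq(A[:, cols], A)             # coefficients for all columns
    return cols, X

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 8)) @ rng.normal(size=(8, 50))  # exactly rank-8
cols, X = interpolative_decomposition(A, 8)
print("reconstruction error:", np.linalg.norm(A[:, cols] @ X - A))
```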

  5. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l₁ norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  6. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l₁ norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
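
    The SVD-compression idea can be illustrated apart from the full treatment-planning model: project the (nearly rank-deficient) influence matrix onto its leading singular subspace, fit beam weights against the compressed system, then evaluate the residual in the full space. The sketch below substitutes a non-negative least-squares fit for the paper's linear program and uses synthetic data throughout.

```python
# SVD compression of a degenerate influence matrix before weight fitting;
# NNLS stands in for the paper's LP model, and all data is synthetic.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_vox, n_beam, rank = 400, 120, 20
D = rng.random((n_vox, rank)) @ rng.random((rank, n_beam))  # influence matrix
d_target = D @ rng.uniform(0.0, 1.0, n_beam)                # achievable dose

U, s, Vt = np.linalg.svd(D, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))          # numerical rank (here 20)
D_c = s[:r, None] * Vt[:r]                # compressed r x n_beam system
b_c = U[:, :r].T @ d_target               # data projected onto that subspace

w, _ = nnls(D_c, b_c)                     # non-negative beam weights
print("kept rank:", r,
      " full-space residual:", np.linalg.norm(D @ w - d_target))
```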

  7. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  8. Detailed Chemical Kinetic Modeling of Hydrazine Decomposition

    Science.gov (United States)

    Meagher, Nancy E.; Bates, Kami R.

    2000-01-01

    The purpose of this research project is to develop and validate a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. Hydrazine is used extensively in aerospace propulsion, and although liquid hydrazine is not considered detonable, many fuel handling systems create multiphase mixtures of fuels and fuel vapors during their operation. Therefore, a thorough knowledge of the decomposition chemistry of hydrazine under a variety of conditions can be of value in assessing potential operational hazards in hydrazine fuel systems. To gain such knowledge, a reasonable starting point is the development and validation of a detailed chemical kinetic mechanism for gas-phase hydrazine decomposition. A reasonably complete mechanism was published in 1996; however, many of the elementary steps included had outdated rate expressions, and a thorough investigation of the behavior of the mechanism under a variety of conditions was not presented. The current work has included substantial revision of the previously published mechanism, along with a more extensive examination of the decomposition behavior of hydrazine. An attempt to validate the mechanism against the limited experimental data available has been made and was moderately successful. Further computational and experimental research into the chemistry of this fuel needs to be completed.

  9. Primary decomposition of zero-dimensional ideals over finite fields

    Science.gov (United States)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method does not need primality testing nor any generic projection, instead it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. Also, it is shown how Groebner basis structure can be used to get partial primary decomposition without any root finding.
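
    In the univariate case the abstract compares against, the invariant subspace of the Frobenius map can be computed directly: build the matrix of a ↦ a^p on F_p[x]/(f) and take the nullity of Q − I, which equals the number of irreducible factors, exactly as in Berlekamp's algorithm. A small pure-Python sketch, practical only for small p and small degree.

```python
# Fixed space of the Frobenius map on F_p[x]/(f); its dimension equals the
# number of irreducible factors of monic f (coefficients low-degree first).
def polymul_mod(a, b, f, p):
    """Product of coefficient lists a, b modulo (f, p)."""
    n = len(f) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    for i in range(len(prod) - 1, n - 1, -1):       # reduce by monic f
        c = prod[i]
        if c:
            for j in range(len(f)):
                prod[i - n + j] = (prod[i - n + j] - c * f[j]) % p
    return prod[:n]

def count_irreducible_factors(f, p):
    """Nullity of (Frobenius matrix - I) over F_p = number of factors of f."""
    n = len(f) - 1
    xp = [1]
    for _ in range(p):                              # x^p mod f (p small)
        xp = polymul_mod(xp, [0, 1], f, p)
    cols, col = [], [1]
    for _ in range(n):                              # columns (x^p)^i, i=0..n-1
        cols.append(col + [0] * (n - len(col)))
        col = polymul_mod(col, xp, f, p)
    M = [[(cols[j][i] - (i == j)) % p for j in range(n)] for i in range(n)]
    rank = 0                                        # Gaussian elimination mod p
    for c in range(n):
        piv = next((r for r in range(rank, n) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)             # inverse mod prime p
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(n):
            if r != rank and M[r][c]:
                M[r] = [(u - M[r][c] * v) % p for u, v in zip(M[r], M[rank])]
        rank += 1
    return n - rank

# x^4 + 1 over F_3 splits into two irreducible quadratics -> prints 2
print(count_irreducible_factors([1, 0, 0, 0, 1], 3))
```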

  10. Envera Variable Compression Ratio Engine

    Energy Technology Data Exchange (ETDEWEB)

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology, however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near-term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to ~9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high-load demand periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low
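
    The geometric compression ratio the text varies is simple arithmetic: CR = (displaced volume + clearance volume) / clearance volume, so CR is changed by changing the effective clearance volume. A toy calculation with a hypothetical cylinder size:

```python
# Clearance volume needed for a target geometric compression ratio:
# CR = (V_displaced + V_clearance) / V_clearance. Cylinder size is invented.
V_d = 400.0                               # displaced volume per cylinder, cc

for cr in (9.0, 12.0, 15.0):
    V_c = V_d / (cr - 1.0)                # clearance volume for target CR
    print(f"CR {cr:4.1f}:1  ->  clearance volume {V_c:5.1f} cc")
```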

  11. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  12. Decomposition Technology Development of Organic Component in a Decontamination Waste Solution

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Chong Hun; Oh, W. Z.; Won, H. J.; Choi, W. K.; Kim, G. N.; Moon, J. K.

    2007-11-15

    Through the project 'Decomposition Technology Development of Organic Component in a Decontamination Waste Solution', the following topics were studied: 1. Investigation of decontamination characteristics of the chemical decontamination process 2. Analysis of COD, ferrous ion concentration, and hydrogen peroxide concentration 3. Decomposition tests of hardly decomposable organic compounds 4. Improvement of the organic acid decomposition process by ultrasonic waves and UV light 5. Optimization of the decomposition process using a surrogate decontamination waste solution.

  13. The decomposition of methyltrichlorosilane: Studies in a high-temperature flow reactor

    Energy Technology Data Exchange (ETDEWEB)

    Allendorf, M.D.; Osterheld, T.H.; Melius, C.F.

    1994-01-01

    Experimental measurements of the decomposition of methyltrichlorosilane (MTS), a common silicon carbide precursor, in a high-temperature flow reactor are presented. The results indicate that methane and hydrogen chloride are major products of the decomposition. No chlorinated silane products were observed. Hydrogen carrier gas was found to increase the rate of MTS decomposition. The observations suggest a radical-chain mechanism for the decomposition. The implications for silicon carbide chemical vapor deposition are discussed.

  14. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
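
    The transform stage of such a scheme can be sketched with PyWavelets: decompose the image, then verify the reconstruction round trip. Note that with the float-valued Haar transform the round trip is exact only to machine precision; a truly lossless coder would pair an integer lifting transform with an entropy coder, which is not shown. The image is random stand-in data, not a mammogram.

```python
# Decompose/reconstruct round trip for the wavelet transform stage
# (illustration only; see caveats in the text above).
import numpy as np
import pywt

image = np.random.default_rng(0).integers(0, 4096, (256, 256)).astype(float)

coeffs = pywt.wavedec2(image, wavelet="haar", level=3)   # 3-level 2D DWT
reconstructed = pywt.waverec2(coeffs, wavelet="haar")

print("max round-trip error:", np.abs(reconstructed - image).max())
```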

  15. Thermal decomposition of zirconium compounds with some aromatic hydroxycarboxylic acids

    Energy Technology Data Exchange (ETDEWEB)

    Koshel, A V; Malinko, L A; Karlysheva, K F; Sheka, I A; Shchepak, N I [AN Ukrainskoj SSR, Kiev. Inst. Obshchej i Neorganicheskoj Khimii

    1980-02-01

    Processes of thermal decomposition of different zirconium compounds with mandelic, parabromomandelic, salicylic and sulphosalicylic acids were investigated by thermogravimetry. For identification of the decomposition products, the specimens were kept at the temperature of the thermal effects until constant weight. IR spectra and X-ray diffraction patterns were taken, and elemental analysis of the decomposition products was carried out. It is shown that thermal decomposition of the investigated compounds proceeds in stages; the final product of thermolysis is ZrO₂. Non-hydrolyzed compounds are stable on heating in air up to 200-265 deg. Hydroxy compounds begin to decompose at lower temperatures (80-100 deg).

  16. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the order presented. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes, even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  17. Decomposition of continuum γ-ray spectra using synthesized response matrix

    Energy Technology Data Exchange (ETDEWEB)

    Jandel, M.; Morhac, M.; Kliman, J.; Krupa, L.; Matousek, V. (E-mail: vladislav.matousek@savba.sk); Hamilton, J.H.; Ramayya, A.V.

    2004-01-01

    Efficient methods for the decomposition of γ-ray spectra, based on the Gold algorithm, are presented. They use a response matrix of Gammasphere, which was obtained by synthesis of simulated and interpolated response functions using a newly developed interpolation algorithm. The decomposition method has been applied to the measured spectra of ¹⁵²Eu and ⁵⁶Co. The results show a very effective removal of the background counts and their concentration into the corresponding photopeaks. The peak-to-total ratio in the spectra achieved after applying the decomposition method is in the interval 0.95-0.99. In addition, a new advanced algorithm of 'boosted' decomposition has been proposed. In the spectra obtained after applying the boosted decomposition to the measured spectra, very narrow photopeaks are observed with the counts concentrated into several channels.
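
    The Gold algorithm used above is a multiplicative, positivity-preserving iteration for y = Rx: x ← x · (Rᵀy)/(RᵀRx), applied elementwise. The toy sketch below deconvolves a Gaussian-blur response standing in for the synthesized Gammasphere response matrix; it is an illustration of the iteration, not the paper's implementation.

```python
# Gold deconvolution: multiplicative iteration that keeps the solution
# non-negative and concentrates counts into peaks.
import numpy as np

def gold_deconvolve(R, y, iterations=2000):
    x = np.full(R.shape[1], y.mean())        # positive initial estimate
    RtY, RtR = R.T @ y, R.T @ R
    for _ in range(iterations):
        x *= RtY / np.maximum(RtR @ x, 1e-12)   # elementwise update
    return x

n = np.arange(100)
R = np.exp(-0.5 * ((n[:, None] - n[None, :]) / 2.0) ** 2)  # blur response
R /= R.sum(axis=0)
truth = np.zeros(100)
truth[[30, 55, 60]] = [100.0, 60.0, 80.0]    # three "photopeaks"
y = R @ truth                                 # blurred spectrum

print(np.round(gold_deconvolve(R, y)[[30, 55, 60]], 1))
```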

  18. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data, represented by the application of a forward operator to a linear combination of generalised singular vectors, into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range
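
    For reference, the inverse scale space flow discussed above is usually written in the standard form below, with forward operator A, data f, and a convex one-homogeneous regularisation functional J (textbook form, not a quotation from the paper); as t increases, u(t) picks up progressively finer generalised singular components of f.

```latex
% Inverse scale space flow (standard form):
\partial_t p(t) = A^{*}\bigl(f - A\,u(t)\bigr),
\qquad p(t) \in \partial J\bigl(u(t)\bigr),
\qquad p(0) = 0
```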

  19. Decomposition of benzidine, α-naphthylamine, and p-toluidine in soils

    International Nuclear Information System (INIS)

    Graveel, J.G.; Sommers, L.E.; Nelson, D.W.

    1986-01-01

    Decomposition of ¹⁴C-labeled benzidine, α-naphthylamine, and p-toluidine in soil was studied in laboratory experiments by monitoring CO₂ production during a 308- to 365-d incubation period. The importance of microbial activity in decomposition of all three aromatic amines was shown by decreased ¹⁴CO₂ evolution in ⁶⁰Co-treated soils. After 365 d of incubation, 8.4 to 12% of added benzidine (54.3 μmol kg⁻¹) was evolved as CO₂, while 17 to 31% of added α-naphthylamine (69.8 μmol kg⁻¹) and 19 to 35% of added p-toluidine (93.3 μmol kg⁻¹) were evolved as CO₂ in 308 d. Decomposition was enhanced by increasing the temperature from 12 to 30°C. For benzidine, both the amount and proportion decomposed increased with an increase in application rate. Decomposition of aromatic amines was not enhanced by the addition of decomposable substrates. Differences in decomposition of aromatic amines occurred among soils, but consistent relationships between decomposition of amines and soil properties were not observed. In batch equilibration studies, the Freundlich equation described aromatic amine sorption. Isotherms were nonlinear for benzidine and α-naphthylamine and linear for p-toluidine. Desorption of sorbed amines followed the order: benzidine < p-toluidine < α-naphthylamine and was inversely related to the extent of decomposition
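
    The Freundlich isotherm invoked in the sorption analysis has the standard form below; isotherms are linear when the exponent 1/n equals 1 (as found here for p-toluidine) and nonlinear otherwise. This is the textbook form, with K_F and n the fitted constants, not values from the study.

```latex
% Freundlich isotherm: sorbed amount q_e vs. equilibrium concentration C_e,
% and its log-linearised form used for fitting.
q_e = K_F\,C_e^{1/n},
\qquad
\log q_e = \log K_F + \tfrac{1}{n}\,\log C_e
```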

  20. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971
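
    For context, the reduced-rank factorization that latent structure models induce on the probability mass function is the standard latent-class (PARAFAC) form below (textbook notation, with λ_h the latent class weights; not a formula quoted from the paper):

```latex
% Latent-class / PARAFAC factorization of a k-way probability tensor:
% psi^{(j)} are the class-specific categorical distributions.
P(y_1 = c_1, \dots, y_k = c_k)
  = \sum_{h=1}^{r} \lambda_h \prod_{j=1}^{k} \psi^{(j)}_{h c_j}
```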

  1. Multilevel domain decomposition for electronic structure calculations

    International Nuclear Information System (INIS)

    Barrault, M.; Cances, E.; Hager, W.W.; Le Bris, C.

    2007-01-01

    We introduce a new multilevel domain decomposition method (MDD) for electronic structure calculations within semi-empirical and density functional theory (DFT) frameworks. This method iterates between local fine solvers and global coarse solvers, in the spirit of domain decomposition methods. Using this approach, calculations have been successfully performed on several linear polymer chains containing up to 40,000 atoms and 200,000 atomic orbitals. Both the computational cost and the memory requirement scale linearly with the number of atoms. Additional speed-up can easily be obtained by parallelization. We show that this domain decomposition method outperforms the density matrix minimization (DMM) method for poor initial guesses. Our method provides an efficient preconditioner for DMM and other linear scaling methods, variational in nature, such as the orbital minimization (OM) procedure

  2. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as the primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features produced a set of mean rates for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers over three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  3. DECOMPOSITION STUDY OF CALCIUM CARBONATE IN COCKLE SHELL

    Directory of Open Access Journals (Sweden)

    MUSTAKIMAH MOHAMED

    2012-02-01

    Full Text Available Calcium oxide (CaO) is recognized as an efficient carbon dioxide (CO2) adsorbent, and separation of CO2 from gas streams using CaO-based adsorbents is widely applied in gas purification, especially in high-temperature reactions. CaO is normally produced via thermal decomposition of calcium carbonate (CaCO3) sources such as limestone, which is obtained by mining and quarrying limestone hills. This study instead exploits a waste resource that is vastly available in Malaysia, cockle shells, as a potential biomass source of CaCO3 and CaO. In addition, the effect of particle size on the decomposition process was studied using four size fractions: 0.125-0.25 mm, 0.25-0.5 mm, 1-2 mm, and 2-4 mm. Decomposition was carried out in a thermogravimetric analyzer (TGA) at a heating rate of 20°C/min in an inert (nitrogen) atmosphere. Chemical analysis by X-ray fluorescence (XRF) shows that cockle shell is made up of 97% calcium (Ca), and X-ray diffraction (XRD) analysis confirmed that CaO is produced after decomposition. The smallest particle size exhibited the highest decomposition rate, and the process was observed to follow first-order kinetics. The activation energy, E, of the process was found to vary from 179.38 to 232.67 kJ/mol; from the Arrhenius plots, E increased with particle size. To conclude, cockle shell is a promising source of CaO, and among the four particle sizes used, the 0.125-0.25 mm fraction offers the highest decomposition rate.

  4. Microbial community functional change during vertebrate carrion decomposition.

    Directory of Open Access Journals (Sweden)

    Jennifer L Pechal

    Full Text Available Microorganisms play a critical role in the decomposition of organic matter, which contributes to energy and nutrient transformation in every ecosystem. Yet, little is known about the functional activity of epinecrotic microbial communities associated with carrion. The objective of this study was to describe carrion-associated microbial community functional activity, using differential carbon source use throughout decomposition, over seasons, between years, and when microbial communities were isolated from eukaryotic colonizers (e.g., necrophagous insects). Additionally, microbial communities were identified at the phyletic level using high-throughput sequencing during a single study. We hypothesized that carrion microbial community functional profiles would change over the duration of decomposition, and that this change would depend on season, year and presence of necrophagous insect colonization. Biolog EcoPlates™ were used to measure the variation in epinecrotic microbial community function by the differential use of 29 carbon sources throughout vertebrate carrion decomposition. Pyrosequencing was used to describe the bacterial community composition in one experiment to identify key phyla associated with community functional changes. Overall, microbial functional activity increased throughout decomposition in spring, summer and winter, while it decreased in autumn. Additionally, microbial functional activity was higher in 2011, when necrophagous arthropod colonizer effects were tested. There were inconsistent trends in the microbial function of communities isolated from remains colonized by necrophagous insects between 2010 and 2011, suggesting a greater need for a mechanistic understanding of the process. These data indicate that functional analyses can be implemented in carrion studies and will be important in understanding the influence of microbial communities on an essential ecosystem process, carrion decomposition.

  5. Mechanism and kinetics of thermal decomposition of ammoniacal complex of copper oxalate

    International Nuclear Information System (INIS)

    Prasad, R.

    2003-01-01

    A complex precursor has been synthesized by dissolving copper oxalate in liquor ammonia followed by drying. The thermal decomposition of the precursor has been studied in different atmospheres, air and nitrogen. The mechanism of decomposition of the precursor in air is not as simple as in nitrogen. In nitrogen, it involves endothermic deammoniation followed by decomposition to finely divided elemental copper particles, whereas in air, decomposition and simultaneous oxidation of the residual products (oxidative decomposition) make the process complex, and relatively larger particles of cupric oxide are obtained as the final product. The products of decomposition in different atmospheres have been characterized by X-ray diffraction and particle size analysis. The stoichiometric formula of the precursor, Cu(NH3)2C2O4, is established from elemental analysis and TG measurements, and it is designated as copper amino oxalate (CAO). In nitrogen atmosphere, the deammoniation and decomposition have been found to be zero and first order, respectively. The activation energies have been found to be 102.52 and 95.38 kJ/mol for deammoniation and decomposition, respectively

  6. A review of plutonium oxalate decomposition reactions and effects of decomposition temperature on the surface area of the plutonium dioxide product

    Energy Technology Data Exchange (ETDEWEB)

    Orr, R.M.; Sims, H.E.; Taylor, R.J., E-mail: robin.j.taylor@nnl.co.uk

    2015-10-15

    Plutonium (IV) and (III) ions in nitric acid solution readily form insoluble precipitates with oxalic acid. The plutonium oxalates are then easily thermally decomposed to form plutonium dioxide powder. This simple process forms the basis of current industrial conversion or ‘finishing’ processes that are used in commercial scale reprocessing plants. It is also widely used in analytical or laboratory scale operations and for waste residue treatment. However, the mechanisms of the thermal decompositions in both air and inert atmospheres have been the subject of various studies over several decades. The nature of intermediate phases is of fundamental interest, whilst understanding the evolution of gases at different temperatures is relevant to process control. The thermal decomposition is also used to control a number of powder properties of the PuO2 product that are important to either long term storage or mixed oxide fuel manufacturing. These properties are the surface area, residual carbon impurities and adsorbed volatile species, whereas the morphology and particle size distribution are functions of the precipitation process. Available data and experience regarding the thermal and radiation-induced decompositions of plutonium oxalate to oxide are reviewed. The mechanisms of the thermal decompositions are considered with a particular focus on the likely redox chemistry involved. Also, whilst it is well known that the surface area is dependent on calcination temperature, there is a wide variation in the published data and so new correlations have been derived. Better understanding of plutonium (III) and (IV) oxalate decompositions will assist the development of more proliferation resistant actinide co-conversion processes that are needed for advanced reprocessing in future closed nuclear fuel cycles. - Highlights: • Critical review of plutonium oxalate decomposition reactions. • New analysis of relationship between SSA and calcination temperature.

  7. Characteristics of root decomposition in a tropical rainforest in Sarawak, Malaysia

    Science.gov (United States)

    Ohashi, Mizue; Makita, Naoki; Katayam, Ayumi; Kume, Tomonori; Matsumoto, Kazuho; Khoon Kho, L.

    2016-04-01

    Woody roots play a significant role in forest carbon cycling, as up to 60 percent of tree photosynthetic production can be allocated belowground. Root decay is one of the main processes of soil C dynamics and potentially relates to soil C sequestration. However, much less attention has been paid to root litter decomposition than to leaf litter, because roots are hidden from view. Previous studies have revealed that the physico-chemical quality of roots, climate, and soil organisms significantly affect root decomposition. However, patterns and mechanisms of root decomposition are still poorly understood because of the high variability of root properties, field environments and potential decomposers. For example, root size should be a factor controlling decomposition rates, but a general understanding of the difference between coarse and fine root decomposition is still lacking. Also, it is known that root decomposition is performed by soil animals, fungi and bacteria, but their relative importance is poorly understood. In this study, therefore, we aimed to characterize root decomposition in a tropical rainforest in Sarawak, Malaysia, and to clarify the impact of soil organisms and root size on root litter decomposition. We buried soil cores with fine and coarse root litter bags in soil in Lambir Hills National Park. Three different types of soil cores were prepared, covered by 1.5 cm plastic mesh, root-impermeable sheet (50 µm), or fungi-impermeable sheet (1 µm). The soil cores were buried in February 2013 and collected four times: 134, 226, 786 and 1151 days after installation. We found that nearly 80 percent of the coarse root litter was decomposed after two years, whereas only 60 percent of the fine root litter was decomposed. Our results also showed significantly different decomposition ratios among the core types, suggesting different contributions of soil organisms to the decomposition process.

  8. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW performed best and was therefore used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
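
    As an illustration of the lossless coding step described above, here is a minimal LZW compressor in Python. This is a generic textbook sketch under assumed names (lzw_compress, the toy payload), not the authors' implementation.

      # Minimal LZW compressor: emit a code for the longest phrase already in
      # the dictionary, then grow the dictionary by one new phrase.
      def lzw_compress(data):
          dictionary = {bytes([i]): i for i in range(256)}  # single-byte codes
          w = b""
          codes = []
          for b in data:
              wc = w + bytes([b])
              if wc in dictionary:
                  w = wc                            # extend the current phrase
              else:
                  codes.append(dictionary[w])       # emit longest known phrase
                  dictionary[wc] = len(dictionary)  # register the new phrase
                  w = bytes([b])
          if w:
              codes.append(dictionary[w])
          return codes

      payload = b"TOBEORNOTTOBEORTOBEORNOT"          # toy watermark payload
      print(len(payload), "bytes ->", len(lzw_compress(payload)), "codes")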

  9. Thermal decomposition of gaseous ammonium nitrate at low pressure: kinetic modeling of product formation and heterogeneous decomposition of nitric acid.

    Science.gov (United States)

    Park, J; Lin, M C

    2009-12-03

    The thermal decomposition of ammonium nitrate, NH4NO3 (AN), in the gas phase has been studied at 423-456 K by pyrolysis/mass spectrometry under low-pressure conditions using a Saalfeld reactor coated with boric acid. The sublimation of NH4NO3 at 423 K was proposed to produce equal amounts of NH3 and HNO3, followed by the decomposition reaction of HNO3, HNO3 + M → OH + NO2 + M (where M = third-body and reactor surface). The absolute yields of N2, N2O, H2O, and NH3, which can be unambiguously measured and quantitatively calibrated under a constant pressure at 5-6.2 torr He, are kinetically modeled using the detailed [H,N,O]-mechanism established earlier for the simulation of NH3-NO2 (Park, J.; Lin, M. C. Technologies and Combustion for a Clean Environment. Proc. 4th Int. Conf. 1997, 34-1, 1-5) and ADN decomposition reactions (Park, J.; Chakraborty, D.; Lin, M. C. Proc. Combust. Inst. 1998, 27, 2351-2357). Since the homogeneous decomposition reaction of HNO3 itself was found to be too slow to account for the consumption of reactants and the formation of products, we also introduced the heterogeneous decomposition of HNO3 in our kinetic modeling. The heterogeneous decomposition rate of HNO3, HNO3 + (B2O3/SiO2) → OH + NO2 + (B2O3/SiO2), was determined by varying its rate to match the modeled result to the measured concentrations of NH3 and H2O; the rate could be represented by k2b = 7.91 × 10⁷ exp(-12 600/T) s⁻¹, which appears to be consistent with those reported by Johnston and co-workers (Johnston, H. S.; Foering, L.; Tao, Y.-S.; Messerly, G. H. J. Am. Chem. Soc. 1951, 73, 2319-2321) for HNO3 decomposition on glass reactors at higher temperatures. Notably, the concentration profiles of all species measured could be satisfactorily predicted by the existing [H,N,O]-mechanism with the heterogeneous initiation process.

  10. Thermal Decomposition of Gaseous Ammonium Nitrate at Low Pressure: Kinetic Modeling of Product Formation and Heterogeneous Decomposition of Nitric Acid

    Science.gov (United States)

    Park, J.; Lin, M. C.

    2009-10-01

    The thermal decomposition of ammonium nitrate, NH4NO3 (AN), in the gas phase has been studied at 423-456 K by pyrolysis/mass spectrometry under low-pressure conditions using a Saalfeld reactor coated with boric acid. The sublimation of NH4NO3 at 423 K was proposed to produce equal amounts of NH3 and HNO3, followed by the decomposition reaction of HNO3, HNO3 + M → OH + NO2 + M (where M = third-body and reactor surface). The absolute yields of N2, N2O, H2O, and NH3, which can be unambiguously measured and quantitatively calibrated under a constant pressure at 5-6.2 torr He, are kinetically modeled using the detailed [H,N,O]-mechanism established earlier for the simulation of NH3-NO2 (Park, J.; Lin, M. C. Technologies and Combustion for a Clean Environment. Proc. 4th Int. Conf. 1997, 34-1, 1-5) and ADN decomposition reactions (Park, J.; Chakraborty, D.; Lin, M. C. Proc. Combust. Inst. 1998, 27, 2351-2357). Since the homogeneous decomposition reaction of HNO3 itself was found to be too slow to account for the consumption of reactants and the formation of products, we also introduced the heterogeneous decomposition of HNO3 in our kinetic modeling. The heterogeneous decomposition rate of HNO3, HNO3 + (B2O3/SiO2) → OH + NO2 + (B2O3/SiO2), was determined by varying its rate to match the modeled result to the measured concentrations of NH3 and H2O; the rate could be represented by k2b = 7.91 × 10⁷ exp(-12 600/T) s⁻¹, which appears to be consistent with those reported by Johnston and co-workers (Johnston, H. S.; Foering, L.; Tao, Y.-S.; Messerly, G. H. J. Am. Chem. Soc. 1951, 73, 2319-2321) for HNO3 decomposition on glass reactors at higher temperatures. Notably, the concentration profiles of all species measured could be satisfactorily predicted by the existing [H,N,O]-mechanism with the heterogeneous initiation process.
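
    For a quick sense of scale, the reported heterogeneous rate constant can be evaluated directly; a minimal sketch (the three temperatures are chosen for illustration, spanning the 423-456 K range studied above). The exponent 12 600 K corresponds to an apparent activation energy of about 12 600 × R ≈ 105 kJ/mol.

      import math

      # k2b = 7.91e7 * exp(-12600 / T) s^-1, the reported heterogeneous
      # HNO3 decomposition rate constant (T in kelvin).
      def k2b(T):
          return 7.91e7 * math.exp(-12600.0 / T)

      for T in (423.0, 440.0, 456.0):
          print(f"T = {T:.0f} K: k2b = {k2b(T):.3e} s^-1")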

  11. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
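
    For illustration, a minimal numpy sketch of the snapshot-DMD algorithm summarized above, with the POD projection done by a plain truncated SVD rather than the letter's incremental POD; the synthetic traveling-wave data, the rank r, and the variable names are assumptions.

      import numpy as np

      # Build synthetic snapshots: two traveling waves at frequencies 3 and 7.
      n, m, r = 200, 64, 4
      x = np.linspace(0.0, 1.0, n)[:, None]
      t = np.linspace(0.0, 4.0 * np.pi, m)[None, :]
      X = np.sin(2.0 * np.pi * x - 3.0 * t) + 0.5 * np.cos(5.0 * np.pi * x + 7.0 * t)

      X1, X2 = X[:, :-1], X[:, 1:]               # time-shifted snapshot pairs

      # Rank-r POD basis of X1 (the preconditioning/projection step).
      U, s, Vh = np.linalg.svd(X1, full_matrices=False)
      Ur, sr, Vr = U[:, :r], s[:r], Vh[:r, :].conj().T

      # Project the one-step linear operator onto the POD subspace.
      Atilde = Ur.conj().T @ X2 @ (Vr / sr)
      eigvals, W = np.linalg.eig(Atilde)
      Phi = X2 @ (Vr / sr) @ W                   # DMD modes in full space

      dt = t[0, 1] - t[0, 0]
      freqs = np.sort(np.abs(np.log(eigvals).imag) / dt)
      print("recovered frequencies:", np.round(freqs, 2))   # ~ [3, 3, 7, 7]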

  12. Soviet paper on laser target heating, symmetry of irradiation, and two-dimensional effects on compression

    International Nuclear Information System (INIS)

    Sahlin, H.L.

    1976-01-01

    Included is a paper presented at the Annual Meeting of the Plasma Physics Division of the American Physical Society in San Francisco on November 19, 1976. The paper discusses some theoretical problems of laser target irradiation and compression investigated at the laboratory of quantum radiophysics of Lebedev Physical Institute. Of significant interest was the absorption and reflection of laser radiation in the corona plasma of a laser target

  13. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, the Profore four-layer compression system, or class II compression stockings. In the case of multi-layer compression, compression ensuring 40 mmHg pressure at ankle level was used. Results In all patients, independently of the type of compression therapy, statistically significant changes of ulceration area over time were observed (Student’s t test for matched pairs, p < 0.05). The largest decrease of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions A systematic compression therapy, applied with a preliminary pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and the prepared multi-layer compression systems were characterized by similar clinical effectiveness. PMID:22419941

  14. Review on Thermal Decomposition of Ammonium Nitrate

    Science.gov (United States)

    Chaturvedi, Shalini; Dave, Pragnesh N.

    2013-01-01

    In this review, data from the literature on the thermal decomposition of ammonium nitrate (AN) and the effect of additives on its thermal decomposition are summarized. The effects of additives such as oxides, cations, inorganic acids, organic compounds, phase-stabilized CuO, etc., are discussed. The effect of an additive mainly occurs at the exothermic peak of pure AN, in a temperature range of 140°C to 200°C.

  15. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationships among quality indexes of chest compression and to identify the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for female or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  16. Crop residue decomposition in Minnesota biochar-amended plots

    Science.gov (United States)

    Weyers, S. L.; Spokas, K. A.

    2014-06-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with biochars made from different plant-based feedstocks and pyrolysis platforms in the fall of 2008. Litterbags containing wheat straw material were buried in July of 2011 below the soil surface in a continuous-corn cropped field, in plots that had received one of seven different biochar amendments or an uncharred wood-pellet amendment 2.5 yr prior to the start of this study. Litterbags were collected over the course of 14 weeks. Microbial biomass was assessed in treatment plots the previous fall. Though first-order decomposition rate constants were positively correlated to microbial biomass, neither parameter was statistically affected by biochar or wood-pellet treatments. The findings indicated only a residual of potentially positive and negative initial impacts of biochars on residue decomposition, which fit in line with established feedstock and pyrolysis influences. Overall, these findings indicate that no significant alteration in the microbial dynamics of the soil decomposer communities occurred as a consequence of the application of plant-based biochars evaluated here.
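
    For illustration, first-order rate constants of the kind mentioned above can be estimated from litterbag mass-remaining data by linearizing M(t) = M0·exp(-kt); a minimal sketch with made-up mass fractions, not the study's measurements.

      import numpy as np

      # Litterbag collections: time (weeks) and fraction of mass remaining.
      t = np.array([0.0, 2.0, 4.0, 8.0, 14.0])
      frac = np.array([1.00, 0.83, 0.70, 0.52, 0.33])

      # ln(frac) = -k * t, so a least-squares slope gives the rate constant.
      k = -np.polyfit(t, np.log(frac), 1)[0]
      print(f"k ≈ {k:.3f} per week, half-life ≈ {np.log(2) / k:.1f} weeks")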

  17. Crop residue decomposition in Minnesota biochar amended plots

    Science.gov (United States)

    Weyers, S. L.; Spokas, K. A.

    2014-02-01

    Impacts of biochar application at laboratory scales are routinely studied, but impacts of biochar application on decomposition of crop residues at field scales have not been widely addressed. The priming or hindrance of crop residue decomposition could have a cascading impact on soil processes, particularly those influencing nutrient availability. Our objectives were to evaluate biochar effects on field decomposition of crop residue, using plots that were amended with biochars made from different feedstocks and pyrolysis platforms prior to the start of this study. Litterbags containing wheat straw material were buried below the soil surface in a continuous-corn cropped field in plots that had received one of seven different biochar amendments or a non-charred wood pellet amendment 2.5 yr prior to the start of this study. Litterbags were collected over the course of 14 weeks. Microbial biomass was assessed in treatment plots the previous fall. Though first-order decomposition rate constants were positively correlated to microbial biomass, neither parameter was statistically affected by biochar or wood-pellet treatments. The findings indicated only a residual of potentially positive and negative initial impacts of biochars on residue decomposition, which fit in line with established feedstock and pyrolysis influences. Though no significant impacts were observed with field-weathered biochars, effective soil management may yet have to account for repeat applications of biochar.

  18. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions along with defibrillation is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to the participants for self-training on their own in their own time. The participants were tested for two-person CPR in pairs. The quantitative and qualitative data regarding the quality of CPR were collected from a standardised check list and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students, including 116 men and 206 women, participated in the CPR contest. The mean depth and rate for chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compression depths were noted at rates over 120/min than at any other rates (47.0±7.4, 48.8±8.4, 52.3±6.7, p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206, p<0.05), and there were significant differences in the quality of chest compression, including chest compression depth and chest recoil, by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete full chest wall recoil while maintaining an adequate chest compression depth.

  19. Effects of simulated acid precipitation and liming on pine litter decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Ishac, Y.Z.; Hovland, J.

    1976-01-01

    The decomposition of withered lodgepole pine needles (Pinus contorta Douglas) has been studied in a laboratory experiment. The needles were picked from trees that had been irrigated with simulated acid rain at pH 5.6 or 3.0. The soil beneath some of the trees was limed. The decomposition of the needles increased with temperature and incubation period. Liming of the soil retarded the decomposition of needles that had been given rain at pH 3, while irrigation with 50 mm of water per month at pH 3 increased the decomposition compared with 25 mm/month. When needles were incubated in dilute sulphuric acid, the decomposition was reduced at pH 1.8 compared to the decomposition at pH 3.5. At pH 1.0 no decomposition occurred. Fungi were isolated from the needles. The different treatments did not seem to affect the composition of the fungal flora of the needles. The fungi were tested for their ability to decompose cellulose. The four most active cellulose decomposers were Trichoderma harzianum, Coniothyrium sp., Cladosporium macrocarpum, and a sterile white mycelium. T. harzianum seemed to be more tolerant of acid conditions than the other fungi.

  20. Effect of dislocations on spinodal decomposition in Fe-Cr alloys

    International Nuclear Information System (INIS)

    Li Yongsheng; Li Shuxiao; Zhang Tongyi

    2009-01-01

    Phase-field simulations of spinodal decomposition in Fe-Cr alloys with dislocations were performed by using the Cahn-Hilliard diffusion equation. The stress field of dislocations was calculated in real space via Stroh's formalism, while the composition inhomogeneity-induced stress field and the diffusion equation were numerically calculated in Fourier space. The simulation results indicate that dislocation stress field facilitates, energetically and kinetically, spinodal decomposition, making the phase separation faster and the separated phase particles bigger at and near the dislocation core regions. A tilt grain boundary is thus a favorable place for spinodal decomposition, resulting in a special microstructure morphology, especially at the early stage of decomposition.
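
    For illustration, a minimal semi-implicit spectral step for the Cahn-Hilliard equation of the kind used above, treating the stiff linear term implicitly in Fourier space; the parameters and double-well form are generic assumptions, and the dislocation stress coupling is omitted.

      import numpy as np

      # Cahn-Hilliard: d(phi)/dt = M * laplacian( f'(phi) - kappa * laplacian(phi) )
      N, dt, M, kappa = 128, 0.05, 1.0, 1.0
      rng = np.random.default_rng(1)
      phi = 0.02 * (rng.random((N, N)) - 0.5)       # near-critical composition

      k = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0)
      k2 = k[:, None] ** 2 + k[None, :] ** 2        # |k|^2 on the periodic grid

      for step in range(500):
          dfdphi = phi ** 3 - phi                   # double-well bulk term f'(phi)
          rhs = np.fft.fft2(phi) - dt * M * k2 * np.fft.fft2(dfdphi)
          phi = np.real(np.fft.ifft2(rhs / (1.0 + dt * M * kappa * k2 ** 2)))

      print("order parameter range after 500 steps:", phi.min(), phi.max())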

  1. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears different depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is rated poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm was significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm did not show any significant differences from the original at level 0.05.

  2. A test of the hierarchical model of litter decomposition

    DEFF Research Database (Denmark)

    Bradford, Mark A.; Veen, G. F.; Bonis, Anne

    2017-01-01

    Our basic understanding of plant litter decomposition informs the assumptions underlying widely applied soil biogeochemical models, including those embedded in Earth system models. Confidence in projected carbon cycle-climate feedbacks therefore depends on accurate knowledge about the controls regulating the rate at which plant biomass is decomposed into products such as CO2. Here we test underlying assumptions of the dominant conceptual model of litter decomposition. The model posits that a primary control on the rate of decomposition at regional to global scales is climate (temperature...

  3. Testing the Use of Pigs as Human Proxies in Decomposition Studies.

    Science.gov (United States)

    Connor, Melissa; Baigent, Christiane; Hansen, Eriek S

    2017-12-28

    Pigs are a common human analogue in taphonomic study, yet data comparing the trajectory of decomposition between the two groups are lacking. This study compared decomposition rate and gross tissue change in 17 pig and 22 human remains placed in the Forensic Investigation Research Station in western Colorado between 2012 and 2015. Accumulated degree days (ADD) were used to assess the number of thermal units required to reach a given total body score (TBS) (1), which was used as the measure of decomposition. A comparison of slopes in a linear mixed effects model indicated that decomposition rates significantly differed between human donors and pig remains, χ²(1) = 5.662, p = 0.017. Neither the pig nor the human trajectory compared well to the TBS model. Thus, (i) pigs are not an adequate proxy for human decomposition studies, and (ii) in the semiarid environment of western Colorado, there is a need to develop a regional decomposition model. © 2017 American Academy of Forensic Sciences.

  4. Triboluminescence and associated decomposition of solid methanol

    International Nuclear Information System (INIS)

    Trout, G.J.; Moore, D.E.; Hawke, J.G.

    1975-01-01

    The decomposition is initiated by the cooling of solid methanol through the β → α transition at 157.8 K, producing the gases hydrogen, carbon monoxide, and methane. The passage through this lambda transition causes the breakup of large crystals of β-methanol into crystallites of α-methanol and is accompanied by light emission as well as decomposition. This triboluminescence is accompanied by, and apparently produced by, electrical discharges through methanol vapor in the vicinity of the solid. The potential differences needed to produce the electrical breakdown of the methanol vapor apparently arise from the disruption of the long hydrogen-bonded chains of methanol molecules present in crystalline methanol. Charge separation following crystal deformation is characteristic of substances which exhibit gas-discharge triboluminescence; solid methanol has been found to emit such luminescence when mechanically deformed in the absence of the β → α transition. The decomposition products are not produced directly by the breakup of the solid methanol but from vapor-phase methanol by the electrical discharges. That gas-phase decomposition does occur was confirmed by observing that the vapors of C2H5OH, CH3OD, and CD3OD decompose on being admitted to a vessel containing methanol undergoing the β → α phase transition. (U.S.)

  5. Radiolytic decomposition of dioxins in liquid wastes

    International Nuclear Information System (INIS)

    Zhao Changli; Taguchi, M.; Hirota, K.; Takigami, M.; Kojima, T.

    2006-01-01

    Dioxins, including polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs), are among the most toxic persistent organic pollutants. These chemicals have widely contaminated the air, water, and soil. They accumulate in the living body through food chains, leading to a serious public health hazard. In the present study, radiolytic decomposition of dioxins has been investigated in liquid wastes, including organic waste and waste-water. Dioxin-containing organic wastes are commonly generated in nonane or toluene. However, it was found that high radiation doses are required to completely decompose dioxins in these two solvents. The decomposition was more efficient in ethanol than in nonane or toluene. The addition of ethanol to toluene or nonane could achieve >90% decomposition of dioxins at a dose of 100 kGy. Thus, dioxin-containing organic wastes can be treated as regular organic wastes after addition of ethanol and subsequent γ-ray irradiation. On the other hand, radiolytic decomposition of dioxins occurred more readily in pure water than in waste-water, because the reactive species are largely scavenged by the organic materials dominant in waste-water. Dechlorination was not a major reaction pathway for the radiolysis of dioxins in water. In addition, the radiolytic mechanism and dechlorination pathways in liquid wastes are also discussed. (authors)

  6. Nutrient-enhanced decomposition of plant biomass in a freshwater wetland

    Science.gov (United States)

    Bodker, James E.; Turner, Robert Eugene; Tweel, Andrew; Schulz, Christopher; Swarzenski, Christopher M.

    2015-01-01

    We studied soil decomposition in a Panicum hemitomon (Schultes)-dominated freshwater marsh located in southeastern Louisiana that was unambiguously changed by secondarily-treated municipal wastewater effluent. We used four approaches to evaluate how belowground biomass decomposition rates vary under different nutrient regimes in this marsh. The results of laboratory experiments demonstrated how nutrient enrichment enhanced the loss of soil or plant organic matter by 50%, and increased gas production. An experiment demonstrated that nitrogen, not phosphorus, limited decomposition. Cellulose decomposition at the field site was higher in the flowfield of the introduced secondarily treated sewage water, and the quality of the substrate (% N or % P) was directly related to the decomposition rates. We therefore rejected the null hypothesis that nutrient enrichment had no effect on the decomposition rates of these organic soils. In response to nutrient enrichment, plants respond through biomechanical or structural adaptations that alter the labile characteristics of plant tissue. These adaptations eventually change litter type and quality (where the marsh survives) as the % N content of plant tissue rises and is followed by even higher decomposition rates of the litter produced, creating a positive feedback loop. Marsh fragmentation will increase as a result. The assumptions and conditions underlying the use of unconstrained wastewater flow within natural wetlands, rather than controlled treatment within the confines of constructed wetlands, are revealed in the loss of previously sequestered carbon, habitat, public use, and other societal benefits.

  7. Hydrogen peroxide decomposition kinetics in aquaculture water

    DEFF Research Database (Denmark)

    Arvin, Erik; Pedersen, Lars-Flemming

    2015-01-01

    Hydrogen peroxide (HP) is used in aquaculture systems where preventive or curative water treatments occasionally are required. Use of chemical agents can be challenging in recirculating aquaculture systems (RAS) due to extended water retention time and because the agents must not damage the fish reared or the nitrifying bacteria in the biofilters at the concentrations required to eliminate pathogens. This calls for quantitative insight into the fate of the disinfectant residuals during water treatment. This paper presents a kinetic model that describes the HP decomposition in aquaculture water ... during the HP decomposition. The model assumes that the enzyme decay is controlled by an inactivation stoichiometry related to the HP decomposition. In order to make the model easily applicable, it is furthermore assumed that the COD is a proxy of the active biomass concentration of the water and thereby ...

  8. Thermal plasma decomposition of fluorinated greenhouse gases

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Soo Seok; Watanabe, Takayuki [Tokyo Institute of Technology, Yokohama (Japan); Park, Dong Wha [Inha University, Incheon (Korea, Republic of)

    2012-02-15

    Fluorinated compounds, mainly used in the semiconductor industry, are potent greenhouse gases. Recently, thermal plasma gas scrubbers have been gradually replacing conventional burn-wet type gas scrubbers, which are based on the combustion of fossil fuels, because high conversion efficiency and control of byproduct generation are achievable in chemically reactive high-temperature thermal plasma. Chemical equilibrium compositions at high temperature and numerical analysis of the complex thermal flow in the thermal plasma decomposition system are used to predict the process of thermal decomposition of fluorinated gas. To increase the economic feasibility of the thermal plasma decomposition process, increasing the thermal efficiency of the plasma torch and enhancing gas mixing between the thermal plasma jet and the waste gas are discussed. In addition, novel thermal plasma systems to be applied in thermal plasma gas treatment are introduced in the present paper.

  9. DECOMPOSITION OF TARS IN MICROWAVE PLASMA – PRELIMINARY RESULTS

    Directory of Open Access Journals (Sweden)

    Mateusz Wnukowski

    2014-07-01

    Full Text Available The paper addresses the main problem connected with biomass gasification - the presence of tar in the product gas. It presents preliminary results of tar decomposition in a microwave plasma reactor and gives a basic insight into the construction and operation of the reactor. In the experiments, toluene was used as a tar surrogate, and nitrogen served both as the carrier gas for the toluene and as the plasma agent. The gas flow rates and the microwave generator's power were kept constant throughout the experiment. The results showed that the toluene decomposition process was effective, with a decomposition efficiency above 95%. The main products of tar decomposition were light hydrocarbons and soot. The article also outlines plans for further research on tar removal from the product gas.

  10. Ozone Decomposition on the Surface of Metal Oxide Catalyst

    Directory of Open Access Journals (Sweden)

    Batakliev Todor Todorov

    2014-12-01

    Full Text Available The catalytic decomposition of ozone to molecular oxygen over a catalytic mixture containing manganese, copper and nickel oxides was investigated in the present work. The catalytic activity was evaluated on the basis of the decomposition coefficient, which is proportional to the ozone decomposition rate and has already been used in other studies for estimating catalytic activity. The reaction was studied in the presence of thermally modified catalytic samples operating at different temperatures and ozone flow rates. The catalyst changes were followed by kinetic methods, surface measurements, temperature-programmed reduction and IR-spectroscopy. The phase composition of the metal oxide catalyst was determined by X-ray diffraction. The catalyst mixture showed high activity in ozone decomposition in wet and dry O3/O2 gas mixtures. A mechanism of catalytic ozone degradation was suggested.

  11. Doob's decomposition of set-valued submartingales via ordered ...

    African Journals Online (AJOL)

    We use ideas from measure-free martingale theory and Rådström's completion of a near vector space to derive a Doob decomposition of submartingales in ordered near vector spaces. As special cases thereof, we obtain the Doob decomposition of set-valued submartingales, as noted by Daures, Ni and Zhang, and an ...

  12. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study was to determine the difference in chest compression quality between a modified chest compression method guided by a smartphone application and the standardized traditional chest compression method. Of 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants were divided into a smartphone group (33 people), using the modified chest compression method, and a traditional group (31 people), using the standardized chest compression method. Both groups used the same practice and evaluation manikins; in addition, the smartphone group used applications running on the Android and iOS operating systems (OS) on two smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012. Data were analyzed with the SPSS WIN 12.0 program. The results showed more appropriate compression depth (p< 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm). The proportion of proper chest compressions was likewise higher (p< 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) had higher awareness (p< 0.001) than the smartphone group (2.32 points). In an additional one-question survey carried out in the smartphone group only, the most common reasons rescuers gave for rating the modified method negatively were hand-back pain (48.5%) and unstable posture (21.2%).

  13. Decomposition of argentiferous plumbojarosite in CaO media

    International Nuclear Information System (INIS)

    Patino, F.; Arenas, A.; Rivera, I.; Cordoba, D.A.; Hernandez, L.; Salinas, E.

    1998-01-01

    The decomposition of argentiferous plumbojarosite in CaO media is studied to determine the dependence of the rate on concentration, energetic requirements and particle size. The alkaline decomposition of the jarosite phase can be represented by: Pb0.5Fe3(SO4)2(OH)6(s) + 4 OH⁻(aq) → 0.5 Pb(OH)2(s) + 3 Fe(OH)3(s) + 2 SO4²⁻(aq). The solids resulting from the decomposition, formed by a gel of iron and lead hydroxides, are amorphous and do not evolve to crystalline phases of the lead ferrite type under the conditions studied. The alkaline decomposition process in CaO media is of zero order with respect to the OH⁻ concentration for [OH⁻] > 10⁻³ M, presenting an order of ≅ 0.5 at lower concentrations. The temperature effect indicates an activation energy of 45 kJ/mol, while the rates observed for different aggregate sizes, as well as for the whole sample, are practically identical. These dependences are indicative of chemical control of the reaction, because they are incompatible with control by diffusion through an ash layer. (Author)

  14. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
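
    For context, a minimal numpy sketch of the ALS baseline described above (the paper's gradient-based method is not reproduced here); the tensor sizes, rank, iteration count and synthetic data are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      I, J, K, r = 20, 18, 16, 3

      # Synthetic rank-3 tensor T = sum of r rank-one components.
      A, B, C = (rng.standard_normal((n, r)) for n in (I, J, K))
      T = np.einsum('ir,jr,kr->ijk', A, B, C)

      def khatri_rao(U, V):
          """Column-wise Kronecker product used in the CP normal equations."""
          return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

      Ah, Bh, Ch = (rng.standard_normal((n, r)) for n in (I, J, K))
      for it in range(100):
          # Update each factor with the other two fixed (mode-n unfoldings).
          Ah = T.reshape(I, -1) @ np.linalg.pinv(khatri_rao(Bh, Ch).T)
          Bh = np.moveaxis(T, 1, 0).reshape(J, -1) @ np.linalg.pinv(khatri_rao(Ah, Ch).T)
          Ch = np.moveaxis(T, 2, 0).reshape(K, -1) @ np.linalg.pinv(khatri_rao(Ah, Bh).T)

      T_hat = np.einsum('ir,jr,kr->ijk', Ah, Bh, Ch)
      print("relative fit error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))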

  15. High Performance Polar Decomposition on Distributed Memory Systems

    KAUST Repository

    Sukkari, Dalal E.

    2016-08-08

    The polar decomposition of a dense matrix is an important operation in linear algebra. It can be directly calculated through the singular value decomposition (SVD) or iteratively using the QR dynamically-weighted Halley algorithm (QDWH). The former is difficult to parallelize due to the preponderant number of memory-bound operations during the bidiagonal reduction. We investigate the latter scenario, which performs more floating-point operations but exposes at the same time more parallelism, and therefore runs closer to the theoretical peak performance of the system, thanks to more compute-bound matrix operations. Profiling results show the performance scalability of QDWH for calculating the polar decomposition using around 9200 MPI processes on well- and ill-conditioned matrices of 100K×100K problem size. We then study the performance impact of the QDWH-based polar decomposition as a pre-processing step toward calculating the SVD itself. The new distributed-memory implementation of the QDWH-SVD solver achieves up to five-fold speedup against current state-of-the-art vendor SVD implementations. © Springer International Publishing Switzerland 2016.
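
    A minimal numpy sketch of the two routes mentioned above: the polar factors A = Up·H read off directly from the SVD, and a plain (unweighted) Halley iteration converging to the same orthogonal factor; QDWH's dynamic weighting and QR-based updates are not reproduced, and the matrix size and iteration count are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((6, 6))

      # Route 1: polar decomposition directly from the SVD, A = Up @ H.
      U, s, Vh = np.linalg.svd(A)
      Up = U @ Vh                          # orthogonal polar factor
      H = Vh.T @ np.diag(s) @ Vh           # symmetric positive semidefinite factor
      print("reconstruction error:", np.linalg.norm(A - Up @ H))

      # Route 2: unweighted Halley iteration X <- X (3I + X^T X)(I + 3 X^T X)^-1,
      # which drives all singular values to 1 (QDWH accelerates this step).
      X = A / np.linalg.norm(A, 2)         # scale singular values into (0, 1]
      for _ in range(20):
          XtX = X.T @ X
          X = X @ (3.0 * np.eye(6) + XtX) @ np.linalg.inv(np.eye(6) + 3.0 * XtX)
      print("Halley vs SVD polar factor:", np.linalg.norm(X - Up))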

  16. Decomposition of aboveground biomass of a herbaceous wetland stand

    OpenAIRE

    KLIMOVIČOVÁ, Lucie

    2010-01-01

    The master's thesis is part of the project GA ČR č. P504/11/1151 - Role of plants in the greenhouse gas budget of a sedge fen. The thesis deals with the decomposition of aboveground vegetation in a herbaceous wetland. The decomposition rate was established on the flooded part of the Wet Meadows near Třeboň. The rate of the decomposition processes was evaluated using the litter-bag method. Mesh bags filled with dry plant matter were located in the vicinity of the automatic meteorological stati...

  17. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, thereby regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  18. Soil fauna and plant litter decomposition in tropical and subalpine forests

    Science.gov (United States)

    G. Gonzalez; T.R. Seastedt

    2001-01-01

    The decomposition of plant residues is influenced by their chemical composition, the physical-chemical environment, and the decomposer organisms. Most studies interested in latitudinal gradients of decomposition have focused on substrate quality and climate effects on decomposition, and have excluded explicit recognition of the soil organisms involved in the process....

  19. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  20. The Slice Algorithm For Irreducible Decomposition of Monomial Ideals

    DEFF Research Database (Denmark)

    Roune, Bjarke Hammersholt

    2009-01-01

    Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...

  1. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    Directory of Open Access Journals (Sweden)

    Ming Dong

    2017-11-01

    Full Text Available Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  2. Multi-country comparisons of energy performance: The index decomposition analysis approach

    International Nuclear Information System (INIS)

    Ang, B.W.; Xu, X.Y.; Su, Bin

    2015-01-01

    Index decomposition analysis (IDA) is a popular tool for studying changes in energy consumption over time in a country or region. This specific application of IDA, which may be called temporal decomposition analysis, has been extended by researchers and analysts to study variations in energy consumption or energy efficiency between countries or regions, i.e. spatial decomposition analysis. In spatial decomposition analysis, the main objective is often to understand the relative contributions of overall activity level, activity structure, and energy intensity in explaining differences in total energy consumption between two countries or regions. We review the literature of spatial decomposition analysis, investigate the methodological issues, and propose a spatial decomposition analysis framework for multi-region comparisons. A key feature of the proposed framework is that it passes the circularity test and provides consistent results for multi-region comparisons. A case study in which 30 regions in China are compared and ranked based on their performance in energy consumption is presented. - Highlights: • We conducted cross-regional comparisons of energy consumption using IDA. • We proposed two criteria for IDA method selection in spatial decomposition analysis. • We proposed a new model for regional comparison that passes the circularity test. • Features of the new model are illustrated using the data of 30 regions in China
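
    For illustration, a minimal two-region decomposition in the additive LMDI style on which the IDA literature builds: the difference in total energy use is split exactly into activity, structure and intensity effects using logarithmic-mean weights. The two-sector numbers are made up; the paper's specific model choices and China dataset are not reproduced.

      import numpy as np

      def L(x, y):
          """Logarithmic mean, L(x, y) = (x - y) / (ln x - ln y)."""
          return np.where(np.isclose(x, y), x, (x - y) / (np.log(x) - np.log(y)))

      # Regions a and b: sectoral activity Q and energy use E (two sectors).
      Q_a = np.array([100.0, 50.0]); E_a = np.array([40.0, 30.0])
      Q_b = np.array([120.0, 90.0]); E_b = np.array([45.0, 70.0])

      def factors(Q, E):
          Qtot = Q.sum()
          return Qtot, Q / Qtot, E / Q      # activity, structure, intensity

      Qa, Sa, Ia = factors(Q_a, E_a)
      Qb, Sb, Ib = factors(Q_b, E_b)

      w = L(E_b, E_a)                        # sectoral logarithmic-mean weights
      d_act = (w * np.log(Qb / Qa)).sum()
      d_str = (w * np.log(Sb / Sa)).sum()
      d_int = (w * np.log(Ib / Ia)).sum()

      print("total difference:", E_b.sum() - E_a.sum())       # 45.0
      print("sum of effects:  ", d_act + d_str + d_int)       # 45.0 (exact)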

  3. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    Science.gov (United States)

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. An SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  4. Peat decomposition records in three pristine ombrotrophic bogs in southern Patagonia

    Directory of Open Access Journals (Sweden)

    T. Broder

    2012-04-01

    Full Text Available Ombrotrophic bogs in southern Patagonia have been examined with regard to paleoclimatic and geochemical research questions, but knowledge about organic matter decomposition in these bogs is limited. Therefore, we examined peat humification with depth by Fourier transform infrared (FTIR) measurements of solid peat, C/N ratio, and δ13C and δ15N isotope measurements in three bog sites. Peat decomposition generally increased with depth, but distinct small-scale variation occurred, reflecting fluctuations in factors controlling decomposition. C/N ratios varied mostly between 40 and 120 and were significantly correlated (R2 > 0.55, p < 0.01) with FTIR-derived humification indices. The degree of decomposition was lowest at a site presently dominated by Sphagnum mosses. The peat was most strongly decomposed at the driest site, where the currently peat-forming vegetation produced less refractory organic material, possibly due to fertilizing effects of high sea spray deposition. Decomposition of peat was also advanced near ash layers, suggesting a stimulation of decomposition by ash deposition. Values of δ13C were −26.5 ± 2‰ in the peat and partly related to decomposition indices, while δ15N in the peat varied around zero and did not consistently relate to any decomposition index. Concentrations of DOM were partly related to C/N ratios and partly to FTIR-derived indices; they were not conclusively linked to the degree of peat decomposition. DOM was enriched in 13C and 15N relative to the solid phase, probably due to multiple microbial modifications and recycling of N in these N-poor environments. In summary, the depth profiles of C/N ratios, δ13C values, and FTIR spectra seemed to reflect changes in environmental conditions affecting decomposition, such as bog wetness, but were dominated by site-specific factors, and are further influenced by ash

  5. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  6. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  7. Role of electrodes in ambient electrolytic decomposition of hydroxylammonium nitrate (HAN) solutions

    OpenAIRE

    Koh, Kai Seng; Chin, Jitkai; Wahida Ku Chik, Tengku F.

    2013-01-01

    Decomposition of hydroxylammonium nitrate (HAN) solution by the electrolytic decomposition method has attracted much attention in recent years due to its efficiency and practicability. However, the phenomenon has not been well studied to date. Using a mathematical model currently available, the effects of water content and of the power used for decomposition were studied. Experimental data show that sacrificial electrode materials such as copper or aluminum outperform inert electrodes in the decomposition ...

  8. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
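
    As a minimal numerical illustration of the idea, the sketch below compresses N Gaussian data points to n = 2 score statistics for a linear mean model with fixed covariance; the model, numbers, and variable names are illustrative assumptions, not the paper's worked example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 500, 2                        # N data points, n parameters
theta_fid = np.array([1.0, 0.5])     # fiducial parameters

x = np.linspace(0.0, 1.0, N)
A = np.c_[np.ones(N), x]             # mean model: mu(theta) = A @ theta
sigma2 = 0.1                         # fixed (diagonal) data covariance C = sigma2 * I
Cinv = np.eye(N) / sigma2

theta_true = np.array([1.2, 0.3])    # generate mock data at the true parameters
d = A @ theta_true + np.sqrt(sigma2) * rng.standard_normal(N)

# Score compression: N numbers -> n = 2 statistics, t = dmu^T C^{-1} (d - mu*).
t = A.T @ Cinv @ (d - A @ theta_fid)

# Fisher matrix; in this linear-Gaussian case theta_fid + F^{-1} t is the MLE.
F = A.T @ Cinv @ A
theta_mle = theta_fid + np.linalg.solve(F, t)
print("compressed statistics:", t)
print("recovered parameters :", theta_mle)   # close to theta_true
```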

  9. NRSA enzyme decomposition model data

    Data.gov (United States)

    U.S. Environmental Protection Agency — Microbial enzyme activities measured at more than 2000 US streams and rivers. These enzyme data were then used to predict organic matter decomposition and microbial...

  10. Decomposition of residual oil by large scale HSC plant

    Energy Technology Data Exchange (ETDEWEB)

    Washimi, Koichi; Ogata, Yoshitaka; Limmer, H.; Schuetter, H. (Toyo Engineering Corp., Funabashi, Japan; VEB Petrolchemisches Kombinat Schwedt, Schwedt (East Germany))

    1989-07-01

    Characteristic features and operating conditions of a new plant in East Germany employing HSC, a large-scale, high-decomposition-ratio visbreaking process, are introduced. The noted characteristics of the process are a high decomposition ratio with stable decomposed oil; the ability to accept high-sulfur oil or even the decomposed residuum of another visbreaker; a stable light-oil product with a low content of unsaturated components; and low investment and running costs. To realize the high decomposition ratio, the design suppresses decomposition in the heating furnace and accelerates it in the soaking drum, where a high gas-phase space velocity provides better agitation. The design of the soaking drum, with its main dimensions, is identified as the principal subject of technical development. Operating conditions of the plant in East Germany, which processes residual oil supplied from an existing visbreaker running on USSR crude oil, are introduced. 6 refs., 4 figs., 2 tabs.

  11. Thermal decomposition of hydroxylamine: Isoperibolic calorimetric measurements at different conditions

    International Nuclear Information System (INIS)

    Adamopoulou, Theodora; Papadaki, Maria I.; Kounalakis, Manolis; Vazquez-Carreto, Victor; Pineda-Solano, Alba; Wang, Qingsheng; Mannan, M.Sam

    2013-01-01

    Highlights: • Hydroxylamine thermal decomposition enthalpy was measured using larger quantities. • The rate at which heat is evolved depends on hydroxylamine concentration. • Decomposition heat is strongly affected by the conditions and the selected baseline. • The need for enthalpy measurements using a larger reactant mass is pinpointed. • Hydroxylamine decomposition in the presence of argon is much faster than in air. -- Abstract: Thermal decomposition of hydroxylamine, NH2OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not well understood. In this work, the global enthalpy of hydroxylamine decomposition has been measured in the temperature range of 130–150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30–80 ml solutions containing 1.4–20 g of pure hydroxylamine (2.8–40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3–5 kJ g−1. The accurate measurement of the generated heat proved to be a cumbersome task as (a) it is difficult to identify the end of decomposition, which, after a fast initial stage, proceeds very slowly, especially at lower temperatures, and (b) the surrounding gas environment affects the reaction rate

  12. Thermal decomposition of hydroxylamine: Isoperibolic calorimetric measurements at different conditions

    Energy Technology Data Exchange (ETDEWEB)

    Adamopoulou, Theodora [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Papadaki, Maria I., E-mail: mpapadak@cc.uoi.gr [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Kounalakis, Manolis [Department of Environmental and Natural Resources Management, University of Western Greece (formerly of University of Ioannina), Seferi 2, Agrinio GR30100 (Greece); Vazquez-Carreto, Victor; Pineda-Solano, Alba [Mary Kay O’Connor Process Safety Center, Artie McFerrin Department of Chemical Engineering, Texas A and M University, College Station, TX 77843 (United States); Wang, Qingsheng [Department of Fire Protection and Safety and Department of Chemical Engineering, Oklahoma State University, 494 Cordell South, Stillwater, OK 74078 (United States); Mannan, M.Sam [Mary Kay O’Connor Process Safety Center, Artie McFerrin Department of Chemical Engineering, Texas A and M University, College Station, TX 77843 (United States)

    2013-06-15

    Highlights: • Hydroxylamine thermal decomposition enthalpy was measured using larger quantities. • The rate at which heat is evolved depends on hydroxylamine concentration. • Decomposition heat is strongly affected by the conditions and the selected baseline. • The need for enthalpy measurements using a larger reactant mass is pinpointed. • Hydroxylamine decomposition in the presence of argon is much faster than in air. -- Abstract: Thermal decomposition of hydroxylamine, NH{sub 2}OH, was responsible for two serious accidents. However, its reactive behavior and the synergy of factors affecting its decomposition are not well understood. In this work, the global enthalpy of hydroxylamine decomposition has been measured in the temperature range of 130–150 °C employing isoperibolic calorimetry. Measurements were performed in a metal reactor, employing 30–80 ml solutions containing 1.4–20 g of pure hydroxylamine (2.8–40 g of the supplied reagent). The measurements showed that increased concentration or temperature results in higher global enthalpies of reaction per unit mass of reactant. At 150 °C, specific enthalpies as high as 8 kJ per gram of hydroxylamine were measured, although in general they were in the range of 3−5 kJ g{sup −1}. The accurate measurement of the generated heat proved to be a cumbersome task as (a) it is difficult to identify the end of decomposition, which, after a fast initial stage, proceeds very slowly, especially at lower temperatures, and (b) the surrounding gas environment affects the reaction rate.

  13. Radiolytic decomposition of organic C-14 released from TRU waste

    International Nuclear Information System (INIS)

    Kani, Yuko; Noshita, Kenji; Kawasaki, Toru; Nishimura, Tsutomu; Sakuragi, Tomofumi; Asano, Hidekazu

    2007-01-01

    It has been found that metallic TRU waste releases considerable portions of C-14 in the form of organic molecules such as lower molecular weight organic acids, alcohols and aldehydes. Due to the low sorption ability of organic C-14, it is important to clarify the long-term behavior of organic forms under waste disposal conditions. From investigations on radiolytic decomposition of organic carbon molecules into inorganic carbonic acid, it is expected that radiation from TRU waste will decompose organic C-14 into inorganic carbonic acid that has higher adsorption ability into the engineering barriers. Hence we have studied the decomposition behavior of organic C-14 by gamma irradiation experiments under simulated disposal conditions. The results showed that organic C-14 reacted with OH radicals formed by radiolysis of water, to produce inorganic carbonic acid. We introduced the concept of 'decomposition efficiency' which expresses the percentage of OH radicals consumed for the decomposition reaction of organic molecules in order to analyze the experimental results. We estimated the effect of radiolytic decomposition on the concentration of organic C-14 in the simulated conditions of the TRU disposal system using the decomposition efficiency, and found that the concentration of organic C-14 in the waste package will be lowered when the decomposition of organic C-14 by radiolysis was taken into account, in comparison with the concentration of organic C-14 without radiolysis. Our prediction suggested that some amount of organic C-14 can be expected to be transformed into the inorganic form in the waste package in an actual system. (authors)

  14. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    29 CFR 1917.154 (Labor, 2010-07-01), Marine Terminals, Related Terminal Operations and Equipment: Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  15. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
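
    A hedged sketch of the regression workflow for a single method: compress an image at several JPEG qualities, measure SSIM, fit a low-order model, and invert it for a target SSIM. Pillow and scikit-image provide the JPEG round trip and the SSIM metric; the input file name and the quadratic fit are illustrative assumptions (the paper fits several regression models per method).

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def jpeg_ssim(img_gray, quality):
    # Round-trip the image through an in-memory JPEG at the given quality.
    buf = io.BytesIO()
    Image.fromarray(img_gray).save(buf, format="JPEG", quality=quality)
    out = np.asarray(Image.open(buf).convert("L"))
    return ssim(img_gray, out, data_range=255)

img = np.asarray(Image.open("thermal_sample.png").convert("L"))  # hypothetical input
qualities = np.arange(10, 96, 10)
scores = np.array([jpeg_ssim(img, int(q)) for q in qualities])

# Step 2: regress IQ against the compression parameter (quadratic here).
coeffs = np.polyfit(qualities, scores, deg=2)

# Step 3: invert the model to pick the lowest quality meeting the target IQ.
target = 0.8
roots = np.roots(np.polyadd(coeffs, [-target]))
feasible = [r.real for r in roots if abs(r.imag) < 1e-9 and 10 <= r.real <= 95]
best_q = int(round(min(feasible))) if feasible else 95
print(f"JPEG quality {best_q} is predicted to reach SSIM of about {target}")
```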

  16. Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Lu [University of Tennessee, Knoxville (UTK); Albright, Austin P [ORNL; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK); Guo, Jiandong [University of Tennessee, Knoxville (UTK); Qi, Hairong [University of Tennessee, Knoxville (UTK); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)

    2017-01-01

    Wide-area measurement systems (WAMSs) are used in smart grid systems to enable efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede effective and efficient data analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean-shift clustering algorithm, whereas the higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.
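
    A scaled-down illustration of the decomposition idea, using univariate EMD rather than the paper's multivariate MEMD: the PyEMD package (pip name EMD-signal) is an assumed dependency, and the synthetic "frequency measurement" stands in for WAMS data.

```python
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 10, 2000)
trend = 60.0 + 0.01 * t                              # long-term trend
osc = 0.02 * np.sin(2 * np.pi * 0.7 * t)             # inter-area-like oscillation
noise = 0.005 * np.random.default_rng(1).standard_normal(t.size)
signal = trend + osc + noise

imfs = EMD().emd(signal, t)      # IMFs ordered high to low frequency; the last
                                 # rows carry the oscillation and residual trend

# Crude zero-crossing frequency per IMF, to separate noise-like from trend-like:
for k, imf in enumerate(imfs):
    crossings = np.sum(np.abs(np.diff(np.signbit(imf).astype(int))))
    print(f"IMF {k}: ~{crossings / (2 * (t[-1] - t[0])):.2f} Hz")
```

    Grouping the low-frequency IMFs for storage and handing the high-frequency ones to Hilbert spectral analysis then mirrors the split described in the abstract.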

  17. Long-term litter decomposition controlled by manganese redox cycling.

    Science.gov (United States)

    Keiluweit, Marco; Nico, Peter; Harmon, Mark E; Mao, Jingdong; Pett-Ridge, Jennifer; Kleber, Markus

    2015-09-22

    Litter decomposition is a keystone ecosystem process impacting nutrient cycling and productivity, soil properties, and the terrestrial carbon (C) balance, but the factors regulating decomposition rate are still poorly understood. Traditional models assume that the rate is controlled by litter quality, relying on parameters such as lignin content as predictors. However, a strong correlation has been observed between the manganese (Mn) content of litter and decomposition rates across a variety of forest ecosystems. Here, we show that long-term litter decomposition in forest ecosystems is tightly coupled to Mn redox cycling. Over 7 years of litter decomposition, microbial transformation of litter was paralleled by variations in Mn oxidation state and concentration. A detailed chemical imaging analysis of the litter revealed that fungi recruit and redistribute unreactive Mn(2+) provided by fresh plant litter to produce oxidative Mn(3+) species at sites of active decay, with Mn eventually accumulating as insoluble Mn(3+/4+) oxides. Formation of reactive Mn(3+) species coincided with the generation of aromatic oxidation products, providing direct proof of the previously posited role of Mn(3+)-based oxidizers in the breakdown of litter. Our results suggest that the litter-decomposing machinery at our coniferous forest site depends on the ability of plants and microbes to supply, accumulate, and regenerate short-lived Mn(3+) species in the litter layer. This observation indicates that biogeochemical constraints on bioavailability, mobility, and reactivity of Mn in the plant-soil system may have a profound impact on litter decomposition rates.

  18. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, whereas conventional lossless techniques achieved levels of less than 3.
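
    A minimal sketch of the thresholding idea, using the PyWavelets package (an assumption; the study's actual toolchain is not named): transform, zero the small coefficients, and reconstruct. The synthetic sensor trace is illustrative.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 1024)
# A smooth "target signature" buried in broadband noise:
signal = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.05 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)
flat = np.concatenate(coeffs)
thresh = np.quantile(np.abs(flat), 0.8)          # zero ~80% of coefficients
kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
recon = pywt.waverec(kept, "db4")

zeroed = sum(int(np.sum(c == 0)) for c in kept) / flat.size
err = np.linalg.norm(signal - recon[:signal.size]) / np.linalg.norm(signal)
print(f"zeroed {zeroed:.0%} of coefficients, relative error {err:.3f}")
```

    The surviving coefficient array, being mostly zeros, is what a lossless entropy coder would then compress efficiently.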

  19. Fast modal decomposition for optical fibers using digital holography.

    Science.gov (United States)

    Lyu, Meng; Lin, Zhiquan; Li, Guowei; Situ, Guohai

    2017-07-26

    Eigenmode decomposition of the light field at the output end of optical fibers can provide fundamental insights into the nature of electromagnetic-wave propagation through the fibers. Here we present a fast and complete modal decomposition technique for step-index optical fibers. The proposed technique employs digital holography to measure the light field at the output end of the multimode optical fiber, and utilizes the modal orthonormal property of the basis modes to calculate the modal coefficients of each mode. Optical experiments were carried out to demonstrate the proposed decomposition technique, showing that this approach is fast, accurate and cost-effective.
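
    The projection step at the heart of the technique is simple: with orthonormal basis modes, each modal coefficient is the inner product of the measured complex field with that mode. The sketch below demonstrates this on two synthetic transverse modes standing in for step-index fiber eigenmodes; the holographic field measurement itself is not simulated.

```python
import numpy as np

n_grid = 256
x = np.linspace(-3, 3, n_grid)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2

def normalize(mode):
    return mode / np.sqrt(np.sum(np.abs(mode) ** 2) * dA)

# Two illustrative orthogonal transverse modes:
psi0 = normalize(np.exp(-(X**2 + Y**2)))         # fundamental-like mode
psi1 = normalize(X * np.exp(-(X**2 + Y**2)))     # first higher-order-like mode

# Synthesize a "measured" complex field (as digital holography would supply):
c_true = np.array([0.8, 0.6 * np.exp(1j * np.pi / 4)])
field = c_true[0] * psi0 + c_true[1] * psi1

# Modal coefficients follow from orthonormality: c_n = <psi_n, E>.
c0 = np.sum(np.conj(psi0) * field) * dA
c1 = np.sum(np.conj(psi1) * field) * dA
print(abs(c0) ** 2, abs(c1) ** 2)    # modal powers; phases via np.angle
```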

  20. Decomposition of ammonium nitrate in homogeneous and catalytic denitration

    International Nuclear Information System (INIS)

    Anan'ev, A. V.; Tananaev, I. G.; Shilov, V. P.

    2005-01-01

    Ammonium nitrate is one of potentially explosive by-products of spent fuel reprocessing. Decomposition of ammonium nitrate in the HNO 3 -HCOOH system was studied in the presence or absence of Pt/SiO 2 catalyst. It was found that decomposition of ammonium nitrate is due to homogeneous noncatalytic oxidation of ammonium ion with nitrous acid generated in the HNO 3 -HCOOH system during denitration. The platinum catalyst initiates the reaction of HNO 3 with HCOOH to form HNO 2 . The regular trends were revealed and the optimal conditions of decomposition of ammonium nitrate in nitric acid solutions were found [ru

  1. Martensite decomposition in Cu–Al–Mn–Ag alloys

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Camila Maria Andrade dos, E-mail: camilaandr@gmail.com [Departamento de Físico-Química, Instituto de Química, UNESP, Caixa Postal 355, 14801-970 Araraquara, SP (Brazil); Adorno, Antonio Tallarico [Departamento de Físico-Química, Instituto de Química, UNESP, Caixa Postal 355, 14801-970 Araraquara, SP (Brazil); Galdino da Silva, Ricardo Alexandre [Departamento de Ciências Exatas e da Terra, UNIFESP, 09972-270 Diadema, SP (Brazil); Carvalho, Thaisa Mary [Departamento de Físico-Química, Instituto de Química, UNESP, Caixa Postal 355, 14801-970 Araraquara, SP (Brazil)

    2014-12-05

    Highlights: • Martensite decomposition in Cu–Al–Mn–Ag alloys is mainly influenced by Mn. • Interaction between Cu–Mn atomic pairs increases activation energy. • Cu diffusion is disturbed by the interaction between Cu–Mn atomic pairs. - Abstract: The influence of Mn and Ag additions on the isothermal kinetics of martensite decomposition in the Cu–9wt.%Al alloy was studied using X-ray diffractometry (XRD), scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDXS) and microhardness changes measurements with temperature and time. The results indicated that the reaction is disturbed by the increase of Mn, an effect associated with the increase in the Al–Mn and Cu–Mn atomic pairs, which disturbs Cu diffusion and increases the activation energy for the martensite decomposition reaction.

  2. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in

  3. Compressibility of the protein-water interface.

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than
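
    The volume-fluctuation route to compressibility used in such analyses follows from kappa_T = <dV^2> / (k_B T <V>). The sketch below applies it to a synthetic volume trajectory standing in for an NPT molecular dynamics run; all numbers are illustrative.

```python
import numpy as np

kB = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                    # temperature, K
rng = np.random.default_rng(2)

# Synthetic NPT volume trajectory around 30 nm^3 (in m^3); in real use this
# would come from a constant-pressure MD simulation.
V_mean = 30e-27
V = V_mean * (1.0 + 8e-3 * rng.standard_normal(200_000))

kappa_T = np.var(V) / (kB * T * np.mean(V))
print(f"kappa_T = {kappa_T:.2e} Pa^-1")   # bulk water is ~4.5e-10 Pa^-1
```

    Decomposing the fluctuation into self and cross terms, as the paper does for protein and hydration-shell volumes, amounts to splitting np.var(V) into the corresponding covariance contributions.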

  4. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rate, run-time/throughput, and reconstruction error are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
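
    As an illustration of the kind of ratio/throughput comparison described, the sketch below round-trips a synthetic float32 particle array through several codecs of the Blosc framework (assumes the python-blosc package); random data compresses poorly, so the absolute ratios are only illustrative.

```python
import time
import numpy as np
import blosc

rng = np.random.default_rng(3)
particles = rng.normal(size=(1_000_000, 3)).astype(np.float32)  # x, y, z
raw = particles.tobytes()

for cname in ("blosclz", "lz4", "zstd"):
    t0 = time.perf_counter()
    comp = blosc.compress(raw, typesize=4, clevel=5, cname=cname,
                          shuffle=blosc.SHUFFLE)
    dt = time.perf_counter() - t0
    ratio = len(raw) / len(comp)
    mbps = len(raw) / dt / 1e6
    print(f"{cname:8s} ratio {ratio:5.2f}  throughput {mbps:7.1f} MB/s")
    assert blosc.decompress(comp) == raw     # lossless round trip
```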

  5. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded-compression ultrasonography is a difficult task because of limiting factors such as the operator-dependent technique, retrocaecal location of the appendix, and patient obesity. The posterior manual compression technique visualizes the appendix better on grey-scale ultrasonography. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded-compression sonography. If the appendix was not visualized on graded-compression sonography, the posterior manual compression technique was used to further improve detection of the appendix. RESULTS The vermiform appendix was visualized in 185 (77.1%) of the 240 patients with graded compression alone. The 55 patients whose appendix could not be visualized by graded compression alone were subjected to graded compression followed by the posterior manual compression technique; the appendix was then visualized in 43 of these patients (78.2%), and could not be visualized in the remaining 12 (21.8%). CONCLUSION The combined method of graded compression with posterior manual compression is better than graded compression alone in diagnostic accuracy and detection rate of the vermiform appendix.

  6. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
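
    The rate-distortion function that plays the role of the free energy in this analogy can be computed numerically with the classical Blahut-Arimoto algorithm. The following is a textbook sketch for a Bernoulli(0.5) source under Hamming distortion, where R(D) = 1 - H2(D) gives an analytic check; it illustrates the general machinery, not the paper's particular polymer model.

```python
import numpy as np

def H2(u):
    # Binary entropy in bits.
    return -u * np.log2(u) - (1 - u) * np.log2(1 - u)

def blahut_arimoto(p_x, d, beta, n_iter=500):
    # p_x: source distribution; d[i, j]: distortion matrix;
    # beta: slope parameter (the inverse-temperature analogue).
    q = np.full(d.shape[1], 1.0 / d.shape[1])        # output marginal q(y)
    A = np.exp(-beta * d)
    for _ in range(n_iter):
        c = A * q                                    # unnormalized p(y|x)
        c /= c.sum(axis=1, keepdims=True)
        q = p_x @ c                                  # updated marginal
    D = np.sum(p_x[:, None] * c * d)                 # expected distortion
    R = np.sum(p_x[:, None] * c * np.log2(c / q))    # mutual information, bits
    return R, D

p_x = np.array([0.5, 0.5])                           # Bernoulli(0.5) source
d = 1.0 - np.eye(2)                                  # Hamming distortion
for beta in (1.0, 2.0, 4.0):
    R, D = blahut_arimoto(p_x, d, beta)
    print(f"beta={beta}: D={D:.3f}  R={R:.3f}  analytic R(D)={1 - H2(D):.3f}")
```

    The slope parameter beta traces out the R(D) curve exactly as the contracting force traces the free energy in the physical picture.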

  7. Compressed Air Quality, A Case Study In Paiton Coal Fired Power Plant Unit 1 And 2

    Science.gov (United States)

    Indah, Nur; Kusuma, Yuriadi; Mardani

    2018-03-01

    The compressed air system is part of a very important utility system in a plant, including a steam power plant. In PLN's coal-fired power plant, Paiton units 1 and 2, there are four centrifugal air compressors, which produce as much as 5,652 cfm of compressed air with an electric power capacity of 1,200 kW. Electricity consumption to operate the centrifugal compressors is 7,104,117 kWh per year. Compressed air generation must not only be sufficient in quantity (flow rate) but also meet the required air quality standards. Compressed air in the steam power plant is used for service air, instrument air, and fly-ash handling. This study aims to measure several important parameters related to air quality, followed by analysis of potential disturbances, equipment breakdown, or reduction of energy consumption under the existing compressed air conditions. The measurements include dust particle counts, moisture content, relative humidity, and compressed air pressure. From the measurements, the compressed air pressure generated by the compressors is about 8.4 barg, decreasing to 7.7 barg at the furthest point; the resulting pressure drop of 0.63 barg still satisfies the needs of the end users. Measurement of the number of particles contained in the compressed air, conducted at several points, gives 170,752 particles of 0.3 micron and 45,245 particles of 0.5 micron. At some measurement points the dust particle count exceeds the standards set by ISO 8573.1-2010 and the NACE code, so the air treatment process needs to be improved. Moisture content in the compressed air was assessed by measuring the pressure dew point (PDP) temperature; measurements were made at several points, with results ranging from -28.4 to 30.9 °C. The recommended improvements to compressed air quality in the steam power plant, Paiton units 1 and 2, have the potential to extend the life of

  8. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing the nonlinear viscoelastic properties of polymeric composites when subjected to dynamic loading is essential for the development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient when the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these results with other experimental results from quasi-static compression tests and microstructural analysis. From a methodology standpoint, it is proved that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.
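
    A short sketch of the Fourier-transform method mentioned above: extract the odd higher harmonics of the stress response to a sinusoidal strain, whose relative intensity (e.g. I3/I1) quantifies the nonlinearity. The cubic "material response" below is an illustrative toy, not the paper's composite model.

```python
import numpy as np

f0, cycles, fs = 1.0, 32, 1024          # excitation freq (Hz), cycles, samples/s
t = np.arange(0, cycles / f0, 1.0 / fs)
strain = 0.3 * np.sin(2 * np.pi * f0 * t)

# Toy nonlinear response: linear viscoelastic part plus a cubic elastic term.
stress = 1.0 * strain + 0.4 * np.gradient(strain, t) + 0.2 * strain**3

spec = np.abs(np.fft.rfft(stress)) / len(stress)
freqs = np.fft.rfftfreq(len(stress), 1.0 / fs)
idx = lambda k: np.argmin(np.abs(freqs - k * f0))

I1, I3 = spec[idx(1)], spec[idx(3)]
print(f"I3/I1 = {I3 / I1:.4f}")          # nonzero signals a nonlinear response
```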

  9. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., penetration by a more compressible rod into a less compressible target, by a rod into an analogously compressible target, and by a less compressible rod into a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The results indicate that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and the higher strength enhance the penetration or anti-penetration ability; the higher internal energy, on the other hand, weakens it. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  10. NanoCarbon 2011. Selected works from the Brazilian carbon meeting

    Energy Technology Data Exchange (ETDEWEB)

    Avellaneda, Cesar (ed.) [Univ. Federal de Pelotas (Brazil). Centro de Desenvolvimento Tecnologico]

    2013-02-01

    This book presents eight selected papers from the Brazilian Carbon Meeting 2011. It contains the following topics: Review of field emission from carbon nanotubes: Highlighting measuring energy spread. - Synthesis and characterisation of carbon nanocomposites. - Performance of Ni/MgAl{sub 2}O{sub 4} catalyst obtained by a metal-chitosan complex method in the methane decomposition reaction with production of carbon nanotubes. - The use of nanostructures for DNA transfection. - Applications of carbon nanotubes in oncology. - CNTs/TiO2 composites. - Synthesis of vertically aligned carbon nanotubes by the CVD technique: A review. - Thermoset three-component composite systems using carbon nanotubes.

  11. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power- and area-efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the k-sparse random binary matrix (k-SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and k-SRBM encoders with reduced area and total power consumption.
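
    The flavor of encoding with a sparse binary measurement matrix can be shown in a few lines: each column of the sensing matrix has only a handful of ones (cheap to realize in hardware), and a standard greedy solver recovers the sparse signal. The sketch uses scikit-learn's orthogonal matching pursuit and a signal sparse in the identity basis; the paper's QCAC construction and neural-specific sparsifying dictionaries are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)
n, m, k, d = 256, 96, 8, 4      # signal dim, measurements, sparsity, ones/column

# Sparse binary sensing matrix: each column has d ones at random rows.
Phi = np.zeros((m, n))
for j in range(n):
    Phi[rng.choice(m, size=d, replace=False), j] = 1.0

x = np.zeros(n)                  # k-sparse test signal
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

y = Phi @ x                      # compressed measurements, m << n

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi, y)
err = np.linalg.norm(x - omp.coef_) / np.linalg.norm(x)
print(f"relative recovery error: {err:.2e}")
```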

  12. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
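
    The principle of referential compression is easy to show with a toy greedy matcher that encodes a target string as (position, length) references into a reference sequence plus occasional literals; FRESCO's engineered index structures and second-order compression go far beyond this sketch.

```python
def ref_compress(reference, target, min_match=8):
    # Greedy: at each position, take the longest reference match, else a literal.
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        # (A real implementation indexes the reference, e.g. with a k-mer
        # hash table, instead of this O(n*m) scan.)
        for p in range(len(reference)):
            l = 0
            while (p + l < len(reference) and i + l < len(target)
                   and reference[p + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = p, l
        if best_len >= min_match:
            ops.append(("M", best_pos, best_len))   # match against reference
            i += best_len
        else:
            ops.append(("L", target[i]))            # literal base
            i += 1
    return ops

def ref_decompress(reference, ops):
    out = []
    for op in ops:
        if op[0] == "M":
            _, pos, length = op
            out.append(reference[pos:pos + length])
        else:
            out.append(op[1])
    return "".join(out)

ref = "ACGTACGTTTGACCAGTACGGGATCCAGT"
tgt = "ACGTACGTATGACCAGTACGGGATCCAGT"   # one substitution vs. the reference
ops = ref_compress(ref, tgt)
assert ref_decompress(ref, ops) == tgt
print(ops)                               # two matches plus a single literal
```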

  13. Vector domain decomposition schemes for parabolic equations

    Science.gov (United States)

    Vabishchevich, P. N.

    2017-09-01

    A new class of domain decomposition schemes for finding approximate solutions of time-dependent problems for partial differential equations is proposed and studied. A boundary value problem for a second-order parabolic equation is used as a model problem. The general approach to the construction of domain decomposition schemes is based on partition of unity. Specifically, a vector problem is set up for solving problems in individual subdomains. Stability conditions for vector regionally additive schemes of first- and second-order accuracy are obtained.

  14. Decomposition of multilayer benzene and n-hexane films on vanadium.

    Science.gov (United States)

    Souda, Ryutaro

    2015-09-21

    Reactions of multilayer hydrocarbon films with a polycrystalline V substrate have been investigated using temperature-programmed desorption and time-of-flight secondary ion mass spectrometry. Most of the benzene molecules were dissociated on V, as evidenced by the strong depression of the thermal desorption yields of physisorbed species at 150 K. The reaction products dehydrogenated gradually after the multilayer film disappeared from the surface. A large amount of oxygen was needed to passivate benzene decomposition on V. These behaviors indicate that the subsurface sites of V play a role in multilayer benzene decomposition. Decomposition of the n-hexane multilayer films is manifested by the desorption of methane at 105 K and gradual hydrogen desorption starting at this temperature, indicating that C-C bond scission precedes C-H bond cleavage. The n-hexane dissociation temperature is considerably lower than the thermal desorption temperature of the physisorbed species (140 K). The n-hexane multilayer morphology changes at the decomposition temperature, suggesting that a liquid-like phase formed after crystallization plays a role in the low-temperature decomposition of n-hexane.

  15. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
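
    For contrast, a widely used compression-based similarity measure can be sketched in a few lines: the normalized compression distance (NCD), computed from compressed sizes with an off-the-shelf compressor. This is only an illustrative analogue; the paper's CompressEdge and CompressVertices methods compress the network structure itself by contracting edges, rather than compressing concatenated text as below.

```python
import zlib

def csize(s: bytes) -> int:
    return len(zlib.compress(s, 9))

def ncd(a: bytes, b: bytes) -> float:
    # Normalized compression distance: small when a and b share structure.
    ca, cb, cab = csize(a), csize(b), csize(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Illustrative (hypothetical) metabolic-network edge lists:
net1 = b"glc->g6p\ng6p->f6p\nf6p->fbp\nfbp->g3p\n"
net2 = b"glc->g6p\ng6p->f6p\nf6p->fbp\nfbp->dhap\n"   # one differing edge
net3 = b"acc->mal\nmal->oaa\noaa->cit\ncit->icit\n"    # unrelated pathway

print("NCD(net1, net2) =", round(ncd(net1, net2), 3))  # smaller: more similar
print("NCD(net1, net3) =", round(ncd(net1, net3), 3))
```

    On such tiny inputs the values are noisy; the idea, shared with the paper's approach, is that the compressed size of the combination measures shared structure.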

  16. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
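
    The scheme described here underlies the zfp library. A minimal fixed-rate round trip through zfp's Python bindings might look as follows; the zfpy package and the exact rate-based call shown are assumptions to be checked against the installed version.

```python
import numpy as np
import zfpy  # assumed: the zfpy bindings to the zfp library

data = np.sin(np.linspace(0, 8 * np.pi, 64 * 64)).reshape(64, 64)

comp = zfpy.compress_numpy(data, rate=8.0)   # ~8 bits per value, fixed rate
back = zfpy.decompress_numpy(comp)

print("compressed bytes:", len(comp))
print("max abs error   :", np.max(np.abs(data - back)))
```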

  17. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    Science.gov (United States)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

    Horizontal electrical heterogeneity of the subsurface earth originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity will severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the possible confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models; the method is one of the mathematical decomposition methods based on eigenstate analyses, as distinguished from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method in numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropy information. Finally, against the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition was applied to real data acquired in the North China Craton for 1D anisotropy analysis, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect the anisotropy of geological media.

  18. Time-frequency analysis : mathematical analysis of the empirical mode decomposition.

    Science.gov (United States)

    2009-01-01

    Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...

  18. Industrial application of the decomposition of CO2/NOx by the large-flow atmospheric microwave plasma (LAMP) employed in a motorcar

    Science.gov (United States)

    Pandey, Anil; Niwa, Syunta; Morii, Yoshinari; Ikezawa, Shunjiro

    2012-10-01

    In order to decompose CO2/NOx [1], we have developed the large-flow atmospheric microwave plasma, LAMP [2]. Applying it to industrial innovation is very important, so we have studied applying the LAMP in a motorcar. The characteristics of the developed LAMP are its low price and high CO2/NOx decomposition efficiencies. A vertical configuration between the exhaust-gas pipe and the waveguide was shown to be a suitable arrangement [2]. The system was set up in the car body with a battery and an inverter; the battery is shared between the engine and the inverter. In the motorcar application the gas flow is large, so the LAMP, which has the merits of large flow capacity, highly efficient decomposition, and inexpensive apparatus, will be superior. [1] H. Barankova, L. Bardos, ISSP 2011, Kyoto. [2] S. Ikezawa, S. Parajulee, S. Sharma, A. Pandey, ISSP 2011, Kyoto (2011) pp. 28-31; S. Ikezawa, S. Niwa, Y. Morii, JJAP meeting 2012, March 16, Waseda U. (2012).

  20. Hydrothermal decomposition of liquid crystal in subcritical water

    International Nuclear Information System (INIS)

    Zhuang, Xuning; He, Wenzhi; Li, Guangming; Huang, Juwen; Lu, Shangming; Hou, Lianjiao

    2014-01-01

    Highlights: • Hydrothermal technology can effectively decompose the liquid crystal 4-octoxy-4′-cyanobiphenyl. • The decomposition rate reached 97.6% under the optimized conditions. • 4-Octoxy-4′-cyanobiphenyl was mainly decomposed into simple and innocuous products. • The mechanism analysis reveals the decomposition reaction process. - Abstract: Treatment of liquid crystals has important significance for environmental protection and human health. This study proposed a hydrothermal process to decompose the liquid crystal 4-octoxy-4′-cyanobiphenyl. Experiments were conducted in a 5.7 mL stainless steel tube reactor heated by a salt bath. Factors affecting the decomposition rate of 4-octoxy-4′-cyanobiphenyl were evaluated with HPLC. The decomposed liquid products were characterized by GC-MS. Under optimized conditions, i.e., 0.2 mL H2O2 supply, pH value 6, temperature 275 °C and reaction time 5 min, 97.6% of 4-octoxy-4′-cyanobiphenyl was decomposed into simple and environment-friendly products. Based on the mechanism analysis and product characterization, a possible hydrothermal decomposition pathway was proposed. The results indicate that hydrothermal technology is a promising choice for liquid crystal treatment

  1. Geometric decomposition of the conformation tensor in viscoelastic turbulence

    Science.gov (United States)

    Hameduddin, Ismail; Meneveau, Charles; Zaki, Tamer A.; Gayme, Dennice F.

    2018-05-01

    This work introduces a mathematical approach to analysing the polymer dynamics in turbulent viscoelastic flows that uses a new geometric decomposition of the conformation tensor, along with associated scalar measures of the polymer fluctuations. The approach circumvents an inherent difficulty in traditional Reynolds decompositions of the conformation tensor: the fluctuating tensor fields are not positive-definite and so do not retain the physical meaning of the tensor. The geometric decomposition of the conformation tensor yields both mean and fluctuating tensor fields that are positive-definite. The fluctuating tensor in the present decomposition has a clear physical interpretation as a polymer deformation relative to the mean configuration. Scalar measures of this fluctuating conformation tensor are developed based on the non-Euclidean geometry of the set of positive-definite tensors. Drag-reduced viscoelastic turbulent channel flow is then used an example case study. The conformation tensor field, obtained using direct numerical simulations, is analysed using the proposed framework.
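
    The key point, that fluctuations are measured relative to the mean while staying on the manifold of positive-definite tensors, can be sketched numerically: form G = Cbar^(-1/2) C Cbar^(-1/2), which is positive-definite by construction, and take the norm of its matrix logarithm as a scalar fluctuation measure. The tensors below are illustrative, and the paper's exact scalar measures may differ in detail.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_fluctuation(C, Cbar):
    # G = Cbar^{-1/2} C Cbar^{-1/2} is SPD whenever C and Cbar are.
    Bi = np.linalg.inv(np.real(sqrtm(Cbar)))
    G = Bi @ C @ Bi
    dist = np.linalg.norm(np.real(logm(G)), "fro")   # geodesic-style distance
    return G, dist

Cbar = np.diag([4.0, 1.0, 1.0])           # mean conformation (stretched in x)
C = np.array([[5.0, 0.5, 0.0],
              [0.5, 1.2, 0.1],
              [0.0, 0.1, 0.9]])           # instantaneous conformation tensor
G, dist = spd_fluctuation(C, Cbar)
print("eigenvalues of G:", np.linalg.eigvalsh(G))   # all positive
print("fluctuation magnitude:", round(dist, 3))
```

    A Reynolds-style subtraction C - Cbar, by contrast, can easily fail to be positive-definite, which is the difficulty the geometric decomposition avoids.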

  2. 25th annual meeting of the German Society of Neuroradiology. Abstracts

    International Nuclear Information System (INIS)

    Grau, H.C.

    1990-01-01

    The 62 abstracts summarize the individual papers read at the 25th annual meeting of the German Society of Neuroradiology, which dealt with the following subjects: the narrowed vertebral canal, congenital and acquired spinal compression syndromes (8); neuroradiology of disturbances of perinatal development (7); new horizons in the diagnosis of degenerative and inflammatory disorders of the white matter (14); interventional neuroradiology (12); miscellaneous topics (21). (UHE)

  3. ENERGY EFFICIENCY LIMITS FOR A RECUPERATIVE BAYONET SULFURIC ACID DECOMPOSITION REACTOR FOR SULFUR CYCLE THERMOCHEMICAL HYDROGEN PRODUCTION

    Energy Technology Data Exchange (ETDEWEB)

    Gorensek, M.; Edwards, T.

    2009-06-11

    A recuperative bayonet reactor design for the high-temperature sulfuric acid decomposition step in sulfur-based thermochemical hydrogen cycles was evaluated using pinch analysis in conjunction with statistical methods. The objective was to establish the minimum energy requirement. Taking hydrogen production via alkaline electrolysis with nuclear power as the benchmark, the acid decomposition step can consume no more than 450 kJ/mol SO{sub 2} for sulfur cycles to be competitive. The lowest value of the minimum heating target, 320.9 kJ/mol SO{sub 2}, was found at the highest pressure (90 bar) and peak process temperature (900 C) considered, and at a feed concentration of 42.5 mol% H{sub 2}SO{sub 4}. This should be low enough for a practical water-splitting process, even including the additional energy required to concentrate the acid feed. Lower temperatures consistently gave higher minimum heating targets. The lowest peak process temperature that could meet the 450-kJ/mol SO{sub 2} benchmark was 750 C. If the decomposition reactor were to be heated indirectly by an advanced gas-cooled reactor heat source (50 C temperature difference between primary and secondary coolants, 25 C minimum temperature difference between the secondary coolant and the process), then sulfur cycles using this concept could be competitive with alkaline electrolysis provided the primary heat source temperature is at least 825 C. The bayonet design will not be practical if the (primary heat source) reactor outlet temperature is below 825 C.

  4. rCUR: an R package for CUR matrix decomposition

    Directory of Open Access Journals (Sweden)

    Bodor András

    2012-05-01

    Full Text Available Abstract Background Many methods for dimensionality reduction of large data sets such as those generated in microarray studies boil down to the Singular Value Decomposition (SVD. Although singular vectors associated with the largest singular values have strong optimality properties and can often be quite useful as a tool to summarize the data, they are linear combinations of up to all of the data points, and thus it is typically quite hard to interpret those vectors in terms of the application domain from which the data are drawn. Recently, an alternative dimensionality reduction paradigm, CUR matrix decompositions, has been proposed to address this problem and has been applied to genetic and internet data. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix. Since they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn. Results We present an implementation to perform CUR matrix decompositions, in the form of a freely available, open source R-package called rCUR. This package will help users to perform CUR-based analysis on large-scale data, such as those obtained from different high-throughput technologies, in an interactive and exploratory manner. We show two examples that illustrate how CUR-based techniques make it possible to reduce significantly the number of probes, while at the same time maintaining major trends in data and keeping the same classification accuracy. Conclusions The package rCUR provides functions for the users to perform CUR-based matrix decompositions in the R environment. In gene expression studies, it gives an additional way of analysis of differential expression and discriminant gene selection based on the use of statistical leverage scores. These scores, which have been used historically in diagnostic regression
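
    The leverage-score column/row sampling that underlies CUR can be sketched in a few lines. The following is a minimal illustration of the general construction, not the rCUR package's actual API; the function names and the rank parameter k are ours:

```python
import numpy as np

def leverage_scores(A, k):
    """Leverage score of each column of A from the top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt[:k, :] ** 2, axis=0) / k   # non-negative, sums to 1

def cur(A, k, c, r, seed=0):
    """Sample c columns and r rows with probabilities given by leverage scores,
    then compute the linking matrix U that minimizes ||A - C U R||_F."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(A.shape[1], size=c, replace=False, p=leverage_scores(A, k))
    rows = rng.choice(A.shape[0], size=r, replace=False, p=leverage_scores(A.T, k))
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R, cols, rows

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 80))  # low-rank-ish data
C, U, R, cols, rows = cur(A, k=10, c=20, r=20)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # relative error
```

    Because C and R are actual columns and rows of the data (e.g. probes and samples), they retain the domain interpretation that abstract singular vectors lack.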

  5. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems ... feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated...

  6. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different file sizes. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that as a function of compressed image size, wavelet compressed images produced less RMS error than JPEG compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression, which produced better images than JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.
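
    The objective part of such an assessment is easy to reproduce. The sketch below, assuming the Pillow library and a hypothetical image file, computes compressed size and RMS error for JPEG at several quality settings; the study's wavelet coder is not reproduced here:

```python
import io
import numpy as np
from PIL import Image

def jpeg_rms_error(img, quality):
    """Compress to JPEG in memory, reload, and return (bytes, RMS error)."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    size = buf.tell()
    buf.seek(0)
    recon = Image.open(buf).convert(img.mode)
    a = np.asarray(img, dtype=float)
    b = np.asarray(recon, dtype=float)
    return size, float(np.sqrt(np.mean((a - b) ** 2)))

img = Image.open("fundus.png").convert("L")  # hypothetical retinal image
for q in (90, 50, 10):
    size, rms = jpeg_rms_error(img, q)
    print(f"quality={q}: {size} bytes, RMS error={rms:.2f}")
```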

  7. Separable decompositions of bipartite mixed states

    Science.gov (United States)

    Li, Jun-Li; Qiao, Cong-Feng

    2018-04-01

    We present a practical scheme for the decomposition of a bipartite mixed state into a sum of direct products of local density matrices, using the technique developed in Li and Qiao (Sci. Rep. 8:1442, 2018). In the scheme, the correlation matrix which characterizes the bipartite entanglement is first decomposed into two matrices composed of the Bloch vectors of local states. Then, we show that the symmetries of Bloch vectors are consistent with that of the correlation matrix, and the magnitudes of the local Bloch vectors are lower bounded by the correlation matrix. Concrete examples for the separable decompositions of bipartite mixed states are presented for illustration.

  8. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    Full Text Available When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven, and a qualitative conclusion on the convergence rate is presented. Finally, a numerical experiment compares the convergence domains of the modified algorithm and the classical algorithm.
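
    The algorithm described above is compact enough to show directly. A minimal NumPy sketch (the small eigenvalue floor is our own addition, guarding against a singular reconstructed Hessian):

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Eigendecompose the Hessian, replace negative eigenvalues by their
    absolute values, and return the resulting descent direction."""
    w, V = np.linalg.eigh(hess)            # hess is assumed symmetric
    w = np.abs(w)                          # flip negative curvature
    w[w < 1e-12] = 1e-12                   # guard against singularity (our addition)
    H_mod_inv = V @ np.diag(1.0 / w) @ V.T
    return -H_mod_inv @ grad

# One step on the saddle function f(x, y) = x^2 - y^2 at (1, 1):
grad = np.array([2.0, -2.0])
hess = np.array([[2.0, 0.0], [0.0, -2.0]])
d = modified_newton_direction(grad, hess)
print(d, grad @ d)   # g·d < 0, so d is a descent direction
```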

  9. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of compression factors among JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  10. Identification of liquid-phase decomposition species and reactions for guanidinium azotetrazolate

    International Nuclear Information System (INIS)

    Kumbhakarna, Neeraj R.; Shah, Kaushal J.; Chowdhury, Arindrajit; Thynell, Stefan T.

    2014-01-01

    Highlights: • Guanidinium azotetrazolate (GzT) is a high-nitrogen energetic material. • FTIR spectroscopy and ToFMS spectrometry were used for species identification. • Quantum mechanics was used to identify transition states and decomposition pathways. • Important reactions in the GzT liquid-phase decomposition process were identified. • Initiation of decomposition occurs via ring opening, releasing N2. - Abstract: The objective of this work is to analyze the decomposition of guanidinium azotetrazolate (GzT) in the liquid phase by using a combined experimental and computational approach. The experimental part involves the use of Fourier transform infrared (FTIR) spectroscopy to acquire the spectral transmittance of the evolved gas-phase species from rapid thermolysis, as well as to acquire spectral transmittance of the condensate and residue formed from the decomposition. Time-of-flight mass spectrometry (ToFMS) is also used to acquire mass spectra of the evolved gas-phase species. Sub-milligram samples of GzT were heated at rates of about 2000 K/s to a set temperature (553–573 K) where decomposition occurred under isothermal conditions. N2, NH3, HCN, guanidine and melamine were identified as products of decomposition. The computational approach is based on using quantum mechanics for confirming the identity of the species observed in experiments and for identifying elementary chemical reactions that formed these species. In these ab initio techniques, various levels of theory and basis sets were used. Based on the calculated enthalpy and free energy values of various molecular structures, important reaction pathways were identified. Initiation of decomposition of GzT occurs via ring opening to release N2

  11. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  13. Xylanase and cellulase activities during anaerobic decomposition of three aquatic macrophytes.

    Science.gov (United States)

    Nunes, Maíra F; da Cunha-Santino, Marcela B; Bianchini, Irineu

    2011-01-01

    Enzymatic activity during decomposition is extremely important to hydrolyze molecules that are assimilated by microorganisms. During aquatic macrophyte decomposition, enzymes act mainly in the breakdown of the lignocellulosic matrix fibers (i.e. cellulose, hemicellulose and lignin) that make up the refractory fraction of the organic matter. Considering the important role of enzymatic activities in decomposition processes, this study aimed to describe the temporal changes of xylanase and cellulase activities during anaerobic decomposition of Ricciocarpus natans (freely-floating), Oxycaryum cubense (emergent) and Cabomba furcata (submersed). The aquatic macrophytes were collected in Óleo Lagoon, Luiz Antonio, São Paulo, Brazil, and bioassays were carried out. Decomposition chambers for each species (n = 10) were set up with dried macrophyte fragments and filtered Óleo Lagoon water. The chambers were incubated at 22.5°C, in the dark and under anaerobic conditions. Enzymatic activities and remaining organic matter were measured periodically during 90 days. The temporal variation of enzymes showed that C. furcata presented the highest decay and the highest maximum enzyme production. Xylanase production was higher than cellulase production during the decomposition of all three aquatic macrophyte species.

  14. Spinodal decomposition of austenite in long-term-aged duplex stainless steel

    International Nuclear Information System (INIS)

    Chung, H.M.

    1989-02-01

    Spinodal decomposition of the austenite phase in the cast duplex stainless steels CF-8 and -8M grades has been observed after long-term thermal aging at 400 and 350 °C for 30,000 h (3.4 yr). At 320 °C, the reaction was observed only in a limited region near the austenite grain boundaries. Ni segregation and "worm-holes" corresponding to the spatial microchemical fluctuations have been confirmed. The decomposition was observed only for heats containing relatively high overall Ni content (9.6–12.0 wt %) but not in low-Ni (8.0–9.4 wt %) heats. In some specimens showing a relatively advanced stage of decomposition, localized regions of austenite with a Vickers hardness of 340–430 were observed. However, the effect of austenite decomposition on the overall material toughness appears secondary for aging up to 3–5 yr in comparison with the effect of the faster spinodal decomposition in the ferrite phase. The observation of the thermally driven spinodal decomposition of the austenite phase in cast duplex stainless steels validates the proposition that a miscibility gap occurs in Fe-Ni and ancillary systems. 16 refs., 7 figs., 1 tab

  15. Compression experiments on the TOSCA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times, approximately 200μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  16. Entanglement and tensor product decomposition for two fermions

    International Nuclear Information System (INIS)

    Caban, P; Podlaski, K; Rembielinski, J; Smolinski, K A; Walczak, Z

    2005-01-01

    The problem of the choice of tensor product decomposition in a system of two fermions with the help of Bogoliubov transformations of creation and annihilation operators is discussed. The set of physical states of the composite system is restricted by the superselection rule forbidding the superposition of fermions and bosons. It is shown that the Wootters concurrence is not the proper entanglement measure in this case. The explicit formula for the entanglement of formation is found. This formula shows that the entanglement of a given state depends on the tensor product decomposition of a Hilbert space. It is shown that the set of separable states is narrower than in the two-qubit case. Moreover, there exist states which are separable with respect to all tensor product decompositions of the Hilbert space. (letter to the editor)

  17. Dinitraminic acid (HDN) isomerization and self-decomposition revisited

    International Nuclear Information System (INIS)

    Rahm, Martin; Brinck, Tore

    2008-01-01

    Density functional theory (DFT) and the ab initio based CBS-QB3 method have been used to study possible decomposition pathways of dinitraminic acid HN(NO2)2 (HDN) in the gas phase. The proton transfer isomer of HDN, O2NNN(O)OH, and its conformers can be formed and converted into each other through intra- and intermolecular proton transfer. The latter has been shown to proceed substantially faster via double proton transfer. The main mechanism for HDN decomposition is found to be initiated by a dissociation reaction, splitting off nitrogen dioxide from either HDN or the HDN isomer. This reaction has an activation enthalpy of 36.5 kcal/mol at the CBS-QB3 level, which is in good agreement with experimental estimates of the decomposition barrier

  18. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.
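
    As a rough one-dimensional caricature of the idea, not the paper's photonic implementation: dilate the sampling grid where the local information content is high, so that a subsequent uniform downsampling keeps more samples there. The names and the gradient-magnitude proxy below are our own illustrative choices:

```python
import numpy as np

def warped_downsample(x, m, eps=1e-3):
    """Resample x at m points placed densely where |gradient| is large,
    emulating an information-dependent 'stretch' before uniform sampling."""
    density = np.abs(np.gradient(x)) + eps          # local information proxy
    cdf = np.cumsum(density)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])       # normalize to [0, 1]
    grid = np.interp(np.linspace(0, 1, m), cdf, np.arange(len(x)))
    return grid, np.interp(grid, np.arange(len(x)), x)

t = np.linspace(0, 1, 2000)
x = np.exp(-((t - 0.5) / 0.01) ** 2)    # a narrow, information-rich pulse
grid, y = warped_downsample(x, 64)      # most of the 64 samples land on the pulse
print(grid[np.argmax(y)])
```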

  19. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what ... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  20. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    Science.gov (United States)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution are chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criterion. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse, then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review closes by signposting other imaging acceleration techniques under present development, and with a consideration of the potential impact of, and obstacles to, bringing compressed sensing into routine use in clinical MRI.
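
    The core reconstruction idea is a sparsity-regularized inversion of the undersampled Fourier operator. A minimal sketch, assuming the image itself is sparse (a real CS-MRI reconstruction would use a wavelet or total-variation transform and a more refined solver):

```python
import numpy as np

def ista_cs_recon(y, mask, lam=0.02, n_iter=200):
    """Recover a sparse image from undersampled k-space samples
    y = mask * FFT2(image) by iterative soft-thresholding (ISTA)."""
    x = np.zeros(mask.shape, dtype=complex)
    for _ in range(n_iter):
        resid = mask * (np.fft.fft2(x, norm="ortho") - y)
        x = x - np.fft.ifft2(resid, norm="ortho")   # gradient step (step size 1)
        mag = np.abs(x)                             # complex-safe soft threshold
        x *= np.maximum(mag - lam, 0.0) / np.maximum(mag, 1e-12)
    return x

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[rng.integers(0, 64, 30), rng.integers(0, 64, 30)] = 1.0  # sparse "angiogram"
mask = rng.random((64, 64)) < 0.3            # retain 30% of k-space
y = mask * np.fft.fft2(truth, norm="ortho")
recon = np.abs(ista_cs_recon(y, mask))
print(np.linalg.norm(recon - truth) / np.linalg.norm(truth))
```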

  1. Oil-free centrifugal hydrogen compression technology demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Heshmat, Hooshang [Mohawk Innovative Technology Inc., Albany, NY (United States)

    2014-05-31

    One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compress efficiently. While efficient and cost effective compression technology is crucial to effective pipeline delivery of hydrogen, the compression methods used currently rely on oil lubricated positive displacement (PD) machines. PD compression technology is very costly, has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings), and contaminates hydrogen with lubricating fluid. Even so-called “oil-free” machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low capital cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi’s solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for a flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow for very high operating speeds, totally contamination-free operation, long life and reliability. This design meets the DOE’s performance targets, achieves an extremely aggressive specific power metric of 0.48 kW-hr/kg, and provides significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination. The multi-stage compressor system concept has been validated through full scale

  2. Gear fault diagnosis under variable conditions with intrinsic time-scale decomposition-singular value decomposition and support vector machine

    Energy Technology Data Exchange (ETDEWEB)

    Xing, Zhanqiang; Qu, Jianfeng; Chai, Yi; Tang, Qiu; Zhou, Yuming [Chongqing University, Chongqing (China)

    2017-02-15

    The gear vibration signal is nonlinear and non-stationary, gear fault diagnosis under variable conditions has always been unsatisfactory. To solve this problem, an intelligent fault diagnosis method based on Intrinsic time-scale decomposition (ITD)-Singular value decomposition (SVD) and Support vector machine (SVM) is proposed in this paper. The ITD method is adopted to decompose the vibration signal of gearbox into several Proper rotation components (PRCs). Subsequently, the singular value decomposition is proposed to obtain the singular value vectors of the proper rotation components and improve the robustness of feature extraction under variable conditions. Finally, the Support vector machine is applied to classify the fault type of gear. According to the experimental results, the performance of ITD-SVD exceeds those of the time-frequency analysis methods with EMD and WPT combined with SVD for feature extraction, and the classifier of SVM outperforms those for K-nearest neighbors (K-NN) and Back propagation (BP). Moreover, the proposed approach can accurately diagnose and identify different fault types of gear under variable conditions.
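
    The SVD-feature plus SVM stage of such a pipeline can be sketched as follows; the ITD step is not reproduced, so a stack of synthetic components stands in for the proper rotation components, and all names are illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def svd_features(components):
    """Stack decomposed components (n_components x n_samples) and use the
    singular values of the matrix as a compact, condition-robust feature."""
    return np.linalg.svd(components, compute_uv=False)

rng = np.random.default_rng(0)
X, y = [], []
for label, freq in enumerate((5.0, 13.0)):         # two synthetic "fault" classes
    for _ in range(40):
        t = np.linspace(0, 1, 256)
        sig = np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(256)
        comps = np.stack([sig, np.gradient(sig)])   # stand-in for ITD's PRCs
        X.append(svd_features(comps)); y.append(label)
X, y = np.array(X), np.array(y)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])         # train on half the data
print(clf.score(X[1::2], y[1::2]))                  # test on the other half
```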

  3. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...

  4. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  5. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  6. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the differences in outcome after treatment among hot herbal compress, hot compress, and topical diclofenac. Participants were equally divided into groups and received one of the treatments: hot herbal compress, hot compress, or topical diclofenac, the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to establish the level of pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were also examined to identify the motional effects. All treatments showed a significantly decreased level of pain intensity and increased cervical range of motion, while the intervention groups outperformed the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment on par with hot compress and topical diclofenac.

  7. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed. Thus, a suitable data filing system and precise data compression will be necessary. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven compression parameter settings. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using a film reader with a CCD sensor in the MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed image in several regions of interest by four radiologists. Compression and extension of radiological images were performed promptly by the local filing system. Image quality was affected much more by the compression ratio than by the parameter mode itself. In other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 for plain radiographs and about 8:1 for contrast studies. Image compression by this technique will be acceptable for diagnostic radiology. (author)

  8. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  9. Thermal decomposition of potassium metaperiodate doped with trivalent ions

    Energy Technology Data Exchange (ETDEWEB)

    Muraleedharan, K., E-mail: kmuralika@gmail.com [Department of Chemistry, University of Calicut, Calicut, Kerala 673 635 (India); Kannan, M.P.; Gangadevi, T. [Department of Chemistry, University of Calicut, Calicut, Kerala 673 635 (India)

    2010-04-20

    The kinetics of isothermal decomposition of potassium metaperiodate (KIO{sub 4}), doped with phosphate and aluminium, has been studied by thermogravimetry (TG). We introduced a custom-made thermobalance that is able to record weight decrease with time under pure isothermal conditions. The decomposition proceeds mainly through two stages: an acceleratory stage up to {alpha} = 0.50 and the decay stage beyond. The decomposition data for aluminium and phosphate doped KIO{sub 4} were found to be best described by the Prout-Tompkins equation. Separate kinetic analyses of the {alpha}-t data corresponding to the acceleratory region and decay region showed that the acceleratory stage gave the best fit with the Prout-Tompkins equation itself, whereas the decay stage fitted better to the contracting area equation. The rate of decomposition of phosphate doped KIO{sub 4} increases approximately linearly with an increase in the dopant concentration. In the case of aluminium doped KIO{sub 4}, the rate passes through a maximum with increase in the dopant concentration. The {alpha}-t data of pure and doped KIO{sub 4} were also subjected to isoconversional studies for the determination of activation energy values. Doping did not change the activation energy of the reaction. The results favour an electron-transfer mechanism for the isothermal decomposition of KIO{sub 4}, agreeing well with our earlier observations.
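
    The two models named here have standard integral forms - Prout-Tompkins, ln(α/(1−α)) = kt + c, and the contracting-area model, 1 − (1−α)^(1/2) = kt - so fitting them to α-t data reduces to linear least squares. A minimal sketch on synthetic data (all names and numbers illustrative):

```python
import numpy as np

def fit_prout_tompkins(t, alpha):
    """Fit ln(alpha / (1 - alpha)) = k*t + c by linear least squares."""
    y = np.log(alpha / (1.0 - alpha))
    k, c = np.polyfit(t, y, 1)
    return k, c

def fit_contracting_area(t, alpha):
    """Fit 1 - (1 - alpha)**0.5 = k*t (no intercept)."""
    y = 1.0 - np.sqrt(1.0 - alpha)
    return float(np.dot(t, y) / np.dot(t, t))

t = np.linspace(1, 60, 30)                      # minutes (synthetic)
alpha = 1.0 / (1.0 + np.exp(-(0.15 * t - 4)))   # sigmoid alpha-t curve
alpha = np.clip(alpha, 1e-3, 1 - 1e-3)
print(fit_prout_tompkins(t, alpha))             # recovers k = 0.15 for this data
print(fit_contracting_area(t, alpha))
```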

  10. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes assessment of the lossless compression of a new efficient compression technique (JIS system) using a neural network that the author and co-workers have recently developed. First, the theory of encoding and decoding the data is explained. Assessment is done on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to determine the compression rate and loss. For comparison, the same data are also JPEG lossless-compressed. The personal computer (PC) is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The present JIS system is found to be more than 4 times as efficient as the usual compression methods, compressing the file volume to only 1/11 on average, and is thus an important response to the growing volume of medical imaging data. (R.T.)

  11. Challenges of including nitrogen effects on decomposition in earth system models

    Science.gov (United States)

    Hobbie, S. E.

    2011-12-01

    Despite the importance of litter decomposition for ecosystem fertility and carbon balance, key uncertainties remain about how this fundamental process is affected by nitrogen (N) availability. Nevertheless, resolving such uncertainties is critical for mechanistic inclusion of such processes in earth system models, towards predicting the ecosystem consequences of increased anthropogenic reactive N. Towards that end, we have conducted a series of experiments examining nitrogen effects on litter decomposition. We found that both substrate N and externally supplied N (regardless of form) accelerated the initial decomposition rate. Faster initial decomposition rates were linked to the higher activity of carbohydrate-degrading enzymes associated with externally supplied N and the greater relative abundances of Gram negative and Gram positive bacteria associated with green leaves and externally supplied organic N (assessed using phospholipid fatty acid analysis, PLFA). By contrast, later in decomposition, externally supplied N slowed decomposition, increasing the fraction of slowly decomposing litter and reducing lignin-degrading enzyme activity and relative abundances of Gram negative and Gram positive bacteria. Our results suggest that elevated atmospheric N deposition may have contrasting effects on the dynamics of different soil carbon pools, decreasing mean residence times of active fractions comprising very fresh litter, while increasing those of more slowly decomposing fractions including more processed litter. Incorporating these contrasting effects of N on decomposition processes into models is complicated by lingering uncertainties about how these effects generalize across ecosystems and substrates.

  12. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    Full Text Available New combustion concepts have recently been developed with the purpose of tackling the problem of high emission levels of traditional direct injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection is used, causing a burning process in which the fuel burns in the premixed condition. In compression ignition engines, soot (particulate matter) and NOx emissions are an extremely unsolved issue. Premixed charge compression ignition is one of the most promising solutions that combine the advantages of both spark ignition and compression ignition combustion modes. It gives thermal efficiency close to that of compression ignition engines and simultaneously resolves the associated issues of high NOx and particulate matter. Preparing the premixed air-fuel charge is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve premixed charge compression ignition combustion. Vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle to preparing the premixed air-fuel mixture. Exhaust gas re-circulation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of the compression ignition engine.

  13. Avoiding spurious submovement decompositions: a globally optimal algorithm

    International Nuclear Information System (INIS)

    Rohrer, Brandon Robinson; Hogan, Neville

    2003-01-01

    Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.

  14. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR to any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  15. Decomposition characteristics of maize ( Zea mays . L.) straw with ...

    African Journals Online (AJOL)

    Decomposition of maize straw incorporated into soil with various nitrogen amended carbon to nitrogen (C/N) ratios under a range of moisture was studied through a laboratory incubation trial. The experiment was set up to simulate the most suitable C/N ratio for straw carbon (C) decomposition and sequestering in the soil.

  16. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    Science.gov (United States)

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor network (WBSN) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices and in the second phase, the orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement result, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
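
    The first phase of vOMMP is standard orthogonal matching pursuit. A minimal sketch of plain OMP follows; the paper's multi-matching rescue phase, prior-probability initialization, and QR-based matrix-inversion-free refinement are not reproduced here:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily add the column of Phi most
    correlated with the residual, then least-squares refit on the support."""
    support, resid = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        resid = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x_true
x_hat = omp(Phi, y, k)
print(np.linalg.norm(x_hat - x_true))            # near-exact recovery
```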

  17. The Products of the Thermal Decomposition of CH3CHO

    Energy Technology Data Exchange (ETDEWEB)

    Vasiliou, AnGayle; Piech, Krzysztof M.; Zhang, Xu; Nimlos, Mark R.; Ahmed, Musahid; Golan, Amir; Kostko, Oleg; Osborn, David L.; Daily, John W.; Stanton, John F.; Ellison, G. Barney

    2011-04-06

    We have used a heated 2 cm x 1 mm SiC microtubular (μtubular) reactor to decompose acetaldehyde: CH3CHO + Δ → products. Thermal decomposition is followed at pressures of 75-150 Torr and at temperatures up to 1700 K, conditions that correspond to residence times of roughly 50-100 μs in the μtubular reactor. The acetaldehyde decomposition products are identified by two independent techniques: VUV photoionization mass spectroscopy (PIMS) and infrared (IR) absorption spectroscopy after isolation in a cryogenic matrix. Besides CH3CHO, we have studied three isotopologues, CH3CDO, CD3CHO, and CD3CDO. We have identified the thermal decomposition products CH3 (PIMS), CO (IR, PIMS), H (PIMS), H2 (PIMS), CH2CO (IR, PIMS), CH2=CHOH (IR, PIMS), H2O (IR, PIMS), and HC≡CH (IR, PIMS). Plausible evidence has been found to support the idea that there are at least three different thermal decomposition pathways for CH3CHO. Radical decomposition: CH3CHO + Δ → CH3 + [HCO] → CH3 + H + CO. Elimination: CH3CHO + Δ → H2 + CH2=C=O. Isomerization/elimination: CH3CHO + Δ → [CH2=CH-OH] → HC≡CH + H2O. Both PIMS and IR spectroscopy show compelling evidence for the participation of vinylidene, CH2=C:, as an intermediate in the decomposition of vinyl alcohol: CH2=CH-OH + Δ → [CH2=C:] + H2O → HC≡CH + H2O.

  18. Laser-induced diffusion decomposition in Fe–V thin-film alloys

    Energy Technology Data Exchange (ETDEWEB)

    Polushkin, N.I., E-mail: nipolushkin@fc.ul.pt [Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal); Duarte, A.C.; Conde, O. [Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa (Portugal); Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal); Alves, E. [Associação Euratom/IST e Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Barradas, N.P. [Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Universidade de Lisboa, 2695-066 Bobadela LRS (Portugal); García-García, A.; Kakazei, G.N.; Ventura, J.O.; Araujo, J.P. [Departamento de Física, Universidade do Porto e IFIMUP, 4169-007 Porto (Portugal); Oliveira, V. [Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal); Instituto Superior de Engenharia de Lisboa, 1959-007 Lisboa (Portugal); Vilar, R. [Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal); Instituto de Ciência e Engenharia de Materiais e Superfícies, 1049-001 Lisboa (Portugal)

    2015-05-01

    Highlights: • Irradiation of an Fe–V alloy by femtosecond laser triggers diffusion decomposition. • The decomposition occurs with strongly enhanced (∼4 orders) atomic diffusivity. • This anomaly is associated with the metallic glassy state achievable under laser quenching. • The ultrafast diffusion decomposition is responsible for laser-induced ferromagnetism. - Abstract: We investigate the origin of ferromagnetism induced in thin-film (∼20 nm) Fe–V alloys by their irradiation with subpicosecond laser pulses. We find with Rutherford backscattering that the magnetic modifications follow a thermally stimulated process of diffusion decomposition, with formation of a-few-nm-thick Fe enriched layer inside the film. Surprisingly, similar transformations in the samples were also found after their long-time (∼10{sup 3} s) thermal annealing. However, the laser action provides much higher diffusion coefficients (∼4 orders of magnitude) than those obtained under standard heat treatments. We get a hint that this ultrafast diffusion decomposition occurs in the metallic glassy state achievable in laser-quenched samples. This vitrification is thought to be a prerequisite for the laser-induced onset of ferromagnetism that we observe.

  19. C7-Decompositions of the Tensor Product of Complete Graphs

    Directory of Open Access Journals (Sweden)

    Manikandan R.S.

    2017-08-01

    Full Text Available In this paper we consider a decomposition of Km × Kn, where × denotes the tensor product of graphs, into cycles of length seven. We prove that for m, n ≥ 3, cycles of length seven decompose the graph Km × Kn if and only if (1) either m or n is odd and (2) 14 | m(m − 1)n(n − 1). The results of this paper, together with the results of [Cp-Decompositions of some regular graphs, Discrete Math. 306 (2006) 429–451] and [C5-Decompositions of the tensor product of complete graphs, Australasian J. Combinatorics 37 (2007) 285–293], give necessary and sufficient conditions for the existence of a p-cycle decomposition, where p ≥ 5 is a prime number, of the graph Km × Kn.
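
    The characterization is easy to check computationally; a small sketch of the stated condition:

```python
def c7_decomposable(m, n):
    """Necessary and sufficient conditions from the paper (m, n >= 3):
    (1) m or n is odd, and (2) 14 divides m(m-1)n(n-1)."""
    return (m % 2 == 1 or n % 2 == 1) and (m * (m - 1) * n * (n - 1)) % 14 == 0

# List the admissible (m, n) pairs in a small range:
print([(m, n) for m in range(3, 9) for n in range(3, 9) if c7_decomposable(m, n)])
```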

  20. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    OpenAIRE

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes ...

  1. A study of the thermodynamic performance and CO2 emissions of a vapour compression bio-trigeneration system

    International Nuclear Information System (INIS)

    Parise, Jose A.R.; Castillo Martinez, Luis C.; Pitanga Marques, Rui; Betancourt Mena, Jesus; Vargas, Jose V.C.

    2011-01-01

    A trigeneration system (simultaneous production of heating, cooling and electricity) using a heat engine and a vapour compression chiller, running on biofuel, is studied. A system configuration capable of meeting the three energy demands in a realistic situation was devised. It consisted of a compression ignition internal combustion engine driving an electric generator, an electrically driven vapour compression heat pump and a peak boiler. Part of the heating demand was met by recovering waste heat from the engine and the heat pump condenser, thus reducing the overall fuel consumption. New criteria parameters, based on the relative magnitudes of the three energy demands, were defined to evaluate thermal performance and CO2 emissions. A comparative analysis between biofuel trigeneration and conventional fossil fuel with no waste heat recovery was carried out, showing that, depending on the relative values of the energy demands and on component characteristics, a significant reduction in primary energy consumption (up to 50%) and in CO2 emissions (down to 5% of the original emissions) can be attained with the biofuel-trigeneration combination.

  2. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    Full Text Available This paper proposes an efficient algorithm to compress the cubes in the process of parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert space-filling curve, a mechanism widely used in multi-dimensional indexing.
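
    Tuple difference coding in its simplest record-by-record form can be sketched as follows; the paper's exact block layout is not given here, so this is only an illustration of the principle:

```python
def diff_encode(tuples):
    """Record-by-record difference coding: store the first tuple, then for
    each successive (sorted) tuple only the component-wise differences."""
    prev, out = None, []
    for t in sorted(tuples):
        out.append(t if prev is None else tuple(a - b for a, b in zip(t, prev)))
        prev = t
    return out

def diff_decode(encoded):
    """Invert diff_encode by accumulating the stored differences."""
    prev, out = None, []
    for d in encoded:
        prev = d if prev is None else tuple(a + b for a, b in zip(prev, d))
        out.append(prev)
    return out

cube_cells = [(1, 2, 3), (1, 2, 7), (1, 3, 0), (2, 0, 1)]
enc = diff_encode(cube_cells)
assert diff_decode(enc) == sorted(cube_cells)
print(enc)   # small deltas compress well under a variable-length integer code
```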

  3. Sparse Channel Estimation for MIMO-OFDM Two-Way Relay Network with Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Aihua Zhang

    2013-01-01

    Full Text Available An accurate channel impulse response (CIR) is required for equalization and can help improve communication service quality in next-generation wireless communication systems. An example of an advanced system is the amplify-and-forward multiple-input multiple-output two-way relay network, modulated by orthogonal frequency-division multiplexing. Linear channel estimation methods, for example least squares and expectation conditional maximization, have been proposed previously for this system. However, these methods do not take advantage of channel sparsity, which degrades their estimation performance. We propose a sparse channel estimation scheme, different from the linear methods, at the end users under the relay channel, to enable us to exploit sparsity. First, we formulate the sparse channel estimation problem as a compressed sensing problem by using sparse decomposition theory. Second, the CIR is reconstructed by the CoSaMP and OMP algorithms. Finally, computer simulations are conducted to confirm the superiority of the proposed methods over traditional linear channel estimation methods.

  4. Thermal unimolecular decomposition of methanol. Zum thermischen unimolekularen Zerfall von Methanol

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, K

    1979-01-01

    The thermal unimolecular decomposition of methanol and that of acetone (1B) were investigated experimentally behind reflected shock waves, by following the OH and CH/sub 3/ absorption or the CH/sub 3/ and acetone absorption, respectively. A computer simulation of the decomposition of methanol and the subsequent reactions was carried out. This yielded rate constants for some reactions that differ from those found in the literature. The experimental investigation of the decomposition of acetone, from comparison of the results with the data in the literature, shows that observation of the CH/sub 3/ absorption is well suited for obtaining rate constants for decomposition reactions in which CH/sub 3/ radicals are formed in the first step.

  5. FTIR study of decomposition of carbon dioxide in dc corona discharges

    International Nuclear Information System (INIS)

    Horvath, G; Skalny, J D; Mason, N J

    2008-01-01

    The decomposition rate of carbon dioxide and the generation of ozone and carbon monoxide in coaxial corona discharges fed by pure CO2 have been investigated in a dc corona discharge operated in both positive and negative polarities using FTIR spectroscopy. The degree of CO2 decomposition is found to be dependent on the voltage, U, with a maximum CO2 decomposition of nearly 10% found in a negative corona discharge for U = 7.5 kV. In all cases the amount of CO2 decomposition was lower in positive polarity discharges than in negative polarity discharges operated under the same conditions. CO and ozone were found to be the main products observed in the discharges.

  6. FTIR study of decomposition of carbon dioxide in dc corona discharges

    Energy Technology Data Exchange (ETDEWEB)

    Horvath, G; Skalny, J D [Department of Experimental Physics, Comenius University, Mlynska dolina F-2, 842 48, Bratislava (Slovakia); Mason, N J [Open University, Department of Physics and Astronomy, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)

    2008-11-21

    The decomposition rate of carbon dioxide and the generation of ozone and carbon monoxide in coaxial corona discharges fed by pure CO{sub 2} has been investigated in a dc corona discharge operated in both positive and negative polarities using FTIR spectroscopy. The degree of CO{sub 2} decomposition is found to be dependent on the voltage, U, with a maximum CO{sub 2} decomposition of nearly 10% found in a negative corona discharge for U = 7.5 kV. In all cases the amount of CO{sub 2} decomposition was lower in positive polarity discharges than in negative polarity discharges operated under same conditions. CO and ozone were found to be the main products observed in the discharges.

  7. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object; therefore, three composite-technique-based color image compression schemes are implemented to achieve images with high compression, no loss in the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  8. Biomass pyrolysis: Thermal decomposition mechanisms of furfural and benzaldehyde

    Science.gov (United States)

    Vasiliou, AnGayle K.; Kim, Jong Hyun; Ormond, Thomas K.; Piech, Krzysztof M.; Urness, Kimberly N.; Scheer, Adam M.; Robichaud, David J.; Mukarakate, Calvin; Nimlos, Mark R.; Daily, John W.; Guan, Qi; Carstensen, Hans-Heinrich; Ellison, G. Barney

    2013-09-01

    The thermal decompositions of furfural and benzaldehyde have been studied in a heated microtubular flow reactor. The pyrolysis experiments were carried out by passing a dilute mixture of the aromatic aldehydes (roughly 0.1%-1%) entrained in a stream of buffer gas (either He or Ar) through a pulsed, heated SiC reactor that is 2-3 cm long and 1 mm in diameter. Typical pressures in the reactor are 75-150 Torr with the SiC tube wall temperature in the range of 1200-1800 K. Characteristic residence times in the reactor are 100-200 μsec after which the gas mixture emerges as a skimmed molecular beam at a pressure of approximately 10 μTorr. Products were detected using matrix infrared absorption spectroscopy, 118.2 nm (10.487 eV) photoionization mass spectroscopy and resonance enhanced multiphoton ionization. The initial steps in the thermal decomposition of furfural and benzaldehyde have been identified. Furfural undergoes unimolecular decomposition to furan + CO: C4H3O-CHO (+ M) → CO + C4H4O. Sequential decomposition of furan leads to the production of HC≡CH, CH2CO, CH3C≡CH, CO, HCCCH2, and H atoms. In contrast, benzaldehyde resists decomposition until higher temperatures when it fragments to phenyl radical plus H atoms and CO: C6H5CHO (+ M) → C6H5CO + H → C6H5 + CO + H. The H atoms trigger a chain reaction by attacking C6H5CHO: H + C6H5CHO → [C6H6CHO]* → C6H6 + CO + H. The net result is the decomposition of benzaldehyde to produce benzene and CO.

  9. Kinetics of the thermal decomposition of nickel iodide

    International Nuclear Information System (INIS)

    Nakajima, Hayato; Shimizu, Saburo; Onuki, Kaoru; Ikezoe, Yasumasa; Sato, Shoichi

    1984-01-01

    Thermal decomposition kinetics of NiI 2 under constant I 2 partial pressure were studied by thermogravimetry. The reaction is of interest as a reaction step of the thermochemical hydrogen production process in the Ni-I-S system. At temperatures from 775 K to 869 K and under I 2 pressures from 0 to 960 Pa, the decomposition started at the NiI 2 pellet surface and the reactant-product interface moved inward at a constant rate until the decomposed fraction, α, reached 0.6. The overall reaction rate at a constant temperature can be expressed as the difference between the constant decomposition (forward) rate, which is proportional to the equilibrium dissociation pressure of NiI 2 , and the iodide formation (backward) rate, which is proportional to the I 2 pressure. The apparent activation energy of the decomposition was 147 kJ.mol -1 , very close to the heat of reaction, 152 kJ.mol -1 , calculated from the equilibrium dissociation pressure. Electron microscopic observations revealed that the product obtained by decomposing NiI 2 under a pure He atmosphere was composed of relatively well-grown cubic Ni crystals, whereas the product obtained under an I 2 -He mixture was composed of larger but disordered crystals. (author)

  10. Energetic contaminants inhibit plant litter decomposition in soil.

    Science.gov (United States)

    Kuperman, Roman G; Checkai, Ronald T; Simini, Michael; Sunahara, Geoffrey I; Hawari, Jalal

    2018-05-30

    Individual effects of the nitrogen-based energetic materials (EMs) 2,4-dinitrotoluene (2,4-DNT), 2-amino-4,6-dinitrotoluene (2-ADNT), 4-amino-2,6-dinitrotoluene (4-ADNT), nitroglycerin (NG), and 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane (CL-20) on litter decomposition, an essential biologically mediated soil process, were assessed using Orchard grass (Dactylis glomerata) straw in Sassafras sandy loam (SSL) soil, which has physicochemical characteristics that support "very high" qualitative relative bioavailability for organic chemicals. Batches of SSL soil were separately amended with individual EMs or an acetone carrier control. To quantify the decomposition rates, one straw cluster was harvested from a set of randomly selected replicate containers within each treatment after 1, 2, 3, 4, 6, and 8 months of exposure. Results showed that 2,4-DNT and NG amendments inhibited litter decomposition, with median effective concentration (EC50) values of 1122 mg/kg and 860 mg/kg, respectively. Exposure to soil amended with 2-ADNT, 4-ADNT or CL-20 did not significantly affect litter decomposition in SSL soil at ≥ 10,000 mg/kg. These ecotoxicological data will be helpful in identifying concentrations of EMs in soil that present an acceptable ecological risk for biologically mediated soil processes. Published by Elsevier Inc.

  11. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  12. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Mostly, transforms are used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
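
    Of the codebook-training algorithms named, LBG is the classic one; a minimal numpy sketch of LBG-style training (split the codebook, then refine with k-means passes) follows, with the toy frame dimensions and parameters being assumptions rather than values from the paper:

        import numpy as np

        def lbg_codebook(vectors, size, eps=1e-2, iters=20):
            # Start from the global centroid, then repeatedly split each
            # codeword into a perturbed pair and refine with k-means passes.
            codebook = vectors.mean(axis=0, keepdims=True)
            while codebook.shape[0] < size:
                codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
                for _ in range(iters):
                    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
                    nearest = d.argmin(axis=1)
                    for k in range(codebook.shape[0]):
                        cell = vectors[nearest == k]
                        if len(cell):
                            codebook[k] = cell.mean(axis=0)
            return codebook

        frames = np.random.randn(1000, 8)   # stand-in 8-dim "speech" frames
        cb = lbg_codebook(frames, 16)       # 16 codewords -> 4 bits per frame
        codes = ((frames[:, None, :] - cb[None, :, :]) ** 2).sum(-1).argmin(axis=1)

    Compression comes from transmitting the short codeword indices instead of the frames themselves, at the cost of the quantization error the k-means refinement minimizes.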

  13. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  14. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  15. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require high-density heavy ion beams and short pulsed bunches, which can be produced by non-adiabatic compression of the bunch, implemented as a fast compression with a 90 degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238 U 72+ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the CSR longitudinal bunch compression cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  16. Role of electrodes in ambient electrolytic decomposition of hydroxylammonium nitrate (HAN) solutions

    Directory of Open Access Journals (Sweden)

    Kai Seng Koh

    2013-09-01

    Full Text Available Decomposition of hydroxylammonium nitrate (HAN) solution by the electrolytic decomposition method has attracted much attention in recent years due to its efficiency and practicability. However, the phenomenon has not been well studied till now. Using a mathematical model currently available, the effects of water content and of the power used for decomposition were studied. Experimental data show that sacrificial materials such as copper or aluminum outperform inert electrodes in the decomposition of HAN solution. In the case of using copper wire to electrolyse HAN solutions, approximately 10 seconds is required to reach 100 °C regardless of the concentration of HAN. In terms of power consumption, 100 W-300 W was found to be the range in which decomposition could be triggered effectively using copper wire as electrodes.
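
    As a rough consistency check on those figures (the solution mass and heat capacity below are assumptions, not values from the paper): delivering 200 W for 10 s deposits E = P·t = 2 kJ; for about 5 g of aqueous solution with a specific heat near 3.5 J/(g·K), the temperature rise is ΔT = E/(m·c) ≈ 2000/(5 × 3.5) ≈ 114 K, which would carry the solution from room temperature past 100 °C and is thus consistent with the reported onset time.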

  17. Using combinatorial problem decomposition for optimizing plutonium inventory management

    International Nuclear Information System (INIS)

    Niquil, Y.; Gondran, M.; Voskanian, A.; Paris-11 Univ., 91 - Orsay

    1997-03-01

    Plutonium Inventory Management Optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient for such a size. The first decomposition consists in favoring the constraints that are the most difficult to satisfy and the variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Besides, the first decisions to be taken are systematically favored, for they are based on data considered to be sure, whereas the data supporting later decisions is known with less accuracy and confidence. (author)

  18. Using combinatorial problem decomposition for optimizing plutonium inventory management

    Energy Technology Data Exchange (ETDEWEB)

    Niquil, Y.; Gondran, M. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches; Voskanian, A. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches]|[Paris-11 Univ., 91 - Orsay (France). Lab. de Recherche en Informatique

    1997-03-01

    Plutonium Inventory Management Optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient for such a size. The first decomposition consists in favoring the constraints that are the most difficult to satisfy and the variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Besides, the first decisions to be taken are systematically favored, for they are based on data considered to be sure, whereas the data supporting later decisions is known with less accuracy and confidence. (author) 7 refs.

  19. TRUST MODEL FOR SOCIAL NETWORK USING SINGULAR VALUE DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Davis Bundi Ntwiga

    2016-06-01

    Full Text Available For effective interactions to take place in a social network, trust is important. We model the trust of agents using the peer-to-peer reputation ratings in the network, which form a real-valued matrix. Singular value decomposition discounts the reputation ratings to estimate the trust levels, as trust is the subjective probability of future expectations based on current reputation ratings. Reputation and trust are closely related, and singular value decomposition can estimate trust using the real-valued matrix of the reputation ratings of the agents in the network. Singular value decomposition is an ideal technique for error elimination when estimating trust from reputation ratings. Estimation of trust from reputation is optimal at a discounting of 20%.
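
    A minimal numpy sketch of the discounting idea described above: keep only the leading fraction of singular values of the reputation matrix and reconstruct. The 20% level comes from the abstract; the matrix contents and function names are illustrative assumptions:

        import numpy as np

        def trust_from_reputation(R, keep=0.2):
            # Discount a peer-to-peer reputation matrix with a truncated SVD;
            # the low-rank reconstruction filters noise in the raw ratings.
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            k = max(1, int(np.ceil(keep * len(s))))
            return (U[:, :k] * s[:k]) @ Vt[:k, :]

        ratings = np.random.rand(10, 10)   # agents' ratings of each other in [0, 1]
        trust = trust_from_reputation(ratings, keep=0.2)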

  20. The effects of aging on compressive strength of low-level radioactive waste form samples

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Neilson, R.M. Jr.

    1996-06-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program, funded by the US Nuclear Regulatory Commission (NRC), is (a) studying the degradation effects in organic ion-exchange resins caused by radiation, (b) examining the adequacy of test procedures recommended in the Branch Technical Position on Waste Form to meet the requirements of 10 CFR 61 using solidified ion-exchange resins, (c) obtaining performance information on solidified ion-exchange resins in a disposal environment, and (d) determining the condition of liners used to dispose of ion-exchange resins. Compressive tests were performed periodically over a 12-year period as part of the Technical Position testing. Results of that compressive testing are presented and discussed. During the study, both portland type I-II cement and Dow vinyl ester-styrene waste form samples were tested. This testing was designed to examine the effects of aging caused by self-irradiation on the compressive strength of the waste forms. Also presented is a brief summary of the results of waste form characterization, conducted in 1986 using tests recommended in the Technical Position on Waste Form. The aging test results are compared to the results of those earlier tests. 14 refs., 52 figs., 5 tabs

  1. Amplitude Modulated Sinusoidal Signal Decomposition for Audio Coding

    DEFF Research Database (Denmark)

    Christensen, M. G.; Jacobson, A.; Andersen, S. V.

    2006-01-01

    In this paper, we present a decomposition for sinusoidal coding of audio, based on an amplitude modulation of sinusoids via a linear combination of arbitrary basis vectors. The proposed method, which incorporates a perceptual distortion measure, is based on a relaxation of a nonlinear least-squares minimization. Rate-distortion curves and listening tests show that, compared to a constant-amplitude sinusoidal coder, the proposed decomposition offers perceptually significant improvements in critical transient signals.

  2. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches

  3. A review of plutonium oxalate decomposition reactions and effects of decomposition temperature on the surface area of the plutonium dioxide product

    Science.gov (United States)

    Orr, R. M.; Sims, H. E.; Taylor, R. J.

    2015-10-01

    Plutonium (IV) and (III) ions in nitric acid solution readily form insoluble precipitates with oxalic acid. The plutonium oxalates are then easily thermally decomposed to form plutonium dioxide powder. This simple process forms the basis of current industrial conversion or 'finishing' processes used in commercial scale reprocessing plants. It is also widely used in analytical or laboratory scale operations and for waste residue treatment. However, the mechanisms of the thermal decompositions in both air and inert atmospheres have been the subject of various studies over several decades. The nature of the intermediate phases is of fundamental interest, whilst understanding the evolution of gases at different temperatures is relevant to process control. The thermal decomposition is also used to control a number of powder properties of the PuO2 product that are important to either long term storage or mixed oxide fuel manufacturing. These properties are the surface area, residual carbon impurities and adsorbed volatile species, whereas the morphology and particle size distribution are functions of the precipitation process. Available data and experience regarding the thermal and radiation-induced decompositions of plutonium oxalate to oxide are reviewed. The mechanisms of the thermal decompositions are considered with a particular focus on the likely redox chemistry involved. Also, whilst it is well known that the surface area is dependent on calcination temperature, there is a wide variation in the published data, and so new correlations have been derived. Better understanding of plutonium (III) and (IV) oxalate decompositions will assist the development of more proliferation-resistant actinide co-conversion processes that are needed for advanced reprocessing in future closed nuclear fuel cycles.

  4. Influence of nitrogen dioxide on the thermal decomposition of ammonium nitrate

    OpenAIRE

    Igor L. Kovalenko

    2015-01-01

    In this paper, results of experimental studies of ammonium nitrate thermal decomposition in an open system under normal conditions and in an NO2 atmosphere are presented. It is shown that nitrogen dioxide is the initiator of the self-accelerating exothermic cyclic decomposition process of ammonium nitrate. The insertion of NO2 from outside, under the conditions of a nonisothermal experiment, reduces the characteristic temperature of the onset of self-accelerating decomposition by 50...70 °C. Using metho...

  5. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique with a rather simple generic DAG-compression technique.
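
    Generic DAG compression of the kind mentioned last works by sharing structurally identical subtrees; a minimal hash-consing sketch over a toy (label, children) tree representation (the representation itself is an assumption, not the paper's):

        def dag_compress(tree, table):
            # Each unique (label, child-ids) shape is stored once and referenced
            # by an integer id, turning the tree into a DAG.
            label, children = tree
            key = (label, tuple(dag_compress(c, table) for c in children))
            return table.setdefault(key, len(table))

        # Two identical <item> subtrees collapse into one shared node.
        doc = ("root", [("item", [("text", [])]), ("item", [("text", [])])])
        table = {}
        root_id = dag_compress(doc, table)
        print(len(table))   # 3 unique nodes instead of the tree's 5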

  6. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  7. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  8. Interacting effects of insects and flooding on wood decomposition.

    Science.gov (United States)

    Michael Ulyshen

    2014-01-01

    Saproxylic arthropods are thought to play an important role in wood decomposition but very few efforts have been made to quantify their contributions to the process and the factors controlling their activities are not well understood. In the current study, mesh exclusion bags were used to quantify how arthropods affect loblolly pine (Pinus taeda L.) decomposition rates...

  9. On reliability of singular-value decomposition in attractor reconstruction

    International Nuclear Information System (INIS)

    Palus, M.; Dvorak, I.

    1990-12-01

    Applicability of singular-value decomposition for reconstructing the strange attractor from a one-dimensional chaotic time series, proposed by Broomhead and King, is extensively tested and discussed. Previously published doubts about its reliability are confirmed: singular-value decomposition, by nature a linear method, is of only limited power when nonlinear structures are studied. (author). 29 refs, 9 figs
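
    For reference, the Broomhead-King procedure under test builds a trajectory (delay) matrix from the scalar series and takes its SVD, keeping the leading components as attractor coordinates; a minimal numpy sketch, with the window length and test signal as assumptions:

        import numpy as np

        def svd_embedding(x, window=20, dim=3):
            # Broomhead-King style reconstruction: SVD of the trajectory matrix
            # built from a scalar time series, keeping the leading components.
            n = len(x) - window + 1
            traj = np.stack([x[i:i + window] for i in range(n)])   # n x window
            U, s, Vt = np.linalg.svd(traj, full_matrices=False)
            return U[:, :dim] * s[:dim]   # reconstructed attractor coordinates

        t = np.arange(0, 100, 0.05)
        series = np.sin(t) + 0.1 * np.random.randn(len(t))   # noisy test signal
        coords = svd_embedding(series)

    The linearity the abstract criticizes is visible here: the embedding is just an orthogonal projection of delay vectors, so it can denoise but cannot unfold genuinely nonlinear structure.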

  10. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)

  11. Decomposition of oxalate precipitates by photochemical reaction

    International Nuclear Information System (INIS)

    Yoo, J.H.; Kim, E.H.

    1998-01-01

    A photo-radiation method was applied to decompose oxalate precipitates so that they can be dissolved into dilute nitric acid. This work was carried out as part of a study on the partitioning of minor actinides. Minor actinides can be recovered from high-level wastes as oxalate precipitates, but they tend to be coprecipitated together with lanthanide oxalates. This requires another partitioning step for the mutual separation of the actinide and lanthanide groups. In this study, therefore, the photochemical decomposition mechanism of oxalates in the presence of nitric acid was elucidated experimentally. The decomposition of oxalates was shown to be dominated by the reaction with hydroxyl radicals generated from the nitric acid, rather than with nitrite ions also formed from nitrate ions. The decomposition rate of neodymium oxalate, chosen as a stand-in compound representing minor actinide and lanthanide oxalates, was found to be 0.003 M/hr under the conditions of 0.5 M HNO 3 and room temperature when a mercury lamp was used as the light source. (author)

  12. On the correspondence between data revision and trend-cycle decomposition

    NARCIS (Netherlands)

    Dungey, M.; Jacobs, J. P. A. M.; Tian, J.; van Norden, S.

    2013-01-01

    This article places the data revision model of Jacobs and van Norden (2011) within a class of trend-cycle decompositions relating directly to the Beveridge-Nelson decomposition. In both these approaches, identifying restrictions on the covariance matrix under simple and realistic conditions may

  13. Decompositions, partitions, and coverings with convex polygons and pseudo-triangles

    NARCIS (Netherlands)

    Aichholzer, O.; Huemer, C.; Kappes, S.; Speckmann, B.; Tóth, Cs.D.

    2007-01-01

    We propose a novel subdivision of the plane that consists of both convex polygons and pseudo-triangles. This pseudo-convex decomposition is significantly sparser than either convex decompositions or pseudo-triangulations for planar point sets and simple polygons. We also introduce pseudo-convex

  14. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall-bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced, with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  15. The influence of preburial insect access on the decomposition rate.

    Science.gov (United States)

    Bachmann, Jutta; Simmons, Tal

    2010-07-01

    This study compared total body score (TBS) in buried remains (35 cm depth) with and without insect access prior to burial. Sixty rabbit carcasses were exhumed at 50 accumulated degree day (ADD) intervals. Weight loss, TBS, intra-abdominal decomposition, carcass/soil interface temperature, and below-carcass soil pH were recorded and analyzed. Results showed significant differences (p < 0.05) in decomposition rates between carcasses with and without insect access prior to burial; an approximately 30% faster decomposition rate was observed with insects. TBS was the most valid tool in postmortem interval (PMI) estimation. All other variables showed only weak relationships to decomposition stages, adding little value to PMI estimation. Although progress in estimating the PMI for surface remains has been made, no previous studies have accomplished this for buried remains. This study builds a framework to which further comparable studies can contribute, to produce predictive models for PMI estimation in buried human remains.
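
    For reference, accumulated degree days — the exposure metric used above — is the running sum of mean daily temperatures above a base temperature (0 °C is a common choice in taphonomy); a one-line sketch with hypothetical readings:

        import numpy as np

        daily_mean_temp = np.array([18.5, 20.1, 17.3, 22.0])   # hypothetical °C means
        add = np.cumsum(np.clip(daily_mean_temp, 0, None))     # ADD after each day
        print(add)   # here the 50 ADD sampling threshold is crossed during day 3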

  16. LEAF RESIDUE DECOMPOSITION OF SELECTED ATLANTIC FOREST TREE SPECIES

    Directory of Open Access Journals (Sweden)

    Helga Dias Arato

    2018-02-01

    Full Text Available ABSTRACT Biogeochemical cycling is essential to establish and maintain plant and animal communities. Litter is one of the main compartments of this cycle, and the kinetics of leaf decomposition in forest litter depend on the chemical composition and environmental conditions. This study evaluated the effect of leaf composition and environmental conditions on leaf decomposition of native Atlantic Forest trees. The following species were analyzed: Mabea fistulifera Mart., Bauhinia forficata Link., Aegiphila sellowiana Cham., Zeyheria tuberculosa (Vell, Luehea grandiflora Mart. et. Zucc., Croton floribundus Spreng., Trema micrantha (L Blume, Cassia ferruginea (Schrad Schrad ex DC, Senna macranthera (DC ex Collad. H. S. Irwin and Barney and Schinus terebinthifolius Raddi (Anacardiaceae. For each species, litter bags were distributed on and fixed to the soil surface of soil-filled pots (in a greenhouse) or directly to the surface of the same soil type in a natural forest (field). Every 30 days, the dry weight and soil basal respiration in both environments were determined. The cumulative decomposition of leaves varied according to the species, leaf nutrient content and environment. In general, the decomposition rate was slowest for Aegiphila sellowiana and fastest for Bauhinia forficata and Schinus terebinthifolius. This trend was similar under the controlled conditions of a greenhouse and in the field. The selection of species with a differentiated decomposition pattern, suited for different stages of the recovery process, can help improve soil restoration.

  17. Basic dye decomposition kinetics in a photocatalytic slurry reactor

    International Nuclear Information System (INIS)

    Wu, C.-H.; Chang, H.-W.; Chern, J.-M.

    2006-01-01

    Wastewater effluent from textile plants using various dyes is one of the major water pollutants to the environment. Traditional chemical, physical and biological processes for treating textile dye wastewaters have disadvantages such as high cost, energy waste and the generation of secondary pollution during the treatment process. The photocatalytic process using TiO 2 semiconductor particles under UV light illumination has been shown to be potentially advantageous and applicable in the treatment of wastewater pollutants. In this study, the dye decomposition kinetics by nano-size TiO 2 suspension at natural solution pH was experimentally studied by varying the agitation speed (50-200 rpm), TiO 2 suspension concentration (0.25-1.71 g/L), initial dye concentration (10-50 ppm), temperature (10-50 deg. C), and UV power intensity (0-96 W). The experimental results show that the agitation speed, varying from 50 to 200 rpm, has a slight influence on the dye decomposition rate and the pH history; the dye decomposition rate increases with the TiO 2 suspension concentration up to 0.98 g/L, then decreases with increasing TiO 2 suspension concentration; the initial dye decomposition rate increases with the initial dye concentration up to a certain value depending upon the temperature, then decreases with increasing initial dye concentration; and the dye decomposition rate increases with the UV power intensity up to 64 W, where it reaches a plateau. Kinetic models have been developed that fit the experimental kinetic data well.
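
    The record does not name the fitted models; photocatalytic dye degradation over TiO 2 of this kind is, however, commonly described by a Langmuir-Hinshelwood rate law, offered here only as the customary form and not as the authors' model: r = -dC/dt = k_r K C / (1 + K C), where C is the dye concentration, k_r the surface rate constant and K the adsorption equilibrium constant. This form reproduces the reported behaviour of the initial rate rising with concentration and then falling away from proportionality as the catalyst surface saturates.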

  18. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers have been based on compressed Haar-like features, and how to compress many more excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature was proposed. To resist noise effectively in a high-dimensional normalized pixel difference (NPD) feature, a normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and Precision.
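
    A minimal sketch of the compression step described above: many block-difference features are projected through a sparse random Gaussian measurement matrix. The block positions, dimensions, sparsity and normalization below are illustrative assumptions, not the paper's settings:

        import numpy as np

        rng = np.random.default_rng(0)

        def sparse_gaussian(m, n, density=0.1):
            # Sparse random Gaussian measurement matrix (compressive sensing)
            mask = rng.random((m, n)) < density
            return np.where(mask, rng.standard_normal((m, n)), 0.0)

        def block_difference(img, a, b, size=4):
            # One NBD-style feature: normalized difference of two block means
            pa = img[a[0]:a[0] + size, a[1]:a[1] + size].mean()
            pb = img[b[0]:b[0] + size, b[1]:b[1] + size].mean()
            return (pa - pb) / (pa + pb + 1e-9)

        img = np.random.rand(64, 64)                   # one tracking patch
        pos = rng.integers(0, 60, size=(1000, 2, 2))   # 1000 random block pairs
        feat = np.array([block_difference(img, p[0], p[1]) for p in pos])
        Phi = sparse_gaussian(50, 1000)                # 50 x 1000 projection
        compressed = Phi @ feat                        # 50-dim CNBD-style feature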

  19. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  20. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach in terms of image noise and image bias (artifacts) and find that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
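
    As a toy illustration of the projection-domain idea (not the paper's empirical calibration or its ML estimator): under an idealized linearized forward model, recovering the three basis line integrals reduces to one small least-squares solve per detector pixel. Every number below is a made-up assumption:

        import numpy as np

        # Illustrative effective attenuation of the three basis components in
        # three energy bins (rows: bins; columns: photoelectric, Compton, Gd).
        M = np.array([[3.0, 1.0, 6.0],
                      [1.5, 0.9, 4.0],
                      [0.7, 0.8, 1.5]])

        A_true = np.array([0.4, 1.2, 0.05])              # basis line integrals
        logs = M @ A_true + 0.01 * np.random.randn(3)    # -ln(I/I0) in each bin

        # Idealized decomposition: one 3x3 least-squares solve per pixel
        A_est, *_ = np.linalg.lstsq(M, logs, rcond=None)
        print(A_est)

    In practice the forward model is nonlinear (beam hardening), which is exactly why the paper parameterizes the line integrals empirically rather than solving a fixed linear system.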

  1. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    Full Text Available Use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices, with a higher static stiffness index, combine a relatively small and comfortable pressure at rest with a standing pressure strong enough to restore the "valve mechanism" generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass with muscle-strengthening programs and endurance training should be encouraged. Therefore, in the acute phase of deep venous thrombosis events, anticoagulation combined with inelastic compression therapy can reduce the extension of the thrombus. Notwithstanding, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  2. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  3. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for the forecasting of solar radiation series 1 h ahead. We investigated several techniques for multiscale decomposition of clear sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and 1 residual signal presenting different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a non-linear method (neural network model, NN); the choice of forecasting method is adapted to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecasting error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid Model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid Model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
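
    A minimal sketch of the hybrid recipe (decompose, forecast each component, sum the component forecasts). EMD/wavelets are replaced here by a crude moving-average split, and the AR fit is a plain least-squares one, since the paper's exact component-model assignment is not given; the window, order and synthetic series are assumptions:

        import numpy as np

        def fit_ar(x, p=4):
            # Least-squares AR(p): rows [x[t],...,x[t+p-1]] predict x[t+p]
            X = np.stack([x[i:len(x) - p + i] for i in range(p)], axis=1)
            coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return coef

        def forecast_hybrid(kc, window=24, p=4):
            # Split the clear-sky index into a slow trend and a fast residual
            # (a stand-in for EMD/wavelet modes), forecast each with AR(p),
            # then sum the component forecasts for the 1 h ahead value.
            trend = np.convolve(kc, np.ones(window) / window, mode="valid")
            resid = kc[window - 1:] - trend
            return sum(comp[-p:] @ fit_ar(comp, p) for comp in (trend, resid))

        kc = 0.7 + 0.1 * np.sin(np.arange(500) / 10) + 0.02 * np.random.randn(500)
        print(forecast_hybrid(kc))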

  4. The persistence of human DNA in soil following surface decomposition.

    Science.gov (United States)

    Emmons, Alexandra L; DeBruyn, Jennifer M; Mundorff, Amy Z; Cobaugh, Kelly L; Cabana, Graciela S

    2017-09-01

    Though recent decades have seen a marked increase in research concerning the impact of human decomposition on the grave soil environment, the fate of human DNA in grave soil has been relatively understudied. With the purpose of supplementing the growing body of literature in forensic soil taphonomy, this study assessed the relative persistence of human DNA in soil over the course of decomposition. Endpoint PCR was used to assess the presence or absence of human nuclear and mitochondrial DNA, while qPCR was used to evaluate the quantity of human DNA recovered from the soil beneath four cadavers at the University of Tennessee's Anthropology Research Facility (ARF). Human nuclear DNA from the soil was largely unrecoverable, while human mitochondrial DNA was detectable in the soil throughout all decomposition stages. Mitochondrial DNA copy abundances were not significantly different between decomposition stages and were not significantly correlated to soil edaphic parameters tested. There was, however, a significant positive correlation between mitochondrial DNA copy abundances and the human associated bacteria, Bacteroides, as estimated by 16S rRNA gene abundances. These results show that human mitochondrial DNA can persist in grave soil and be consistently detected throughout decomposition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  5. Excess Sodium Tetraphenylborate and Intermediates Decomposition Studies

    Energy Technology Data Exchange (ETDEWEB)

    Barnes, M.J.

    1998-12-07

    The stability of excess amounts of sodium tetraphenylborate (NaTPB) in the In-Tank Precipitation (ITP) facility depends on a number of variables. The concentrations of palladium, initial benzene, and sodium ion, as well as temperature, provide the best opportunities for controlling the decomposition rate. This study examined the influence of these four variables on the reactivity of palladium-catalyzed sodium tetraphenylborate decomposition. Also, single-effects tests investigated the reactivity of simulants with continuous stirring and nitrogen ventilation, with very high benzene concentrations, under washed sodium concentrations, with very high palladium concentrations, and with minimal quantities of excess NaTPB.

  6. Gamma ray induced decomposition of lanthanide nitrates

    International Nuclear Information System (INIS)

    Joshi, N.G.; Garg, A.N.

    1992-01-01

    Gamma ray induced decomposition of the lanthanide nitrates, Ln(NO 3 ) 3 .xH 2 O where Ln=La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Tm and Yb has been studied at different absorbed doses up to 600 kGy. G(NO 2 - ) values depend on the absorbed dose and the nature of the outer cation. It has been observed that those lanthanides which exhibit variable valency (Ce and Eu) show lower G-values. An attempt has been made to correlate thermal and radiolytic decomposition processes. (author). 20 refs., 3 figs., 1 tab

  7. Basis of the biological decomposition of xenobiotica

    International Nuclear Information System (INIS)

    Mueller, R. von

    1993-01-01

    The ability of micro-organisms to decompose different molecules and to use them as a source of carbon, nitrogen, sulphur or energy is the basis for all biological processes for cleaning up contaminated soil. Therefore, knowledge of these decomposition processes is an important precondition for judging which contaminations can be treated biologically at all and which materials can be decomposed biologically. The decomposition schemes of the most important classes of harmful materials (aliphatic, aromatic and chlorinated hydrocarbons) are introduced, and the consequences for the practical application of biological clean-up of contaminated soils are discussed. (orig.) [de]

  8. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite a lack of evidence of improved outcome. This manikin study evaluates the CPR performance of ambulance crews who had had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation and no-flow time, and to estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was thus not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure to recognize and correct a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  10. Thermal decomposition of 2-methylbenzoates of rare earth elements

    International Nuclear Information System (INIS)

    Brzyska, W.; Szubartowski, L.

    1980-01-01

    The conditions of thermal decomposition of La, Ce(3), Pr, Nd, Sm and Y 2-methylbenzoates were examined. On the basis of the results obtained, it was found that the hydrated 2-methylbenzoates underwent dehydration, passing into anhydrous salts, which then decomposed into oxides. The activation energies of the dehydration and decomposition reactions of the lanthanon, La and Y 2-methylbenzoates were determined. (author)

  11. Bimetallic catalysts for HI decomposition in the iodine-sulfur thermochemical cycle

    International Nuclear Information System (INIS)

    Wang Laijun; Hu Songzhi; Xu Lufei; Li Daocai; Han Qi; Chen Songzhe; Zhang Ping; Xu Jingming

    2014-01-01

    Among the different kinds of thermochemical water-splitting cycles, the iodine-sulfur (IS) cycle has attracted more and more interest because it is one of the promising candidates for economical and massive hydrogen production. However, there still exist some scientific and technical problems to be solved before industrialization of the IS process. One such problem is the catalytic decomposition of hydrogen iodide. Although active carbon supported platinum has been verified to present excellent performance for HI decomposition, it is very expensive and easily agglomerates under the harsh conditions. In order to decrease the cost and increase the stability of the catalysts for HI decomposition, a series of bimetallic catalysts were prepared and studied at INET. This paper summarizes our recent research advances on the bimetallic catalysts (Pt-Pd, Pd-Ir and Pt-Ir) for HI decomposition. In the course of the study, the physical properties, structure, and morphology of the catalysts were characterized by specific surface area measurements, X-ray diffractometry, and transmission electron microscopy, respectively. The catalytic activity for HI decomposition was investigated in a fixed bed reactor under atmospheric pressure. The results show that, due to their higher activity and better stability, the active carbon supported bimetallic catalysts are more promising candidates than the monometallic Pt catalyst for HI decomposition in the IS thermochemical cycle. (author)

  12. Treatment of off-gas evolved from thermal decomposition of sludge waste

    International Nuclear Information System (INIS)

    Doo-Seong Hwang; Yun-Dong Choi; Gyeong-Hwan Jeong; Jei-Kwon Moon

    2013-01-01

    Korea Atomic Energy Research Institute (KAERI) started a decommissioning program for a uranium conversion plant. The treatment of the sludge waste generated during the operation of the plant is one of the most important tasks in the decommissioning program. The major compounds of the sludge waste are nitrate salts and uranium. The sludge waste is denitrated by thermal decomposition, and the treatment of the off-gas evolved from the thermal decomposition of nitrate salts in the sludge waste is investigated. The nitrate salts in the sludge decomposed in two steps: the first decomposition is due to ammonium nitrate, and the second is due to sodium and calcium nitrate and calcium carbonate. The components of the off-gas from the decomposition of ammonium nitrate at low temperature are NH 3 , N 2 O, NO 2 , and NO; the components from the decomposition of sodium and calcium nitrate at high temperature are NO 2 and NO. The off-gas from the thermal decomposition is treated by catalytic oxidation of ammonia and selective catalytic reduction (SCR). Ammonia is converted into nitrogen oxides over the oxidation catalyst, and all nitrogen oxides except nitrous oxide, a greenhouse gas, are removed by the SCR treatment. An additional process is needed to remove nitrous oxide, and the feeding rate of ammonia in the SCR should be controlled properly according to the evolved nitrogen oxides. (author)

  13. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors conducted a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with regard to pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 of 39 patients (82%) who came to hospital by their own means continued walking after treatment; 8 of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; and 41 patients (64%) had partial alleviation of pain after treatment. Functional improvement was also observed in those who came by their own means and whose characteristics did not change. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  14. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  15. Focal decompositions for linear differential equations of the second order

    Directory of Open Access Journals (Sweden)

    L. Birbrair

    2003-01-01

    two-points problems to itself such that the image of the focal decomposition associated to the first equation is a focal decomposition associated to the second one. In this paper, we present a complete classification for linear second-order equations with respect to this equivalence relation.

  16. Nested grids ILU-decomposition (NGILU)

    NARCIS (Netherlands)

    Ploeg, A. van der; Botta, E.F.F.; Wubs, F.W.

    1996-01-01

    A preconditioning technique is described which shows, in many cases, grid-independent convergence. This technique only requires an ordering of the unknowns based on the different levels of multigrid, and an incomplete LU-decomposition based on a drop tolerance. The method is demonstrated on a

  17. Molecular Mechanisms in the shock induced decomposition of FOX-7

    Science.gov (United States)

    Mishra, Ankit; Tiwari, Subodh C.; Nakano, Aiichiro; Vashishta, Priya; Kalia, Rajiv; CACS Team

    Experimental and first-principles computational studies on FOX-7 have either involved very small systems consisting of a few atoms or have not taken into account the decomposition mechanisms under extreme conditions of temperature and pressure. We have performed a large-scale reactive MD simulation using the ReaxFF-lg force field to study the shock decomposition of FOX-7. The chemical composition of the principal decomposition products correlates well with experimental observations. Furthermore, we observed that the production of N2 and H2O was intermolecular in nature and proceeded through different chemical pathways. Moreover, the production of CO and CO2 was delayed due to the formation of large, stable clusters of C and O atoms. These critical insights into the initial processes involved in the shock-induced decomposition of FOX-7 will greatly help in understanding the factors that play an important role in the insensitivity of this high energy material. This research is supported by AFOSR Award No. FA9550-16-1-0042.

  18. Pollutant content in marine debris and characterization by thermal decomposition.

    Science.gov (United States)

    Iñiguez, M E; Conesa, J A; Fullana, A

    2017-04-15

    Marine debris (MD) produces a wide variety of negative environmental, economic, safety, health and cultural impacts. Most marine litter, such as plastics, has a very low decomposition rate, leading to gradual accumulation in the coastal and marine environment. The MD was characterized in terms of its pollutant content: PAHs, ClBzs, ClPhs, BrPhs, PCDD/Fs and PCBs. The results show that MD is not a heavily contaminated waste. The thermal decomposition of MD materials was also studied in a thermobalance under different atmospheres and heating rates. Below 400-500 K, the atmosphere does not affect the thermal degradation of the waste; however, at temperatures between 500 and 800 K the presence of oxygen accelerates the decomposition. A kinetic model is proposed for the combustion of the MD, and its decomposition is compared with that of its main constituents, i.e., polyethylene (PE), polystyrene (PS), polypropylene (PP), nylon and polyethylene terephthalate (PET). Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by the hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
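
    As a rough illustration of the measurement and re-encryption steps, here is a minimal NumPy sketch. The image size, measurement dimension, and the logistic map standing in for the paper's hyper-chaotic system are all assumptions for illustration, not details from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def chaotic_sequence(x0, n, mu=3.99):
            # Logistic map as a simple stand-in for the hyper-chaotic system.
            seq, x = np.empty(n), x0
            for i in range(n):
                x = mu * x * (1.0 - x)
                seq[i] = x
            return seq

        N, M = 256, 128                     # image size and compressed size (assumed)
        X = rng.random((N, N))              # stand-in for the plaintext image
        Phi1 = rng.standard_normal((M, N))  # measurement matrix, row direction
        Phi2 = rng.standard_normal((M, N))  # measurement matrix, column direction

        # Measure in two directions: compression and encryption in one step.
        Y = Phi1 @ X @ Phi2.T               # M x M compressed measurements

        # Re-encrypt: cyclically shift each row by a chaos-derived offset.
        shifts = (chaotic_sequence(0.37, M) * M).astype(int)
        C = np.stack([np.roll(Y[i], shifts[i]) for i in range(M)])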

  20. Control of climate and litter quality on leaf litter decomposition in different climatic zones.

    Science.gov (United States)

    Zhang, Xinyue; Wang, Wei

    2015-09-01

    Climate and initial litter quality are the major factors influencing decomposition rates at large scales. We established a comprehensive database of terrestrial leaf litter decomposition, including 785 datasets, to examine the relationship between climate and litter quality and to evaluate the factors controlling decomposition on a global scale and in the arid and semi-arid (AS) zone and the humid middle- and humid low-latitude (HL) zones. Initial litter nitrogen (N) and phosphorus (P) concentrations increased with mean annual temperature (MAT) only in the AS zone and decreased with mean annual precipitation (MAP) in the HL zones. Compared with nutrient content, MAT had less effect on initial litter lignin content than MAP. MAT was the most important driver of decomposition on a global scale as well as in the different climatic zones, while MAP significantly affected decomposition constants only in the AS zone. Although litter quality parameters also significantly influenced decomposition, their importance was less than that of the climatic factors, and different litter quality parameters were influential in different climatic zones. Our results emphasize that climate consistently exerts important effects on decomposition constants across climatic zones.

  1. Construct solitary solutions of discrete hybrid equation by Adomian Decomposition Method

    International Nuclear Information System (INIS)

    Wang Zhen; Zhang Hongqing

    2009-01-01

    In this paper, we apply the Adomian Decomposition Method to solve differential-difference equations. A typical example is used to illustrate the validity and great potential of the Adomian Decomposition Method for differential-difference equations. Kink-shaped and bell-shaped solitary solutions are presented, and comparisons are made between the results of the proposed method and exact solutions. The results show that the Adomian Decomposition Method is an attractive method for solving differential-difference equations.
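
    The core computational object of the method is the set of Adomian polynomials for the nonlinearity. A minimal SymPy sketch using A_n = (1/n!) d^n/dlam^n f(sum_k u_k lam^k) at lam = 0; the nonlinearity f(u) = u^2 is an arbitrary example, not the hybrid equation of the paper.

        import sympy as sp

        lam, u = sp.Symbol("lam"), sp.Symbol("u")

        def adomian_polynomials(f_expr, u_terms):
            # A_n = (1/n!) * d^n/dlam^n f(sum_k u_k lam^k) evaluated at lam = 0.
            series = sum(uk * lam**k for k, uk in enumerate(u_terms))
            F = f_expr.subs(u, series)
            return [sp.expand(sp.diff(F, lam, n).subs(lam, 0) / sp.factorial(n))
                    for n in range(len(u_terms))]

        u0, u1, u2 = sp.symbols("u0 u1 u2")
        print(adomian_polynomials(u**2, [u0, u1, u2]))
        # [u0**2, 2*u0*u1, u1**2 + 2*u0*u2]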

  2. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of the Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in parentheses): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology.
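
    The quoted compression ratios follow directly from the source bitrate. A quick check, assuming 16-bit stereo PCM (the bit depth and channel count are our assumptions; the 44.1 kHz sampling rate and MP3 bitrates are from the abstract):

        # Uncompressed PCM bitrate vs. the four MP3 grades in the study.
        fs, bits, channels = 44_100, 16, 2
        pcm_kbps = fs * bits * channels / 1000     # ~1411 kbps, quoted as ~1400
        for mp3_kbps in (128, 64, 32):
            print(f"{mp3_kbps} kbps -> {pcm_kbps / mp3_kbps:.0f}:1")
        # 128 kbps -> 11:1, 64 kbps -> 22:1, 32 kbps -> 44:1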

  3. Effect of catalyst for the decomposition of VOCs in a NTP reactor

    International Nuclear Information System (INIS)

    Mohanty, Suchitra; Das, Smrutiprava; Paikaray, Rita; Sahoo, Gourishankar; Samantaray, Subrata

    2015-01-01

    Air pollution has become a major cause of human distress, both directly and indirectly, and VOCs are among the major air pollutants, so their decomposition is a present need of our society. The non-thermal plasma (NTP) reactor has proven effective for decomposing VOCs at low concentrations. For safe and effective application of DBD, optimization of the treatment process requires characterization of various plasma parameters; the electron temperature and electron density indicate the decomposition pathways of the VOCs. In this work, the electron temperature and density were determined by taking emission spectra and comparing line intensity ratios, and the decomposition rate, in terms of the products deposited on the dielectric surface, was studied. The decomposition rate increases in the presence of a catalyst compared with the pure compound in the presence of a carrier gas. The decomposition process was studied by UV-VIS, FTIR and OES spectroscopic methods and by GC-MS. Deposited products were analyzed by UV-VIS and FTIR spectroscopy; plasma parameters such as electron temperature and density were studied with OES; and gaseous products were studied by GC-MS, which showed peaks for the by-products. (author)
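
    For background, a standard way to estimate electron temperature from line intensity ratios is the two-line Boltzmann method: for two lines of the same species, I is proportional to (A*g/lambda)*exp(-E/kTe), so Te = (E2 - E1) / (k * ln[(I1*A2*g2*lam1)/(I2*A1*g1*lam2)]). A sketch with entirely hypothetical line data, since the paper does not list the lines used:

        import math

        K_B = 8.617e-5  # Boltzmann constant, eV/K

        def electron_temperature(I1, I2, line1, line2):
            # Each line: (A [1/s], g, wavelength [nm], upper-level energy [eV]).
            A1, g1, lam1, E1 = line1
            A2, g2, lam2, E2 = line2
            ratio = (I1 * A2 * g2 * lam1) / (I2 * A1 * g1 * lam2)
            return (E2 - E1) / (K_B * math.log(ratio))

        # Hypothetical line data and intensities, for illustration only.
        line_a = (1.0e7, 5, 750.4, 13.48)
        line_b = (2.5e6, 3, 811.5, 13.08)
        print(f"Te ~ {electron_temperature(1.0, 0.8, line_a, line_b):.0f} K")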

  4. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device, completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a minor radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with a major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, a temperature increase of a factor 3, and a current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed are presented and discussed. (U.S.)
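
    The quoted factors are mutually consistent with the standard scalings for adiabatic compression in major radius: for compression ratio C = R_before/R_after, density scales as C^2, temperature as C^(4/3), current as C, and minor radius as C^(-1/2). A quick check with C = 2.4, the value implied by the factor-2.4 current increase:

        C = 2.4
        print(f"density      x {C**2:.1f}")      # ~5.8, quoted as 6
        print(f"temperature  x {C**(4/3):.1f}")  # ~3.2, quoted as 3
        print(f"current      x {C:.1f}")         # 2.4, as quoted
        print(f"minor radius x {C**-0.5:.2f}")   # ~0.65: 17 cm -> ~11 cm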

  5. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper; it transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Combining the processes of compression and encryption must therefore be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies inherent in regular text or speech. The aim of this research is to combine compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is novel and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)

  6. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL

    2012-10-01

    It is well known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs, with complexity polynomial in the number of nodes and edges in the graph but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
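
    To give a flavor of the dynamic programming involved, here is the width-1 special case: maximum weighted independent set on a tree. The bag-based DP on a width-w tree decomposition generalizes this by tracking all independent subsets of each (w+1)-vertex bag; this sketch is ours, not the INDDGO implementation.

        def mwis_tree(adj, weight, root=0):
            # adj: {node: [children, ...]} for a rooted tree; weight: {node: w}.
            take, skip = {}, {}

            def solve(v):
                take[v], skip[v] = weight[v], 0
                for c in adj.get(v, []):
                    solve(c)
                    take[v] += skip[c]                # v in the set -> c excluded
                    skip[v] += max(take[c], skip[c])  # v out -> c free to choose
                return max(take[v], skip[v])

            return solve(root)

        # Tree: 0(3) -> {1(2) -> 3(4), 2(5)}; the optimum takes vertices 2 and 3.
        print(mwis_tree({0: [1, 2], 1: [3]}, {0: 3, 1: 2, 2: 5, 3: 4}))  # 9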

  7. The nitric acid decomposition of calcined danburite concentrate of Ak-Arkhar Deposit

    International Nuclear Information System (INIS)

    Kurbonov, A.S.; Mamatov, E.D.; Suleymani, M.; Borudzherdi, A.; Mirsaidov, U.M.

    2011-01-01

    This article is devoted to the nitric acid decomposition of calcined danburite concentrate from the Ak-Arkhar Deposit of Tajikistan. The production of boric acid from pre-baked danburite concentrate by nitric acid decomposition was studied. The chemical composition of the danburite concentrate was determined, and a laboratory study of danburite leaching by nitric acid was conducted. The influence of temperature, process duration and nitric acid concentration on the decomposition was studied as well, and the optimal conditions, including temperature, process duration, nitric acid concentration and particle size, were proposed.

  8. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  9. Systolic Compression of Epicardial Coronary and Intramural Arteries

    Science.gov (United States)

    Mohiddin, Saidi A.; Fananapazir, Lameh

    2002-01-01

    It has been suggested that systolic compression of epicardial coronary arteries is an important cause of myocardial ischemia and sudden death in children with hypertrophic cardiomyopathy. We examined the associations between sudden death, systolic coronary compression of intra- and epicardial arteries, myocardial perfusion abnormalities, and severity of hypertrophy in children with hypertrophic cardiomyopathy. We reviewed the angiograms from 57 children with hypertrophic cardiomyopathy for the presence of coronary and septal artery compression; coronary compression was present in 23 (40%). The left anterior descending artery was most often affected, and multiple sites were found in 4 children. Myocardial perfusion abnormalities were more frequently present in children with coronary compression than in those without (94% vs 47%, P = 0.002). Coronary compression was also associated with more severe septal hypertrophy and greater left ventricular outflow gradient. Septal branch compression was present in 65% of the children and was significantly associated with coronary compression, severity of septal hypertrophy, and outflow obstruction. Multivariate analysis showed that septal thickness and septal branch compression, but not coronary compression, were independent predictors of perfusion abnormalities. Coronary compression was not associated with symptom severity, ventricular tachycardia, or a worse prognosis. We conclude that compression of coronary arteries and their septal branches is common in children with hypertrophic cardiomyopathy and is related to the magnitude of left ventricular hypertrophy. Our findings suggest that coronary compression does not make an important contribution to myocardial ischemia in hypertrophic cardiomyopathy; however, left ventricular hypertrophy and compression of intramural arteries may contribute significantly. (Tex Heart Inst J 2002;29:290–8) PMID:12484613

  10. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. Peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on the compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression.

  11. Radiation effects on thermal decomposition of inorganic solids

    International Nuclear Information System (INIS)

    Dedgaonkar, V.G.

    1985-01-01

    Radiation effects on the thermal decomposition characteristics of inorganic oxyanion compounds such as permanganates, nitrates, zeolites and particularly ammonium perchlorate (AP) are highlighted. The last compound finds wide application as an oxidizer in solid rocket propellants; although several hundred papers have been published on it during the last 30-40 years, mostly aimed at understanding and controlling its decomposition behaviour, only a few reports are available on its behaviour following radiation treatment. (author)

  12. Kinetics of Roasting Decomposition of the Rare Earth Elements by CaO and Coal

    Directory of Open Access Journals (Sweden)

    Shuai Yuan

    2017-06-01

    The roasting of magnetic tailing mixed with CaO and coal was used to recycle the rare earth elements (REE) in the tailing. The phase transformation and decomposition processes during roasting were investigated. The results showed that the decomposition of REE in magnetic tailing proceeds in two steps: the first step, from 380 to 431 °C, mainly entails the decomposition of bastnaesite (REFCO3); the second step, from 605 to 716 °C, mainly involves the decomposition of monazite (REPO4). The decomposition products were primarily RE2O3, Ce0.75Nd0.25O1.875, CeO2, Ca5F(PO4)3, and CaF2. Adding CaO reduced the decomposition temperatures of REFCO3 and REPO4, and its effect on the decomposition of bastnaesite and monazite was significant. In addition, the effects of roasting time, roasting temperature, and CaO addition level on the decomposition rate were studied. The optimum conditions were a roasting time of 60 min, a roasting temperature of 750 °C, and a CaO addition level of 20% (w/w), giving a maximum decomposition rate of REFCO3 and REPO4 of 99.87%. Roasting time and temperature were the major factors influencing the decomposition rate. The decomposition kinetics of REFCO3 and REPO4 followed an interfacial reaction model, with two rate-controlling regimes: the first step (at low temperature) was controlled by chemical reaction with an activation energy of 52.67 kJ/mol, and the second step (at high temperature) was controlled by diffusion with an activation energy of 8.5 kJ/mol.
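
    For context, activation energies such as those reported are typically extracted from rate constants at different temperatures via the Arrhenius relation, Ea = R * ln(k2/k1) / (1/T1 - 1/T2). A sketch with hypothetical rate constants chosen to land near the reported low-temperature value:

        import math

        R = 8.314  # gas constant, J/(mol K)

        def activation_energy(k1, T1, k2, T2):
            # Two-point Arrhenius estimate; temperatures in kelvin.
            return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

        k1, T1 = 1.2e-4, 653.0  # hypothetical rate constant at 380 C
        k2, T2 = 2.4e-4, 704.0  # hypothetical rate constant at 431 C
        print(f"Ea ~ {activation_energy(k1, T1, k2, T2) / 1000:.1f} kJ/mol")  # ~51.9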

  13. Effects of anthropogenic heavy metal contamination on litter decomposition in streams – A meta-analysis

    International Nuclear Information System (INIS)

    Ferreira, Verónica; Koricheva, Julia; Duarte, Sofia; Niyogi, Dev K.; Guérold, François

    2016-01-01

    Many streams worldwide are affected by heavy metal contamination, mostly due to past and present mining activities. Here we present a meta-analysis of 38 studies (reporting 133 cases) published between 1978 and 2014 on the effects of heavy metal contamination on the decomposition of terrestrial litter in running waters. Overall, heavy metal contamination significantly inhibited litter decomposition. The effect was stronger for laboratory than for field studies, likely due to better control of confounding variables in the former, antagonistic interactions between metals and other environmental variables in the latter, or differences in metal identity and concentration between studies. For laboratory studies, only copper + zinc mixtures significantly inhibited litter decomposition, while no significant effects were found for silver, aluminum, cadmium or zinc considered individually. For field studies, coal and metal mine drainage strongly inhibited litter decomposition, while drainage from motorways had no significant effect. The effect of coal mine drainage did not depend on drainage pH. Coal mine drainage negatively affected leaf litter decomposition independently of leaf litter identity; no significant effect was found for wood decomposition, but the sample size was low. Among metal mine drainages, arsenic mines had a stronger negative effect on leaf litter decomposition than gold or pyrite mines. Metal mine drainage significantly inhibited leaf litter decomposition driven by both microbes and invertebrates, independently of leaf litter identity; no significant effect was found for microbially driven decomposition alone, but the sample size was low. Overall, mine drainage negatively affects leaf litter decomposition, likely through negative effects on invertebrates.

  14. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying that air is free of charge is not true for compressed air, most industries do not accord this utility's cost its rightful importance. The paper shows that the cost of one unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper also provides energy conservation tips for compressed air systems.
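
    A back-of-envelope check of the "at least 5 times" claim, assuming a typical wire-to-air efficiency for an industrial compressed-air system on the order of 10-20% once motor, compressor, heat and leakage losses are counted (our assumption, not a figure from the paper):

        electricity_cost = 1.0         # cost per kWh of electricity, normalized
        wire_to_air_efficiency = 0.15  # assumed fraction delivered as useful air energy

        air_cost = electricity_cost / wire_to_air_efficiency
        print(f"1 kWh of compressed-air energy costs ~{air_cost:.1f}x electricity")
        # ~6.7x with these assumptions, consistent with the paper's claim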

  15. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets and produces a hierarchical clustering of the point sets, which, combined with the first result, leads to a compressed representation
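
    To illustrate the first idea in miniature: if identical quadtree subtrees are stored once and shared (hash-consed), repeated point patterns cost no extra space. This toy sketch conveys the flavor only; it is not the paper's data structure or its query machinery.

        def build(points, x0, y0, size, table):
            # Quadtree over the square [x0, x0+size) x [y0, y0+size); point
            # coordinates are stored relative to the cell, so translated copies
            # of a pattern produce identical (and therefore shared) subtrees.
            pts = frozenset((x - x0, y - y0) for (x, y) in points
                            if x0 <= x < x0 + size and y0 <= y < y0 + size)
            if size == 1 or len(pts) <= 1:
                node = ("leaf", pts)
            else:
                h = size // 2
                kids = tuple(build(points, x0 + dx, y0 + dy, h, table)
                             for dx in (0, h) for dy in (0, h))
                node = ("internal", kids)
            return table.setdefault(node, node)  # share identical subtrees

        pts = [(0, 0), (1, 1), (4, 0), (5, 1)]   # the same 2-point motif, twice
        table = {}
        build(pts, 0, 0, 8, table)
        print(len(table), "unique subtrees stored")  # 5; the motif subtree is shared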

  16. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and describe its effect on edema, pain, ankle joint mobility, wound healing complications, length of stay (LOS) and time to surgery (TTS). The aim was to include studies of patients with ankle fractures undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking, and reporting its effect on edema, pain, ankle joint mobility, wound healing complications, LOS and TTS. To draw conclusions from the data, a narrative synthesis was performed. RESULTS: The review included

  17. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    Science.gov (United States)

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64 and a combination of Kollidon VA®64 with Kollidon VA®64 Fine as excipients in a direct compression tableting process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. The interparticulate void space is higher for such an excipient due to the hollow structure of the Kollidon VA®64 particles; during tablet compression, air remains trapped in the blend, giving poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine was evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 showed morphological differences between the coarse and fine grades. The tablet compression process was evaluated with a mix consisting entirely of Kollidon VA®64 and two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as a 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. This combination, compressed with the predicted parameters based on the statistical modeling (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm and compression speed of 45-49 rpm), produced tablets with hardness between 19 and 21 kp and no friability, capping or lamination issues.

  18. Cellulose and cutisin decomposition in soil of Alopecuretum meadow

    Directory of Open Access Journals (Sweden)

    Zuzana Hrevušová

    2012-01-01

    Plant litter decomposition is a fundamental ecosystem process regulated by both abiotic and biotic factors. The aim of this study was to determine the decomposition of cellulose and protein (cutisin) substrates in a permanent Alopecuretum meadow under different methods of management. The treatments were as follows: 2 × cut, 2 × cut + NPK, 2 × mulch, 1 × cut, 1 × mulch (frequency of mowing per year) and untreated plots. Cutting or mulching was carried out in October, and under the 2 × cut management also in May. In 2007–2009, cellulose and cutisin in mesh bags were placed in the soil and kept there from April to October. The total mean proportions of decomposed cellulose and cutisin were 83% and 40% of the initial substrate weight, respectively. Cellulose decomposition was affected by weather conditions but not by the applied management; the highest mean proportion of decomposed cellulose was found in 2009 (with increased precipitation in May and July), the lowest in 2007. Coefficients of variation within a year and over the years were up to 22% and 20%, respectively. Cutisin decomposition was significantly affected by the applied management in all three years, with higher decomposition rates in the twice-mown treatments than in the once-mown or unmown treatments. Significant differences were found between years in the 2 × cut and 2 × cut + NPK treatments. Coefficients of variation within a year and over the years were both higher for cutisin than for cellulose samples (up to 50% and 42%, respectively).

  19. a Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    Science.gov (United States)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-pol data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion can be named the second-order averaged scattering angle, which originates from the H/α decomposition, and we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay was selected as the research area. The area is located in northeastern China, where various wetland resources grow and sea ice appears in winter. We use GF-3 quad-pol data as the study data; GF-3 is China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we can draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage or low vegetation in the non-growing season, and the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features computed from quad-polarization data. Moreover, they could become inputs for subsequent classification or parameter inversion.
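
    As a rough illustration of a two-component split of co-polar channels, a Pauli-like decomposition assigns |S_HH + S_VV|^2 / 2 to surface (odd-bounce) power and |S_HH - S_VV|^2 / 2 to double-bounce power. A generic NumPy sketch, not the paper's exact model or its second-order scattering-angle criterion:

        import numpy as np

        def two_component(S_hh, S_vv):
            # S_hh, S_vv: complex 2-D arrays (single-look co-polar channels).
            surface = 0.5 * np.abs(S_hh + S_vv) ** 2  # odd-bounce (surface) power
            double = 0.5 * np.abs(S_hh - S_vv) ** 2   # even-bounce (double) power
            return surface, double

        rng = np.random.default_rng(1)
        shape = (64, 64)  # stand-in scene size
        S_hh = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
        S_vv = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
        surf, dbl = two_component(S_hh, S_vv)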

  20. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.