WorldWideScience

Sample records for maximal principal compression

  1. Approaching maximal performance of longitudinal beam compression in induction accelerator drivers

    International Nuclear Information System (INIS)

    Mark, J.W.K.; Ho, D.D.M.; Brandon, S.T.; Chang, C.L.; Drobot, A.T.; Faltens, A.; Lee, E.P.; Krafft, G.A.

    1986-01-01

Longitudinal beam compression occurs before final focus and fusion chamber beam transport and is a key process determining initial conditions for final focus hardware. Determining the limits for maximal performance of key accelerator components is an essential element of the effort to reduce driver costs. Studies directed towards defining the limits of final beam compression, including considerations such as maximal available compression, effects of longitudinal dispersion and beam emittance, combining pulse-shaping with beam compression to reduce the total number of beam manipulators, etc., are given. Several possible techniques are illustrated for utilizing the beam compression process to provide the pulse shapes required by a number of targets. Without such capabilities to shape the pulse, an additional factor of two or so of beam energy would be required by the targets.

  2. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data (mitdb) and 60 normal and 30 sets of diagnostic ECG data from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
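The error control criterion above lends itself to a compact illustration: pick the smallest number of principal components whose reconstruction keeps the PRDN under a limit. The sketch below, in Python with NumPy, uses synthetic beats and an illustrative 5% limit; it is not the authors' implementation (which adds quantization and delta/Huffman coding).

```python
import numpy as np

def compress_beats(beats, prdn_limit=5.0):
    """Keep the fewest principal components whose reconstruction stays
    under a PRDN (percentage RMS difference, normalized) limit."""
    mean = beats.mean(axis=0)
    X = beats - mean                                 # mean-removed beat matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    for k in range(1, len(s) + 1):
        rec = (U[:, :k] * s[:k]) @ Vt[:k] + mean     # rank-k reconstruction
        prdn = 100 * np.linalg.norm(beats - rec) / np.linalg.norm(X)
        if prdn <= prdn_limit or k == len(s):
            return U[:, :k] * s[:k], Vt[:k], mean, prdn

# Synthetic stand-in for 200 extracted beats of 400 samples each
rng = np.random.default_rng(0)
beats = np.sin(np.linspace(0, 2 * np.pi, 400)) + 0.05 * rng.standard_normal((200, 400))
scores, basis, mean, prdn = compress_beats(beats)
print(scores.shape, basis.shape, prdn)   # a few components meet the 5% limit
```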

  3. Approaching maximal performance of longitudinal beam compression in induction accelerator drivers

    International Nuclear Information System (INIS)

    Mark, J.W.K.; Ho, D.D.M.; Brandon, S.T.; Chang, C.L.; Drobot, A.T.; Faltens, A.; Lee, E.P.; Krafft, G.A.

    1986-01-01

Longitudinal beam compression is an integral part of the US induction accelerator development effort for heavy ion fusion. Producing maximal performance for key accelerator components is an essential element of the effort to reduce driver costs. We outline here initial studies directed towards defining the limits of final beam compression, including considerations such as: maximal available compression, effects of longitudinal dispersion and beam emittance, combining pulse-shaping with beam compression to reduce the total number of beam manipulations, etc. The use of higher ion charge state Z ≥ 3 is likely to test the limits of the previously envisaged beam compression and final focus hardware. A more conservative approach is to use additional beamlets in final compression and focus. On the other end of the spectrum of choices, alternate approaches might consider new final focus with greater tolerances for systematic momentum and current variations. Development of such final focus concepts would also allow more compact (and hopefully cheaper) hardware packages where the previously separate processes of beam compression, pulse-shaping and final focus occur as partially combined and nearly concurrent beam manipulations.

  4. Maximal dissipation and well-posedness for the compressible Euler system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard

    2014-01-01

Vol. 16, No. 3 (2014), pp. 447-461. ISSN 1422-6928. EU Projects: European Commission(XE) 320078 - MATHEF. Keywords: maximal dissipation; compressible Euler system; weak solution. Subject RIV: BA - General Mathematics. Impact factor: 1.186, year: 2014. http://link.springer.com/article/10.1007/s00021-014-0163-8

  5. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.

  6. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
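The two stages of the model can be sketched independently: a greedy correlation-based grouping of sensors, then per-cluster PCA truncated to meet an error bound. Everything below (function names, the 0.9 correlation threshold, the energy-based bound) is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def cluster_by_correlation(readings, threshold=0.9):
    """Greedy grouping: a sensor joins the first cluster whose seed it
    correlates with above `threshold`; otherwise it seeds a new cluster."""
    corr = np.corrcoef(readings)            # readings: sensors x samples
    clusters = []
    for i in range(readings.shape[0]):
        for c in clusters:
            if corr[i, c[0]] >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def pca_compress(X, err_bound=0.01):
    """Keep the fewest components that leave at most err_bound of the
    signal energy unexplained (centering skipped to keep the sketch short)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, 1.0 - err_bound)) + 1
    return U[:, :k] * s[:k], Vt[:k]

rng = np.random.default_rng(1)
base = rng.standard_normal(500)
# Eight correlated sensors plus four independent ones
readings = np.vstack([base + 0.1 * rng.standard_normal(500) for _ in range(8)]
                     + [rng.standard_normal(500) for _ in range(4)])
for c in cluster_by_correlation(readings):
    scores, basis = pca_compress(readings[c])
    print(c, "->", scores.shape[1], "component(s)")
```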

  7. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basic coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  8. Understanding deformation mechanisms during powder compaction using principal component analysis of compression data.

    Science.gov (United States)

    Roopwani, Rahul; Buckner, Ira S

    2011-10-14

    Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SF(c/d)) and a mechanical work parameter (W(c/d)) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures. Copyright © 2011 Elsevier B.V. All rights reserved.

  9. Neural Network for Principal Component Analysis with Applications in Image Compression

    Directory of Open Access Journals (Sweden)

    Luminita State

    2007-04-01

Classical feature extraction and data projection methods have been extensively investigated in the pattern recognition and exploratory data analysis literature. Feature extraction and multivariate data projection allow avoiding the "curse of dimensionality", improve the generalization ability of classifiers and significantly reduce the computational requirements of pattern classifiers. During the past decade a large number of artificial neural networks and learning algorithms have been proposed for solving feature extraction problems, most of them being adaptive in nature and well-suited for many real environments where an adaptive approach is required. Principal Component Analysis, also called the Karhunen-Loève transform, is a well-known statistical method for feature extraction, data compression and multivariate data projection, and so far it has been broadly used in a large series of signal and image processing, pattern recognition and data analysis applications.
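A concrete instance of a neural network performing PCA is Oja's learning rule, which converges to the first principal component of an input stream. A minimal sketch follows; this is a textbook method, not necessarily one of the specific networks surveyed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-D input stream
X = rng.standard_normal((5000, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])

w = rng.standard_normal(2)          # synaptic weight vector
eta = 1e-3
for x in X:
    y = w @ x                       # neuron output
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian term plus decay

w /= np.linalg.norm(w)
evals, evecs = np.linalg.eigh(np.cov(X.T))
print(w, evecs[:, -1])              # same direction, up to sign
```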

  10. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets to a manageable number of summaries vastly simplifies both frequentist and Bayesian inference, but heuristically chosen summaries may inadvertently miss important information. Likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  11. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  12. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
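The XOR-leading-zero idea is easy to demonstrate: two nearby floating-point values share their high-order bits, so XOR-ing their bit patterns yields a long run of leading zeros that can be encoded cheaply. The sketch below shows the count only; the paper's additional optimization of the shifting offset is omitted.

```python
import struct

def leading_zero_bits(a: float, b: float) -> int:
    """Number of leading zero bits in the XOR of two IEEE-754 doubles."""
    ia = struct.unpack('<Q', struct.pack('<d', a))[0]
    ib = struct.unpack('<Q', struct.pack('<d', b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

print(leading_zero_bits(1.2345, 1.2346))   # many shared leading bits
print(leading_zero_bits(1.2345, 9.8765))   # few shared leading bits
```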

  13. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components, which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of data points with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on the point (l,u) on the plane, and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represents the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale, especially for data with outliers. This method was tested on both artificial distributions of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis.
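The restricted-pairs construction can be sketched directly from the definition: ordinary PCA maximizes the sum of squared pairwise distances between projections, so Multiscale PCA amounts to an eigendecomposition of the scatter built from only those pairs whose distance lies in [l,u]. The function below is an illustrative reading of that definition, not the authors' code.

```python
import numpy as np

def multiscale_pca(X, l, u, n_components=2):
    """Top eigenvectors of the scatter of difference vectors x_i - x_j,
    restricted to pairs with l <= ||x_i - x_j|| <= u."""
    S = np.zeros((X.shape[1], X.shape[1]))
    for i in range(X.shape[0] - 1):
        d = X[i] - X[i + 1:]                    # differences to later points
        dist = np.linalg.norm(d, axis=1)
        sel = d[(dist >= l) & (dist <= u)]      # pairs inside the scale window
        S += sel.T @ sel
    evals, evecs = np.linalg.eigh(S)
    return evecs[:, ::-1][:, :n_components]     # leading eigenvectors first

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
V_all = multiscale_pca(X, 0.0, np.inf)     # all pairs: ordinary PCA directions
V_small = multiscale_pca(X, 0.0, 1.0)      # only small-scale structure
print(V_all.shape, V_small.shape)
```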

  14. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
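Tuple difference coding itself is compact enough to sketch: store the first sorted record, then only per-field differences, which are mostly small integers that a back-end coder can pack tightly. The scheme below is a generic illustration, not the paper's exact block/record format.

```python
def diff_encode(tuples):
    """Sort records, keep the first, store per-field deltas for the rest."""
    tuples = sorted(tuples)
    first = tuples[0]
    deltas = [tuple(b - a for a, b in zip(prev, cur))
              for prev, cur in zip(tuples, tuples[1:])]
    return first, deltas

def diff_decode(first, deltas):
    out = [first]
    for d in deltas:
        out.append(tuple(a + b for a, b in zip(out[-1], d)))
    return out

records = [(1, 4, 7), (1, 4, 9), (1, 5, 0), (2, 0, 1)]
first, deltas = diff_encode(records)
assert diff_decode(first, deltas) == sorted(records)
print(deltas)   # small integers: (0, 0, 2), (0, 1, -9), (1, -5, 1)
```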

  15. Maximal compression of the redshift-space galaxy power spectrum and bispectrum

    Science.gov (United States)

    Gualdi, Davide; Manera, Marc; Joachimi, Benjamin; Lahav, Ofer

    2018-05-01

We explore two methods of compressing the redshift-space galaxy power spectrum and bispectrum with respect to a chosen set of cosmological parameters. Both methods involve reducing the dimension of the original data vector (e.g. 1000 elements) to the number of cosmological parameters considered (e.g. seven) using the Karhunen-Loève algorithm. In the first case, we run MCMC sampling on the compressed data vector in order to recover the 1D and 2D posterior distributions. The second option, approximately 2000 times faster, works by orthogonalizing the parameter space through diagonalization of the Fisher information matrix before the compression, obtaining the posterior distributions without the need of MCMC sampling. Using these methods for future spectroscopic redshift surveys like DESI, Euclid, and PFS would drastically reduce the number of simulations needed to compute accurate covariance matrices with minimal loss of constraining power. We consider a redshift bin of a DESI-like experiment. Using the power spectrum combined with the bispectrum as a data vector, both compression methods on average recover the 68 per cent credible regions to within 0.7 per cent and 2 per cent of those resulting from standard MCMC sampling, respectively. These confidence intervals are also smaller than the ones obtained using only the power spectrum by 81 per cent, 80 per cent, and 82 per cent respectively, for the bias parameter b1, the growth rate f, and the scalar amplitude parameter As.
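The kind of linear Karhunen-Loève compression described, reducing a long data vector to one summary per parameter, can be sketched with the standard score-compression formula t_a = (∂μ/∂θ_a)^T C⁻¹ (d − μ), which preserves the Fisher information for Gaussian likelihoods. The toy model below (power-law mean, diagonal covariance) is an assumption for illustration, not the paper's power spectrum pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_params = 1000, 2

def mu(theta):
    # Hypothetical mean model: amplitude * x**tilt (stand-in for P(k), B(k))
    x = np.linspace(0.1, 1.0, n_data)
    return theta[0] * x ** theta[1]

theta_fid = np.array([2.0, 0.5])
sigma = 0.05 + 0.01 * np.arange(n_data) / n_data   # toy (diagonal) errors
Cinv = np.diag(1.0 / sigma ** 2)

# Numerical derivatives of the mean model at the fiducial parameters
eps = 1e-4
dmu = np.stack([(mu(theta_fid + eps * np.eye(n_params)[a])
                 - mu(theta_fid - eps * np.eye(n_params)[a])) / (2 * eps)
                for a in range(n_params)])

d = mu(theta_fid) + sigma * rng.standard_normal(n_data)  # one mock data vector
t = dmu @ Cinv @ (d - mu(theta_fid))     # 1000 numbers -> 2 summaries
print(t)
```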

  16. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both the phantom and real clinical patient images were taken to validate our algorithm. Promising results are shown for limited angle problems. (paper)

  17. Emittance Growth during Bunch Compression in the CTF-II

    Energy Technology Data Exchange (ETDEWEB)

    Raubenheimer, Tor O

    1999-02-26

Measurements of the beam emittance during bunch compression in the CLIC Test Facility (CTF-II) are described. The measurements were made with different beam charges and different energy correlations versus the bunch compressor settings, which were varied from no compression through the point of full compression and to over-compression. Significant increases in the beam emittance were observed, with the maximum emittance occurring near the point of full (maximal) compression. Finally, evaluation of possible emittance dilution mechanisms indicates that coherent synchrotron radiation was the most likely cause.

  18. Acute Thoracolumbar Spinal Cord Injury: Relationship of Cord Compression to Neurological Outcome.

    Science.gov (United States)

    Skeers, Peta; Battistuzzo, Camila R; Clark, Jillian M; Bernard, Stephen; Freeman, Brian J C; Batchelor, Peter E

    2018-02-21

Spinal cord injury in the cervical spine is commonly accompanied by cord compression, and urgent surgical decompression may improve neurological recovery. However, the extent of spinal cord compression and its relationship to neurological recovery following traumatic thoracolumbar spinal cord injury is unclear. The purpose of this study was to quantify maximum cord compression following thoracolumbar spinal cord injury and to assess the relationship among cord compression, cord swelling, and eventual clinical outcome. The medical records of patients who were 15 to 70 years of age, were admitted with a traumatic thoracolumbar spinal cord injury (T1 to L1), and underwent a spinal surgical procedure were examined. Patients with penetrating injuries and multitrauma were excluded. Maximal osseous canal compromise and maximal spinal cord compression were measured on preoperative mid-sagittal computed tomography (CT) scans and T2-weighted magnetic resonance imaging (MRI) by observers blinded to patient outcome. The American Spinal Injury Association (ASIA) Impairment Scale (AIS) grades from acute hospital admission (≤24 hours of injury) and rehabilitation discharge were used to measure clinical outcome. Relationships among spinal cord compression, canal compromise, and initial and final AIS grades were assessed via univariate and multivariate analyses. Fifty-three patients with thoracolumbar spinal cord injury were included in this study. The overall mean maximal spinal cord compression (and standard deviation) was 40% ± 21%. There was a significant relationship between median spinal cord compression and final AIS grade, with grade-A patients (complete injury) exhibiting greater compression than grade-C and grade-D patients (incomplete injury). Multivariate analysis identified cord compression as independently influencing the likelihood of complete spinal cord injury. Greater cord compression is associated with an increased likelihood of severe neurological deficits (complete injury) following thoracolumbar spinal cord injury.

  19. The principal Hugoniot of Mg2SiO4 to 950 GPa

    Science.gov (United States)

    Townsend, J. P.; Root, S.; Shulenburger, L.; Lemke, R. W.; Kraus, R. G.; Jacobsen, S. B.; Spaulding, D.; Davies, E.; Stewart, S. T.

    2017-12-01

We present new measurements and ab-initio calculations of the principal Hugoniot states of forsterite Mg2SiO4 in the liquid regime between 200-950 GPa. Forsterite samples were shock compressed along the principal Hugoniot using plate-impact shock compression experiments on the Sandia National Laboratories Z machine facility. In order to gain insight into the physical state of the liquid, we performed quantum molecular dynamics calculations of the Hugoniot and compare the results to experiment. We show that the principal Hugoniot is consistent with that of a single molecular fluid phase of Mg2SiO4, and compare our results to previous dynamic compression experiments and QMD calculations. Finally, we discuss how the results inform planetary accretion and impact models. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  20. Principal Component Analysis Based Measure of Structural Holes

    Science.gov (United States)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that for the corresponding shuffled networks, while the compressibility coefficient for extended Barabási-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that for the corresponding shuffled networks. This measure is helpful in diverse research fields to evaluate the global efficiency of networks.

  1. Automatic physical inference with information maximizing neural networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
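The training objective can be illustrated in miniature: push simulations through a candidate summary function, estimate the Fisher matrix of the summaries by finite differences, and score the summary by its Fisher information. The toy below reproduces the abstract's first test case, inferring the variance of a Gaussian signal, where a linear summary carries almost no information and a quadratic one does; the fixed summary functions stand in for the trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_data = 2000, 50

def simulate(theta, n):
    # Zero-mean Gaussian signal whose variance is the parameter of interest
    return rng.standard_normal((n, n_data)) * np.sqrt(theta)

def fisher_of_summary(summary, theta, dtheta=0.1):
    """Fisher information of a summary, via finite-difference mean derivative
    and the summary covariance at the fiducial parameter value."""
    s_plus = summary(simulate(theta + dtheta, n_sims))
    s_minus = summary(simulate(theta - dtheta, n_sims))
    s_fid = summary(simulate(theta, n_sims))
    dmean = (s_plus.mean(0) - s_minus.mean(0)) / (2 * dtheta)
    cov = np.atleast_2d(np.cov(s_fid.T))
    return dmean @ np.linalg.solve(cov, dmean)

mean_summary = lambda x: x.mean(axis=1, keepdims=True)          # linear in x
power_summary = lambda x: (x ** 2).mean(axis=1, keepdims=True)  # quadratic

print(fisher_of_summary(mean_summary, 1.0))   # ~0: linear compression fails
print(fisher_of_summary(power_summary, 1.0))  # informative summary
```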

  2. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

Increases in cardiorespiratory coordination (CRC) after training, with no differences in performance and physiological variables, have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC...
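The PCA-plus-entropy analysis sketched in the abstract reduces to a few lines: standardize the six signals, take the eigenvalue spectrum of their covariance, and compute the entropy of the normalized eigenvalues (fewer dominant components means lower entropy, i.e., tighter coordination). The synthetic signals below are stand-ins for the recorded variables.

```python
import numpy as np

def crc_entropy(samples):
    """Entropy of the normalized PCA eigenvalue spectrum (samples x variables)."""
    Z = (samples - samples.mean(0)) / samples.std(0)
    evals = np.linalg.eigvalsh(np.cov(Z.T))
    p = evals / evals.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 600)
driver = np.sin(2 * np.pi * 3 * t)
# Six signals sharing one driver = high coordination, low entropy
coordinated = np.stack([driver + 0.1 * rng.standard_normal(600) for _ in range(6)], 1)
independent = rng.standard_normal((600, 6))
print(crc_entropy(coordinated), crc_entropy(independent))  # low vs high
```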

  3. Compressive Load Resistance Characteristics of Rice Grain

    OpenAIRE

    Sumpun Chaitep; Chaiy R. Metha Pathawee; Pipatpong Watanawanyoo

    2008-01-01

Investigation was made to observe the compressive load properties of rice grain, both rough rice and brown rice. Six rice varieties (indica and japonica) were examined at a moisture content of 10-12%. Compressive loads with reference to a principal axis normal to the thickness of the grain were applied at selected inclined angles of 0°, 15°, 30°, 45°, 60° and 70°. The results showed the compressive load resistance of rice grain based on its characteristic of yield s...

  4. Does team lifting increase the variability in peak lumbar compression in ironworkers?

    NARCIS (Netherlands)

    Faber, Gert; Visser, Steven; van der Molen, Henk F.; Kuijer, P. Paul F. M.; Hoozemans, Marco J. M.; van Dieën, Jaap H.; Frings-Dresen, Monique H. W.

    2012-01-01

Ironworkers frequently perform heavy lifting tasks in teams of two or four workers. Team lifting could potentially lead to a higher variation in peak lumbar compression forces than lifts performed by one worker, resulting in higher maximal peak lumbar compression forces. This study compared the variability in peak lumbar compression between team lifts and lifts performed by a single worker.

  5. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    Science.gov (United States)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme allows the acceleration of data mining, such as principal component analysis, since such analyses can be performed on the compressed data representation, providing a decrease in the factorisation time of a single image from five minutes to under a second. Using this workflow the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between the bone and tissue that provides improved mechanical load tolerance; a similar interface was found in the ligament construct.
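One generic way to realize the speed-up described, running PCA on a compressed representation rather than on full spectra, is to project onto a small random basis first, since Gaussian projections approximately preserve the principal subspace. The sketch below uses random data and arbitrary sizes; the paper's basis approximation is a different (learned) compression, so this is an analogy, not a reproduction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, k = 5000, 1024, 64       # hypothetical Raman image sizes

X = rng.standard_normal((n_pixels, n_bands))        # stand-in for spectra
P = rng.standard_normal((n_bands, k)) / np.sqrt(k)  # random compression basis

Y = X @ P                                   # compressed representation
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)   # PCA in k dims, not 1024
scores = U[:, :5] * s[:5]                   # per-pixel component scores
print(scores.shape)                         # (5000, 5), computed cheaply
```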

  6. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case, comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s² = 0.9 c² when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.

  7. Improved forecasting with leading indicators: the principal covariate index

    NARCIS (Netherlands)

    C. Heij (Christiaan)

    2007-01-01

We propose a new method of leading index construction that combines the need for data compression with the objective of forecasting. This so-called principal covariate index is constructed to forecast growth rates of the Composite Coincident Index. The forecast performance is compared...

  8. Comparative assessment of intrinsic mechanical stimuli on knee cartilage and compressed agarose constructs.

    Science.gov (United States)

    Completo, A; Bandeiras, C; Fonseca, F

    2017-06-01

A well-established cue for improving the properties of tissue-engineered cartilage is mechanical stimulation. However, the explicit ranges of mechanical stimuli that correspond to favorable metabolic outcomes are elusive. Usually, these outcomes have only been associated with the applied strain and frequency, an oversimplification that can hide the fundamental relationship between the intrinsic mechanical stimuli and the metabolic outcomes. This highlights two important key issues: the first is related to the evaluation of the intrinsic mechanical stimuli of native cartilage; the second, assuming that the intrinsic mechanical stimuli are important, deals with the ability to replicate them in tissue-engineered constructs. This study quantifies and compares the volume of cartilage and agarose subjected to a given magnitude range of each intrinsic mechanical stimulus, through a numerical simulation of a patient-specific knee model coupled with experimental data of contact during the stance phase of gait, and agarose constructs under direct dynamic compression. The results suggest that direct compression loading needs to be parameterized with time-dependence during the initial culture period in order to better reproduce each of the intrinsic mechanical stimuli developed in the patient-specific cartilage. A loading regime which combines time periods of low compressive strain (5%) and frequency (0.5 Hz), in order to approach the maximal principal strain and fluid velocity stimulus of the patient-specific cartilage, with time periods of high compressive strain (20%) and frequency (3 Hz), in order to approach the pore pressure values, may be advantageous relative to a single loading regime throughout the full culture period. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  9. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    2015-01-01

COSIATEC, SIATECCompress and Forth’s algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression. The best-performing algorithm on the folk...
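The point-set idea these algorithms share can be shown in a few lines: group pairs of notes by the translation vector between them, and every maximal translatable pattern appears as the set of points sharing one vector (the core step of SIA-family algorithms). The toy (onset, pitch) set below is illustrative.

```python
from collections import defaultdict

points = {(0, 60), (1, 64), (2, 62),     # a three-note motif...
          (8, 67), (9, 71), (10, 69)}    # ...repeated 8 beats later, up a fifth

patterns = defaultdict(set)
for p in points:
    for q in points:
        if p != q:
            v = (q[0] - p[0], q[1] - p[1])   # translation taking p to q
            patterns[v].add(p)

best = max(patterns.items(), key=lambda kv: len(kv[1]))
print(best)   # vector (8, 7) maps the whole motif onto its repetition
```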

  10. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.
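The RPCA step at the heart of the framework, splitting a matrix into a low-rank part plus a sparse part, can be sketched with the textbook principal component pursuit solver (ADMM with singular-value and soft thresholding). This generic solver is a stand-in, not the paper's implementation, and the fusion stages are omitted.

```python
import numpy as np

def svt(X, tau):                  # singular-value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):               # elementwise soft thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, n_iter=200, rho=1.0):
    """Principal component pursuit: min ||L||_* + lam*||S||_1 s.t. L + S = M."""
    lam = 1.0 / np.sqrt(max(M.shape))
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / rho, 1.0 / rho)
        S = shrink(M - L + Y / rho, lam / rho)
        Y += rho * (M - L - S)    # dual update on the constraint residual
    return L, S

rng = np.random.default_rng(0)
low = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 80))   # rank 4
sparse = np.zeros((60, 80))
mask = rng.random((60, 80)) < 0.05
sparse[mask] = 10 * rng.standard_normal(mask.sum())
L, S = rpca(low + sparse)
print(np.linalg.matrix_rank(L), np.count_nonzero(np.abs(S) > 1e-6))
```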

  11. Compressive Online Robust Principal Component Analysis with Multiple Prior Information

    DEFF Research Database (Denmark)

    Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen

We consider an online version of robust principal component analysis (RPCA), which separates a sequence of data vectors into sparse and low-rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previously reconstructed frames, to improve the separation and thereafter updates the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show...

  12. Plans for longitudinal and transverse neutralized beam compression experiments, and initial results from solenoid transport experiments

    International Nuclear Information System (INIS)

    Seidl, P.A.; Armijo, J.; Baca, D.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Friedman, A.; Gilson, E.P.; Grote, D.; Haber, I.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Molvik, A.W.; Rose, D.V.; Roy, P.K.; Sefkow, A.B.; Sharp, W.M.; Vay, J.L.; Waldron, W.L.; Welch, D.R.; Yu, S.S.

    2007-01-01

This paper presents plans for neutralized drift compression experiments, precursors to future target heating experiments. The target-physics objective is to study warm dense matter (WDM) using short-duration (∼1 ns) ion beams that enter the targets at energies just above that at which dE/dx is maximal. High intensity on target is to be achieved by a combination of longitudinal compression and transverse focusing. This work will build upon recent success in longitudinal compression, where the ion beam was compressed lengthwise by a factor of more than 50 by first applying a linear head-to-tail velocity tilt to the beam, and then allowing the beam to drift through a dense, neutralizing background plasma. Studies on a novel pulse line ion accelerator were also carried out. It is planned to demonstrate simultaneous transverse focusing and longitudinal compression in a series of future experiments, thereby achieving conditions suitable for future WDM target experiments. Future experiments may use solenoids for transverse focusing of un-neutralized ion beams during acceleration. Recent results are reported in the transport of a high-perveance heavy ion beam in a solenoid transport channel. The principal objectives of this solenoid transport experiment are to match and transport a space-charge-dominated ion beam, and to study associated electron-cloud and gas effects that may limit the beam quality in a solenoid transport system. Ideally, the beam will establish a Brillouin-flow condition (rotation at one-half the cyclotron frequency). Other mechanisms that potentially degrade beam quality are being studied, such as focusing-field aberrations, beam halo, and separation of lattice focusing elements.
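The drift-compression mechanism is simple to simulate ballistically: impose a linear head-to-tail velocity tilt, let the bunch drift, and the bunch length collapses until the uncorrelated velocity spread halts the compression, then regrows (over-compression). All numbers below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
z0 = rng.uniform(-0.5, 0.5, n)                  # 1 m long bunch (m)
v0 = 1.0e6                                      # mean longitudinal velocity (m/s)
tilt = -0.1 * v0 * z0                           # head slower, tail faster
v = v0 + tilt + 1.0e3 * rng.standard_normal(n)  # plus uncorrelated spread

for t in np.linspace(0.0, 2.0e-5, 11):
    z = z0 + v * t                              # ballistic drift
    print(f"t = {t:7.1e} s   rms bunch length = {z.std():.4f} m")
```

The bunch is shortest near t = 1e-5 s, where the tilt has fully cancelled the initial spread and only the thermal term remains; beyond that point the length grows again, mirroring the full-compression and over-compression regimes described above.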

  13. Principal bundles on the projective line

    Indian Academy of Sciences (India)


Let X be a complete nonsingular curve over the algebraic closure k̄ of k and G a reductive group over k̄. Let E → X be a principal G-bundle on X. E is said to be semistable if, for every reduction of structure group E_P ⊂ E to a maximal parabolic subgroup P of G, we have degree E_P(𝔭) ≤ 0, where 𝔭 is the Lie algebra of P and E_P ...

  14. Optimal interface between principal deterrent systems and material accounting

    International Nuclear Information System (INIS)

    Deiermann, P.J.; Opelka, J.H.

    1983-01-01

The purpose of this study is to find an optimal blend between three safeguards systems for special nuclear material (SNM): the material accounting system and the physical security and material control systems. The latter two are denoted as principal deterrent systems. The optimization methodology employed is a two-stage decision algorithm: first, an explicit maximization of expected diverter benefits, and subsequently a minimization of expected defender costs over changes in material accounting procedures and incremental improvements in the principal deterrent systems. A probability-of-diverter-success function, dependent upon the principal deterrent and material accounting system variables, is developed. Within the range of certainty of the model, existing material accounting, material control and physical security practices are justified.

  15. Numerical approach to solar ejector-compression refrigeration system

    Directory of Open Access Journals (Sweden)

    Zheng Hui-Fan

    2016-01-01

A model was established for a solar ejector-compression refrigeration system. The influence of generator temperature, middle temperature, and evaporator temperature on the performance of the refrigeration system was analyzed. An optimal generator temperature is found for maximal energy efficiency ratio and minimal power consumption.

  16. Energy Efficient Precoding C-RAN Downlink with Compression at Fronthaul

    OpenAIRE

    Nguyen, Kien-Giang; Vu, Quang-Doanh; Juntti, Markku; Tran, Le-Nam

    2017-01-01

This paper considers downlink transmission in a cloud radio access network (C-RAN) in which precoded baseband signals at a common baseband unit are compressed before being forwarded to radio units (RUs) through limited fronthaul capacity links. We investigate the joint design of precoding, multivariate compression and RU-user selection which maximizes the energy efficiency of downlink C-RAN networks. The considered problem is inherently a rank-constrained mixed Boolean nonconvex program for w...

  17. Effect of lower limb compression on blood flow and performance in elite wheelchair rugby athletes.

    Science.gov (United States)

    Vaile, Joanna; Stefanovic, Brad; Askew, Christopher D

    2016-01-01

To investigate the effects of compression socks worn during exercise on performance and physiological responses in elite wheelchair rugby athletes. In a non-blinded randomized crossover design, participants completed two exercise trials (4 × 8 min bouts of submaximal exercise, each finishing with a timed maximal sprint) separated by 24 hr, with or without compression socks. National Sports Training Centre, Queensland, Australia. Ten national representative male wheelchair rugby athletes with cervical spinal cord injuries volunteered to participate. Participants wore medical grade compression socks on both legs during the exercise task (COMP), and during the control trial no compression was worn (CON). The efficacy of the compression socks was determined by assessments of limb blood flow, core body temperature, heart rate, and ratings of perceived exertion, perceived thermal strain, and physical performance. While no significant differences between conditions were observed for maximal sprint time, average lap time was better maintained in COMP compared to CON. Any performance benefit may be associated with an augmentation of upper limb blood flow.

  18. Mechanics of the Compression Wood Response: II. On the Location, Action, and Distribution of Compression Wood Formation.

    Science.gov (United States)

    Archer, R R; Wilson, B F

    1973-04-01

    A new method for simulation of cross-sectional growth provided detailed information on the location of normal wood and compression wood increments in two tilted white pine (Pinus strobus L.) leaders. These data were combined with data on stiffness, slope, and curvature changes over a 16-week period to make the mechanical analysis. The location of compression wood changed from the under side to a flank side and then to the upper side of the leader as the geotropic stimulus decreased, owing to compression wood action. Its location shifted back to a flank side when the direction of movement of the leader reversed. A model for this action, based on elongation strains, was developed and predicted the observed curvature changes with elongation strains of 0.3 to 0.5%, or a maximal compressive stress of 60 to 300 kilograms per square centimeter. After tilting, new wood formation was distributed so as to maintain consistent strain levels along the leaders in bending under gravitational loads. The computed effective elastic moduli were about the same for the two leaders throughout the season.

19. Generalized principal resonance in oscillatory systems of second order

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Aguirre, E. [Universidad Autonoma de Puebla, Oaxaca (Mexico); Alexandrov, V. V. [Benemerita Universidad Autonoma de Puebla, Puebla (Mexico)

    2001-02-01

This paper describes the generalized principal resonance for systems described by second-order ordinary differential equations and shows, with the help of the Pontryagin maximum principle, that it coincides with the extended solution of an extremal problem for the same system. The results are verified in the special cases of general resonance and parametric resonance for the Mathieu equation.
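For reference, the Mathieu equation mentioned is, in its standard form (a standard fact, not taken from the paper):

```latex
\ddot{x} + \left(\delta + \varepsilon \cos t\right) x = 0
```

Parametric resonance tongues emanate from δ = n²/4; the principal resonance is the n = 1 tongue near δ = 1/4, i.e., where the natural frequency is half the driving frequency.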

  20. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

Scott, W.G.

    2000-01-01

It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects.

  1. The FRX-C/LSM compression experiment

    International Nuclear Information System (INIS)

    Rej, D.J.; Siemon, R.E.; Taggart, D.P.

    1989-01-01

After two years of preparation, hardware for high-power FRC compression heating studies is now being installed onto FRX-C/LSM. FRCs will be formed and translated out of the θ-pinch source, and into a compressor where the external B-field will be increased from 0.4 to 2 T in 55 μs. The compressed FRC can then be translated into a third stage for further study. A principal experimental goal is to study FRC confinement at the high energy density, n(Tₑ + Tᵢ) ≤ 1.0 × 10²² keV/m³, associated with the large external field. Experiments are scheduled to begin in April. 11 refs., 5 figs.

  2. A Streaming PCA VLSI Chip for Neural Data Compression.

    Science.gov (United States)

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of as low as 1% reconstruction errors and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel count recorder.

  3. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…

  4. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.

  5. The Principal's Role in Leading Instructional Change: A Case Study in New Program Adoption

    Science.gov (United States)

    Breon, Amy

    2016-01-01

The noise in generating an agreed-upon definition of instructional leadership that extends beyond theory to the practice of principals has been almost deafening in the last few decades. Many emphasize the need for the role of the principal to adapt to meet the demands of leadership that maximizes student achievement, but lack the specificity to…

  6. Mechanical behavior of silicon carbide nanoparticles under uniaxial compression

    Energy Technology Data Exchange (ETDEWEB)

    He, Qiuxiang; Fei, Jing; Tang, Chao; Zhong, Jianxin; Meng, Lijun, E-mail: ljmeng@xtu.edu.cn [Xiangtan University, Hunan Key Laboratory for Micro-Nano Energy Materials and Devices, Faculty of School of Physics and Optoelectronics (China)

    2016-03-15

    The mechanical behavior of SiC nanoparticles under uniaxial compression was investigated using an atomic-level compression simulation technique. The results revealed that the mechanical deformation of SiC nanocrystals is highly dependent on compression orientation, particle size, and temperature. A structural transformation from the original zinc-blende to a rock-salt phase is identified for SiC nanoparticles compressed along the [001] direction at low temperature. However, the rock-salt phase is not observed for SiC nanoparticles compressed along the [110] and [111] directions irrespective of size and temperature. The high-pressure-generated rock-salt phase strongly affects the mechanical behavior of the nanoparticles, including their hardness and deformation process. The hardness of [001]-compressed nanoparticles decreases monotonically as their size increases, different from that of [110] and [111]-compressed nanoparticles, which reaches a maximal value at a critical size and then decreases. Additionally, a temperature-dependent mechanical response was observed for all simulated SiC nanoparticles regardless of compression orientation and size. Interestingly, the hardness of SiC nanocrystals with a diameter of 8 nm compressed in [001]-orientation undergoes a steep decrease at 0.1–200 K and then a gradual decline from 250 to 1500 K. This trend can be attributed to different deformation mechanisms related to phase transformation and dislocations. Our results will be useful for practical applications of SiC nanoparticles under high pressure.

  7. Compression of magnetohydrodynamic simulation data using singular value decomposition

    International Nuclear Information System (INIS)

    Castillo Negrete, D. del; Hirshman, S.P.; Spong, D.A.; D'Azevedo, E.F.

    2007-01-01

Numerical calculations of magnetic and flow fields in magnetohydrodynamic (MHD) simulations can result in extensive data sets. Particle-based calculations in these MHD fields, needed to provide closure relations for the MHD equations, will require communication of this data to multiple processors and rapid interpolation at numerous particle orbit positions. To facilitate this analysis it is advantageous to compress the data using singular value decomposition (SVD, or principal orthogonal decomposition, POD) methods. As an example of the compression technique, SVD is applied to magnetic field data arising from a dynamic nonlinear MHD code. The performance of the SVD compression algorithm is analyzed by calculating Poincaré plots for electron orbits in a three-dimensional magnetic field and comparing the results with uncompressed data.
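The compression itself reduces to truncating an SVD: keep the leading singular triplets of the space-time field matrix and reconstruct on demand. A minimal sketch with synthetic field data follows; sizes and the retained rank are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 512)
t = np.linspace(0.0, 1.0, 200)
# Stand-in for a simulated field history: a few coherent modes plus noise
field = (np.outer(np.sin(x), np.sin(2.0 * np.pi * t))
         + 0.5 * np.outer(np.sin(3.0 * x), np.cos(4.0 * np.pi * t))
         + 0.01 * rng.standard_normal((512, 200)))

U, s, Vt = np.linalg.svd(field, full_matrices=False)
k = 8                                         # retained singular triplets
approx = (U[:, :k] * s[:k]) @ Vt[:k]

stored = k * (U.shape[0] + Vt.shape[1] + 1)   # numbers kept after truncation
print("compression factor:", field.size / stored)     # ~18x here
print("relative error:", np.linalg.norm(field - approx) / np.linalg.norm(field))
```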

  8. Compression map, functional groups and fossilization: A chemometric approach (Pennsylvanian neuropteroid foliage, Canada)

    Science.gov (United States)

    D'Angelo, J. A.; Zodrow, E.L.; Mastalerz, Maria

    2012-01-01

Nearly all of the spectrochemical studies involving Carboniferous foliage of seed-ferns are based on a limited number of pinnules, mainly compressions. In contrast, in this paper we illustrate working with a larger pinnate segment, i.e., a 22-cm long neuropteroid specimen, compression-preserved with cuticle: the compression map. The objective is to study preservation variability on a larger scale, where observation of transparency/opacity of constituent pinnules is used as a first approximation for assessing the degree of pinnule coalification/fossilization. Spectrochemical methods by Fourier transform infrared spectrometry furnish semi-quantitative data for principal component analysis. The compression map shows a high degree of preservation variability, which ranges from comparatively more coalified pinnules to less coalified pinnules that resemble fossilized-cuticles, noting that the pinnule midveins are preserved more like fossilized-cuticles. A general overall trend from coalified pinnules towards fossilized-cuticles, i.e., variable chemistry, is inferred from the semi-quantitative FTIR data, as higher contents of aromatic compounds occur in the visually more opaque upper location of the compression map. The latter also shows a higher condensation of the aromatic nuclei along with some variation in both ring size and degree of aromatic substitution. From principal component analysis we infer a correspondence between the transparency/opacity observations and the chemical information, which correlates to varying degrees with the fossilization/coalification of the pinnules. © 2011 Elsevier B.V.

  9. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
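The record does not reproduce DNABIT's actual bit-code tables for exact and reverse repeats, so the sketch below shows only the generic baseline idea of assigning fixed binary codes to bases: packing each base into 2 bits, the theoretical floor for a 4-letter alphabet when repeats are not exploited. The CODE table and pack helper are hypothetical names, not the paper's API.

```python
# Hypothetical 2-bit packing: the paper's unique bit codes for exact and
# reverse repeats are not reproduced in the abstract, so this shows only
# the generic baseline of a fixed binary code per base.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq: str) -> bytes:
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]          # append 2 bits per base
    n_bytes = (2 * len(seq) + 7) // 8
    return bits.to_bytes(n_bytes, "big")

seq = "ACGTACGGTTAC"
print(len(seq), "bases ->", len(pack(seq)), "bytes")   # 12 bases -> 3 bytes
```

Decoding requires storing the sequence length separately, since the final byte may be padded; repeat-aware codes like DNABIT's reach below 2 bits/base by replacing repeated fragments with short references.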

  10. Crystallographic cut that maximizes of the birefringence in photorefractive crystals

    OpenAIRE

    Rueda-Parada, Jorge Enrique

    2017-01-01

The electro-optical birefringence effect depends on the crystal type, the crystal cut, the applied electric field, and the incidence direction of light on the principal crystal faces. A study of maximizing the birefringence in photorefractive crystals of cubic crystallographic symmetry, in terms of the latter three parameters, is presented. General analytical expressions for the birefringence were obtained, from which the birefringence can be established for any type of cut. A new crystallographic cut was en...

  11. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status for |ε| < 0.3. The crucial information on the ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval |ε| < 0.3 is allowed at the 4σ level. We discuss the effects of maximal and near-maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay

  12. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to rapid technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference and drew up recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT, etc.) and targets (abdomen, etc.), and a combination of recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (excluding cranial scans, 1:5). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  13. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, R.; Kaden, Ingmar [Klinik fuer Bildgebende Diagnostik und Interventionsradiologie, BG-Kliniken Bergmannstrost Halle (Germany); Schwarzer, J.; Sprengel, C. [Dept. of Management Information System and Operations Research, Martin-Luther-Univ. Halle Wittenberg (Germany); Klose, K. [Medizinisches Zentrum fuer Radiologie, Philips-Univ. Marburg (Germany)

    2009-07-15

Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to rapid technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference and drew up recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT, etc.) and targets (abdomen, etc.), and a combination of recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (excluding cranial scans, 1:5). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  14. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    Science.gov (United States)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of A-GCAS during flight as well as maximizing its contribution to fighter safety.

  15. Thermal reservoir sizing for adiabatic compressed air energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Kere, Amelie; Goetz, Vincent; Py, Xavier; Olives, Regis; Sadiki, Najim [Perpignan Univ. (France). PROMES CNRS UPR 8521; Mercier-Allart, Eric [EDF R et D, Chatou (France)

    2012-07-01

Despite the operation of two industrial facilities, McIntosh (Alabama) and, for more than thirty years, Huntorf (Germany), electricity storage in the form of compressed air in underground caverns (CAES) has not seen the development that was expected in the 1980s. The efficiency of this form of storage was, with first-generation CAES, less than 50%. The evolving technical context could significantly alter this situation. The new generation, so-called Adiabatic CAES (A-CAES), retrieves the heat produced by compression via thermal storage, thus eliminating the need to burn gas and allowing an overall energy efficiency of the order of 70%. To date, there is no existing installation of A-CAES. Many studies describe the principle and general working mode of storage systems based on adiabatic compression of air, and the efficiencies of different configurations of the adiabatic compression process have been analyzed. The aim of this paper is to simulate and analyze the performance of a thermal storage reservoir integrated in the system and adapted to the working conditions of a CAES.

  16. Femoral Neck Strain during Maximal Contraction of Isolated Hip-Spanning Muscle Groups

    Directory of Open Access Journals (Sweden)

    Saulo Martelli

    2017-01-01

Full Text Available The aim of the study was to investigate femoral neck strain during maximal isometric contraction of the hip-spanning muscles. The musculoskeletal and femur finite-element models of an elderly white woman were taken from earlier studies. The hip-spanning muscles were grouped by function into six hip-spanning muscle groups. The peak hip and knee moments in the model were matched to corresponding published measurements of the hip and knee moments during maximal isometric exercises about the hip and the knee in elderly participants. The femoral neck strain was calculated using full activation of the agonist muscles at fourteen physiological joint angles. On average, 5% ± 0.8% of the femoral neck volume exceeded the 90th percentile of the strain distribution across the 84 studied scenarios. Hip extensors, flexors, and abductors generated the highest tension in the proximal neck (2727 με), tension (986 με) and compression (−2818 με) in the anterior and posterior neck, and compression (−2069 με) in the distal neck, respectively. Hip extensors and flexors generated the highest neck strain per unit of joint moment (63-67 με·m·N⁻¹) at extreme hip angles. Therefore, femoral neck strain is heterogeneous and depends on muscle contraction and posture.

  17. Shock compression experiments on Lithium Deuteride single crystals.

    Energy Technology Data Exchange (ETDEWEB)

    Knudson, Marcus D.; Desjarlais, Michael Paul; Lemke, Raymond W.

    2014-10-01

Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride (LiD) single crystals. This study utilized the high velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ~200-600 GPa along the Principal Hugoniot - the locus of end states achievable through compression by large amplitude shock waves - as well as pressure and density of re-shock states up to ~900 GPa. The experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Labs.

  18. Compression of realistic laser pulses in hollow-core photonic bandgap fibers

    DEFF Research Database (Denmark)

    Lægsgaard, Jesper; Roberts, John

    2009-01-01

    Dispersive compression of chirped few-picosecond pulses at the microjoule level in a hollow-core photonic bandgap fiber is studied numerically. The performance of ideal parabolic input pulses is compared to pulses from a narrowband picosecond oscillator broadened by self-phase modulation during...... amplification. It is shown that the parabolic pulses are superior for compression of high-quality femtosecond pulses up to the few-megawatts level. With peak powers of 5-10 MW or higher, there is no significant difference in power scaling and pulse quality between the two pulse types for comparable values...... of power, duration, and bandwidth. The same conclusion is found for the peak power and energy of solitons formed beyond the point of maximal compression. Long-pass filtering of these solitons is shown to be a promising route to clean solitonlike output pulses with peak powers of several MW....

  19. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  20. Does team lifting increase the variability in peak lumbar compression in ironworkers?

    Science.gov (United States)

    Faber, Gert; Visser, Steven; van der Molen, Henk F; Kuijer, P Paul F M; Hoozemans, Marco J M; Van Dieën, Jaap H; Frings-Dresen, Monique H W

    2012-01-01

    Ironworkers frequently perform heavy lifting tasks in teams of two or four workers. Team lifting could potentially lead to a higher variation in peak lumbar compression forces than lifts performed by one worker, resulting in higher maximal peak lumbar compression forces. This study compared single-worker lifts (25-kg, iron bar) to two-worker lifts (50-kg, two iron bars) and to four-worker lifts (100-kg, iron lattice). Inverse dynamics was used to calculate peak lumbar compression forces. To assess the variability in peak lumbar loading, all three lifting tasks were performed six times. Results showed that the variability in peak lumbar loading was somewhat higher in the team lifts compared to the single-worker lifts. However, despite this increased variability, team lifts did not result in larger maximum peak lumbar compression forces. Therefore, it was concluded that, from a biomechanical point of view, team lifting does not result in an additional risk for low back complaints in ironworkers.

  1. Teaching Principal Components Using Correlations.

    Science.gov (United States)

    Westfall, Peter H; Arias, Andrea L; Fulton, Lawrence V

    2017-01-01

    Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
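A small numerical sketch of the article's central link, assuming nothing beyond standard results: for correlation-based PCs, the average R-squared obtained by regressing each standardized variable on the first PC equals that PC's eigenvalue divided by the number of variables. The toy data below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))  # correlated toy data

R = np.corrcoef(X, rowvar=False)            # correlation-based PCs
eigvals, eigvecs = np.linalg.eigh(R)        # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# for correlation PCs, eigenvalue / number of variables is the average
# proportion of variance explained in the standardized variables
print("lambda_1 / p          :", eigvals[0] / R.shape[0])

# the same number, obtained as the mean R^2 of regressing each
# standardized variable on the first PC
Z = (X - X.mean(axis=0)) / X.std(axis=0)
pc1 = Z @ eigvecs[:, 0]
r2 = np.corrcoef(np.column_stack([pc1, Z]), rowvar=False)[0, 1:] ** 2
print("mean per-variable R^2 :", r2.mean())
```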

  2. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil

  3. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....
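The abstract is truncated, but a core primitive in this family of point-set algorithms (e.g., the SIA/SIATEC lineage) is the maximal translatable pattern: for a translation vector v, the set of all points p such that p + v is also in the point set. A minimal sketch under that assumption:

```python
from collections import defaultdict

def maximal_translatable_patterns(points):
    """For every translation vector v between two dataset points, collect the
    maximal translatable pattern MTP(v) = {p in D : p + v in D}."""
    mtps = defaultdict(set)
    for p in points:
        for q in points:
            if p != q:
                v = (q[0] - p[0], q[1] - p[1])
                mtps[v].add(p)   # p is translatable by v (it maps onto q)
    return mtps

# toy score: (onset time, chromatic pitch)
notes = [(0, 60), (1, 62), (2, 64), (4, 67), (5, 69), (6, 71)]
mtps = maximal_translatable_patterns(notes)
v, pattern = max(mtps.items(), key=lambda kv: len(kv[1]))
print("vector", v, "translates the pattern", sorted(pattern))
```

In compression-oriented variants, the point set is then encoded as a small set of patterns plus the vectors that translate them, and the degree of compression achieved is itself used as the analytical signal.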

  4. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  5. Lagrangian investigations of vorticity dynamics in compressible turbulence

    Science.gov (United States)

    Parashar, Nishant; Sinha, Sawan Suman; Danish, Mohammad; Srinivasan, Balaji

    2017-10-01

In this work, we investigate the influence of compressibility on vorticity-strain rate dynamics. Well-resolved direct numerical simulations of compressible homogeneous isotropic turbulence performed over a cubical domain of 1024³ are employed for this study. To clearly identify the influence of compressibility on the time-dependent dynamics (rather than on the one-time flow field), we employ a well-validated Lagrangian particle tracker. The tracker is used to obtain time correlations between the instantaneous vorticity vector and the strain-rate eigenvector system of an appropriately chosen reference time. In this work, compressibility is parameterized in terms of both global (turbulent Mach number) and local parameters (normalized dilatation-rate and flow field topology). Our investigations reveal that the local dilatation rate significantly influences these statistics. In turn, this observed influence of the dilatation rate is predominantly associated with rotation dominated topologies (unstable-focus-compressing, stable-focus-stretching). We find that an enhanced dilatation rate (in both contracting and expanding fluid elements) significantly enhances the tendency of the vorticity vector to align with the largest eigenvector of the strain-rate. Further, in fluid particles where the vorticity vector is maximally misaligned (perpendicular) at the reference time, vorticity does show a substantial tendency to align with the intermediate eigenvector as well. The authors make an attempt to provide physical explanations of these observations (in terms of moment of inertia and angular momentum) by performing detailed calculations following tetrads {approach of Chertkov et al. ["Lagrangian tetrad dynamics and the phenomenology of turbulence," Phys. Fluids 11(8), 2394-2410 (1999)] and Xu et al. ["The pirouette effect in turbulent flows," Nat. Phys. 7(9), 709-712 (2011)]} in a compressible flow field.
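As a self-contained sketch of the basic alignment diagnostic (not the paper's Lagrangian tracker), the function below takes one sampled velocity-gradient tensor, splits it into strain-rate and rotation parts, and returns the cosines between the vorticity vector and the strain-rate eigenvectors; the random tensor is an illustrative stand-in for simulation data.

```python
import numpy as np

def vorticity_strain_alignment(A):
    """Alignment cosines between vorticity and strain-rate eigenvectors for a
    3x3 velocity-gradient tensor A[i, j] = du_i/dx_j."""
    S = 0.5 * (A + A.T)                       # strain-rate tensor (symmetric part)
    omega = np.array([A[2, 1] - A[1, 2],      # vorticity = curl of the velocity
                      A[0, 2] - A[2, 0],
                      A[1, 0] - A[0, 1]])
    w, V = np.linalg.eigh(S)                  # eigenvalues ascending
    cosines = np.abs(V.T @ omega) / np.linalg.norm(omega)
    return w, cosines                         # cosines[2] -> largest eigenvector

A = np.random.default_rng(2).standard_normal((3, 3))
eigvals, cosines = vorticity_strain_alignment(A)
print("strain eigenvalues:", eigvals)
print("|cos| with (smallest, intermediate, largest) eigvec:", cosines)
```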

  6. Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels

    KAUST Repository

    Shaqfeh, Mohammad

    2016-05-25

    In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration.

  7. Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels

    KAUST Repository

    Shaqfeh, Mohammad; Zafar, Ammar; Alnuweiri, Hussein; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration.
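The paper's closed-form, buffer-aided optimization is not reproduced in the abstract, so the sketch below only illustrates the per-block idea of choosing among the three strategies: it evaluates textbook achievable-rate expressions for direct transmission, full-duplex decode-and-forward (without the coherent beamforming gain term), and the classic Cover/El Gamal compress-and-forward rate for the Gaussian relay channel, then picks the largest. The SNR variable names g_sr, g_sd, and g_rd are assumptions.

```python
import numpy as np

def rate_dt(g_sd):
    """Direct transmission."""
    return np.log2(1 + g_sd)

def rate_df(g_sr, g_sd, g_rd):
    """Full-duplex decode-and-forward, no coherent beamforming gain."""
    return min(np.log2(1 + g_sr), np.log2(1 + g_sd + g_rd))

def rate_cf(g_sr, g_sd, g_rd):
    """Classic Cover/El Gamal compress-and-forward rate (Gaussian, full duplex)."""
    return np.log2(1 + g_sd + g_sr * g_rd / (1 + g_sd + g_sr + g_rd))

def pick_strategy(g_sr, g_sd, g_rd):
    rates = {"DT": rate_dt(g_sd),
             "DF": rate_df(g_sr, g_sd, g_rd),
             "CF": rate_cf(g_sr, g_sd, g_rd)}
    return max(rates, key=rates.get), rates

# strong source-relay link favors DF; a weak one favors CF or DT
print(pick_strategy(g_sr=10.0, g_sd=1.0, g_rd=5.0))
print(pick_strategy(g_sr=0.5, g_sd=1.0, g_rd=5.0))
```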

  8. Pressurizer safety valve serviceability enhancement by spring compression stability

    Energy Technology Data Exchange (ETDEWEB)

    Ratiu, M.D.; Moisidis, N.T. [California Consulting Engineering and Technology (CALCET), San Leandro, California (United States)

    2007-07-01

The proactive maintenance of the spring-loaded, self-actuated Pressurizer Safety Valve (PSV) has raised frequent concerns pertaining to the reliability of the spring self-actuation due to set-point drift, spurious openings, and seat leakage. Exhaustive testing performed on a Crosby PSV model 6M6 revealed that the principal cause of these malfunctions is elastic instability of the spring compression during service. Measurements of the spring's lateral deformations validated the analytical shapes for spring compression: symmetrical bending for coaxial supported ends restraining any support displacement, and asymmetrical bending induced by potential misalignment of the supported top end. The spring compression instability on the tested Crosby PSV appears to be induced by lateral displacement of the top end during long-term operation. Testing with restrained displacement at the spring top showed consistent set-point reproducibility, better than +/- 1 per cent. To eliminate the asymmetrical spring buckling, a design review of the PSV is proposed, including a guided fixture at the top and a decrease of the spring coil slenderness ratio H/D, consistent with the general analytical elastic stability for asymmetrical compression. (authors)

  9. Pressurizer safety valve serviceability enhancement by spring compression stability

    International Nuclear Information System (INIS)

    Ratiu, M.D.; Moisidis, N.T.

    2007-01-01

The proactive maintenance of the spring-loaded, self-actuated Pressurizer Safety Valve (PSV) has raised frequent concerns pertaining to the reliability of the spring self-actuation due to set-point drift, spurious openings, and seat leakage. Exhaustive testing performed on a Crosby PSV model 6M6 revealed that the principal cause of these malfunctions is elastic instability of the spring compression during service. Measurements of the spring's lateral deformations validated the analytical shapes for spring compression: symmetrical bending for coaxial supported ends restraining any support displacement, and asymmetrical bending induced by potential misalignment of the supported top end. The spring compression instability on the tested Crosby PSV appears to be induced by lateral displacement of the top end during long-term operation. Testing with restrained displacement at the spring top showed consistent set-point reproducibility, better than +/- 1 per cent. To eliminate the asymmetrical spring buckling, a design review of the PSV is proposed, including a guided fixture at the top and a decrease of the spring coil slenderness ratio H/D, consistent with the general analytical elastic stability for asymmetrical compression. (authors)

  10. Operability test procedure for 241-U compressed air system and heat pump

    International Nuclear Information System (INIS)

    Freeman, R.D.

    1994-01-01

    The 241-U-701 compressed air system supplies instrument quality compressed air to Tank Farm 241-U. The supply piping to the 241-U Tank Farm is not included in the modification. Modifications to the 241-U-701 compressed air system include installation of a 15 HP Reciprocating Air Compressor, Ingersoll-Rand Model 10T3NLM-E15; an air dryer, Hankinson, Model DH-45; and miscellaneous system equipment and piping (valves, filters, etc.) to meet the design. A newly installed heat pump allows the compressor to operate within an enclosed relatively dust free atmosphere and keeps the compressor room within a standard acceptable temperature range, which makes possible efficient compressor operation, reduces maintenance, and maximizes compressor operating life. This document is an Operability Test Procedure (OTP) which will further verify (in addition to the Acceptance Test Procedure) that the 241-U-701 compressed air system and heat pump operate within their intended design parameters. The activities defined in this OTP will be performed to ensure the performance of the new compressed air system will be adequate, reliable and efficient. Completion of this OTP and sign off of the OTP Acceptance of Test Results is necessary for turnover of the compressed air system from Engineering to Operations

  11. Real-time dynamic MR image reconstruction using compressed sensing and principal component analysis (CS-PCA): Demonstration in lung tumor tracking.

    Science.gov (United States)

    Dietz, Bryson; Yip, Eugene; Yun, Jihyun; Fallone, B Gino; Wachowicz, Keith

    2017-08-01

This work presents a real-time dynamic image reconstruction technique, which combines compressed sensing and principal component analysis (CS-PCA), to achieve real-time adaptive radiotherapy with the use of a linac-magnetic resonance imaging system. Six retrospective fully sampled dynamic data sets of patients diagnosed with non-small-cell lung cancer were used to investigate the CS-PCA algorithm. Using a database of fully sampled k-space, principal components (PCs) were calculated to aid in the reconstruction of undersampled images. Missing k-space data were calculated by projecting the current undersampled k-space data onto the PCs to generate the corresponding PC weights. The weighted PCs were summed together, and the missing k-space was iteratively updated. To gain insight into how the reconstruction might proceed at lower fields, 6× noise was added to the 3T data to investigate how the algorithm handles noisy data. Acceleration factors ranging from 2 to 10× were investigated using CS-PCA and Split Bregman CS for comparison. Metrics to determine the reconstruction quality included the normalized mean square error (NMSE), as well as the dice coefficients (DC) and centroid displacement of the tumor segmentations. Our results demonstrate that CS-PCA performed better than CS alone. The CS-PCA patient-averaged DC for 3T and 6× noise-added data remained above 0.9 for acceleration factors up to 10×. The patient-averaged NMSE gradually increased with increasing acceleration; however, it remained below 0.06 up to an acceleration factor of 10× for both 3T and 6× noise-added data. The CS-PCA reconstruction speed ranged from 5 to 20 ms (Intel i7-4710HQ CPU @ 2.5 GHz), depending on the chosen parameters. A real-time reconstruction technique was developed for adaptive radiotherapy using a Linac-MRI system. Our CS-PCA algorithm can achieve tumor contours with DC greater than 0.9 and NMSE less than 0.06 at acceleration factors of up to, and including, 10×. The
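A schematic of the projection-and-update loop described above, assuming real-valued data, a fixed 1D sampling mask, and PCs already learned from fully sampled training k-space; the function and variable names are hypothetical, and the clinical pipeline (training dynamics, complex k-space, 2D sampling patterns) is omitted.

```python
import numpy as np

def cs_pca_fill(k_under, mask, pcs, n_iter=50):
    """Fill missing k-space samples by repeatedly projecting the current
    estimate onto principal components learned from fully sampled training
    data, then restoring the acquired samples (data consistency)."""
    k = k_under.astype(float).copy()
    for _ in range(n_iter):
        w, *_ = np.linalg.lstsq(pcs, k, rcond=None)  # PC weights of current estimate
        k[~mask] = (pcs @ w)[~mask]                  # update only the missing samples
    return k

# toy demonstration: the true signal lies in the span of 8 "learned" PCs
rng = np.random.default_rng(0)
pcs, _ = np.linalg.qr(rng.standard_normal((256, 8)))
truth = pcs @ rng.standard_normal(8)
mask = rng.random(256) < 0.25                        # roughly 4x undersampling
filled = cs_pca_fill(np.where(mask, truth, 0.0), mask, pcs)
print("NMSE:", np.sum((filled - truth) ** 2) / np.sum(truth ** 2))
```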

  12. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

Full Text Available Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.

  13. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    Science.gov (United States)

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
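The parallel architecture itself is not reproduced here, but the sequential core, EM for PCA in the style of Roweis, which avoids forming the covariance matrix or running an explicit eigendecomposition, can be sketched as follows; the initialization and iteration count are illustrative choices.

```python
import numpy as np

def em_pca(Y, n_components, n_iter=100, seed=0):
    """EM for PCA (after Roweis): no covariance matrix, no eigendecomposition."""
    Y = Y - Y.mean(axis=0)                  # center; rows are observations
    W = np.random.default_rng(seed).standard_normal((Y.shape[1], n_components))
    for _ in range(n_iter):
        # E-step: latent coordinates given the current basis
        X = np.linalg.solve(W.T @ W, W.T @ Y.T)
        # M-step: basis given the latent coordinates
        W = Y.T @ X.T @ np.linalg.inv(X @ X.T)
    Q, _ = np.linalg.qr(W)                  # orthonormalize the recovered subspace
    return Q

data = np.random.default_rng(1).standard_normal((1000, 3)) @ \
       np.random.default_rng(2).standard_normal((3, 50))    # rank-3 toy data
print(em_pca(data, n_components=3).shape)                   # (50, 3)
```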

  14. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  15. Biomechanical characteristics of handballing maximally in Australian football.

    Science.gov (United States)

    Parrington, Lucy; Ball, Kevin; MacMahon, Clare

    2014-11-01

The handball pass is influential in Australian football, and achieving higher ball speeds in flight is an advantage in increasing distance and reducing the chance of interceptions. The purpose of this study was to provide descriptive kinematic data and identify key technical aspects of maximal handball performance. Three-dimensional full-body kinematic data from 19 professional Australian football players performing the handball pass for maximal speed were collected, and the hand speed at ball contact was used to determine performance. Sixty-four kinematic parameters initially obtained were reduced to 15, and then grouped into like components through a two-stage supervised principal components analysis procedure. These components were then entered into a multiple regression analysis, which indicated that greater hand speed was associated with greater shoulder angular velocity and a greater separation angle between the shoulders and pelvis at ball contact, as well as an earlier time of maximum upper-trunk rotation velocity. These data suggest that in order to increase the speed of the handball pass in Australian football, strategies such as increased shoulder angular velocity, an increased separation angle at ball contact, and earlier achievement of upper-trunk rotation speed might be beneficial.

  16. Fast algorithm for exploring and compressing of large hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....
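A rough sketch of the downsample-then-project idea under simplifying assumptions: the paper preserves the pixels' structure in feature space when subsampling, whereas the sketch below uses plain random sampling for brevity; the function name, parameters, and toy cube are illustrative.

```python
import numpy as np

def fast_pca_on_sample(cube, n_components=10, n_sample=5000, seed=0):
    """Fit PCA on a pixel subsample, then project the whole hyperspectral
    image; avoids decomposing the full pixel-by-band matrix."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    idx = np.random.default_rng(seed).choice(X.shape[0], n_sample, replace=False)
    sample = X[idx]
    mu = sample.mean(axis=0)
    _, _, Vt = np.linalg.svd(sample - mu, full_matrices=False)
    scores = (X - mu) @ Vt[:n_components].T     # project every pixel
    return scores.reshape(h, w, n_components)

cube = np.random.default_rng(5).standard_normal((128, 128, 64))  # toy image
print(fast_pca_on_sample(cube).shape)                            # (128, 128, 10)
```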

  17. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price...... competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution...... of profits among consumers fully into account and partial equilibrium analysis suffices...

  18. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violating phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)
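In the standard parametrization of the quark mixing matrix, the Jarlskog invariant has a closed form whose phase dependence is a simple sin δ factor, so J is maximal at δ = 90° with the mixing angles held fixed; the angle values below are rough CKM magnitudes used only for illustration.

```python
import numpy as np

def jarlskog(theta12, theta23, theta13, delta):
    """J = s12*c12*s23*c23*s13*c13^2*sin(delta) in the standard parametrization."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    return s12 * c12 * s23 * c23 * s13 * c13**2 * np.sin(delta)

angles = dict(theta12=0.227, theta23=0.042, theta13=0.0037)  # rough CKM values (rad)
for deg in (45, 90, 135):
    print(f"delta = {deg:3d} deg -> J = {jarlskog(delta=np.radians(deg), **angles):.2e}")
```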

  19. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

Full Text Available Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. The evolution of technology in the digital age has led to an unparalleled usage of digital files in the current decade. This growth has resulted in an increase in the amount of data being transmitted via various channels of data communication, which has prompted the need to examine current lossless data compression algorithms and check their level of effectiveness so as to maximally reduce the bandwidth requirement in communication and transfer of data. Four lossless data compression algorithms were selected for implementation: the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the Adaptive Huffman algorithm and Run-Length Encoding. The choice of these algorithms was based on their similarities, particularly in application areas. Their levels of efficiency and effectiveness were evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language. Through the statistical analysis performed using Boxplot and ANOVA and comparison made on the four algo
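A sketch of the evaluation metrics named above, with the caveat that conventions differ between papers (some authors define compression ratio as compressed/original, others as the inverse) and the paper's exact formulas are not reproduced; zlib stands in for the four algorithms.

```python
import math
import zlib
from collections import Counter

def metrics(original: bytes, compressed: bytes):
    o, c = len(original), len(compressed)
    counts = Counter(original)
    # zeroth-order byte entropy of the source, a lower bound for code length
    entropy = -sum(n / o * math.log2(n / o) for n in counts.values())
    return {
        "compression ratio (compressed/original)": c / o,
        "compression factor (original/compressed)": o / c,
        "saving percentage": 100 * (o - c) / o,
        "source entropy (bits/byte)": entropy,
    }

data = b"abracadabra " * 1000
print(metrics(data, zlib.compress(data)))
```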

  20. Design and manufacturing rules for maximizing the performance of polycrystalline piezoelectric bending actuators

    International Nuclear Information System (INIS)

    Jafferis, Noah T; Smith, Michael J; Wood, Robert J

    2015-01-01

    Increasing the energy and power density of piezoelectric actuators is very important for any weight-sensitive application, and is especially crucial for enabling autonomy in micro/milli-scale robots and devices utilizing this technology. This is achieved by maximizing the mechanical flexural strength and electrical dielectric strength through the use of laser-induced melting or polishing, insulating edge coating, and crack-arresting features, combined with features for rigid ground attachments to maximize force output. Manufacturing techniques have also been developed to enable mass customization, in which sheets of material are pre-stacked to form a laminate from which nearly arbitrary planar actuator designs can be fabricated using only laser cutting. These techniques have led to a 70% increase in energy density and an increase in mean lifetime of at least 15× compared to prior manufacturing methods. In addition, measurements have revealed a doubling of the piezoelectric coefficient when operating at the high fields necessary to achieve maximal energy densities, along with an increase in the Young’s modulus at the high compressive strains encountered—these two effects help to explain the higher performance of our actuators as compared to that predicted by linear models. (paper)

  1. Thermophysical properties of multi-shock compressed dense argon.

    Science.gov (United States)

    Chen, Q F; Zheng, J; Gu, Y J; Chen, Y L; Cai, L C; Shen, Z J

    2014-02-21

In contrast to the single shock compression state, which can be obtained directly via experimental measurements, multi-shock compression states have to be calculated with the aid of theoretical models. In order to determine the multiple shock states experimentally, a diagnostic approach combining a Doppler pin system (DPS) and a pyrometer was used to probe multiple shocks in dense argon plasmas. Plasma was generated by a shock reverberation technique. The shock was produced using a flyer plate impact, accelerated up to ~6.1 km/s by a two-stage light gas gun, and introduced into the plenum argon gas sample, which was pre-compressed from the environmental pressure to about 20 MPa. The time-resolved optical radiation histories were determined using a multi-wavelength-channel optical transience radiance pyrometer. Simultaneously, the particle velocity profiles at the LiF window were measured with multiple DPS channels. The states of the multi-shock compressed argon plasma were determined from the measured shock velocities combined with the particle velocity profiles. We performed the experiments on dense argon plasmas to determine the principal Hugoniot up to 21 GPa, the re-shock pressure up to 73 GPa, and the maximum measured pressure of the fourth shock up to 158 GPa. The results are used to validate the existing self-consistent variational theory model in the partial ionization region and to create new theoretical models.

  2. Use of compression garments by women with lymphoedema secondary to breast cancer treatment.

    Science.gov (United States)

    Longhurst, E; Dylke, E S; Kilbreath, S L

    2018-02-19

The aim of this study was to determine the use of compression garments by women with lymphoedema secondary to breast cancer treatment and the factors which underpin their use. An online survey was distributed to the Survey and Review group of the Breast Cancer Network Australia. The survey included questions related to the participants' demographics, breast cancer and lymphoedema medical history, prescription and use of compression garments, and their beliefs about compression and lymphoedema. Data were analysed using principal component analysis and multivariable logistic regression. Compression garments had been prescribed to 83% of 201 women with lymphoedema within the last 5 years, although 37 women had discontinued their use. Even when accounting for severity of swelling, the type of garment(s) and the advice given for use varied across participants. Use of compression garments was driven by women's beliefs that they were vulnerable to progression of their disease and that compression would prevent its worsening. Common reasons given for discontinuing use included discomfort and the perception that their lymphoedema was stable. Participant characteristics associated with discontinuance of compression garments included the belief that (i) the garments were not effective in managing their condition, (ii) they experienced mild-moderate swelling and/or (iii) they had experienced swelling for greater than 5 years. The prescription of compression garments for lymphoedema is highly varied, which may be due to a lack of underpinning evidence to inform treatment.

  3. Quality and loudness judgments for music subjected to compression limiting.

    Science.gov (United States)

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2012-08-01

    Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial recordings has increased over time due to a motivating perspective that louder music is always preferred. In contrast to this viewpoint, artists and consumers have argued that using large amounts of DRC negatively affects the quality of music. However, little research evidence has supported the claims of either position. The present study investigated how DRC affects the perceived loudness and sound quality of recorded music. Rock and classical music samples were peak-normalized and then processed using different amounts of DRC. Normal-hearing listeners rated the processed and unprocessed samples on overall loudness, dynamic range, pleasantness, and preference, using a scaled paired-comparison procedure in two conditions: un-equalized, in which the loudness of the music samples varied, and loudness-equalized, in which loudness differences were minimized. Results indicated that a small amount of compression was preferred in the un-equalized condition, but the highest levels of compression were generally detrimental to quality, whether loudness was equalized or varied. These findings are contrary to the "louder is better" mentality in the music industry and suggest that more conservative use of DRC may be preferred for commercial music.
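A minimal sketch of the static input-output curve behind dynamic-range compression; real mastering limiters add attack/release ballistics, look-ahead, and make-up gain, all omitted here, and the threshold and ratio values are arbitrary.

```python
import numpy as np

def compress_static(x, threshold_db=-12.0, ratio=8.0):
    """Apply a static compression curve: level above the threshold is
    reduced by the factor (1 - 1/ratio); no time constants, no make-up gain."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)   # attenuate only above threshold
    return x * 10 ** (gain_db / 20)

t = np.linspace(0, 1, 44100, endpoint=False)
x = 0.9 * np.sin(2 * np.pi * 440 * t)          # a loud test tone
y = compress_static(x)
print("peak in:", np.abs(x).max(), " peak out:", round(np.abs(y).max(), 3))
```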

  4. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and a greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  5. Optimal Image Data Compression For Whole Slide Images

    Directory of Open Access Journals (Sweden)

    J. Isola

    2016-06-01

Differences in WSI file sizes of scanned images deemed "visually lossless" were significant. If we set the Hamamatsu Nanozoomer .NDPI file size (using its default "jpeg80" quality) as 100%, the size of a "visually lossless" JPEG2000 file was only 15-20% of that. Comparisons to Aperio and 3D-Histech files (.svs and .mrxs at their default settings) yielded similar results. A further optimization of JPEG2000 was achieved by treating empty slide area as a uniform white-grey surface, which could be maximally compressed. Using this algorithm, JPEG2000 file sizes were only half, or even less, of the original JPEG2000, with the variation due to the proportion of empty slide area on the scan. We anticipate that wavelet-based image compression methods, such as JPEG2000, have a significant advantage in reducing the storage costs of scanned whole slide images. In routine pathology laboratories applying WSI technology widely to their histology material, absolute cost savings can be substantial.
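The empty-area trick can be sketched as a tile-flattening pass prior to encoding: near-uniform bright tiles are replaced with one constant value so a wavelet coder compresses them maximally. The tile size and thresholds below are illustrative assumptions, and the JPEG2000 encoding step itself is omitted.

```python
import numpy as np

def flatten_empty_tiles(img, tile=256, std_thresh=3.0, fill=245):
    """Replace near-uniform bright tiles (empty slide area) with a constant
    grey-white so a downstream wavelet coder can compress them maximally."""
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = img[y:y + tile, x:x + tile]
            if block.std() < std_thresh and block.mean() > 200:
                out[y:y + tile, x:x + tile] = fill
    return out

# toy greyscale slide: mostly blank background with one "tissue" region
slide = np.full((1024, 1024), 240, dtype=np.uint8)
slide += np.random.default_rng(3).integers(0, 3, slide.shape, dtype=np.uint8)
slide[300:500, 300:500] = 128
flat = flatten_empty_tiles(slide)
print("unique values in a background tile:", np.unique(flat[:256, :256]))
```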

  6. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...

  7. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  8. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    Science.gov (United States)

    Kim, Christopher Y.

    1999-05-01

Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered a maximal tolerable image size for downloading on the WWW.
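To make the ratio arithmetic concrete, the sketch below (assuming the Pillow library is available) saves a synthetic image at several JPEG quality settings and reports the size reduction relative to uncompressed 24-bit data; it illustrates the measurement only, not the study's clinical methodology.

```python
import io
import numpy as np
from PIL import Image  # Pillow is assumed to be installed

# a synthetic smooth gradient stands in for an endoscopic frame
x = np.linspace(0, 255, 640, dtype=np.uint8)
img = Image.fromarray(np.tile(x, (480, 1))).convert("RGB")

raw_size = img.width * img.height * 3          # uncompressed 24-bit bytes
for quality in (95, 75, 50, 25):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    print(f"quality {quality}: {raw_size / buf.tell():.1f}x smaller")
```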

  9. Effects of patient-controlled abdominal compression on standing systolic blood pressure in adults with orthostatic hypotension.

    Science.gov (United States)

    Figueroa, Juan J; Singer, Wolfgang; Sandroni, Paola; Sletten, David M; Gehrking, Tonette L; Gehrking, Jade A; Low, Phillip; Basford, Jeffrey R

    2015-03-01

To assess the effects of patient-controlled abdominal compression on postural changes in systolic blood pressure (SBP) associated with orthostatic hypotension (OH). Secondary variables included subject assessments of their preferences and ease of use. Randomized crossover trial. Clinical research laboratory. Adults with neurogenic OH (N=13). Four maneuvers were performed: moving from supine to standing without abdominal compression; moving from supine to standing with either a conventional or an adjustable abdominal binder in place; application of subject-determined maximal tolerable abdominal compression while standing; and, while still erect, subsequent reduction of abdominal compression to a level the subject believed would be tolerable for a prolonged period. The primary outcome variable was the postural change in SBP. Secondary outcome variables included subject assessments of their preferences and ease of use. Baseline median SBP in the supine position was not affected by mild (10 mmHg) abdominal compression prior to rising (without abdominal compression: 146 mmHg; interquartile range, 124-164 mmHg; with the conventional binder: 145 mmHg; interquartile range, 129-167 mmHg; with the adjustable binder: 153 mmHg; interquartile range, 129-160 mmHg; P=.85). Standing without a binder was associated with a -57 mmHg (interquartile range, -40 to -76 mmHg) SBP decrease. Compression of 10 mmHg applied prior to rising with the conventional and adjustable binders blunted these drops to -50 mmHg (interquartile range, -33 to -70 mmHg; P=.03) and -46 mmHg (interquartile range, -34 to -75 mmHg; P=.01), respectively. Increasing compression to subject-selected maximal tolerance while standing did not provide additional benefit and was associated with drops of -53 mmHg (interquartile range, -26 to -71 mmHg; P=.64) and -59 mmHg (interquartile range, -49 to -76 mmHg; P=.52) for the conventional and adjustable binders, respectively. Subsequent reduction of compression to more

  10. High-power rf pulse compression with SLED-II at SLAC

    International Nuclear Information System (INIS)

    Nantista, C.

    1993-04-01

    Increasing the peak rf power available from X-band microwave tubes by means of rf pulse compression is envisioned as a way of achieving the few-hundred-megawatt power levels needed to drive a next-generation linear collider with 50--100 MW klystrons. SLED-II is a method of pulse compression similar in principle to the SLED method currently in use on the SLC and the LEP injector linac. It utilizes low-loss resonant delay lines in place of the storage cavities of the latter. This produces the added benefit of a flat-topped output pulse. At SLAC, we have designed and constructed a prototype SLED-II pulse-compression system which operates in the circular TE01 mode. It includes a circular-guide 3-dB coupler and other novel components. Low-power and initial high-power tests have been made, yielding a peak power multiplication of 4.8 at an efficiency of 40%. The system will be used in providing power for structure tests in the ASTA (Accelerator Structures Test Area) bunker. An upgraded second prototype will have improved efficiency and will serve as a model for the pulse compression system of the NLCTA (Next Linear Collider Test Accelerator).
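    Back-of-the-envelope arithmetic with the figures quoted above shows how pulse compression bridges the gap between klystron output and collider requirements. The relation between power multiplication, efficiency, and pulse-length compression ratio is the standard energy-balance argument, not a detail taken from this report.

```python
# Illustrative arithmetic using the numbers quoted in the abstract.
klystron_power_mw = 50.0   # lower end of the 50--100 MW klystron range
multiplication = 4.8       # measured peak power multiplication
efficiency = 0.40          # measured compression efficiency

print(f"peak output per klystron: {klystron_power_mw * multiplication:.0f} MW")

# Energy balance: P_out * T_out = efficiency * P_in * T_in, so the power
# multiplication equals efficiency times the pulse-length compression ratio.
print(f"implied pulse-length compression ratio: {multiplication / efficiency:.0f}x")
```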

  11. Visible Leading: Principal Academy Connects and Empowers Principals

    Science.gov (United States)

    Hindman, Jennifer; Rozzelle, Jan; Ball, Rachel; Fahey, John

    2015-01-01

    The School-University Research Network (SURN) Principal Academy at the College of William & Mary in Williamsburg, Virginia, has a mission to build a leadership development program that increases principals' instructional knowledge and develops mentor principals to sustain the program. The academy is designed to connect and empower principals…

  12. Shock compression experiments on Lithium Deuteride (LiD) single crystals

    Science.gov (United States)

    Knudson, M. D.; Desjarlais, M. P.; Lemke, R. W.

    2016-12-01

    Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride single crystals. This study utilized the high velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ˜190 and 570 GPa along the Principal Hugoniot—the locus of end states achievable through compression by large amplitude shock waves—as well as pressure and density of reshock states up to ˜920 GPa. The experimental measurements are compared with density functional theory calculations, tabular equation of state models, and legacy nuclear driven results that have been reanalyzed using modern equations of state for the shock wave standards used in the experiments.
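    Hugoniot states such as those measured here are related to wave and particle velocities through the Rankine-Hugoniot jump conditions. The sketch below evaluates the momentum-conservation relation P = rho0 * Us * up; the density and velocity values are illustrative assumptions, not data from the study.

```python
# Rankine-Hugoniot momentum jump condition: P = rho0 * Us * up.
# All numbers below are illustrative assumptions, not values from the paper.
rho0 = 0.89e3   # assumed initial density of LiD, kg/m^3
us = 25.0e3     # shock velocity, m/s
up = 16.0e3     # particle velocity, m/s

pressure_pa = rho0 * us * up
print(f"Hugoniot pressure: {pressure_pa / 1e9:.0f} GPa")  # ~356 GPa, inside the range above
```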

  13. A comparative meta-analysis of maximal aerobic metabolism of vertebrates: implications for respiratory and cardiovascular limits to gas exchange.

    Science.gov (United States)

    Hillman, Stanley S; Hancock, Thomas V; Hedrick, Michael S

    2013-02-01

    Maximal aerobic metabolic rates (MMR) in vertebrates are supported by increased conductive and diffusive fluxes of O2 from the environment to the mitochondria necessitating concomitant increases in CO2 efflux. A question that has received much attention has been which step, respiratory or cardiovascular, provides the principal rate limitation to gas flux at MMR? Limitation analyses have principally focused on O2 fluxes, though the excess capacity of the lung for O2 ventilation and diffusion remains unexplained except as a safety factor. Analyses of MMR normally rely upon allometry and temperature to define these factors, but cannot account for much of the variation and often have narrow phylogenetic breadth. The unique aspect of our comparative approach was to use an interclass meta-analysis to examine cardio-respiratory variables during the increase from resting metabolic rate to MMR among vertebrates from fish to mammals, independent of allometry and phylogeny. Common patterns at MMR indicate universal principles governing O2 and CO2 transport in vertebrate cardiovascular and respiratory systems, despite the varied modes of activities (swimming, running, flying), different cardio-respiratory architecture, and vastly different rates of metabolism (endothermy vs. ectothermy). Our meta-analysis supports previous studies indicating a cardiovascular limit to maximal O2 transport and also implicates a respiratory system limit to maximal CO2 efflux, especially in ectotherms. Thus, natural selection would operate on the respiratory system to enhance maximal CO2 excretion and the cardiovascular system to enhance maximal O2 uptake. This provides a possible evolutionary explanation for the conundrum of why the respiratory system appears functionally over-designed from an O2 perspective, a unique insight from previous work focused solely on O2 fluxes. The results suggest a common gas transport blueprint, or Bauplan, in the vertebrate clade.

  14. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  15. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time saved. In communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results for multiple versus single compression, which help identify the better compression output and inform the development of compression algorithms.
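    Python's standard library is enough to reproduce the kind of single- versus multi-compression comparison described above. This sketch compresses a synthetic text record once and then a second time with each codec; the sample data is invented for illustration, and double compression typically buys little or nothing.

```python
import bz2
import lzma
import zlib

# Synthetic business-like text records (illustrative data, not from the paper).
data = b"customer_id,amount,date\n" + b"10231,19.99,2013-01-01\n" * 2000

codecs = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
for name, fn in codecs.items():
    once = fn(data)
    twice = fn(once)  # multi-compression: compress the compressed output again
    print(f"{name:5s} original={len(data)}  single={len(once)}  double={len(twice)}")
```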

  16. Generation of large-scale vortices in compressible helical turbulence

    International Nuclear Information System (INIS)

    Chkhetiani, O.G.; Gvaramadze, V.V.

    1989-01-01

    We consider generation of large-scale vortices in a compressible self-gravitating turbulent medium. The closed equation describing evolution of the large-scale vortices in helical turbulence with finite correlation time is obtained. This equation has a form similar to the hydromagnetic dynamo equation, which allows us to call the vortex generation effect the vortex dynamo. It is possible that principally the same mechanism is responsible both for amplification and maintenance of density waves and magnetic fields in gaseous disks of spiral galaxies. (author). 29 refs

  17. Progress with lossy compression of data from the Community Earth System Model

    Science.gov (United States)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview of our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our process for choosing an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.
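    A minimal sketch of the kind of evaluation loop described above, assuming a crude mantissa-truncation compressor as a stand-in for the real lossy algorithms: compress a synthetic temperature-like field at several severities, then check simple feature-preservation metrics such as the maximum pointwise error and the shift in the field's extreme value.

```python
import numpy as np

def truncate_mantissa(field, keep_bits):
    """Crude lossy compressor: zero out low-order mantissa bits of float32.
    A stand-in for the real algorithms evaluated on CESM output."""
    ints = field.astype(np.float32).view(np.uint32)
    mask = np.uint32(0xFFFFFFFF) << np.uint32(23 - keep_bits)
    return (ints & mask).view(np.float32)

rng = np.random.default_rng(0)
field = rng.normal(288.0, 15.0, size=(192, 288)).astype(np.float32)  # fake temperature field

for bits in (16, 8, 4):
    approx = truncate_mantissa(field, bits)
    err = np.abs(field - approx)
    # Feature checks in the spirit of the metric suite: pointwise error and extremes.
    print(f"bits={bits:2d}  max_abs_err={err.max():.3e}  "
          f"extreme_shift={abs(float(field.max()) - float(approx.max())):.3e}")
```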

  18. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i = 1, 2, \ldots, k$ the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum_i c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of.
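    The exponential form of the maximizer follows from a standard Lagrange-multiplier argument; the sketch below fills in that step (a textbook derivation, not taken from the paper).

```latex
% Maximize H(f) = -\int f \log f \, d\mu subject to the moment constraints.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The Lagrangian for the constrained problem is
\[
  \mathcal{L}(f) = -\int f \log f \, d\mu
    + \sum_{i} c_i \Bigl( \int f h_i \, d\mu - \lambda_i \Bigr)
    + c_0 \Bigl( \int f \, d\mu - 1 \Bigr),
\]
and setting its variational derivative with respect to $f$ to zero gives
\[
  -\log f - 1 + \sum_i c_i h_i + c_0 = 0
  \quad\Longrightarrow\quad
  f_0 \propto \exp\Bigl( \sum_i c_i h_i \Bigr).
\]
\end{document}
```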

  19. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining a low computational complexity.
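    A minimal numerical sketch of the acquisition-plus-quantization pipeline described above, with a Gaussian sensing matrix and a plain uniform 8-bit quantizer standing in for the paper's adaptive acquisition and universal quantization designs (all parameters are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64                      # signal length, number of CS measurements

# A sparse test signal (8 nonzero coefficients) and a Gaussian sensing matrix.
x = np.zeros(n)
x[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
phi = rng.normal(size=(m, n)) / np.sqrt(m)

y = phi @ x                         # simultaneous acquisition and compression

# Uniform quantizer driven only by the measurements themselves, i.e. needing
# no a priori information about the captured scene.
step = (y.max() - y.min()) / 255
y_q = np.round((y - y.min()) / step).astype(np.uint8)   # 8 bits per measurement
y_deq = y_q * step + y.min()
snr_db = 10 * np.log10(np.sum(y**2) / np.sum((y - y_deq)**2))
print(f"quantization SNR: {snr_db:.1f} dB")
```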

  20. Tokamak heating by neutral beams and adiabatic compression

    International Nuclear Information System (INIS)

    Furth, H.P.

    1973-08-01

    ''Realistic'' models of tokamak energy confinement strongly favor reactor operation at the maximum MHD-stable β-value, in order to maximize plasma density. Ohmic heating is unsuitable for this purpose. Neutral-beam heating plus compression is well suited; however, very large requirements on device size and injection power seem likely for a DT ignition experiment using a Maxwellian plasma. Results of the ATC experiment are reviewed, including Ohmic heating, neutral-beam heating, and production of two-energy-component plasmas (energetic deuteron population in deuterium ''target plasma''). A modest extrapolation of present ATC parameters could give zero-power conditions in a DT experiment of the two-energy-component type. (U.S.)

  1. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    30 Mineral Resources § 75.1730 Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  2. Accelerated whole-brain multi-parameter mapping using blind compressed sensing.

    Science.gov (United States)

    Bhave, Sampada; Lingala, Sajan Goud; Johnson, Casey P; Magnotta, Vincent A; Jacob, Mathews

    2016-03-01

    To introduce a blind compressed sensing (BCS) framework to accelerate multi-parameter MR mapping, and demonstrate its feasibility in high-resolution, whole-brain T1ρ and T2 mapping. BCS models the evolution of magnetization at every pixel as a sparse linear combination of bases in a dictionary. Unlike compressed sensing, the dictionary and the sparse coefficients are jointly estimated from undersampled data. The large number of non-orthogonal bases in BCS accounts for more complex signals than low-rank representations. The low degrees of freedom of BCS, attributed to sparse coefficients, translate to fewer artifacts at high acceleration factors (R). From 2D retrospective undersampling experiments, the mean square errors in T1ρ and T2 maps were observed to be within 0.1% up to R = 10. BCS was observed to be more robust to patient-specific motion as compared to other compressed sensing schemes and resulted in minimal degradation of parameter maps in the presence of motion. Our results suggest that BCS can provide an acceleration factor of 8 in prospective 3D imaging with reasonable reconstructions. BCS considerably reduces scan time for multiparameter mapping of the whole brain with minimal artifacts, and is more robust to motion-induced signal changes compared to current compressed sensing and principal component analysis-based techniques. © 2015 Wiley Periodicals, Inc.

  3. Effect of cold compress application on tissue temperature in healthy dogs.

    Science.gov (United States)

    Millard, Ralph P; Towle-Millard, Heather A; Rankin, David C; Roush, James K

    2013-03-01

    To measure the effect of cold compress application on tissue temperature in healthy dogs. 10 healthy mixed-breed dogs. Dogs were sedated with hydromorphone (0.1 mg/kg, IV) and diazepam (0.25 mg/kg, IV). Three 24-gauge thermocouple needles were inserted to a depth of 0.5 (superficial), 1.0 (middle), and 1.5 (deep) cm into a shaved, lumbar, epaxial region to measure tissue temperature. Cold (-16.8°C) compresses were applied with gravity dependence for periods of 5, 10, and 20 minutes. Tissue temperature was recorded before compress application and at intervals for up to 80 minutes after application. Control data were collected while dogs received identical sedation but with no cold compress. Mean temperature associated with 5 minutes of application at the superficial depth was significantly decreased, compared with control temperatures. Application for 10 and 20 minutes significantly reduced the temperature at all depths, compared with controls and 5 minutes of application. Twenty minutes of application significantly decreased temperature at only the middle depth, compared with 10 minutes of application. With this method of cold treatment, increasing application time from 10 to 20 minutes caused a further significant temperature change at only the middle tissue depth; however, for maximal cooling, the minimum time of application should be 20 minutes. Possible changes in tissue temperature and adverse effects of application > 20 minutes require further evaluation.

  4. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i = 1, 2, \ldots, k$ the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum_i c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of ...

  5. Endogenous Market Structures and Contract Theory. Delegation, principal-agent contracts, screening, franchising and tying

    OpenAIRE

    Etro Federico

    2010-01-01

    I study the role of unilateral strategic contracts for firms active in markets with price competition and endogenous entry. Traditional results change substantially when the market structure is endogenous rather than exogenous. They concern 1) contracts of managerial delegation to non-profit maximizers, 2) incentive principal-agent contracts in the presence of moral hazard on cost reducing activities, 3) screening contracts in case of asymmetric information on the productivity of the managers...

  6. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast, and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
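    The kind of wavelet-kernel comparison described above can be prototyped with the PyWavelets package (an assumed stand-in; the authors' neural-network search is not reproduced here): keep only the largest coefficients for each candidate wavelet and compare reconstruction error on a signal containing both smooth regions and isolated spikes.

```python
import numpy as np
import pywt  # pip install PyWavelets

rng = np.random.default_rng(2)
# Smooth background plus a few bright "microcalcification-like" spikes.
signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.01 * rng.normal(size=1024)
signal[[100, 500, 900]] += 3.0

def lossy_mse(sig, wavelet, keep_frac=0.05):
    """Keep only the largest 5% of coefficients, then reconstruct."""
    coeffs = pywt.wavedec(sig, wavelet, level=5)
    flat, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(flat), 1 - keep_frac)
    flat[np.abs(flat) < thresh] = 0.0
    rec = pywt.waverec(
        pywt.array_to_coeffs(flat, slices, output_format="wavedec"), wavelet)
    return float(np.mean((sig - rec[: len(sig)]) ** 2))

for w in ("haar", "db4"):
    print(f"{w:4s} mse={lossy_mse(signal, w):.3e}")
```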

  7. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  8. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  9. A method of vehicle license plate recognition based on PCANet and compressive sensing

    Science.gov (United States)

    Ye, Xianyi; Min, Feng

    2018-03-01

    Manual feature extraction in traditional vehicle license plate recognition methods is not robust to image diversity, and the high feature dimension extracted with Principal Component Analysis Network (PCANet) leads to low classification efficiency. To solve these problems, a method of vehicle license plate recognition based on PCANet and compressive sensing is proposed. First, PCANet is used to extract features from images of characters. Then, a sparse measurement matrix, which is a very sparse matrix satisfying the Restricted Isometry Property (RIP) condition of compressed sensing, is used to reduce the dimension of the extracted features. Finally, a Support Vector Machine (SVM) is used to train on and recognize the reduced-dimension features. Experimental results demonstrate that the proposed method outperforms a Convolutional Neural Network (CNN) in both recognition accuracy and running time. Compared with omitting compressive sensing, the proposed method has a lower feature dimension, which increases efficiency.
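    A hedged sketch of the three-stage pipeline (feature extraction, RIP-friendly dimension reduction, SVM recognition) using scikit-learn stand-ins: plain PCA replaces PCANet, the bundled digits dataset replaces plate characters, and SparseRandomProjection plays the role of the sparse measurement matrix.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.random_projection import SparseRandomProjection
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)          # stand-in for character images
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=40).fit(X_tr)         # feature-extraction stage (PCANet stand-in)
proj = SparseRandomProjection(n_components=20, random_state=0)
F_tr = proj.fit_transform(pca.transform(X_tr))   # very sparse, RIP-friendly projection
F_te = proj.transform(pca.transform(X_te))

clf = SVC().fit(F_tr, y_tr)                  # recognition stage
print("test accuracy:", clf.score(F_te, y_te))
```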

  10. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

    Compression of waveform data is significant in many engineering and research areas since it can be used to reduce data storage requirements and transmission bandwidth. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection and discrimination of underground nuclear explosions, etc. This paper describes a technique for lossless waveform data compression. The technique consists of two stages. The first stage is a modified form of linear prediction with discrete coefficients and the second stage is bi-level sequence coding. The linear predictor generates an error or residue sequence in a way such that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian with seismic or other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless, that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations
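    A minimal sketch of the two-stage idea on synthetic data: an integer first-order predictor (a simplification of the modified linear prediction described above) whose residue can be inverted exactly, followed by an empirical-entropy estimate standing in for the bi-level sequence coder.

```python
import numpy as np

def first_order_residue(x):
    """Stage 1 stand-in: predictor x_hat[n] = x[n-1]. The residue allows
    exact, bit-for-bit reconstruction via a running sum."""
    res = np.empty_like(x)
    res[0] = x[0]
    res[1:] = x[1:] - x[:-1]
    return res

def empirical_entropy_bits(x):
    """Stage 2 stand-in: bits/sample lower bound for any entropy coder."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
t = np.arange(20000)
# Synthetic seismic-like trace: slowly varying waveform plus noise.
x = (1000 * np.sin(2 * np.pi * t / 400) + rng.normal(0, 5, t.size)).astype(np.int32)

res = first_order_residue(x)
assert np.array_equal(np.cumsum(res), x)      # lossless: exact recovery
print(f"raw    : {empirical_entropy_bits(x):.2f} bits/sample")
print(f"residue: {empirical_entropy_bits(res):.2f} bits/sample")
```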

  11. Experimental study on ultimate strength and strain behavior of concrete under biaxial compressive stresses

    International Nuclear Information System (INIS)

    Onuma, Hiroshi; Aoyagi, Yukio

    1976-01-01

    The purpose of this investigation was to study the ultimate strength, failure mode, and deformation behavior of concrete under short-term biaxial compressive stresses, as an aid to the design and analysis of concrete structures subjected to multiaxial compression, such as prestressed or reinforced concrete vessel structures. The experimental work on biaxial compression was carried out on specimens of three mix proportions and different ages with 10 cm x 10 cm x 10 cm cubic shape in a room controlled at 20°C. The results are summarized as follows. (1) To minimize the surface friction between specimens and loading platens, pads of teflon sheets coated with silicone grease were used. The coefficient of friction was measured and was 3 percent on average. (2) The test data showed that the strength of concrete subjected to biaxial compression increased compared with the uniaxial compressive strength, and that the biaxial strength increase was mainly dependent on the ratio of principal stresses and hardly affected by mix proportions and ages. (3) The maximum increase of strength, which occurred at a stress ratio of approximately σ2/σ1 = 0.6, was about 27 percent higher than the uniaxial strength of concrete. (4) The ultimate strength in the case of biaxial compression could be approximated by a parabolic equation. (Kako, I.)

  12. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
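    SeqCompress itself combines a statistical model with arithmetic coding; the sketch below illustrates the underlying idea on toy data using two much simpler stand-ins: a fixed 2-bit packing baseline and an order-2 context-model entropy estimate, which bounds the rate a model-driven arithmetic coder could approach.

```python
from collections import Counter

import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pack_2bit(seq):
    """Baseline: 2 bits/base, the no-model bound for the {A,C,G,T} alphabet."""
    codes = np.array([BASES[b] for b in seq], dtype=np.uint8)
    pad = (-len(codes)) % 4
    codes = np.concatenate([codes, np.zeros(pad, dtype=np.uint8)])
    return (codes[0::4] << 6) | (codes[1::4] << 4) | (codes[2::4] << 2) | codes[3::4]

def order2_entropy(seq):
    """Bits/base under an order-2 context model: H(X_n | X_{n-2}, X_{n-1})."""
    tri = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    duo = Counter(seq[i:i + 2] for i in range(len(seq) - 2))
    h = -sum(n * np.log2(n / duo[t[:2]]) for t, n in tri.items())
    return h / (len(seq) - 2)

seq = "ACGT" * 100 + "AAAACCCGGT" * 200   # toy DNA with repetitive structure
print(f"2-bit packing: {pack_2bit(seq).nbytes} bytes for {len(seq)} bases")
print(f"order-2 model: {order2_entropy(seq):.3f} bits/base")
```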

  13. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  14. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  15. Performance Evaluation of An Innovative-Vapor- Compression-Desalination System

    Directory of Open Access Journals (Sweden)

    Mirna R. Lubis

    2012-04-01

    Full Text Available Two dominant desalination methods are reverse osmosis (RO) and multi-stage flash (MSF). RO requires large capital investment and maintenance, whereas MSF is too energy-intensive. An innovative vapor compression desalination system is proposed in this study. A comprehensive mathematical model for the evaporator is also described. A literature study indicates that a very high overall heat-transfer coefficient for the evaporator can be obtained under specific conditions by using dropwise condensation on the steam side and pool boiling on the liquid side. A smooth titanium surface is selected to promote dropwise condensation and resist corrosion. To maximize energy efficiency, a cogeneration scheme is used: a combined cycle consisting of a gas turbine, boiler heat recovery, and a steam turbine that drives the compressor. The power output of the combined cycle exceeds the compressor requirement; excess power can be used to generate electricity for internal and/or external consumption and sold on the open market. Four evaporator stages are used. The evaporator is fed by seawater, with an assumed 3.5% salt content. Boiling brine (7% salt) is boiled on the low-pressure side of the heat exchanger, and condensed vapor is condensed on the high-pressure side. Condensed steam flows at a velocity of 1.52 m/s, which maximizes the heat transfer coefficient. The unit is designed to produce 10 million gallons/day and is assumed to be financed at 5% over 30 years. Three cases are evaluated to determine the recommended condition for the lowest fixed capital investment. Based on the evaluation, it is possible to establish a four-stage mechanical vapor compression distillation unit with a capital cost of $31,723,885.

  16. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement on the quality of the reconstructed image. The NMSE's of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
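    One plausible reading of the NMSE measure described above, as a sketch: the energy of the difference image normalized by the energy of the original (the dissertation's exact normalization may differ).

```python
import numpy as np

def nmse(original, reconstructed):
    """Normalized mean-square error of the difference image: energy of
    (original - reconstructed) divided by the energy of the original."""
    orig = original.astype(np.float64)
    diff = orig - reconstructed.astype(np.float64)
    return float(np.sum(diff ** 2) / np.sum(orig ** 2))

# Toy check: a 512 x 512 image degraded by a crude irreversible step.
rng = np.random.default_rng(4)
img = rng.integers(0, 4096, size=(512, 512))   # 12-bit radiograph-like data
rec = (img // 16) * 16                         # drop the 4 low-order bits
print(f"NMSE: {nmse(img, rec):.2e}")
```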

  17. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
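    A minimal stand-in for the AUC-maximizing metalearning step, assuming scikit-learn base learners and a Nelder-Mead search over convex combination weights (the actual Super Learner implementation and its optimizers are considerably more elaborate).

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Imbalanced binary problem; base learners supply cross-validated scores.
X, y = make_classification(n_samples=2000, weights=[0.9], random_state=0)
bases = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
Z = np.column_stack([cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
                     for m in bases])

def neg_cv_auc(w):
    w = np.abs(w) + 1e-12              # force a valid convex combination
    return -roc_auc_score(y, Z @ (w / w.sum()))

res = minimize(neg_cv_auc, x0=np.ones(Z.shape[1]) / Z.shape[1], method="Nelder-Mead")
print("per-base AUCs:", [round(roc_auc_score(y, Z[:, j]), 4) for j in range(Z.shape[1])])
print("ensemble AUC :", round(-res.fun, 4))
```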

  18. Wavelet Compressed PCA Models for Real-Time Image Registration in Augmented Reality Applications

    OpenAIRE

    Christopher Cooper; Kent Wise; John Cooper; Makarand Deo

    2015-01-01

    The use of augmented reality (AR) has shown great promise in enhancing medical training and diagnostics via interactive simulations. This paper presents a novel method to perform accurate and inexpensive image registration (IR) utilizing a pre-constructed database of reference objects in conjunction with a principal component analysis (PCA) model. In addition, a wavelet compression algorithm is utilized to enhance the speed of the registration process. The proposed method is used to perform r...

  19. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available For wireless network microseismic monitoring, which faces the problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressed sensing (CS) theory applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment, it improves the accuracy of signal reconstruction, while taking advantage of compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone refactoring (Q-CSDR) algorithm as the reconstruction algorithm, under the condition of signal sparsity degree higher than 40, compressing the signal at a compression ratio of more than 0.4 yields a mean square error of less than 0.01, prolonging the network life by 2 times.

  20. Female Traditional Principals and Co-Principals: Experiences of Role Conflict and Job Satisfaction

    Science.gov (United States)

    Eckman, Ellen Wexler; Kelber, Sheryl Talcott

    2010-01-01

    This paper presents a secondary analysis of survey data focusing on role conflict and job satisfaction of 102 female principals. Data were collected from 51 female traditional principals and 51 female co-principals. By examining the traditional and co-principal leadership models as experienced by female principals, this paper addresses the impact…

  1. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  2. Quark enables semi-reference-based compression of RNA-seq data.

    Science.gov (United States)

    Sarkar, Hirak; Patro, Rob

    2017-11-01

    The past decade has seen an exponential increase in biological sequencing capacity, and there has been a simultaneous effort to help organize and archive some of the vast quantities of sequencing data that are being generated. Although these developments are tremendous from the perspective of maximizing the scientific utility of available data, they come with heavy costs. The storage and transmission of such vast amounts of sequencing data is expensive. We present Quark, a semi-reference-based compression tool designed for RNA-seq data. Quark makes use of a reference sequence when encoding reads, but produces a representation that can be decoded independently, without the need for a reference. This allows Quark to achieve markedly better compression rates than existing reference-free schemes, while still relieving the burden of assuming a specific, shared reference sequence between the encoder and decoder. We demonstrate that Quark achieves state-of-the-art compression rates, and that, typically, only a small fraction of the reference sequence must be encoded along with the reads to allow reference-free decompression. Quark is implemented in C++11, and is available under a GPLv3 license at www.github.com/COMBINE-lab/quark. rob.patro@cs.stonybrook.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  3. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions shareholder maximization is larger or equal to stakeholder-owner...

  4. Benefits of Compression Garments Worn During Handball-Specific Circuit on Short-Term Fatigue in Professional Players.

    Science.gov (United States)

    Ravier, Gilles; Bouzigon, Romain; Beliard, Samuel; Tordi, Nicolas; Grappe, Frederic

    2018-04-04

    Ravier, G, Bouzigon, R, Beliard, S, Tordi, N, and Grappe, F. Benefits of compression garments worn during handball-specific circuit on short-term fatigue in professional players. J Strength Cond Res XX(X): 000-000, 2016-The purpose of this study was to investigate the benefits of full-leg length compression garments (CGs) worn during a handball-specific circuit of exercises on athletic performance and acute fatigue-induced changes in strength and muscle soreness in professional handball players. Eighteen men (mean ± SD: age 23.22 ± 4.97 years; body mass: 82.06 ± 9.69 kg; height: 184.61 ± 4.78 cm) completed 2 identical sessions, wearing either regular gym shorts or CGs, in a randomized crossover design. Exercise circuits of explosive activities included 3 periods of 12 minutes of sprints, jumps, and agility drills every 25 seconds. Before, immediately after, and 24 hours postexercise, maximal voluntary knee extension (maximal voluntary contraction, MVC), rate of force development (RFD), and muscle soreness were assessed. During the handball-specific circuit, sprint and jump performances were unchanged in both conditions. Immediately after the circuit exercises, MVC, RFD, and PPT decreased significantly compared with preexercise values with both CGs and noncompression clothing. The decrement was similar in both conditions for RFD (effect size, ES = 0.40) and PPT for the soleus (ES = 0.86). However, wearing CGs attenuated the decrement in MVC. Wearing CGs during a handball-specific circuit thus limits the impairment of maximal muscle force characteristics and is likely to be worthwhile for handball players involved in activities such as tackles.

  5. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

    Full Text Available When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth, and limited energy supply impose strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitations of individual sensor nodes during image transmission, an energy-efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard using MATLAB and C from JasPer. JPEG2000 provides a practical set of features not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). Performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.

  6. Data Compression of Seismic Images by Neural Networks

    Directory of Open Access Journals (Sweden)

    Epping W. J. M.

    2006-11-01

    Full Text Available Neural networks with the multi-layered perceptron architecture were trained on an autoassociation task to compress 2D seismic data. Networks with linear transfer functions outperformed nonlinear neural nets with single or multiple hidden layers. This indicates that the correlational structure of the seismic data is predominantly linear. A compression factor of 5 to 7 can be achieved if a reconstruction error of 10% is allowed. The performance on new test data was similar to that achieved with the training data. The hidden units developed feature-detecting properties that resemble oriented line, edge and more complex feature detectors. The feature detectors of linear neural nets are near-orthogonal rotations of the principal eigenvectors of the Karhunen-Loève transformation.
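    The equivalence noted above between linear autoassociators and the Karhunen-Loève transform means the best achievable linear compression can be computed directly with an SVD; the sketch below does so on synthetic data with linear structure (the seismic data itself is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(5)
# Fake "seismic" patches: 200 samples of 64-dim data with 8-dim linear structure.
latent = rng.normal(size=(200, 8))
mixing = rng.normal(size=(8, 64))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 64))
X -= X.mean(axis=0)

# An optimal linear autoassociator with k hidden units spans the top-k
# principal directions, so PCA via SVD gives its best reconstruction directly.
k = 8
_, _, vt = np.linalg.svd(X, full_matrices=False)
X_rec = (X @ vt[:k].T) @ vt[:k]          # encode to k hidden units, decode back
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(f"compression factor: {X.shape[1] / k:.0f}x, relative error: {err:.1%}")
```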

  7. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  8. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA, and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, and the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as two types: one with a lengthy stenosis along the upper side of the LCIV, and the other with a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  9. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    Reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore, the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of cardio cycles of the maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to, or even exceeds, the energy of other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of performing their truncated representation and reconstructing their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. Results of our method in PhysioNet Challenge 2013 on the open data set were: average score: 341.503 bpm² and 32.81 ms. (paper)

  10. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of the Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with maximal variance. In this paper, we studied the monthly return volatility of the Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce the dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of the related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples. FPCA is thus a convenient approach to extract the main variance factors.

  11. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
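    A small sketch of sparse PCA with the variance-based mode ordering argued for above, using scikit-learn's SparsePCA on synthetic data with two localized modes of variation (the article's own algorithm and shape data are not reproduced).

```python
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(6)
# Stand-in for aligned shapes: 100 samples x 30 landmark coordinates, with
# two localized modes of variation plus noise.
X = rng.normal(scale=0.1, size=(100, 30))
X[:, 0:5] += rng.normal(size=(100, 1))     # mode confined to landmarks 0-4
X[:, 20:25] += rng.normal(size=(100, 1))   # mode confined to landmarks 20-24
X -= X.mean(axis=0)

spca = SparsePCA(n_components=4, alpha=1.0, random_state=0).fit(X)
scores = spca.transform(X)

# SparsePCA modes are not variance-ordered; impose the sensible ordering by
# decreasing variance of the scores.
var = scores.var(axis=0)
for rank, i in enumerate(np.argsort(var)[::-1]):
    nz = np.flatnonzero(spca.components_[i]).tolist()
    print(f"mode {rank}: variance={var[i]:.3f}, nonzero loadings at {nz}")
```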

  12. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    Full Text Available This study aims to identify the types of maxims flouted in the conversation in the famous comedy show, Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intention in flouting the maxims in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data is taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue's transcripts, identify the maxims, and interpret the data to find the speakers' intention for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  13. The compressed breast during mammography and breast tomosynthesis: in vivo shape characterization and modeling

    Science.gov (United States)

    Rodríguez-Ruiz, Alejandro; Agasthya, Greeshma A.; Sechopoulos, Ioannis

    2017-09-01

    To characterize and develop a patient-based 3D model of the compressed breast undergoing mammography and breast tomosynthesis. During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3D breast surface imaging with structured light (SL) during breast compression, along with simultaneous acquisition of a tomosynthesis image. A pair of SL systems were used to acquire 3D surface images by projecting 24 different patterns onto the compressed breast and capturing their reflection off the breast surface in approximately 12-16 s. The 3D surface was characterized and modeled via principal component analysis. The resulting surface model was combined with a previously developed 2D model of projected compressed breast shapes to generate a full 3D model. Data from ten patients were discarded due to technical problems during image acquisition. The maximum breast thickness (found at the chest-wall) had an average value of 56 mm, and decreased 13% towards the nipple (breast tilt angle of 5.2°). The portion of the breast not in contact with the compression paddle or the support table extended on average 17 mm, 18% of the chest-wall to nipple distance. The outermost point along the breast surface lies below the midline of the total thickness. A complete 3D model of compressed breast shapes was created and implemented as a software application available for download, capable of generating new random realistic 3D shapes of breasts undergoing compression. Accurate characterization and modeling of the breast curvature and shape was achieved and will be used for various image processing and clinical tasks.

  14. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Full Text Available A maxim is a principle that must be obeyed by all participants, textually and interpersonally, in order to have a smooth communication process. Conversational maxims are divided into four types, namely maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to the speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violations include information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show an interesting fact: the violation of maxims found in the advertisements' conversations is exactly what makes the advertisements very attractive and gives them high value.

  15. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
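The abstract's terminology can be made concrete with a naive check: a string is quasiperiodic when some proper prefix "covers" it. The sketch below is a brute-force quadratic illustration only, not the paper's O(n log n) suffix-tree algorithm.

```python
# Naive illustration of quasiperiodicity: a string is quasiperiodic when some
# proper prefix w "covers" it, i.e. occurrences of w (possibly overlapping)
# tile the whole string. The paper's suffix-tree algorithm runs in O(n log n);
# this brute-force check is quadratic and for intuition only.
def covers(w: str, s: str) -> bool:
    covered_to = 0
    for i in range(len(s) - len(w) + 1):
        if s.startswith(w, i):
            if i > covered_to:        # a gap no occurrence of w bridges
                return False
            covered_to = max(covered_to, i + len(w))
    return covered_to == len(s)

def is_quasiperiodic(s: str) -> bool:
    return any(covers(s[:m], s) for m in range(1, len(s)))

assert is_quasiperiodic("abaabaabaaba")    # covered by the prefix "aba"
assert not is_quasiperiodic("abc")
```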

  16. A framework for data compression and damage detection in structural health monitoring applied on a laboratory three-story structure

    Directory of Open Access Journals (Sweden)

    Manoel Afonso Pereira de Lima

    2016-09-01

Full Text Available Structural Health Monitoring (SHM) is an important technique used to preserve many types of structures in the short and long run, using sensor networks to continuously gather the desired data. However, this has a strong impact on the amount of data to be stored and processed. A common solution is to use compression algorithms, where the level of data compression should be adequate to still allow correct damage identification. In this work, we use data sets from a laboratory three-story structure to evaluate the performance of common compression algorithms which are then combined with damage detection algorithms used in SHM. We also analyze how Independent Component Analysis, a common technique for reducing noise in raw data, can assist detection performance. The results showed that the Piecewise Linear Histogram combined with Nonlinear PCA offers the best trade-off between compression and detection for small error thresholds, while Adaptive PCA with Principal Component Analysis performs better at higher values.

  17. Molecular Dynamics Modeling of the Effect of Axial and Transverse Compression on the Residual Tensile Properties of Ballistic Fiber

    Directory of Open Access Journals (Sweden)

    Sanjib C. Chowdhury

    2017-02-01

    Full Text Available Ballistic impact induces multiaxial loading on Kevlar® and polyethylene fibers used in protective armor systems. The influence of multiaxial loading on fiber failure is not well understood. Experiments show reduction in the tensile strength of these fibers after axial and transverse compression. In this paper, we use molecular dynamics (MD simulations to explain and develop a fundamental understanding of this experimental observation since the property reduction mechanism evolves from the atomistic level. An all-atom MD method is used where bonded and non-bonded atomic interactions are described through a state-of-the-art reactive force field. Monotonic tension simulations in three principal directions of the models are conducted to determine the anisotropic elastic and strength properties. Then the models are subjected to multi-axial loads—axial compression, followed by axial tension and transverse compression, followed by axial tension. MD simulation results indicate that pre-compression distorts the crystal structure, inducing preloading of the covalent bonds and resulting in lower tensile properties.

  18. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owners, including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization in practical applications instead becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined by such demands.

  19. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS4 × S7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS4 × S7 and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  20. Compressive strain induced enhancement in thermoelectric-power-factor in monolayer MoS2 nanosheet

    International Nuclear Information System (INIS)

    Dimple; Jena, Nityasagar; De Sarkar, Abir

    2017-01-01

Strain and temperature induced tunability of the thermoelectric properties of monolayer MoS2 (ML-MoS2) has been demonstrated using density functional theory coupled to semi-classical Boltzmann transport theory. Compressive strain in general, and uniaxial compressive strain along the zig-zag direction in particular, is found to be most effective in enhancing the thermoelectric power factor, owing to the higher electronic mobility and its sensitivity to lattice compression along this direction. Variation in the Seebeck coefficient and electronic band gap with strain is found to follow the Goldsmid-Sharp relation. n-type doping is found to raise the relaxation-time-scaled thermoelectric power factor higher than p-type doping, and this divide widens with increasing temperature. The relaxation-time-scaled thermoelectric power factor in optimally n-doped ML-MoS2 is found to undergo maximal enhancement under the application of 3% uniaxial compressive strain along the zig-zag direction, when both the (direct) electronic band gap and the Seebeck coefficient reach their maximum, while the electron mobility drops drastically from 73.08 to 44.15 cm² V⁻¹ s⁻¹. Such strain-sensitive thermoelectric responses in ML-MoS2 could open doorways for a variety of applications in emerging areas of 2D thermoelectrics, such as on-chip thermoelectric power generation and waste thermal energy harvesting. (paper)

  1. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min⁻¹ in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160 (2) at 80 min⁻¹ vs. 312 (13) compressions at 160 min⁻¹, P<0.001) and compression duty-cycle (43 (6)% at 80 min⁻¹ vs. 50 (7)% at 160 min⁻¹, P<0.001). This was at the cost of a significant reduction in compression depth (39.5 (10) mm at 80 min⁻¹ vs. 34.5 (11) mm at 160 min⁻¹, P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min⁻¹ vs. 40 s at 160 min⁻¹, P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min⁻¹ vs. 70% at 160 min⁻¹). Rates above 120 min⁻¹ had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min⁻¹ for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions.

  2. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.

  3. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  4. A method for predicting the impact velocity of a projectile fired from a compressed air gun facility

    International Nuclear Information System (INIS)

    Attwood, G.J.

    1988-03-01

    This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
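The report's program itself is not reproduced in the record; the sketch below shows the kind of energy-balance estimate such a calculation rests on, assuming isothermal expansion of the reservoir gas. All parameter values and the function name are illustrative, not taken from the AEE Winfrith code.

```python
# Hedged sketch of a compressed-air-gun muzzle-velocity estimate using an
# isothermal-expansion energy balance; this is not the Winfrith program,
# and the numbers below are purely illustrative.
import math

def muzzle_velocity(p0, v0, area, barrel_len, mass, p_atm=101_325.0):
    """Estimate projectile velocity from isothermal gas expansion work."""
    v_final = v0 + area * barrel_len
    work_gas = p0 * v0 * math.log(v_final / v0)   # isothermal expansion work
    work_atm = p_atm * area * barrel_len          # work against back-pressure
    energy = work_gas - work_atm
    if energy <= 0:
        return 0.0
    return math.sqrt(2.0 * energy / mass)

# 20 bar reservoir of 50 L driving a 5 kg projectile down a 10 m, 100 mm bore:
print(f"{muzzle_velocity(20e5, 0.05, math.pi * 0.05**2, 10.0, 5.0):.1f} m/s")
```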

  5. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)
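For pure-state families there is a standard closed form for the quantum Fisher information matrix, F_ij = 4 Re(⟨∂_iψ|∂_jψ⟩ − ⟨∂_iψ|ψ⟩⟨ψ|∂_jψ⟩). The sketch below evaluates it numerically for a generic one-qubit family; it illustrates the standard pure-state formula, not the paper's construction of the maximal matrix.

```python
# Numeric quantum Fisher information matrix for a pure-state family
# |psi(theta)>, using the standard pure-state formula (generic example).
import numpy as np

def state(theta):
    a, b = theta
    # One-qubit family: polar angle a and relative phase b applied to |0>.
    return np.array([np.cos(a / 2), np.exp(1j * b) * np.sin(a / 2)])

def qfim(theta, eps=1e-6):
    n = len(theta)
    psi = state(theta)
    grads = []
    for i in range(n):
        tp = np.array(theta, dtype=float); tp[i] += eps
        tm = np.array(theta, dtype=float); tm[i] -= eps
        grads.append((state(tp) - state(tm)) / (2 * eps))  # |d_i psi>
    F = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            term = (np.vdot(grads[i], grads[j])
                    - np.vdot(grads[i], psi) * np.vdot(psi, grads[j]))
            F[i, j] = 4 * term.real
    return F

print(qfim([np.pi / 3, 0.4]))   # expect diag(1, sin^2(pi/3)) for this family
```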

  6. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates below 80/min, between 80 and 120/min, and above 120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions, 2% were delivered at rates below 80/min and 36% at depths below 5 cm. In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates. Mean compression depth for rates 80-120/min was 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, P<0.001). Higher compression rates were thus associated with lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth.

  7. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from theoretical arguments. We also discuss the relation between the belief-propagation-based reconstruction algorithm introduced in preceding works and our approach.
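The paper's posterior-maximization algorithm is not reproduced in the record; as a reference point, the ℓ1 baseline it improves upon can be solved with iterative soft thresholding (ISTA). The sketch below is that standard baseline, with illustrative problem sizes.

```python
# Reference sketch: iterative soft-thresholding (ISTA) for the l1-regularised
# compressed-sensing problem min_x 0.5*||y - Ax||^2 + lam*||x||_1.
# This is the standard baseline, not the posterior-maximization algorithm
# proposed in the paper.
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
N, M, k = 200, 80, 8                       # signal length, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(M, N)) / np.sqrt(M)
x_hat = ista(A, A @ x_true, lam=0.01)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```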

  8. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

Mako Okanda

    2015-07-01

Full Text Available This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (the first maxim of quantity and the maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  9. Experimental Study on Optimization of Absorber Configuration in Compression/Absorption Heat Pump with NH₃/H₂O Mixture

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Young; Kim, Min Sung; Baik, Young Jin; Park, Seong Ryong; Chang, Ki Chang; Ra, Ho Sang [Korea Institute of Energy Research, Daejeon (Korea, Republic of); Kim, Yong Chan [Korea University, Seoul (Korea, Republic of)

    2011-03-15

This research aims to develop a compression/absorption hybrid heat pump system using NH₃/H₂O as the working fluid. The heat pump cycle is based on a combination of compression and absorption cycles. The cycle consists of two-stage compressors, absorbers, a desuperheater, solution heat exchangers, a solution pump, a rectifier, and a liquid/vapor separator. The compression/absorption hybrid heat pump was designed to produce hot water above 90 °C using the high temperature glide during two-phase heat transfer. Distinct characteristics of the nonlinear temperature profile should be considered to maximize the performance of the absorber. In this study, the performance of the absorber was investigated depending on the capacity, shape, and arrangement of the plate heat exchangers with regard to the concentration and distribution at the inlet of the absorber.

  10. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  11. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  12. Formation of Sheeting Joints as a Result of Compression Parallel to Convex Surfaces, With Examples from Yosemite National Park, California

    Science.gov (United States)

    Martel, S. J.

    2008-12-01

    The formation of sheeting joints has been an outstanding problem in geology. New observations and analyses indicate that sheeting joints develop in response to a near-surface tension induced by compressive stresses parallel to a convex slope (hypothesis 1) rather than by removal of overburden by erosion, as conventionally assumed (hypothesis 2). Opening mode displacements across the joints together with the absence of mineral precipitates within the joints mean that sheeting joints open in response to a near-surface tension normal to the surface rather than a pressurized fluid. Consideration of a plot of this tensile stress as a function of depth normal to the surface reveals that a true tension must arise in the shallow subsurface if the rate of that tensile stress change with depth is positive at the surface. Static equilibrium requires this rate (derivative) to equal P22 k2 + P33 k3 - ρ g cosβ, where k2 and k3 are the principal curvatures of the surface, P22 and P33 are the respective surface- parallel normal stresses along the principal curvatures, ρ is the material density, g is gravitational acceleration, and β is the slope. This derivative will be positive and sheeting joints can open if at least one principal curvature is sufficiently convex (negative) and the surface-parallel stresses are sufficiently compressive (negative). At several sites with sheeting joints (e.g., Yosemite National Park in California), the measured topographic curvatures and the measured surface-parallel stresses of about -10 MPa combine to meet this condition. In apparent violation of hypothesis 1, sheeting joints occur locally at the bottom of Tenaya Canyon, one of the deepest glaciated, U-shaped (concave) canyons in the park. The canyon-bottom sheeting joints only occur, however, where the canyon is convex downstream, a direction that nearly coincides with direction of the most compressive stress measured in the vicinity. The most compressive stress acting along the convex
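The quoted equilibrium condition is easy to evaluate directly. The sketch below plugs in illustrative values of the kind mentioned in the abstract (about -10 MPa surface-parallel compression, convex curvatures); the radii, slope, and density are assumptions for demonstration, not site measurements.

```python
# Evaluating the sheeting-joint criterion quoted above: joints can open where
# P22*k2 + P33*k3 - rho*g*cos(beta) > 0 at the surface. Values are illustrative.
import math

def tension_gradient(P22, P33, k2, k3, rho, beta_deg, g=9.81):
    """Rate of change of surface-normal stress with depth at the surface."""
    return P22 * k2 + P33 * k3 - rho * g * math.cos(math.radians(beta_deg))

# Convex dome (negative curvatures, radii ~ -500 m and -1000 m) under
# -10 MPa surface-parallel compression, granite density 2700 kg/m^3:
rate = tension_gradient(P22=-10e6, P33=-10e6, k2=-1/500, k3=-1/1000,
                        rho=2700.0, beta_deg=20.0)
print(f"d(sigma_n)/dz = {rate:.0f} Pa/m ->",
      "sheeting joints possible" if rate > 0 else "no joints")
```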

  13. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  14. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving the images may be compressed to one-tenth of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
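A DCT-based scheme of the kind described can be sketched by transforming image blocks and retaining only the dominant coefficients. The block size, retention rule, and test data below are illustrative; this is not the paper's CCITT JPEG variant.

```python
# Sketch of block-DCT compression in the spirit described above: transform
# 8x8 blocks and keep only the largest-magnitude coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=10):
    coeffs = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]   # keep-th largest magnitude
    coeffs[np.abs(coeffs) < thresh] = 0.0             # discard the rest
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(2)
image = rng.normal(size=(64, 64))                     # stand-in for image data
out = np.empty_like(image)
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        out[i:i+8, j:j+8] = compress_block(image[i:i+8, j:j+8])
print("RMS reconstruction error:", np.sqrt(np.mean((image - out) ** 2)))
```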

  15. 29 CFR 1471.995 - Principal.

    Science.gov (United States)

    2010-07-01

29 CFR § 1471.995 (Labor, 2010-07-01), ... SUSPENSION (NONPROCUREMENT), Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or...

  16. On maximal massive 3D supergravity

    OpenAIRE

Bergshoeff, Eric A.; Hohm, Olaf; Rosseel, Jan; Townsend, Paul K.

    2010-01-01

We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  17. Portraits of Principal Practice: Time Allocation and School Principal Work

    Science.gov (United States)

    Sebastian, James; Camburn, Eric M.; Spillane, James P.

    2018-01-01

    Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…

  18. Optimum Compressive Strength of Hardened Sandcrete Building Blocks with Steel Chips

    Directory of Open Access Journals (Sweden)

    Alohan Omoregie

    2013-02-01

Full Text Available The recycling of steel chips into an environmentally friendly, responsive, and profitable commodity in the manufacturing and construction industries is a huge and difficult challenge. Several strategies designed for the management and processing of this waste in developed countries have been largely unsuccessful in developing countries, mainly due to their capital-intensive nature. To this end, this investigation attempts to provide an alternative solution to the recycling of this material by maximizing its utility value in the building construction industry, establishing its influence on the compressive strength of sandcrete hollow blocks and solid cubes with the aim of specifying the percentage range of steel chips for the optimum sandcrete compressive strength value. This is particularly important for developing countries in sub-Saharan Africa, and even Latin America, where most sandcrete blocks exhibit compressive strengths far below standard requirements. Percentages of steel chips relative to the weight of cement were varied and blended with the sand in an attempt to improve the sand grading parameters. The steel chip variations were one, two, three, four, five, ten and fifteen percent respectively. It was confirmed that the grading parameters were improved and there were significant increases in the compressive strength of the block and cube samples. The greatest improvement was noticed at four percent steel chips and sand combination. Using the plotted profile, the margin of steel chip addition for the optimum compressive strength was also established. It is recommended that steel chip sandcrete blocks are suitable for both internal load-bearing and non-load-bearing walls in areas where they are not subjected to moisture ingress. However, for external walls, and in areas where they are liable to moisture attack after laying, the surfaces should be well rendered. Below ground level, the surfaces should be coated with a water

  19. 31 CFR 19.995 - Principal.

    Science.gov (United States)

    2010-07-01

31 CFR § 19.995 (Money and Finance: Treasury, 2010-07-01), ... SUSPENSION (NONPROCUREMENT), Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory...

  20. 22 CFR 208.995 - Principal.

    Science.gov (United States)

    2010-04-01

22 CFR § 208.995 (Foreign Relations, 2010-04-01), Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  1. 22 CFR 1006.995 - Principal.

    Science.gov (United States)

    2010-04-01

22 CFR § 1006.995 (Foreign Relations, 2010-04-01), Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  2. 2 CFR 180.995 - Principal.

    Science.gov (United States)

    2010-01-01

2 CFR § 180.995 (Grants and Agreements, 2010-01-01), OFFICE OF MANAGEMENT AND BUDGET, GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS..., Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator...

  3. 22 CFR 1508.995 - Principal.

    Science.gov (United States)

    2010-04-01

22 CFR § 1508.995 (Foreign Relations, 2010-04-01), Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  4. Principal stratification in causal inference.

    Science.gov (United States)

    Frangakis, Constantine E; Rubin, Donald B

    2002-03-01

Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.

  5. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if' preferences (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.

  6. Principals' Salaries, 2007-2008

    Science.gov (United States)

    Cooke, Willa D.; Licciardi, Chris

    2008-01-01

    How do salaries of elementary and middle school principals compare with those of other administrators and classroom teachers? Are increases in salaries of principals keeping pace with increases in salaries of classroom teachers? And how have principals' salaries fared over the years when the cost of living is taken into account? There are reliable…

  7. 21 CFR 1404.995 - Principal.

    Science.gov (United States)

    2010-04-01

21 CFR § 1404.995 (Food and Drugs, 2010-04-01), Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  8. 34 CFR 85.995 - Principal.

    Science.gov (United States)

    2010-07-01

34 CFR § 85.995 (Education, 2010-07-01), Office of ..., Definitions. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  9. Neonatal CPR: room at the top—a mathematical study of optimal chest compression frequency versus body size

    OpenAIRE

    Babbs, Charles F; Meyer, Andrew; Nadkarni, Vinay

    2009-01-01

    Objective: To explore in detail the expected magnitude of systemic perfusion pressure during standard CPR as a function of compression frequency for different sized people from neonate to adult. Method: A 7-compartment mathematical model of the human cardiopulmonary system—upgraded to include inertance of blood columns in the aorta and vena cavae—was exercised with parameters scaled to reflect changes in body weight from 1 to 70 kg. Results: Maximal systemic perfusion pressure occurs at chest...

  10. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at a -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  11. Principal Self-Efficacy and Work Engagement: Assessing a Norwegian Principal Self-Efficacy Scale

    Science.gov (United States)

    Federici, Roger A.; Skaalvik, Einar M.

    2011-01-01

    One purpose of the present study was to develop and test the factor structure of a multidimensional and hierarchical Norwegian Principal Self-Efficacy Scale (NPSES). Another purpose of the study was to investigate the relationship between principal self-efficacy and work engagement. Principal self-efficacy was measured by the 22-item NPSES. Work…

  12. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

Full Text Available We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  13. Influence of the menstrual cycle on compression-induced pain during mammography: correlation with the thickness and volume of the mammary gland.

    Science.gov (United States)

    Kitaoka, Hitomi; Kawashima, Hiroko

    2018-03-01

    In mammography, breast compression is necessary and an important factor influencing image quality. The purpose of this study was to determine the influence of the menstrual cycle on compression-induced pain during mammography and to evaluate the association between the thickness and volume of the mammary gland and pain. We examined basal body temperature and categorized the menstrual cycle into five phases. We executed breast compression in the craniocaudal view using a customized compression plate, to which we introduced an opening. We measured the thickness of the mammary gland under compression using echography. Immediately after releasing the compression, we evaluated pain using the visual analogue scale. We performed magnetic resonance imaging (MRI) on the same day and measured the volume of the mammary gland. The thickness of the mammary gland, pain, and the volume of the mammary gland were minimal in the late follicular phase and maximal in the late luteal and early follicular phases. It was shown that the changes in the thickness and volume of the mammary gland during the menstrual cycle accounted for the changes in compression-induced pain. On MRI examination of each breast quadrant, the same changes were observed in areas A and C. In area A, it was shown that both the anatomical characteristics and the increase in volume of the mammary gland were associated with pain. We concluded that the late follicular phase constitutes the optimal timing for mammography.

  14. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

Maximal inequalities play a crucial role in many probabilistic limit theorems, for instance the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1 ≤ k ≤ n} S_k ...
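As a concrete classical instance of such an inequality (a special case for independent summands; the paper's results concern dependent variables):

```latex
% Kolmogorov's maximal inequality: a classical special case for independent,
% mean-zero X_i with finite variance (the paper treats dependent variables).
\[
  \mathbb{P}\!\Big(\max_{1 \le k \le n} |S_k| \ge \lambda\Big)
  \;\le\; \frac{\operatorname{Var}(S_n)}{\lambda^{2}},
  \qquad S_k = X_1 + \cdots + X_k,\quad \lambda > 0.
\]
```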

  15. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  16. An Investigation of Teacher, Principal, and Superintendent Perceptions on the Ability of the National Framework for Principal Evaluations to Measure Principals' Leadership Competencies

    Science.gov (United States)

    Lamb, Lori D.

    2014-01-01

    The purpose of this qualitative study was to investigate the perceptions of effective principals' leadership competencies; determine if the perceptions of teachers, principals, and superintendents aligned with the proposed National Framework for Principal Evaluations initiative. This study examined the six domains of leadership outlined by the…

  17. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI 156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3 ± 1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p > 0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  18. Improvements of an objective model of compressed breasts undergoing mammography: Generation and characterization of breast shapes.

    Science.gov (United States)

    Rodríguez-Ruiz, Alejandro; Feng, Steve Si Jia; van Zelst, Jan; Vreemann, Suzan; Mann, Jessica Rice; D'Orsi, Carl Joseph; Sechopoulos, Ioannis

    2017-06-01

    To develop a set of accurate 2D models of compressed breasts undergoing mammography or breast tomosynthesis, based on objective analysis, to accurately characterize mammograms with few linearly independent parameters, and to generate novel clinically realistic paired cranio-caudal (CC) and medio-lateral oblique (MLO) views of the breast. We seek to improve on an existing model of compressed breasts by overcoming detector size bias, removing the nipple and non-mammary tissue, pairing the CC and MLO views from a single breast, and incorporating the pectoralis major muscle contour into the model. The outer breast shapes in 931 paired CC and MLO mammograms were automatically detected with an in-house developed segmentation algorithm. From these shapes three generic models (CC-only, MLO-only, and joint CC/MLO) with linearly independent components were constructed via principal component analysis (PCA). The ability of the models to represent mammograms not used for PCA was tested via leave-one-out cross-validation, by measuring the average distance error (ADE). The individual models based on six components were found to depict breast shapes with accuracy (mean ADE-CC = 0.81 mm, ADE-MLO = 1.64 mm, ADE-Pectoralis = 1.61 mm), outperforming the joint CC/MLO model (P ≤ 0.001). The joint model based on 12 principal components contains 99.5% of the total variance of the data, and can be used to generate new clinically realistic paired CC and MLO breast shapes. This is achieved by generating random sets of 12 principal components, following the Gaussian distributions of the histograms of each component, which were obtained from the component values determined from the images in the mammography database used. Our joint CC/MLO model can successfully generate paired CC and MLO view shapes of the same simulated breast, while the individual models can be used to represent with high accuracy clinical acquired mammograms with a small set of parameters. This is the first
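The generation step described, sampling the 12 component scores from per-component Gaussians and mapping back through the eigenvectors, is straightforward. In the sketch below, mean_shape, components, and score_std are placeholders standing in for the quantities fitted from the mammography database, not the authors' released model.

```python
# Sketch of the generation step described above: draw random scores for the
# 12 principal components from per-component Gaussians, then map back to a
# paired CC/MLO breast shape. All fitted quantities are placeholders here.
import numpy as np

rng = np.random.default_rng(3)
D = 600                                    # flattened contour dimension (illustrative)
mean_shape = np.zeros(D)                   # placeholder mean shape
components = rng.normal(size=(12, D))      # placeholder PCA eigenvectors
score_std = np.linspace(3.0, 0.3, 12)      # placeholder per-component std devs

def random_shape():
    scores = rng.normal(0.0, score_std)    # one Gaussian draw per component
    return mean_shape + scores @ components

new_cc_mlo_shape = random_shape()
print(new_cc_mlo_shape.shape)
```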

  19. Inclusive Fitness Maximization: An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  20. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

Vianney Rozand

    2014-09-01

Full Text Available Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in a randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  1. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  2. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  3. Final Report 02-ERD-033: Rapid Resolidification of Metals using Dynamic Compression

    International Nuclear Information System (INIS)

    Streitz, F H; Nguyen, J H; Orlikowski, D; Minich, R; Moriarty, J A; Holmes, N C

    2005-01-01

microseconds and makes accessible states beyond the principal Hugoniot and isentrope. The strain rates in these quasi-isentropic compression experiments vary from 10⁴ to 10⁶ s⁻¹, effectively bridging the gap between static compression and previous quasi-isentropic compression techniques [4, 7]. The primary deliverable associated with this LDRD-ER is the creation of a new experimental capability for the lab: the ability to control pressure and temperature loading rates in a dynamic compression experiment by using functionally graded impactors in the light gas gun facility. The new capability will enable dynamic experiments exploring a broader area of pressure and temperature phase space, ultimately enabling further experiments on the kinetics of phase transitions at high temperature and pressure. Using our unique arbitrary-density graded impactors, scientists can now investigate various aspects of the solidification phase transition including (a) time scale, (b) loading rate dependence and (c) sample size effects

  4. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  5. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  6. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  7. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for Einstein equations can be foliated by slices maximal outside a spatially compact set and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurface can be used to 'partially fix' an asymptotic Poincare group. (orig.)

  8. Latitude-Time Total Electron Content Anomalies as Precursors to Japan's Large Earthquakes Associated with Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Jyh-Woei Lin

    2011-01-01

Full Text Available The goal of this study is to determine whether principal component analysis (PCA) can be used to process latitude-time ionospheric TEC data on a monthly basis to identify earthquake-associated TEC anomalies. PCA is applied to latitude-time (mean-of-a-month) ionospheric total electron content (TEC) records collected from the Japan GEONET network to detect TEC anomalies associated with 18 earthquakes in Japan (M ≥ 6.0) from 2000 to 2005. According to the results, PCA was able to discriminate clear TEC anomalies in the months when all 18 earthquakes occurred. After reviewing months when no M ≥ 6.0 earthquakes occurred but geomagnetic storm activity was present, it is possible that the maximal principal eigenvalues PCA returned for these 18 earthquakes indicate earthquake-associated TEC anomalies. Previously, PCA has been used to discriminate earthquake-associated TEC anomalies recognized by other researchers, who found that a statistical association between large earthquakes and TEC anomalies could be established in the 5 days before earthquake nucleation; however, since PCA uses the characteristics of principal eigenvalues to determine earthquake-related TEC anomalies, it is possible to show that such anomalies existed earlier than this 5-day statistical window.

  9. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION...

  10. Synthesis of magnetic systems producing field with maximal scalar characteristics

    International Nuclear Information System (INIS)

    Klevets, Nickolay I.

    2005-01-01

    A method of synthesis of magnetic systems (MSs) consisting of uniformly magnetized blocks is proposed. This method allows one to synthesize MSs providing the maximum value of any scalar characteristic of the magnetic field. In particular, it is possible to synthesize MSs providing the maximum of a field projection on a given vector, of a gradient of the field modulus or of the field energy along a given directing vector, of a field magnitude, of a magnetic flux through a given surface, of a scalar product of a field or a force with a directing function given in some area of space, etc. The synthesized MSs provide maximal efficiency of permanent magnet utilization. The proposed method of MS synthesis changes the design procedure in principle, namely, design proceeds according to the following scheme: (a) choose the sizes, form and number of blocks of a system based on technological (economic) reasons; (b) using the proposed synthesis method, find an orientation of block magnetization providing the maximum possible effect of magnet utilization in the system obtained in (a). Such an approach considerably reduces the time needed to design MSs and guarantees the maximum possible efficiency of magnet utilization. Besides, it provides absolute assurance of the 'ideality' of an MS design and allows one to obtain an exact estimate of the limit parameters of the field in the working area of a projected MS. The method is applicable to systems containing components made from soft magnetic material with linear magnetic properties

  11. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

    The maxim of politeness is an interesting subject to be discussed, since politeness has been instilled in us from childhood. We are obliged to be polite to anyone, either in speaking or in acting. Somehow we manage to show politeness in our spoken expression even though our intention might not be so polite; for example, we must appreciate others' opinions although we feel objections toward them. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversations; the most commonly used are the approbation maxim and the agreement maxim

  12. Sprint running: how changes in step frequency affect running mechanics and leg spring behaviour at maximal speed.

    Science.gov (United States)

    Monte, Andrea; Muollo, Valentina; Nardello, Francesca; Zamparo, Paola

    2017-02-01

    The purpose of this study was to investigate the changes in selected biomechanical variables in 80-m maximal sprint runs while imposing changes in step frequency (SF) and to investigate if these adaptations differ based on gender and training level. A total of 40 athletes (10 elite men and 10 women, 10 intermediate men and 10 women) participated in this study; they were requested to perform 5 trials at maximal running speed (RS): at the self-selected frequency (SF_s) and at SF_s ±15% and ±30%. Contact time (CT) and flight time (FT) as well as step length (SL) decreased with increasing SF, while k_vert increased with it. At SF_s, k_leg was the lowest (a 20% decrease at ±30%SF_s), while RS was the largest (a 12% decrease at ±30%SF_s). Only small changes (1.5%) in maximal vertical force (F_max) were observed as a function of SF, but maximum leg spring compression (ΔL) was largest at SF_s and decreased by about 25% at ±30%SF_s. Significant differences in F_max, Δy, k_leg and k_vert were observed as a function of skill and gender (P < 0.001). Our results indicate that RS is optimised at SF_s and that, while k_vert follows the changes in SF, k_leg is lowest at SF_s.

  13. Principal component and spatial correlation analysis of spectroscopic-imaging data in scanning probe microscopy

    International Nuclear Information System (INIS)

    Jesse, Stephen; Kalinin, Sergei V

    2009-01-01

    An approach for the analysis of multi-dimensional, spectroscopic-imaging data based on principal component analysis (PCA) is explored. PCA selects and ranks relevant response components based on variance within the data. It is shown that for examples with small relative variations between spectra, the first few PCA components closely coincide with results obtained using model fitting, and this is achieved at rates approximately four orders of magnitude faster. For cases with strong response variations, PCA allows an effective approach to rapidly process, de-noise, and compress data. The prospects for PCA combined with correlation function analysis of component maps as a universal tool for data analysis and representation in microscopy are discussed.
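
    A hedged sketch of the de-noising/compression use of PCA described above, not the authors' code: unfold a (x, y, spectrum) data cube into a matrix, keep the first k principal components, and fold the rank-k reconstruction back. Shapes and data are placeholders.

        import numpy as np

        def pca_compress(cube: np.ndarray, k: int) -> np.ndarray:
            """cube: (nx, ny, n_spec) spectroscopic-imaging data; keep k components."""
            nx, ny, ns = cube.shape
            X = cube.reshape(nx * ny, ns)
            mean = X.mean(axis=0)
            U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
            X_k = (U[:, :k] * S[:k]) @ Vt[:k] + mean    # rank-k approximation
            return X_k.reshape(nx, ny, ns)

        cube = np.random.default_rng(1).normal(size=(32, 32, 200))
        print(pca_compress(cube, k=4).shape)            # (32, 32, 200)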

  14. RE Rooted in Principal's Biography

    NARCIS (Netherlands)

    ter Avest, Ina; Bakker, C.

    2017-01-01

    Critical incidents in the biographies of principals appear to steer their innovative ways of constructing InterReligious Education in their schools. In this contribution, the authors present the biographical narratives of 4 principals: 1 principal introducing interreligious education in a

  15. Optimizing pulse compressibility in completely all-fibered Ytterbium chirped pulse amplifiers for in vivo two photon laser scanning microscopy.

    Science.gov (United States)

    Fernández, A; Grüner-Nielsen, L; Andreana, M; Stadler, M; Kirchberger, S; Sturtzel, C; Distel, M; Zhu, L; Kautek, W; Leitgeb, R; Baltuska, A; Jespersen, K; Verhoef, A

    2017-08-01

    A simple, completely all-fiber Yb chirped pulse amplifier that uses a dispersion-matched fiber stretcher and a spliced-on hollow-core photonic bandgap fiber compressor is applied in nonlinear optical microscopy. This stretching-compression approach improves compressibility and helps to maximize the fluorescence signal in two-photon laser scanning microscopy as compared with approaches that use standard single-mode fibers as stretchers. We also show that in femtosecond all-fiber systems, compensation of higher-order dispersion terms is relevant even for pulses with relatively narrow bandwidths for applications relying on nonlinear optical effects. The completely all-fiber system was applied to image green fluorescent beads, a stained lily-of-the-valley root and rat-tail tendon. We also demonstrate in vivo imaging in zebrafish larvae, where we simultaneously measure second harmonic and fluorescence from two-photon excited red-fluorescent protein. Since the pulses are compressed in a fiber, this source is especially suited for upgrading existing laser scanning (confocal) microscopes with multiphoton imaging capabilities in space-restricted settings or for incorporation in endoscope-based microscopy.

  16. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  17. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
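
    A toy LZ78 coder, a sketch of the underlying algorithm rather than the authors' implementation: each new phrase is emitted as (index of its longest previously seen prefix, next character), which is the dictionary growth that such compressed string dictionaries build on.

        def lz78_compress(text: str):
            dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
            output, phrase = [], ""
            for ch in text:
                if phrase + ch in dictionary:
                    phrase += ch          # extend the current phrase
                else:
                    output.append((dictionary[phrase], ch))
                    dictionary[phrase + ch] = len(dictionary)
                    phrase = ""
            if phrase:                    # flush a trailing phrase (prefixes are always in dict)
                output.append((dictionary[phrase[:-1]], phrase[-1]))
            return output

        print(lz78_compress("abababababa"))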

  18. SU-E-I-58: Objective Models of Breast Shape Undergoing Mammography and Tomosynthesis Using Principal Component Analysis.

    Science.gov (United States)

    Feng, Ssj; Sechopoulos, I

    2012-06-01

    To develop an objective model of the shape of the compressed breast undergoing mammographic or tomosynthesis acquisition. Automated thresholding and edge detection were performed on 984 anonymized digital mammograms (492 craniocaudal (CC) view mammograms and 492 mediolateral oblique (MLO) view mammograms) to extract the edge of each breast. Principal Component Analysis (PCA) was performed on these edge vectors to identify a limited set of parameters and eigenvectors that describe the variation in breast shape. These parameters and eigenvectors comprise a model that can be used to describe the breast shapes present in acquired mammograms and to generate realistic models of breasts undergoing acquisition. Sample breast shapes were then generated from this model and evaluated. The mammograms in the database were previously acquired for a separate study and authorized for use in further research. The PCA successfully identified two principal components and their corresponding eigenvectors, forming the basis for the breast shape model. The simulated breast shapes generated from the model are reasonable approximations of clinically acquired mammograms. Using PCA, we have obtained models of the compressed breast undergoing mammographic or tomosynthesis acquisition based on objective analysis of a large image database. Up to now, the breast in the CC view has been approximated as a semi-circular tube, while there has been no objectively-obtained model for the MLO view breast shape. Such models can be used for various breast imaging research applications, such as x-ray scatter estimation and correction, dosimetry estimates, and computer-aided detection and diagnosis. © 2012 American Association of Physicists in Medicine.
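
    A sketch, under assumed array shapes, of how such a PCA shape model can generate new edges: mean edge plus a weighted sum of the leading eigenvectors, with weights drawn within the observed spread. The edge vectors here are random placeholders, not mammographic data.

        import numpy as np

        def fit_shape_model(edges: np.ndarray, k: int = 2):
            """edges: (n_images, n_edge_points) matrix of extracted breast edges."""
            mean = edges.mean(axis=0)
            U, S, Vt = np.linalg.svd(edges - mean, full_matrices=False)
            std = S[:k] / np.sqrt(len(edges) - 1)    # per-component standard deviation
            return mean, Vt[:k], std

        def sample_shape(mean, components, std, rng):
            weights = rng.normal(0.0, std)           # one weight per principal component
            return mean + weights @ components

        rng = np.random.default_rng(2)
        edges = rng.normal(size=(492, 100))          # placeholder for 492 CC-view edge vectors
        mean, comps, std = fit_shape_model(edges, k=2)
        print(sample_shape(mean, comps, std, rng).shape)   # (100,)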

  19. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

    The naturalness of maximal mixing between muon and tau neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal νμ-ντ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  20. Thermophysical properties of liquid carbon dioxide under shock compressions: quantum molecular dynamic simulations.

    Science.gov (United States)

    Wang, Cong; Zhang, Ping

    2010-10-07

    Quantum molecular dynamics simulations were used to calculate the equation of state and the electrical and optical properties of liquid carbon dioxide along the Hugoniot at shock pressures up to 74 GPa. The principal Hugoniot derived from the calculated equation of state is in good agreement with experimental results. Molecular dissociation and recombination are investigated through pair correlation functions; decomposition of carbon dioxide is found to occur between 40 and 50 GPa along the Hugoniot, where a nonmetal-to-metal transition is observed. In addition, the optical properties of shock-compressed carbon dioxide are theoretically predicted along the Hugoniot.

  1. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7 .

  2. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  3. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  4. Redesigning Principal Internships: Practicing Principals' Perspectives

    Science.gov (United States)

    Anast-May, Linda; Buckner, Barbara; Geer, Gregory

    2011-01-01

    Internship programs too often do not provide the types of experiences that effectively bridge the gap between theory and practice and prepare school leaders who are capable of leading and transforming schools. To help address this problem, the current study is directed at providing insight into practicing principals' views of the types of…

  5. The Future of Principal Evaluation

    Science.gov (United States)

    Clifford, Matthew; Ross, Steven

    2012-01-01

    The need to improve the quality of principal evaluation systems is long overdue. Although states and districts generally require principal evaluations, research and experience tell that many state and district evaluations do not reflect current standards and practices for principals, and that evaluation is not systematically administered. When…

  6. Preparing Principals as Instructional Leaders: Perceptions of University Faculty, Expert Principals, and Expert Teacher Leaders

    Science.gov (United States)

    Taylor Backor, Karen; Gordon, Stephen P.

    2015-01-01

    Although research has established links between the principal's instructional leadership and student achievement, there is considerable concern in the literature concerning the capacity of principal preparation programs to prepare instructional leaders. This study interviewed educational leadership faculty as well as expert principals and teacher…

  7. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold compression at room temperature and in hot compression (e.g., near the glass transition temperature) share a common nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former relative to the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression for a fundamental understanding of HDA silica.

  8. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  9. School Principals' Emotional Coping Process

    Science.gov (United States)

    Poirel, Emmanuel; Yvon, Frédéric

    2014-01-01

    The present study examines the emotional coping of school principals in Quebec. Emotional coping was measured by stimulated recall; six principals were filmed during a working day and presented a week later with their video showing stressful encounters. The results show that school principals experience anger because of reproaches from staff…

  10. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with insufficient compression depth.

  11. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  12. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
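
    A generic sketch of the sparsity-exploiting reconstruction idea surveyed above, here plain iterative soft-thresholding (ISTA) on a random sensing matrix rather than any scanner's actual algorithm: a sparse x is recovered from m << n measurements y = Ax.

        import numpy as np

        def ista(A, y, lam=0.05, steps=500):
            """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                g = A.T @ (A @ x - y)              # gradient of the data-fit term
                x = x - g / L
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
            return x

        rng = np.random.default_rng(3)
        n, m, k = 200, 60, 5
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x_hat = ista(A, A @ x_true)
        print("recovery error:", np.linalg.norm(x_hat - x_true))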

  13. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method; the exact original mammography image data can be recovered. In this project, mammography images were digitized using a Vider Sierra Plus digitizer, and the digitized images were compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software was used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)
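
    The project itself used IDL; as a hedged illustration of the decomposition/reconstruction step, the sketch below uses Python with PyWavelets (an assumed substitute). A production lossless coder would pair an integer wavelet transform with entropy coding; the floating-point transform here is reversible only up to round-off.

        import numpy as np
        import pywt

        image = np.random.default_rng(4).integers(0, 4096, size=(256, 256)).astype(float)
        coeffs = pywt.wavedec2(image, wavelet="bior2.2", level=3)   # multilevel 2-D DWT
        restored = pywt.waverec2(coeffs, wavelet="bior2.2")         # inverse transform
        print("max reconstruction error:", np.abs(restored - image).max())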

  14. Principal Time Management Skills: Explaining Patterns in Principals' Time Use, Job Stress, and Perceived Effectiveness

    Science.gov (United States)

    Grissom, Jason A.; Loeb, Susanna; Mitani, Hajime

    2015-01-01

    Purpose: Time demands faced by school principals make principals' work increasingly difficult. Research outside education suggests that effective time management skills may help principals meet job demands, reduce job stress, and improve their performance. The purpose of this paper is to investigate these hypotheses. Design/methodology/approach:…

  15. Haemodynamic Performance of Low Strength Below Knee Graduated Elastic Compression Stockings in Health, Venous Disease, and Lymphoedema.

    Science.gov (United States)

    Lattimer, C R; Kalodiki, E; Azzam, M; Geroulakos, G

    2016-07-01

    To test the in vivo haemodynamic performance of graduated elastic compression (GEC) stockings using air-plethysmography (APG) in healthy volunteers (controls) and patients with varicose veins (VVs), post-thrombotic syndrome (PTS), or lymphoedema. Responsiveness data were used to determine which group benefited the most from GEC. There were 12 patients per group compared using no compression, knee-length Class 1 (18-21 mmHg) compression, and Class 2 (23-32 mmHg) compression. Stocking/leg interface pressures (mmHg) were measured supine in two places using an air-sensor transducer. Stocking performance parameters, investigated before and after GEC, included the standard APG tests (working venous volume [wVV], venous filling index [VFI], venous drainage index [VDI], ejection fraction [EF]) and the occlusion plethysmography tests (incremental pressure causing the maximal increase in calf volume [IPMIV], outflow fraction [OF]). Results were expressed as median and interquartile range. Significant graduated compression was achieved in all four groups with higher interface pressures at the ankle. Only the VVs patients had a significant reduction in their wVV (without: 133 [109-146] vs. class1: 93 [74-113] mL) and the VFI (without: 4.6 [3-7.1] vs. class1: 3.1 [1.9-5] mL/s), both at p <.05. The IPMIV improved significantly in all groups except in the PTS group (p <.05). The OF improved only in the controls (without: 43 [38-51] vs. class1: 50 [48-53] %) and the VVs patients (without: 47 [39-58] vs. class1: 56 [50-64] %), both at p <.05. There were no significant differences in the VDI or the EF with GEC. Compression dose-response relationships were not observed. Patients with varicose veins improved the most, whereas those with PTS improved the least. Performance seemed to depend more on disease pathophysiology than compression strength. However, the lack of responsiveness to compression strength may be related to the low external pressures used. Stocking performance tests

  16. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  17. Principal Self-Efficacy, Teacher Perceptions of Principal Performance, and Teacher Job Satisfaction

    Science.gov (United States)

    Evans, Molly Lynn

    2016-01-01

    In public schools, the principal's role is of paramount importance in influencing teachers to excel and to keep their job satisfaction high. The self-efficacy of leaders is an important characteristic of leadership, but this issue has not been extensively explored in school principals. Using internet-based questionnaires, this study obtained…

  18. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  19. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
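
    A hedged sketch of the quantity these two papers maximize: the entropy rate of a Markov chain, H = -Σ_i π_i Σ_j P_ij log2 P_ij, with π the stationary distribution. The papers maximize this over an Interval Markov Chain; the sketch only evaluates it for one fixed chain.

        import numpy as np

        def entropy_rate(P: np.ndarray) -> float:
            """Entropy rate in bits per step of an irreducible chain with matrix P."""
            eigvals, eigvecs = np.linalg.eig(P.T)
            pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
            pi = pi / pi.sum()                           # stationary distribution
            logs = np.log2(np.where(P > 0, P, 1.0))      # zero-probability terms contribute 0
            return float(-np.sum(pi[:, None] * P * logs))

        P = np.array([[0.9, 0.1],
                      [0.5, 0.5]])
        print(f"entropy rate: {entropy_rate(P):.4f} bits/step")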

  20. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features yielded a set of mean ratings for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  1. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000.Reimbursement maximalThe revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.Fixed contributionsThe fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions):voluntarily insured member of the personnel, with complete coverage:815,- (was 803,- in 1999)voluntarily insured member of the personnel, with reduced coverage:407,- (was 402,- in 1999)voluntarily insured no longer dependent child:326,- (was 321...

  2. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  3. Legal Problems of the Principal.

    Science.gov (United States)

    Stern, Ralph D.; And Others

    The three talks included here treat aspects of the law--tort liability, student records, and the age of majority--as they relate to the principal. Specifically, the talk on torts deals with the consequences of principal negligence in the event of injuries to students. Assurance is given that a reasonable and prudent principal will have a minimum…

  4. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data are very sensitive and cannot tolerate any illegal change; analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect, as well as recover, illegal changes made to teleradiology images. Watermarking medical images with heavy-payload watermarks causes image perceptual degradation, which directly affects medical diagnosis. To maintain standard image perceptual and diagnostic quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and the image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
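
    A toy LZW compressor sketching the technique named above (not the paper's implementation): the ROI-plus-key watermark byte string is mapped to a shorter sequence of dictionary codes before embedding. The payload below is hypothetical.

        def lzw_compress(data: bytes) -> list[int]:
            dictionary = {bytes([i]): i for i in range(256)}   # all single bytes
            phrase, codes = b"", []
            for b in data:
                candidate = phrase + bytes([b])
                if candidate in dictionary:
                    phrase = candidate
                else:
                    codes.append(dictionary[phrase])
                    dictionary[candidate] = len(dictionary)    # grow the dictionary
                    phrase = bytes([b])
            if phrase:
                codes.append(dictionary[phrase])
            return codes

        watermark = b"ROI-bytes-ROI-bytes-ROI-bytes" * 4       # hypothetical payload
        print(len(watermark), "bytes ->", len(lzw_compress(watermark)), "codes")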

  5. The effects of transverse rotation angle on compression and effective lever arm of prosthetic feet during simulated stance.

    Science.gov (United States)

    Major, Matthew J; Howard, David; Jones, Rebecca; Twiste, Martin

    2012-06-01

    Unlike sagittal plane prosthesis alignment, few studies have observed the effects of transverse plane alignment on gait and prosthesis behaviour. Changes in transverse plane rotation angle will rotate the points of loading on the prosthesis during stance and may alter its mechanical behaviour. This study observed the effects of increasing the external transverse plane rotation angle, or toe-out, on foot compression and effective lever arm of three commonly prescribed prosthetic feet. The roll-over shape of a SACH, Flex and single-axis foot was measured at four external rotation angle conditions (0°, 5°, 7° and 12° relative to neutral). Differences in foot compression between conditions were measured as average distance between roll-over shapes. Increasing the transverse plane rotation angle did not affect foot compression. However, it did affect the effective lever arm, which was maximized with the 5° condition, although differences between conditions were small. Increasing the transverse plane rotation angle of prosthetic feet by up to 12° beyond neutral has minimal effects on their mechanical behaviour in the plane of walking progression during weight-bearing.

  6. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction: The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods: A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.7%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and class II compression stockings. In the case of multi-layer compression, compression ensuring 40 mmHg pressure at ankle level was used. Results: In all patients, independently of the type of compression therapy, significant changes of ulceration area in time were observed (Student's t test for matched pairs, p < 0.05). The largest decrease of ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm2 per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm2 per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions: A systematic compression therapy, applied with an initial pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and prepared systems of multi-layer compression were characterized by similar clinical effectiveness. PMID:22419941

  7. Development of a ReaxFF reactive force field for ammonium nitrate and application to shock compression and thermal decomposition.

    Science.gov (United States)

    Shan, Tzu-Ray; van Duin, Adri C T; Thompson, Aidan P

    2014-02-27

    We have developed a new ReaxFF reactive force field parametrization for ammonium nitrate. Starting with an existing nitramine/TATB ReaxFF parametrization, we optimized it to reproduce electronic structure calculations for dissociation barriers, heats of formation, and crystal structure properties of ammonium nitrate phases. We have used it to predict the isothermal pressure-volume curve and the unreacted principal Hugoniot states. The predicted isothermal pressure-volume curve for phase IV solid ammonium nitrate agreed with electronic structure calculations and experimental data within 10% error for the considered range of compression. The predicted unreacted principal Hugoniot states were approximately 17% stiffer than experimental measurements. We then simulated thermal decomposition during heating to 2500 K. Thermal decomposition pathways agreed with experimental findings.

  8. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationships among quality indexes of chest compression and to identify the key points in chest compression training and practice. In total, 219 healthcare workers accepted chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hands placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for female or weak practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee enough compression depth and improve the quality of chest compression.

  9. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions along with defibrillation is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to the participants for self-training in their own time. The participants were tested for two-person CPR in pairs. The quantitative and qualitative data regarding the quality of CPR were collected from a standardised checklist and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students, including 116 men and 206 women, participated in the CPR contest. The mean depth and rate of chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compression depths were noted at rates over 120/min than at any other rates (47.0±7.4, 48.8±8.4, 52.3±6.7, p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206, p<0.05), and the quality of chest compression, including chest compression depth and chest recoil, differed by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete full chest wall recoil while maintaining an adequate chest compression depth. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  11. Principals Who Think Like Teachers

    Science.gov (United States)

    Fahey, Kevin

    2013-01-01

    Being a principal is a complex job, requiring quick, on-the-job learning. But many principals already have deep experience in a role at the very essence of the principalship. They know how to teach. In interviews with principals, Fahey and his colleagues learned that thinking like a teacher was key to their work. Part of thinking the way a teacher…

  12. Trust Me, Principal, or Burn Out! The Relationship between Principals' Burnout and Trust in Students and Parents

    Science.gov (United States)

    Ozer, Niyazi

    2013-01-01

    The purpose of this study was to determine the primary school principals' views on trust in students and parents and also, to explore the relationships between principals' levels of professional burnout and their trust in students and parents. To this end, Principal Trust Survey and Friedman Principal Burnout scales were administered on 119…

  13. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

    The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.
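
    A small sketch of the incoherence argument above: the mutual coherence mu(Phi, Psi) = sqrt(n) * max |<phi_i, psi_j>| between a measurement basis and a sparsifying basis ranges from 1 (maximally incoherent, which noiselet/wavelet pairs approach) up to sqrt(n). Random orthobases stand in here for the noiselet and wavelet matrices the paper actually analyses.

        import numpy as np

        def mutual_coherence(Phi: np.ndarray, Psi: np.ndarray) -> float:
            """mu = sqrt(n) * max |<phi_i, psi_j>| for orthobases given as rows."""
            n = Phi.shape[0]
            return float(np.sqrt(n) * np.abs(Phi @ Psi.T).max())

        rng = np.random.default_rng(5)
        n = 64
        Q1, _ = np.linalg.qr(rng.normal(size=(n, n)))  # random orthonormal columns
        Q2, _ = np.linalg.qr(rng.normal(size=(n, n)))
        Phi, Psi = Q1.T, Q2.T                          # rows are basis vectors
        print(f"mu(Phi, Psi) = {mutual_coherence(Phi, Psi):.2f} (range 1 to {np.sqrt(n):.0f})")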

  14. Structural biomechanics of the craniomaxillofacial skeleton under maximal masticatory loading: Inferences and critical analysis based on a validated computational model.

    Science.gov (United States)

    Pakdel, Amir R; Whyne, Cari M; Fialkov, Jeffrey A

    2017-06-01

    The trend towards optimizing stabilization of the craniomaxillofacial skeleton (CMFS) with the minimum amount of fixation required to achieve union, and away from maximizing rigidity, requires a quantitative understanding of craniomaxillofacial biomechanics. This study uses computational modeling to quantify the structural biomechanics of the CMFS under maximal physiologic masticatory loading. Using an experimentally validated subject-specific finite element (FE) model of the CMFS, the patterns of stress and strain distribution as a result of physiological masticatory loading were calculated. The trajectories of the stresses were plotted to delineate compressive and tensile regimes over the entire CMFS volume. The lateral maxilla was found to be the primary vertical buttress under maximal bite force loading, with much smaller involvement of the naso-maxillary buttress. There was no evidence that the pterygo-maxillary region is a buttressing structure, counter to classical buttress theory. The stresses at the zygomatic sutures suggest that two-point fixation of zygomatic complex fractures may be sufficient for fixation under bite force loading. The current experimentally validated biomechanical FE model of the CMFS is a practical tool for in silico optimization of current practice techniques and may be used as a foundation for the development of design criteria for future technologies for the treatment of CMFS injury and disease. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. Quasi-isentropic compressibility of a strongly nonideal deuterium plasma at pressures of up to 5500 GPa: Nonideality and degeneracy effects

    Energy Technology Data Exchange (ETDEWEB)

    Mochalov, M. A., E-mail: postmaster@ifv.vniief.ru; Il’kaev, R. I. [Russian Federal Nuclear Center All-Russia Research Institute for Nuclear Physics (Russian Federation); Fortov, V. E. [Russian Academy of Sciences, Joint Institute for High Temperatures (Russian Federation); Mikhailov, A. L.; Blikov, A. O.; Ogorodnikov, V. A. [Russian Federal Nuclear Center All-Russia Research Institute for Nuclear Physics (Russian Federation); Gryaznov, V. K. [Russian Academy of Sciences, Institute for Problems of Chemical Physics (Russian Federation); Iosilevskii, I. L. [Russian Academy of Sciences, Joint Institute for High Temperatures (Russian Federation)

    2017-03-15

    We report on the experimental results on the quasi-isentropic compressibility of a strongly nonideal deuterium plasma that have been obtained on setups of cylindrical and spherical geometries in the pressure range of up to P ≈ 5500 GPa. We describe the characteristics of experimental setups, as well as the methods for the diagnostics and interpretation of the experimental results. The trajectory of metal shells that compress the deuterium plasma was detected using powerful pulsed X-ray sources with a maximal electron energy of up to 60 MeV. The values of the plasma density, which varied from ρ ≈ 0.8 g/cm³ to ρ ≈ 6 g/cm³, which corresponds to pressure P ≈ 5500 GPa (55 Mbar), were determined from the measured value of the shell radius at the instant that it was stopped. The pressure of the compressed plasma was determined using gasdynamic calculations taking into account the actual characteristics of the experimental setups. We have obtained a strongly compressed deuterium plasma in which electron degeneracy effects under the conditions of strong interparticle interaction are significant. The experimental results have been compared with the theoretical models of a strongly nonideal partly degenerate plasma. We have obtained experimental confirmation of the plasma phase transition in the pressure range near 150 GPa (1.5 Mbar), which is in keeping with the conclusion concerning anomaly in the compressibility of the deuterium plasma drawn in [1].

  16. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to grasp the difference in chest compression accuracy between the modified chest compression method with the use of a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate and completed the CPR curriculum, 64 took part after 6 absences. Participants using the modified chest compression method were assigned to the smartphone group (33 people), and those using the standardized chest compression method to the traditional group (31 people). Both groups used the same manikins for practice and for evaluation. The smartphone group used applications running on the Android and iOS operating systems (OS) of 2 smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012. Data analysis used the SPSS WIN 12.0 program. As a result, compression depth was more appropriate (p< 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm). The percentage of proper chest compressions was also higher (p< 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) had higher awareness (p< 0.001) than the smartphone group (2.32 points). In an additional one-question survey administered only to the smartphone group, the main reasons given against the modified chest compression method with the use of a smartphone were the occurrence of hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  17. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces all indices of multicollinearity diagnostics, the basic principle of principal component regression, and the method for determining the 'best' equation. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster and accurate statistical analysis is achieved through principal component regression with SPSS.
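
    A modern equivalent of the workflow the paper walks through, sketched in Python with scikit-learn rather than SPSS 10.0 (an assumption of tooling, not a reproduction of the paper): standardize, project onto the leading principal components, then regress, which sidesteps the multicollinearity among predictors.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(6)
        X = rng.normal(size=(100, 5))
        X[:, 4] = X[:, 0] + 0.01 * rng.normal(size=100)   # near-collinear predictor
        y = X[:, 0] + 2 * X[:, 1] + rng.normal(size=100)

        # Principal component regression: scale -> PCA -> ordinary least squares.
        pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
        pcr.fit(X, y)
        print("R^2 on training data:", pcr.score(X, y))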

  18. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)

  19. Exploring the Impact of Applicants' Gender and Religion on Principals' Screening Decisions for Assistant Principal Applicants

    Science.gov (United States)

    Bon, Susan C.

    2009-01-01

    In this experimental study, a national random sample of high school principals (stratified by gender) were asked to evaluate hypothetical applicants whose resumes varied by religion (Jewish, Catholic, nondenominational) and gender (male, female) for employment as assistant principals. Results reveal that male principals rate all applicants higher…

  20. Maximizing Power Output in Homogeneous Charge Compression Ignition (HCCI) Engines and Enabling Effective Control of Combustion Timing

    Science.gov (United States)

    Saxena, Samveg

    Homogeneous Charge Compression Ignition (HCCI) engines are one of the most promising engine technologies for the future of energy conversion from clean, efficient combustion. HCCI engines allow high efficiency and lower CO2 emission through the use of high compression ratios and the removal of intake throttle valves (like Diesel), and allow very low levels of urban pollutants like nitric oxide and soot (like Otto). These engines, however, are not without their challenges, such as low power density compared with other engine technologies, and a difficulty in controlling combustion timing. This dissertation first addresses the power output limits. The particular strategies for enabling high power output investigated in this dissertation focus on avoiding five critical limits that either damage an engine, drastically reduce efficiency, or drastically increase emissions: (1) ringing limits, (2) peak in-cylinder pressure limits, (3) misfire limits, (4) low intake temperature limits, and (5) excessive emissions limits. The research shows that the key factors that enable high power output, sufficient for passenger vehicles, while simultaneously avoiding the five limits defined above are the use of: (1) high intake air pressures allowing improved power output, (2) highly delayed combustion timing to avoid ringing limits, and (3) using the highest possible equivalence ratio before encountering ringing limits. These results are revealed by conducting extensive experiments spanning a wide range of operating conditions on a multi-cylinder HCCI engine. Second, this dissertation discusses strategies for effectively sensing combustion characteristics on a HCCI engine. For effective feedback control of HCCI combustion timing, a sensor is required to quantify when combustion occurs. Many laboratory engines use in-cylinder pressure sensors but these sensors are currently prohibitively expensive for wide-scale commercialization. Instead, ion sensors made from inexpensive sparkplugs

  1. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  2. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  3. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
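
    For the simplest case the paper generalizes (Gaussian data whose mean depends on the parameters but whose covariance does not), the score compression reduces to a linear projection of the data. A minimal numpy sketch (illustrative only; the function and the toy template are not the authors' code):

    ```python
    import numpy as np

    def score_compress(d, mu, dmu_dtheta, C):
        """Compress N data points to n summaries t_a = (dmu/dtheta_a)^T C^{-1} (d - mu).
        For Gaussian data with parameter-independent covariance this is the score
        function up to constants, and it preserves the Fisher information."""
        return dmu_dtheta @ np.linalg.solve(C, d - mu)   # shape (n_params,)

    # toy example: N = 100 data points, one parameter (amplitude of a template)
    N = 100
    template = np.sin(np.linspace(0.0, 4.0 * np.pi, N))
    theta_fid = 1.3
    C = 0.1 * np.eye(N)
    rng = np.random.default_rng(1)
    d = theta_fid * template + rng.multivariate_normal(np.zeros(N), C)

    t = score_compress(d, theta_fid * template, template[None, :], C)
    print(t)   # one summary carrying the Fisher information about the amplitude
    ```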

  4. Rheological-dynamical continuum damage model for concrete under uniaxial compression and its experimental verification

    Directory of Open Access Journals (Sweden)

    Milašinović Dragan D.

    2015-01-01

    Full Text Available A new analytical model for the prediction of concrete response under uniaxial compression and its experimental verification is presented in this paper. The proposed approach, referred to as the rheological-dynamical continuum damage model, combines rheological-dynamical analogy and damage mechanics. Within the framework of this approach the key continuum parameters such as the creep coefficient, Poisson’s ratio and damage variable are functionally related. The critical values of the creep coefficient and damage variable under peak stress are used to describe the failure mode of the concrete cylinder. The ultimate strain is determined in the post-peak regime only, using the secant stress-strain relation from damage mechanics. The post-peak branch is used for the energy analysis. Experimental data for five concrete compositions were obtained during the examination presented herein. The principal difference between compressive failure and tensile fracture is that there is a residual stress in the specimens, which is a consequence of uniformly accelerated motion of load during the examination of compressive strength. The critical interpenetration displacements and crushing energy are obtained theoretically based on the concept of global failure analysis. [Project of the Ministry of Science of the Republic of Serbia, No. ON 174027: Computational Mechanics in Structural Engineering, and No. TR 36017: Utilization of by-products and recycled waste materials in concrete composites for sustainable construction development in Serbia: Investigation and environmental assessment of possible applications]

  5. Principals' Perceptions of Politics

    Science.gov (United States)

    Tooms, Autumn K.; Kretovics, Mark A.; Smialek, Charles A.

    2007-01-01

    This study is an effort to examine principals' perceptions of workplace politics and its influence on their productivity and efficacy. A survey was used to explore the perceptions of current school administrators with regard to workplace politics. The instrument was disseminated to principals serving public schools in one Midwestern state in the…

  6. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  7. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)

  8. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we will investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we will select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we will select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
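
    A minimal sketch of step one of this scenario, assuming Pillow and scikit-image are available and a hypothetical input file scene.png: compress at several values of the JPEG quality parameter and record (parameter, compression ratio, IQ) triples to which the regression models of step two could then be fitted.

    ```python
    import io
    import numpy as np
    from PIL import Image
    from skimage.metrics import peak_signal_noise_ratio

    img = Image.open("scene.png").convert("L")       # hypothetical grayscale input
    ref = np.asarray(img, dtype=np.float64)
    raw_bytes = ref.size                             # 1 byte per gray pixel

    samples = []
    for q in range(10, 100, 10):                     # sweep the JPEG quality parameter
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=q)
        dec = np.asarray(Image.open(buf), dtype=np.float64)
        psnr = peak_signal_noise_ratio(ref, dec, data_range=255)
        samples.append((q, raw_bytes / buf.tell(), psnr))  # (parameter, CR, IQ)

    for q, cr, psnr in samples:                      # data for the regression models
        print(f"quality={q:3d}  CR={cr:6.1f}  PSNR={psnr:5.2f} dB")
    ```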

  9. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques at higher compression levels because of the lower entropy and the significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
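
    The coefficient-zeroing idea can be sketched with PyWavelets (the library and the synthetic signal are assumptions; the report does not name an implementation):

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1024)
    # low-frequency "target signature" plus low-amplitude, high-frequency noise
    signal = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.05 * rng.normal(size=t.size)

    coeffs = pywt.wavedec(signal, "db4", level=5)      # low-/high-pass filter bank
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.percentile(np.abs(arr), 80)            # zero ~80% of the coefficients
    arr[np.abs(arr) < thresh] = 0.0                    # discard low-amplitude detail
    recon = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format="wavedec"),
                         "db4")[: signal.size]

    print("coefficients kept:", np.count_nonzero(arr), "of", arr.size)
    print("max reconstruction error:", float(np.max(np.abs(recon - signal))))
    ```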

  10. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
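
    The decompositions described here rest on the standard volume-fluctuation formula for the isothermal compressibility; written for a solution volume split into protein (P) and water (W) parts, the self and cross terms read as follows (an illustrative restatement, not the authors' notation):

    ```latex
    \kappa_T = \frac{\langle \delta V^2 \rangle}{k_B T \,\langle V \rangle},
    \qquad
    \langle \delta V^2 \rangle
      = \langle \delta V_P^2 \rangle
      + 2\,\langle \delta V_P\, \delta V_W \rangle
      + \langle \delta V_W^2 \rangle,
    \qquad V = V_P + V_W .
    ```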

  11. Compressibility of the protein-water interface.

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  12. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compressions. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study future challenges and directions in the compression of unstructured cosmological particle data were identified.
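
    A minimal benchmarking sketch in the spirit of this evaluation, using only Python's standard lzma module and synthetic, quantized particle positions (the study itself drives Blosc, XZ Utils, FPZIP, and ZFP through their own interfaces):

    ```python
    import lzma
    import time
    import numpy as np

    # synthetic stand-in for one time step: quantized particle positions
    # (quantization mimics the low-entropy structure a lossy pre-pass produces)
    rng = np.random.default_rng(0)
    particles = (rng.integers(0, 1024, size=(1_000_000, 3)) / 1024.0).astype(np.float32)
    raw = particles.tobytes()

    start = time.perf_counter()
    compressed = lzma.compress(raw, preset=1)    # low preset: throughput over ratio
    elapsed = time.perf_counter() - start

    print(f"compression ratio : {len(raw) / len(compressed):.2f}")
    print(f"throughput        : {len(raw) / elapsed / 1e6:.1f} MB/s")
    assert lzma.decompress(compressed) == raw    # lossless round trip
    ```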

  13. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)² gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)² and SO(4,4)², as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories

  14. Renewing the Principal Pipeline

    Science.gov (United States)

    Turnbull, Brenda J.

    2015-01-01

    The work principals do has always mattered, but as the demands of the job increase, it matters even more. Perhaps once they could maintain safety and order and call it a day, but no longer. Successful principals today must also lead instruction and nurture a productive learning community for students, teachers, and staff. They set the tone for the…

  15. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    Full Text Available BACKGROUND Diagnosing appendicitis by graded compression ultrasonography is a difficult task because of limiting factors such as operator-dependent technique, retrocaecal location of the appendix, and patient obesity. The posterior manual compression technique visualizes the appendix better on grey-scale ultrasonography. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of posterior manual compression in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded compression sonography. If the appendix was not visualized on graded compression sonography, the posterior manual compression technique was used to further improve detection of the appendix. RESULTS The vermiform appendix was visualized in 185 (77.1%) of the 240 patients with graded compression alone. The 55 patients whose appendix could not be visualized by graded compression alone underwent graded compression followed by the posterior manual compression technique; the appendix was then visualized in 43 of these 55 patients (78.2%), and could not be visualized in the remaining 12 (21.8%). CONCLUSION The combined method of graded compression with posterior manual compression is better than graded compression alone in diagnostic accuracy and detection rate of the vermiform appendix.

  16. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
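
    For reference, the rate-distortion function of a memoryless source to which the free energy difference is proportional is the standard one (a textbook definition, added here for completeness):

    ```latex
    R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\; \mathbb{E}\,[d(X,\hat{X})] \le D} \; I(X;\hat{X})
    ```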

  17. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing nonlinear viscoelastic properties of polymeric composites when subjected to dynamic loading is essential for development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient when the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these findings with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, it is proved that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.
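
    The Fourier-transform side of such an analysis amounts to inspecting higher harmonics of the stress response to a sinusoidal drive; a toy numpy sketch of extracting the third-to-first harmonic ratio (synthetic response, not the authors' data pipeline):

    ```python
    import numpy as np

    f0, fs, n = 1.0, 256.0, 4096                  # drive frequency, sample rate, samples
    t = np.arange(n) / fs
    # synthetic stress response: a third harmonic flags nonlinear viscoelasticity
    stress = np.sin(2 * np.pi * f0 * t + 0.3) + 0.08 * np.sin(2 * np.pi * 3 * f0 * t)

    spectrum = np.abs(np.fft.rfft(stress))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    i1 = np.argmin(np.abs(freqs - f0))            # fundamental bin
    i3 = np.argmin(np.abs(freqs - 3 * f0))        # third-harmonic bin
    print(f"I3/I1 = {spectrum[i3] / spectrum[i1]:.3f}")   # nonlinearity measure
    ```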

  18. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  19. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs on both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices are exploited: the deterministic quasi-cyclic array code (QCAC) matrix and the -sparse random binary matrix (-SRBM). We demonstrate that the proposed CS encoders lead to comparable recovery performance, and efficient VLSI architecture designs are proposed for the QCAC-CS and -SRBM encoders with reduced area and total power consumption.
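
    A sketch of CS encoding with a sparse binary measurement matrix (the fixed column-weight construction below is a generic stand-in, not the paper's exact QCAC or SRBM matrices):

    ```python
    import numpy as np

    def sparse_binary_matrix(m, n, d, rng):
        """Measurement matrix with exactly d ones per column, so computing
        y = Phi @ x costs only d additions per input sample -- the property
        that makes sparse matrices hardware-friendly compared to dense ones."""
        phi = np.zeros((m, n), dtype=np.int8)
        for col in range(n):
            phi[rng.choice(m, size=d, replace=False), col] = 1
        return phi

    rng = np.random.default_rng(0)
    n, m, d = 256, 64, 3                  # 4x compression, column weight 3
    phi = sparse_binary_matrix(m, n, d, rng)
    x = rng.standard_normal(n)            # one window of neural samples
    y = phi @ x                           # compressed measurements for the radio link
    print(y.shape, int(phi.sum()), "ones instead of", m * n)
    ```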

  20. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
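
    A toy illustration of referential encoding: represent the input as (reference position, match length) pairs plus literals for mismatches. This greedy, quadratic-time sketch only shows the encoding form; FRESCO's matching is far more sophisticated.

    ```python
    def referential_compress(seq: str, ref: str, min_match: int = 4):
        """Greedy referential encoder: emit (ref_pos, length) for matches
        against the reference, or ('lit', char) for mismatching symbols."""
        out, i = [], 0
        while i < len(seq):
            best_pos, best_len = -1, 0
            for j in range(len(ref)):                 # brute-force match search
                k = 0
                while j + k < len(ref) and i + k < len(seq) and ref[j + k] == seq[i + k]:
                    k += 1
                if k > best_len:
                    best_pos, best_len = j, k
            if best_len >= min_match:
                out.append((best_pos, best_len))      # copy from the reference
                i += best_len
            else:
                out.append(("lit", seq[i]))           # store the mismatch verbatim
                i += 1
        return out

    ref = "ACGTACGTTTGACCA"
    seq = "ACGTACGATTGACCA"                           # one substitution vs the reference
    print(referential_compress(seq, ref))             # [(0, 7), ('lit', 'A'), (8, 7)]
    ```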

  1. Principal Ports

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Principal Ports are defined by port limits or US Army Corps of Engineers (USACE) projects, these exclude non-USACE projects not authorized for publication. The...

  2. Perceptions of Beginning Public School Principals.

    Science.gov (United States)

    Lyons, James E.

    1993-01-01

    Summarizes a study to determine principal's perceptions of their competency in primary responsibility areas and their greatest challenges and frustrations. Beginning principals are challenged by delegating responsibilities and becoming familiar with the principal's role, the local school, and school operations. Their major frustrations are role…

  3. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
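
    The compression-ratio similarity measure can be mimicked with an off-the-shelf compressor (a crude stand-in for CompressEdge and CompressVertices, shown only to make the measure concrete; this is essentially the normalized compression distance applied to serialized edge lists):

    ```python
    import zlib

    def blob(edges):
        return "\n".join(sorted(f"{u} {v}" for u, v in edges)).encode()

    def compression_distance(e1, e2):
        """Normalized compression distance on serialized edge lists: the
        concatenation of similar networks compresses nearly as well as one."""
        c1 = len(zlib.compress(blob(e1)))
        c2 = len(zlib.compress(blob(e2)))
        c12 = len(zlib.compress(blob(e1) + b"\n" + blob(e2)))
        return (c12 - min(c1, c2)) / max(c1, c2)

    net_a = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "fbp")]
    net_b = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "f1p")]   # one edge differs
    net_c = [("x1", "x2"), ("x2", "x3"), ("x3", "x4")]          # unrelated network
    print(compression_distance(net_a, net_b))   # small: mostly shared structure
    print(compression_distance(net_a, net_c))   # larger: little shared structure
    ```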

  4. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4(d) values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
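
    The scheme described here is distributed as the zfp library; assuming its Python bindings (zfpy) are installed, fixed-rate mode looks roughly like this:

    ```python
    import numpy as np
    import zfpy   # Python bindings for the zfp compressor (assumed installed)

    data = np.random.default_rng(0).random((64, 64, 64))   # 3D float64 field

    # fixed-rate mode: a fixed budget of 8 compressed bits per value, which is
    # what enables random access at block granularity (4x4x4 blocks in 3D)
    compressed = zfpy.compress_numpy(data, rate=8.0)
    restored = zfpy.decompress_numpy(compressed)

    print("compression ratio:", data.nbytes / len(compressed))
    print("max abs error    :", float(np.max(np.abs(restored - data))))
    ```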

  5. Andragogical Practices of School Principals in Developing the Leadership Capacities of Assistant Principals

    Science.gov (United States)

    McDaniel, Luther

    2017-01-01

    The purpose of this mixed methods study was to assess school principals' perspectives of the extent to which they apply the principles of andragogy to the professional development of assistant principals in their schools. This study was conducted in school districts that constitute a RESA area in a southeastern state. The schools in these…

  6. Towards 4D intervention guidance using compressed sensing

    Energy Technology Data Exchange (ETDEWEB)

    Kuntz, Jan; Bartling, Soenke [Deutsches Krebsforschungszentrum DKFZ, Heidelberg (Germany); Brehm, Marcus; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    Interventional radiology is nowadays usually guided by projection radiography using mono- or biplane systems. Due to the projective nature of this guidance imaging, certain intraprocedural situations remain unclear. Although helpful, the use of 3D CT is limited by radiation dose. Using advanced reconstruction techniques incorporating prior knowledge, one could overcome these limitations without exceeding dose limits. Intervention guidance is especially well suited to such algorithms, because the constraints on useful guidance images differ markedly from those of other CT applications. These are: key relevance of high-contrast structures, sparse temporal updates, and little relevance of absolute CT values. In this paper the principal usability of reconstruction algorithms for intervention guidance is tested. The compressed sensing algorithms PICCS and ASD-POCS are compared to the McKinnon-Bates and Feldkamp-Davis-Kress algorithms. Animal experiments as well as simulations are performed. An outlook towards 4D intervention guidance is provided. (orig.)

  7. 41 CFR 105-68.995 - Principal.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Principal. 105-68.995 Section 105-68.995 Public Contracts and Property Management Federal Property Management Regulations System...-GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 105-68.995 Principal. Principal means— (a...

  8. Consequences of Laughter Upon Trunk Compression and Cortical Activation: Linear and Polynomial Relations

    Science.gov (United States)

    Svebak, Sven

    2016-01-01

    Results from two studies of biological consequences of laughter are reported. A proposed inhibitory brain mechanism was tested in Study 1. It aims to protect against trunk compression that can cause health hazards during vigorous laughter. Compression may be maximal during moderate durations and, for protective reasons, moderate in enduring vigorous laughs. Twenty-five university students volunteered to see a candid camera film. Laughter responses (LR) and the superimposed ha-responses were operationally assessed by mercury-filled strain gauges strapped around the trunk. On average, the thorax compression amplitudes exceeded those of the abdomen, and greater amplitudes were seen in the males than in the females after correction for resting trunk circumference. Regression analyses supported polynomial relations because medium LR durations were associated with particularly high thorax amplitudes. In Study 2, power changes were computed in the beta and alpha EEG frequency bands of the parietal cortex from before to after exposure to the comedy “Dinner for one” in 56 university students. Highly significant linear relations were calculated between the number of laughs and post-exposure cortical activation (increase of beta, decrease of alpha) due to high activation after frequent laughter. The results from Study 1 supported the hypothesis of a protective brain mechanism that is activated during long LRs to reduce the risk of harm to vital organs in the trunk cavity. The results in Study 2 supported a linear cortical activation and, thus, provided evidence for a biological correlate to the subjective experience of mental refreshment after laughter. PMID:27547260

  9. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. Photographs of 15 subjects, which included eyes with normal, subtle, and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images, and blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent of their original size for JPEG and 1.7 percent for wavelet compression before fine detail was lost or image quality was too poor to make a reliable diagnosis.
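
    A sketch of the objective part of this protocol: RMS error as a function of compressed size, shown here for JPEG only via Pillow with a hypothetical input file fundus.png (the wavelet codec used in the study is not publicly specified):

    ```python
    import io
    import numpy as np
    from PIL import Image

    img = Image.open("fundus.png").convert("L")   # hypothetical retinal photograph
    ref = np.asarray(img, dtype=np.float64)

    for quality in (95, 75, 50, 25, 10, 5):       # progressively stronger compression
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        dec = np.asarray(Image.open(buf), dtype=np.float64)
        rmse = np.sqrt(np.mean((ref - dec) ** 2))
        pct = 100.0 * buf.tell() / ref.size       # compressed size as % of raw bytes
        print(f"{pct:5.2f}% of original -> RMSE {rmse:6.2f}")
    ```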

  10. Sex differences in autonomic function following maximal exercise.

    Science.gov (United States)

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2015-01-01

    Heart rate variability (HRV), blood pressure variability, (BPV) and heart rate recovery (HRR) are measures that provide insight regarding autonomic function. Maximal exercise can affect autonomic function, and it is unknown if there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and the response following maximal exercise. Seventy-one (31 males and 40 females) healthy, nonsmoking, sedentary normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minute one and two following maximal exercise. Males have significantly greater HRR following maximal exercise at both minute one and two; however, the significance between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting BPV-low-frequency (LF) values compared to females and did not significantly change following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females exhibited a significant decrease in both HRV-LF and HRV-high frequency (HF) with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV-LF/HF ratio at rest; however, both males and females significantly increased their HRV-LF/HF ratio following exercise. Pre-menopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.

  11. Use of Bedside Compression Ultrasonography for Diagnosis of Deep Venous Thrombosis

    Directory of Open Access Journals (Sweden)

    Mohamad Moussa

    2017-07-01

    Full Text Available History of present illness: A 70-year-old female with a history of breast cancer and smoking for 50 years presented to the emergency department with left-lower extremity pain and swelling for two days. The patient denied recent long-distance travel, history of hypercoagulable disorder, or recent surgery. Physical examination revealed a warm, erythematous, 3+ edematous left-lower extremity with mild tenderness extending into the proximal thigh. Her D-dimer level was 2307 ng/mL and vital signs were significant for a heart rate of 110 bpm, oxygen saturation of 90% on 2 liters of oxygen, and blood pressure of 153/102. Significant findings: As shown in the still image of the performed ultrasound, a transverse view of the proximal thigh revealed a visible thrombus (green shading) occluding the lumen of the left common femoral vein (blue ring), which was non-compressible when direct pressure was applied to the probe. Also visible are a patent and compressible branch of the common femoral vein (purple ring) and the femoral artery (red ring), highlighted by its thick vessel wall and pulsatile motion. Discussion: Deep venous thrombosis (DVT) affects 1 per 1,000 individuals each year and may lead to complications such as recurrent DVT, pulmonary embolism, and death.1 The utilization of bedside compression ultrasonography allows for rapid diagnosis of DVT and has virtually replaced other diagnostic methods due to its non-invasive and inexpensive nature. When performing compression ultrasonography, the patient should be positioned to maximize distention of the leg veins. The extremity in question should be flexed at the knee and externally rotated at the hip (this fully exposes the common, superficial, and deep femoral veins as well as the popliteal fossa) and the head of the bed elevated at a 30-45 degree angle.2 In patients with an elevated D-dimer and low-to-moderate clinical probability, negative compression imaging of a single proximal location of the femoral

  12. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the image compression factors of JPEG, PNG, and the developed DCM was carried out. The main purpose of the DCM is the compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.
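
    The random-noise analysis mentioned in the last sentence reflects a general fact: noise sets the floor for lossless coding. A sketch with zlib (an illustration of the principle, not the DCM itself):

    ```python
    import zlib
    import numpy as np

    noise = np.random.default_rng(0).integers(0, 256, size=512 * 512,
                                              dtype=np.uint8).tobytes()
    flat = bytes(512 * 512)                     # maximally redundant "image"

    # random noise is essentially incompressible; structure compresses well
    print("noise ratio:", len(noise) / len(zlib.compress(noise, 9)))   # ~1.0
    print("flat  ratio:", len(flat) / len(zlib.compress(flat, 9)))     # very large
    ```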

  13. Eccentric exercise decreases maximal insulin action in humans

    DEFF Research Database (Denmark)

    Asp, Svend; Daugaard, J R; Kristiansen, S

    1996-01-01

    subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake...... for all three clamp steps used (P < 0.05), whereas the maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR......) necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%, 81.3 +/- 3.2 vs. 96.4 +/- 8.8 mumol kg-1 min-1, P < 0.05)... maximal...

  14. Principal-Counselor Collaboration and School Climate

    Science.gov (United States)

    Rock, Wendy D.; Remley, Theodore P.; Range, Lillian M.

    2017-01-01

    Examining whether principal-counselor collaboration and school climate were related, researchers sent 4,193 surveys to high school counselors in the United States and received 419 responses. As principal-counselor collaboration increased, there were increases in counselors viewing the principal as supportive, the teachers as regarding one another…

  15. 12 CFR 561.39 - Principal office.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Principal office. 561.39 Section 561.39 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY DEFINITIONS FOR REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 561.39 Principal office. The term principal office means the home...

  16. Teacher Supervision Practices and Principals' Characteristics

    Science.gov (United States)

    April, Daniel; Bouchamma, Yamina

    2015-01-01

    A questionnaire was used to determine the individual and collective teacher supervision practices of school principals and vice-principals in Québec (n = 39) who participated in a research-action study on pedagogical supervision. These practices were then analyzed in terms of the principals' sociodemographic and socioprofessional characteristics…

  17. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  18. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  19. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is constrained by two correlated factors: resource-effective content compression and its direct influence on diagnostic credibility. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  20. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times, approximately 200μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  1. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
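
    One calculus-free argument of the kind surveyed is completing the square (a standard proof restated here, not necessarily one of the five in the article):

    ```latex
    x(a-x) \;=\; \frac{a^{2}}{4} - \left(x - \frac{a}{2}\right)^{2} \;\le\; \frac{a^{2}}{4},
    \qquad \text{with equality exactly when } x = \frac{a}{2}.
    ```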

  2. Developing Principal Instructional Leadership through Collaborative Networking

    Science.gov (United States)

    Cone, Mariah Bahar

    2010-01-01

    This study examines what occurs when principals of urban schools meet together to learn and improve their instructional leadership in collaborative principal networks designed to support, sustain, and provide ongoing principal capacity building. Principal leadership is considered second only to teaching in its ability to improve schools, yet few…

  3. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of the warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  4. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class

  5. Principal Stability and the Rural Divide

    Science.gov (United States)

    Pendola, Andrew; Fuller, Edward J.

    2018-01-01

    This article examines the unique features of the rural school context and how these features are associated with the stability of principals in these schools. Given the small but growing literature on the characteristics of rural principals, this study presents an exploratory analysis of principal stability across schools located in different…

  6. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black......, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...
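
    For concreteness, one of the 12 secure PGV constructions is the Davies-Meyer mode, f(h, m) = E_m(h) XOR h; a sketch using AES through pycryptodome (the cipher and library choice are assumptions for illustration):

    ```python
    from Crypto.Cipher import AES   # pycryptodome

    def davies_meyer(h: bytes, m: bytes) -> bytes:
        """One PGV-style compression step f(h, m) = E_m(h) XOR h: encrypt the
        chaining value under the message block, then feed the input forward."""
        assert len(h) == 16 and len(m) == 16    # AES block and key size
        ct = AES.new(m, AES.MODE_ECB).encrypt(h)
        return bytes(a ^ b for a, b in zip(ct, h))

    h0 = bytes(16)                              # fixed initial chaining value
    block = b"sixteen byte msg"                 # exactly one 16-byte message block
    print(davies_meyer(h0, block).hex())
    ```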

  7. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...
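
    The bit-flip side of the first evaluation level is easy to make concrete; a sketch (illustrative, not the CEPRAM implementation) counting the PCM cell flips needed to overwrite a memory line:

    ```python
    def bit_flips(old: bytes, new: bytes) -> int:
        """Cell writes actually needed on PCM: only the differing bits."""
        return sum(bin(a ^ b).count("1") for a, b in zip(old, new))

    line_old = bytes(64)                          # 64-byte memory line, all zeros
    line_new = b"\xff" * 16 + bytes(48)           # hypothetical compressed payload
    print(bit_flips(line_old, line_new))          # 128 flips instead of up to 512
    ```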

  8. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  9. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the differences in outcome after treatment among hot herbal compress, hot compress, and topical diclofenac. Participants were equally divided into groups receiving the different treatments: a hot herbal compress group, a hot compress group, and a topical diclofenac group, which served as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used, respectively, to establish the level of pain intensity and quality of life. In addition, cervical range of motion and pressure pain threshold were also examined to identify the motional effects. All treatments showed a significantly decreased level of pain intensity and an increased cervical range of motion, while the intervention groups outperformed the topical diclofenac group in pressure pain threshold and quality of life. In summary, the hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.

  10. Objective models of compressed breast shapes undergoing mammography

    Science.gov (United States)

    Feng, Steve Si Jia; Patel, Bhavika; Sechopoulos, Ioannis

    2013-01-01

    Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six
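    The PCA pipeline summarized above (decompose landmark shape vectors, truncate to k components, reconstruct, score with an average distance error) can be sketched in a few lines. This is a hedged illustration on synthetic data: the array names, shape encoding and toy landmarks are assumptions, not the authors' dataset.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        n_shapes, n_landmarks = 200, 50
        # each row encodes one contour as (x1..xn, y1..yn); toy data here
        shapes = rng.normal(size=(n_shapes, 2 * n_landmarks))

        for k in (2, 4, 6):  # component counts examined in the study
            pca = PCA(n_components=k).fit(shapes)
            recon = pca.inverse_transform(pca.transform(shapes))
            dx = shapes[:, :n_landmarks] - recon[:, :n_landmarks]
            dy = shapes[:, n_landmarks:] - recon[:, n_landmarks:]
            ade = np.hypot(dx, dy).mean()  # mean per-landmark distance
            print(f"{k} components: variance kept = "
                  f"{pca.explained_variance_ratio_.sum():.3f}, ADE = {ade:.3f}")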

  11. Objective models of compressed breast shapes undergoing mammography

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Steve Si Jia [Department of Biomedical Engineering, Georgia Institute of Technology and Emory University and Department of Radiology and Imaging Sciences, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Patel, Bhavika [Department of Radiology and Imaging Sciences, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States); Sechopoulos, Ioannis [Departments of Radiology and Imaging Sciences, Hematology and Medical Oncology and Winship Cancer Institute, Emory University, 1701 Uppergate Drive Northeast, Suite 5018, Atlanta, Georgia 30322 (United States)

    2013-03-15

    Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six

  12. Objective models of compressed breast shapes undergoing mammography

    International Nuclear Information System (INIS)

    Feng, Steve Si Jia; Patel, Bhavika; Sechopoulos, Ioannis

    2013-01-01

    Purpose: To develop models of compressed breasts undergoing mammography based on objective analysis that are capable of accurately representing breast shapes in acquired clinical images and generating new, clinically realistic shapes. Methods: An automated edge detection algorithm was used to catalogue the breast shapes of clinically acquired cranio-caudal (CC) and medio-lateral oblique (MLO) view mammograms from a large database of digital mammography images. Principal component analysis (PCA) was performed on these shapes to reduce the information contained within the shapes to a small number of linearly independent variables. The breast shape models, one of each view, were developed from the identified principal components, and their ability to reproduce the shape of breasts from an independent set of mammograms not used in the PCA was assessed both visually and quantitatively by calculating the average distance error (ADE). Results: The PCA breast shape models of the CC and MLO mammographic views based on six principal components, in which 99.2% and 98.0%, respectively, of the total variance of the dataset is contained, were found to be able to reproduce breast shapes with strong fidelity (CC view mean ADE = 0.90 mm, MLO view mean ADE = 1.43 mm) and to generate new clinically realistic shapes. The PCA models based on fewer principal components were also successful, but to a lesser degree, as the two-component model exhibited a mean ADE = 2.99 mm for the CC view, and a mean ADE = 4.63 mm for the MLO view. The four-component models exhibited a mean ADE = 1.47 mm for the CC view and a mean ADE = 2.14 mm for the MLO view. Paired t-tests of the ADE values of each image between models showed that these differences were statistically significant (max p-value = 0.0247). Visual examination of modeled breast shapes confirmed these results. Histograms of the PCA parameters associated with the six principal components were fitted with Gaussian distributions. The six

  13. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed. Suitable data filing systems and accurate data compression are therefore needed. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven compression parameter modes. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using the CCD-sensor film reader in MediFile 1000. These images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed image in several regions of interest by four radiologists. Compression and decompression of radiological images were performed promptly by the local filing system. Image quality was affected much more by the compression ratio than by the parameter mode itself; in other words, the higher the compression ratio, the worse the image quality. However, image quality was not significantly degraded until the compression ratio reached about 15:1 for plain radiographs and about 8:1 for contrast studies. Image compression by this technique should be acceptable for diagnostic radiology. (author)

  14. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  15. A principal-agent Model of corruption

    NARCIS (Netherlands)

    Groenendijk, Nico

    1997-01-01

    One of the new avenues in the study of political corruption is that of neo-institutional economics, of which the principal-agent theory is a part. In this article a principal-agent model of corruption is presented, in which there are two principals (one of which is corrupting), and one agent (who is

  16. Mathematical theory of compressible viscous fluids analysis and numerics

    CERN Document Server

    Feireisl, Eduard; Pokorný, Milan

    2016-01-01

    This book offers an essential introduction to the mathematical theory of compressible viscous fluids. The main goal is to present analytical methods from the perspective of their numerical applications. Accordingly, we introduce the principal theoretical tools needed to handle well-posedness of the underlying Navier-Stokes system, study the problems of sequential stability, and, lastly, construct solutions by means of an implicit numerical scheme. Offering a unique contribution – by exploring in detail the “synergy” of analytical and numerical methods – the book offers a valuable resource for graduate students in mathematics and researchers working in mathematical fluid mechanics. Mathematical fluid mechanics concerns problems that are closely connected to real-world applications and is also an important part of the theory of partial differential equations and numerical analysis in general. This book highlights the fact that numerical and mathematical analysis are not two separate fields of mathematic...

  17. A Numerical and Experimental Study of Ejector Internal Flow Structure and Geometry Modification for Maximized Performance

    Science.gov (United States)

    Falsafioon, Mehdi; Aidoun, Zine; Poirier, Michel

    2017-12-01

    A wide range of industrial refrigeration systems are good candidates to benefit from the cooling and refrigeration potential of supersonic ejectors. These are thermally activated and can use waste heat recovered from industrial processes, where it is abundantly generated and rejected to the environment. In other circumstances, low-cost heat from biomass or solar energy may also be used to produce a cooling effect. Ejector performance is, however, typically modest and needs to be maximized in order to take full advantage of the simplicity and low cost of the technology. In the present work, the behavior of ejectors with different nozzle exit positions has been investigated using a prototype as well as a CFD model. The prototype was used to measure the performance advantages of refrigerant (R-134a) flowing inside the ejector. For the CFD model, the ejectors are assumed to be axisymmetric about the x-axis, so the generated model is 2D. The preliminary CFD results are validated against experimental data over a wide range of conditions and agree well in terms of entrainment and compression ratios. Next, the flow patterns of four different topologies are studied in order to discuss the optimum geometry in terms of ejector entrainment improvement. Finally, the numerical simulations were used to find an optimum value corresponding to a maximized entrainment ratio for fixed operating conditions.

  18. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression of a new efficient compression technique (the JIS system) using a neural network, which the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. The assessment is done on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to determine the compression rate and loss. For comparison, the same data are also JPEG lossless-compressed. The personal computer (PC) is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The present JIS system is found to be more than 4 times as efficient as the usual compression methods, compressing the file volume to only 1/11 on average, and is thus an important response to the growing volume of medical imaging data. (R.T.)

  19. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    New combustion concepts have recently been developed to tackle the problem of the high emission levels of traditional direct injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection causes the fuel to burn in the premixed condition. In compression ignition engines, soot (particulate matter) and NOx emissions remain a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions, combining the advantages of both spark ignition and compression ignition combustion modes: it gives thermal efficiency close to that of compression ignition engines while simultaneously resolving the associated issues of high NOx and particulate matter. Preparing the premixed air-fuel charge is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve premixed charge compression ignition combustion: vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle to preparing the premixed air-fuel mixture. Exhaust gas re-circulation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of a compression ignition engine.

  20. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
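    For readers unfamiliar with the Euclidean starting point, the toy Python sketch below shows a Hastie & Stuetzle style iteration (alternate projection onto the current curve with local averaging) of the kind the paper generalizes to Riemannian manifolds; it is a flat-space illustration on synthetic data, not the authors' Riemannian algorithm.

        import numpy as np

        rng = np.random.default_rng(5)
        t = rng.uniform(0, 2 * np.pi, 400)
        pts = np.c_[t, np.sin(t)] + 0.1 * rng.normal(size=(400, 2))  # noisy curve

        curve = pts[np.argsort(pts[:, 0])][::8].copy()  # crude initialization
        for _ in range(10):
            d = ((pts[:, None] - curve[None]) ** 2).sum(-1)
            nearest = d.argmin(1)               # projection step
            for j in range(len(curve)):         # local-average (smoothing) step
                near = pts[np.abs(nearest - j) <= 1]
                if len(near):
                    curve[j] = near.mean(0)
        print(curve[:5])  # first few nodes of the fitted principal curve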

  1. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR to any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation be achieved with a completely free piston, as intended in the PCR?

  2. Principal Leadership for Technology-enhanced Learning in Science

    Science.gov (United States)

    Gerard, Libby F.; Bowyer, Jane B.; Linn, Marcia C.

    2008-02-01

    Reforms such as technology-enhanced instruction require principal leadership. Yet, many principals report that they need help to guide implementation of science and technology reforms. We identify strategies for helping principals provide this leadership. A two-phase design is employed. In the first phase we elicit principals' varied ideas about the Technology-enhanced Learning in Science (TELS) curriculum materials being implemented by teachers in their schools, and in the second phase we engage principals in a leadership workshop designed based on the ideas they generated. Analysis uses an emergent coding scheme to categorize principals' ideas, and a knowledge integration framework to capture the development of these ideas. The analysis suggests that principals frame their thinking about the implementation of TELS in terms of: principal leadership, curriculum, educational policy, teacher learning, student outcomes and financial resources. They seek to improve their own knowledge to support this reform. The principals organize their ideas around individual school goals and current political issues. Principals prefer professional development activities that engage them in reviewing curricula and student work with other principals. Based on the analysis, this study offers guidelines for creating learning opportunities that enhance principals' leadership abilities in technology and science reform.

  3. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  4. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing, natural appearance to any object. Three composite techniques for color image compression are therefore implemented to achieve high compression with no loss in the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.
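    Two of the reported parameters, compression ratio (CR) and bits per pixel (bpp), follow directly from the stored byte counts. A small illustrative helper in Python (the byte counts below are placeholders, not values from the study):

        def compression_stats(original_bytes: int, compressed_bytes: int,
                              width: int, height: int) -> tuple[float, float]:
            cr = original_bytes / compressed_bytes           # higher is better
            bpp = 8.0 * compressed_bytes / (width * height)  # lower is better
            return cr, bpp

        cr, bpp = compression_stats(512 * 512 * 3, 24_576, 512, 512)
        print(f"CR = {cr:.1f}:1, bpp = {bpp:.2f}")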

  5. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  6. Effect of material strength on the relationship between the principal Hugoniot and quasi-isentrope of beryllium and 6061-T6 aluminum below 35 GPa

    International Nuclear Information System (INIS)

    Moss, W.C.

    1985-01-01

    Quasi-isentropic (QI) compression can be achieved by loading a specimen with a low strain rate, long rise time uniaxial strain wave. Recent experimental data show that the quasi-isentrope of 6061-T6 aluminum lies a few percent above the principal Hugoniot, that is, at a given specific volume, the QI stress exceeds the principal Hugoniot stress. It has been suggested that this effect is due to material strength. Using Hugoniot data, shock-reshock, and shock-unload data for beryllium and 6061-T6 aluminum, we have constructed the quasi-isentropes as functions of specific volume. Our results show that the QI stress exceeds the principal Hugoniot stress above a Hugoniot stress of 8.4 GPa in beryllium, and between Hugoniot stresses of 3.8 and 21.4 GPa in aluminum. The effect is due to strength and implies that the QI yield strength can be large. Our calculations show that the QI yield strength is 0.9 GPa in aluminum at a QI stress of 9 GPa, and 5.2 GPa in beryllium at a QI stress of 35 GPa

  7. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
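    Of the codebook-training algorithms named above, LBG (Linde-Buzo-Gray) is the classic one; a minimal NumPy sketch follows. The frame length, codebook size and toy training vectors are illustrative assumptions, not the paper's settings.

        import numpy as np

        def lbg(train: np.ndarray, codebook_size: int, iters: int = 20):
            codebook = train.mean(axis=0, keepdims=True)
            while len(codebook) < codebook_size:
                codebook = np.vstack([codebook * 1.01, codebook * 0.99])  # split
                for _ in range(iters):  # k-means refinement of the codevectors
                    d = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
                    nearest = d.argmin(1)
                    for j in range(len(codebook)):
                        members = train[nearest == j]
                        if len(members):
                            codebook[j] = members.mean(0)
            return codebook

        rng = np.random.default_rng(1)
        frames = rng.normal(size=(1000, 8))       # 8-sample "speech" vectors
        cb = lbg(frames, codebook_size=16)
        codes = ((frames[:, None, :] - cb[None]) ** 2).sum(-1).argmin(1)
        print(cb.shape, codes[:10])               # 16x8 codebook plus indices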

  8. End-Tidal CO2-Guided Chest Compression Delivery Improves Survival in a Neonatal Asphyxial Cardiac Arrest Model.

    Science.gov (United States)

    Hamrick, Justin T; Hamrick, Jennifer L; Bhalala, Utpal; Armstrong, Jillian S; Lee, Jeong-Hoo; Kulikowicz, Ewa; Lee, Jennifer K; Kudchadkar, Sapna R; Koehler, Raymond C; Hunt, Elizabeth A; Shaffner, Donald H

    2017-11-01

    To determine whether end-tidal CO2-guided chest compression delivery improves survival over standard cardiopulmonary resuscitation after prolonged asphyxial arrest. Preclinical randomized controlled study. University animal research laboratory. 1-2-week-old swine. After undergoing a 20-minute asphyxial arrest, animals received either standard or end-tidal CO2-guided cardiopulmonary resuscitation. In the standard group, chest compression delivery was optimized by video and verbal feedback to maintain the rate, depth, and release within published guidelines. In the end-tidal CO2-guided group, chest compression rate and depth were adjusted to obtain a maximal end-tidal CO2 level without other feedback. Cardiopulmonary resuscitation included 10 minutes of basic life support followed by advanced life support for 10 minutes or until return of spontaneous circulation. Mean end-tidal CO2 at 10 minutes of cardiopulmonary resuscitation was 34 ± 8 torr in the end-tidal CO2 group (n = 14) and 19 ± 9 torr in the standard group (n = 14; p = 0.0001). The return of spontaneous circulation rate was 7 of 14 (50%) in the end-tidal CO2 group and 2 of 14 (14%) in the standard group (p = 0.04). The chest compression rate averaged 143 ± 10/min in the end-tidal CO2 group and 102 ± 2/min in the standard group. The response of the relaxation arterial pressure and cerebral perfusion pressure to the initial epinephrine administration was greater in the end-tidal CO2 group than in the standard group (p = 0.01 and p = 0.03, respectively). The prevalence of resuscitation-related injuries was similar between groups. End-tidal CO2-guided chest compression delivery is an effective resuscitation method that improves early survival after prolonged asphyxial arrest in this neonatal piglet model. Optimizing end-tidal CO2 levels during cardiopulmonary resuscitation required that the chest compression delivery rate exceed current guidelines

  9. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  10. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  11. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The design of a longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require a high-density heavy ion beam and a short pulsed bunch, which can be produced by non-adiabatic bunch compression implemented as a fast compression with a 90 degree rotation in longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ at 250 MeV/u is compressed from 200 ns to 50 ns. The construction and RF properties of the cavity are also simulated and calculated with the MAFIA code. The operating frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  12. Efficacy of chest compressions directed by end-tidal CO2 feedback in a pediatric resuscitation model of basic life support.

    Science.gov (United States)

    Hamrick, Jennifer L; Hamrick, Justin T; Lee, Jennifer K; Lee, Benjamin H; Koehler, Raymond C; Shaffner, Donald H

    2014-04-14

    End-tidal carbon dioxide (ETCO2) correlates with systemic blood flow and resuscitation rate during cardiopulmonary resuscitation (CPR) and may potentially direct chest compression performance. We compared ETCO2-directed chest compressions with chest compressions optimized to pediatric basic life support guidelines in an infant swine model to determine the effect on rate of return of spontaneous circulation (ROSC). Forty 2-kg piglets underwent general anesthesia, tracheostomy, placement of vascular catheters, ventricular fibrillation, and 90 seconds of no-flow before receiving 10 or 12 minutes of pediatric basic life support. In the optimized group, chest compressions were optimized by marker, video, and verbal feedback to obtain American Heart Association-recommended depth and rate. In the ETCO2-directed group, compression depth, rate, and hand position were modified to obtain a maximal ETCO2 without video or verbal feedback. After the interval of pediatric basic life support, external defibrillation and intravenous epinephrine were administered for another 10 minutes of CPR or until ROSC. Mean ETCO2 at 10 minutes of CPR was 22.7±7.8 mm Hg in the optimized group (n=20) and 28.5±7.0 mm Hg in the ETCO2-directed group (n=20; P=0.02). Despite higher ETCO2 and mean arterial pressure in the latter group, ROSC rates were similar: 13 of 20 (65%; optimized) and 14 of 20 (70%; ETCO2 directed). The best predictor of ROSC was systemic perfusion pressure. Defibrillation attempts, epinephrine doses required, and CPR-related injuries were similar between groups. The use of ETCO2-directed chest compressions is a novel guided approach to resuscitation that can be as effective as standard CPR optimized with marker, video, and verbal feedback.

  13. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches

  14. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMS with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
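    The generic DAG-compression idea mentioned above can be illustrated by interning identical subtrees so each distinct subtree is stored once. The sketch below uses only the Python standard library on a toy document; it is an illustration of the idea, not the paper's implementation.

        import xml.etree.ElementTree as ET

        def dag_compress(node, pool):
            """Return a key for node, interning identical subtrees in pool."""
            child_keys = tuple(dag_compress(c, pool) for c in node)
            key = (node.tag, (node.text or "").strip(), child_keys)
            pool.setdefault(key, key)  # first occurrence is the shared copy
            return pool[key]

        doc = ET.fromstring(
            "<r><item><a>1</a><b>2</b></item><item><a>1</a><b>2</b></item></r>")
        pool = {}
        dag_compress(doc, pool)
        print(f"{len(pool)} unique subtrees for {sum(1 for _ in doc.iter())} nodes")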

  15. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  16. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  17. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for
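    The two information types have simple definitions, illustrated here in Python for a hypothetical choice between a sure $40 and a 50% chance of $100 (the numbers are invented for illustration):

        p_win, gamble, sure = 0.5, 100.0, 40.0
        maximizing = (p_win * gamble) / sure  # expected-value ratio = 1.25
        satisficing = p_win                   # probability of winning = 0.5
        print(maximizing, satisficing)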

  18. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic

  19. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)
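    For context, the quoted ratios can be compared with the textbook single-shock bound: a strong planar shock in an ideal gas cannot compress the density by more than a factor of 4 for a specific heat ratio of 5/3, so ratios of 27 and 1750 necessarily come from shaped, multi-pulse drive rather than a single shock. This standard Rankine-Hugoniot limit (quoted here for context, not derived in the record above) reads:

        \[
          \frac{\rho_2}{\rho_1} \;\le\; \frac{\gamma + 1}{\gamma - 1}
          \;=\; 4 \qquad \text{for } \gamma = \tfrac{5}{3}.
        \]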

  20. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complimentary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows is provided and an extensive discussion of the various approaches used in predicting both free shear and wall bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  1. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers have been based on the compressed Haar-like feature, and how to compress other, more expressive high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise more effectively than the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and Precision.
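    A hedged sketch of the compression step described above: a high-dimensional vector of normalized block differences is projected to a short feature with a sparse random Gaussian measurement matrix, in the spirit of compressive sensing. The block pairs, sizes and sparsity level are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(2)
        patch = rng.integers(0, 256, size=(32, 32)).astype(float)  # toy patch

        def block_mean(img, y, x, s=4):
            return img[y:y + s, x:x + s].mean()

        # normalized block difference: NPD's (x - y) / (x + y), with the
        # two pixels replaced by two s x s block means
        pairs = rng.integers(0, 28, size=(500, 4))
        f = np.array([(block_mean(patch, y1, x1) - block_mean(patch, y2, x2)) /
                      (block_mean(patch, y1, x1) + block_mean(patch, y2, x2) + 1e-9)
                      for y1, x1, y2, x2 in pairs])

        m = 50                                     # compressed dimension
        phi = rng.normal(size=(m, f.size)) * (rng.random((m, f.size)) < 0.1)
        compressed = phi @ f                       # CNBD-style feature
        print(compressed.shape)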

  2. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  3. Triangulating Principal Effectiveness: How Perspectives of Parents, Teachers, and Assistant Principals Identify the Central Importance of Managerial Skills. Working Paper 35

    Science.gov (United States)

    Grissom, Jason A.; Loeb, Susanna

    2009-01-01

    While the importance of effective principals is undisputed, few studies have addressed what specific skills principals need to promote school success. This study draws on unique data combining survey responses from principals, assistant principals, teachers and parents with rich administrative data to identify which principal skills matter most…

  4. Principal components

    NARCIS (Netherlands)

    Hallin, M.; Hörmann, S.; Piegorsch, W.; El Shaarawi, A.

    2012-01-01

    Principal Components are probably the best known and most widely used of all multivariate analysis techniques. The essential idea consists in performing a linear transformation of the observed k-dimensional variables in such a way that the new variables are vectors of k mutually orthogonal

  5. Measuring Principal Performance: How Rigorous Are Commonly Used Principal Performance Assessment Instruments? A Quality School Leadership Issue Brief

    Science.gov (United States)

    Condon, Christopher; Clifford, Matthew

    2010-01-01

    This brief reviews the publicly available principal assessments and points superintendents and policy makers toward strong instruments to measure principal performance. Specifically, the measures included in this review are expressly intended to evaluate principal performance and have varying degrees of publicly available evidence of psychometric…

  6. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    Title 30 Mineral Resources (2010-07-01), Safeguards for Mechanical Equipment, § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  7. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    The use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices with a higher static stiffness index combine relatively low, comfortable pressure at rest with pressure while standing strong enough to restore the “valve mechanism” generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass through muscle-strengthening programs and endurance training should be encouraged. In the acute phase of deep venous thrombosis, anticoagulation combined with inelastic compression therapy can therefore reduce the extension of the thrombus. Nevertheless, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  8. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
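    The compositing idea (lossy everywhere, high fidelity where diagnostic content sits) can be sketched with Pillow. This is an illustrative toy, not the paper's method: the rectangular region of interest stands in for the output of a real diagnostic-content detector, and a production system would keep the two streams and the region mask in the file format itself.

        import io
        from PIL import Image

        def content_based_jpeg(img, roi_box, q_low=30, q_high=95):
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=q_low)   # degrade background
            base = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")
            roi = io.BytesIO()
            img.crop(roi_box).save(roi, format="JPEG", quality=q_high)
            base.paste(Image.open(io.BytesIO(roi.getvalue())), roi_box[:2])
            return base

        slide = Image.new("RGB", (1024, 768), "white")    # placeholder image
        out = content_based_jpeg(slide, roi_box=(100, 100, 400, 400))
        out.save("content_based.jpg", quality=95)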

  9. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of a composite quantum system is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used. (paper)
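    One common operational criterion is easy to check numerically: for a maximally entangled pure state, every single-party reduced density matrix is maximally mixed. A small NumPy illustration for the 2-qubit Bell state (the simplest case, not one of the 4-qubit states surveyed in the paper):

        import numpy as np

        bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
        rho = np.outer(bell, bell.conj())                # full density matrix
        rho_a = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)  # trace out B
        print(rho_a)                                     # 0.5 * identity
        print("purity:", np.trace(rho_a @ rho_a).real)   # 0.5 = maximally mixed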

  10. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.

  11. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX) at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been
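    The core kinematics of drift compression (a head-to-tail velocity tilt shortens the bunch ballistically as it drifts, once the space charge is neutralized) can be shown with a toy particle model in Python; all numbers below are illustrative, not NDCX parameters.

        import numpy as np

        n = 10_000
        z0 = np.random.default_rng(3).uniform(-0.5, 0.5, n)  # positions (m)
        v0 = 1.0e6                   # mean drift speed (m/s)
        tilt = -0.2 * v0 * z0        # linear tilt: tail faster than head
        t_focus = 1.0 / (0.2 * v0)   # tilt cancels the bunch length here
        for t in (0.0, 0.5 * t_focus, t_focus):
            z = z0 + tilt * t        # positions in the co-moving frame
            print(f"t = {t:.2e} s, rms bunch length = {z.std():.4f} m")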

  12. Muscle Synergies of Untrained Subjects during 6 min Maximal Rowing on Slides and Fixed Ergometer

    Directory of Open Access Journals (Sweden)

    Shazlin Shaharudin

    2014-12-01

    The slides ergometer (SE) was an improvisation of the fixed ergometer (FE) intended to bridge the mechanical gap between ergometer rowing and on-water rowing. The specific mechanical constraints of these two types of ergometers may affect the pattern of muscle recruitment, coordination and adaptation. The main purpose of this study was to evaluate muscle synergy during 6 minutes of maximal rowing on slides (SE) and fixed (FE) ergometers. The laterality of muscle synergy was also examined. Surface electromyography activity, power output, heart rate, stroke length and stroke rate were analyzed from nine physically active subjects to assess rowing performance. Physically active subjects who were not specifically trained in rowing were chosen to exclude training effects on muscle synergy. Principal component analysis (PCA) with varimax rotation was applied to extract muscle synergies. Three muscle synergies were sufficient to explain the majority of variance in SE (94.4 ± 2.2 %) and FE (92.8 ± 1.7 %). Subjects covered more rowing distance, exerted greater power output and attained a higher maximal heart rate when rowing on the SE than on the FE. The results demonstrate the flexibility of muscle synergy in adapting to mechanical constraints. Rowing on the SE emphasized bi-articular muscles, in contrast to rowing on the FE, which relied on the cumulative effect of trunk and upper limb muscles during the propulsive phase.

  13. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  15. Innovation Management Perceptions of Principals

    Science.gov (United States)

    Bakir, Asli Agiroglu

    2016-01-01

    This study aims to determine the perceptions of principals about innovation management and to investigate whether there is a significant difference in this perception according to various parameters. In the study, a descriptive research model is used, and the universe consists of principals who participated in the "Acquiring Formation Course…

  16. Signal-to-noise contribution of principal component loads in reconstructed near-infrared Raman tissue spectra.

    Science.gov (United States)

    Grimbergen, M C M; van Swol, C F P; Kendall, C; Verdaasdonk, R M; Stone, N; Bosch, J L H R

    2010-01-01

    The overall quality of Raman spectra in the near-infrared region, where biological samples are often studied, has benefited from various improvements to optical instrumentation over the past decade. However, obtaining ample spectral quality for analysis is still challenging due to device requirements and the short integration times required for (in vivo) clinical applications of Raman spectroscopy. Multivariate analytical methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), are routinely applied to Raman spectral datasets to develop classification models. Data compression is necessary prior to discriminant analysis to prevent or decrease the degree of over-fitting. The logical threshold for the selection of principal components (PCs) to be used in discriminant analysis is likely to be at a point before the PCs begin to introduce equivalent signal and noise and, hence, include no additional value. Assessment of the signal-to-noise ratio (SNR) at a certain peak or over a specific spectral region will depend on the sample measured. Therefore, the mean SNR over the whole spectral region (SNR_msr) is determined for the original spectrum as well as for spectra reconstructed from an increasing number of principal components. This paper introduces a method of assessing the influence of signal and noise from individual PC loads and indicates a method of selecting PCs for LDA. To evaluate this method, two data sets with different SNRs were used. The sets were obtained with the same Raman system and the same measurement parameters on bladder tissue collected during white light cystoscopy (set A) and fluorescence-guided cystoscopy (set B). This method shows that the mean SNR over the spectral range in the original Raman spectra of these two data sets is related to the signal and noise contribution of the principal component loads. The difference in mean SNR over the spectral range can also be appreciated since fewer principal components can
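
    A minimal sketch of the paper's central computation, under our own assumptions (synthetic Gaussian "spectra", a crude smoothing-based SNR estimate): reconstruct the spectra from an increasing number of PCs and watch the mean SNR fall as later, noise-dominated components are added.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(1)
    wavenumbers = np.linspace(400, 1800, 700)
    clean = np.exp(-0.5 * ((wavenumbers - 1000) / 30) ** 2)     # one synthetic Raman band
    spectra = clean + 0.05 * rng.standard_normal((200, 700))    # 200 noisy spectra

    def mean_snr(x, smooth=9):
        # crude SNR estimate: smoothed signal vs residual "noise", averaged over the range
        kernel = np.ones(smooth) / smooth
        sig = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, x)
        noise = x - sig
        return np.mean(np.abs(sig)) / np.std(noise)

    pca = PCA().fit(spectra)
    scores = pca.transform(spectra)
    for k in (1, 2, 5, 20, 100):
        kept = np.where(np.arange(pca.n_components_) < k, scores, 0.0)  # zero out PCs >= k
        recon = pca.inverse_transform(kept)
        print(f"{k:>3} PCs: mean SNR ~ {mean_snr(recon):.1f}")
    ```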

  17. Efficient Wideband Spectrum Sensing with Maximal Spectral Efficiency for LEO Mobile Satellite Systems

    Directory of Open Access Journals (Sweden)

    Feilong Li

    2017-01-01

    Full Text Available The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential for improving spectral efficiency by providing more spectrum access opportunities to secondary users (SUs) with sufficient protection for licensed primary users (PUs). Hence, the recent scientific literature has focused on the tradeoff between spectrum reuse and PU protection within narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, the narrowband SS techniques investigated in the context of terrestrial CR may not be applicable for detecting wideband satellite signals. In this paper, we investigate the problem of jointly designing the sensing time and the hard fusion scheme to maximize SU spectral efficiency in the scenario of low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. A compressed detection model is established to prove that there indeed exists one optimal sensing time achieving maximal spectral efficiency. Moreover, we propose a novel wideband cooperative spectrum sensing (CSS) framework in which each SU's reporting duration can be utilized for sensing by the following SUs. The sensing performance benefits from this framework because the equivalent sensing time is extended by making full use of the reporting slots. Furthermore, for time-varying channels, spatiotemporal CSS (ST-CSS) is presented to attain space and time diversity gains simultaneously under the hard decision fusion rule. Computer simulations show that the joint optimization of sensing time, hard fusion rule and scheduling strategy achieves a significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme achieves much higher spectral efficiency than the general CSS framework.
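
    The existence of one optimal sensing time can be illustrated with the classical sensing-throughput tradeoff for an energy detector (the Liang et al. formulation, which we assume here; the paper's compressed detection model differs in detail, and all numbers below are our own):

    ```python
    import numpy as np
    from scipy.stats import norm

    fs = 6e6                   # sampling rate (Hz), assumed
    T = 100e-3                 # frame duration (s), assumed
    snr = 10 ** (-15 / 10)     # PU SNR at the SU detector: -15 dB, assumed
    Pd_target = 0.9            # required primary-user protection
    C0 = np.log2(1 + 100)      # SU rate when the band is idle (20 dB SU link SNR)
    P_idle = 0.8               # probability the band is idle, assumed

    tau = np.linspace(1e-4, 20e-3, 2000)
    N = tau * fs
    # False-alarm probability when the threshold is set to meet Pd_target:
    Pf = norm.sf(np.sqrt(2 * snr + 1) * norm.isf(Pd_target) + np.sqrt(N) * snr)
    # Spectral efficiency: transmit in the remaining frame when no false alarm.
    R = (T - tau) / T * P_idle * (1 - Pf) * C0

    best = np.argmax(R)
    print(f"optimal sensing time ~ {tau[best] * 1e3:.2f} ms, "
          f"efficiency ~ {R[best]:.3f} bit/s/Hz")
    ```

    Longer sensing lowers the false-alarm rate but eats into the transmission slot, which is exactly why R(tau) has an interior maximum.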

  18. Metabolomic Profiling of the White, Violet, and Red Flowers of Rhododendron schlippenbachii Maxim.

    Science.gov (United States)

    Park, Chang Ha; Yeo, Hyeon Ji; Kim, Nam Su; Park, Ye Eun; Park, Soo-Yun; Kim, Jae Kwang; Park, Sang Un

    2018-04-04

    Rhododendron schlippenbachii Maxim. is a garden plant that is also used for natural medicines as a consequence of the biological activities of its diverse metabolites. We accordingly profiled two anthocyanins and 40 primary and secondary metabolites in the three different colored flowers. The major anthocyanins found in the flowers were cyanidins. The red flowers exhibited the highest accumulation of anthocyanins (1.02 ± 0.02 mg/g dry weight). Principal component analysis was applied to the GC‒TOFMS data. The levels of key tricarboxylic acid cycle intermediates in red flowers, such as succinic acid, fumaric acid, and malic acid, were found to be highly significantly different ( p < 0.0001) from those in the flowers of other colors. In this study, we aimed to determine metabolite interactions and phenotypic variation among white, violet, and red flowers of R. schlippenbachii by using gas chromatography time-of-flight mass spectrometry (GC‒TOFMS) and high-performance liquid chromatography (HPLC).

  19. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  20. Modelo de contrato de terceirização de manutenção: uma abordagem principal-agente Model contract for outsourcing of maintenance: a principal-agent approach

    Directory of Open Access Journals (Sweden)

    Jonas Alves de Paiva

    2012-12-01

    Full Text Available Outsourcing is one of the solutions used to reduce effort spent on activities unrelated to production. In maintenance outsourcing, the incentive contract models available in the literature focus only on repair time and the cost of maintenance activities as the main control indicators. The main objective of this work is to present an incentive model that also considers other variables that are affected by maintenance and strongly affect profit, namely the quality of the products produced and the reduction of production capacity. The work uses principal-agent theory to model an incentive contract that increases the firm's profit by leading the agent to carry out the activities that maximize that profit. A numerical example is presented, highlighting the positive impact on the firm's results as well as the generality and suitability of the model.
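
    As a toy illustration of the mechanism (our own hypothetical numbers and functional forms, not the paper's model): under a linear incentive wage, the agent chooses the maintenance effort that maximizes its own utility, and a stronger incentive raises both effort and the principal's profit through quality and capacity.

    ```python
    import numpy as np

    effort = np.linspace(0, 1, 501)

    def agent_utility(e, bonus):
        return bonus * e - 0.6 * e**2               # incentive pay minus convex effort cost

    def principal_profit(e, bonus):
        quality = 40 * e                            # fewer defects with better maintenance
        capacity = 25 * e                           # less downtime, more throughput
        return quality + capacity - (10 + bonus * e)  # revenue effects minus wage

    for bonus in (0.0, 0.3, 0.8):
        e_star = effort[np.argmax(agent_utility(effort, bonus))]
        print(f"bonus={bonus:.1f}: agent effort={e_star:.2f}, "
              f"principal profit={principal_profit(e_star, bonus):.1f}")
    ```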

  1. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors performed a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. Treatment records of patients with medullary compression were reviewed: 32 of 39 patients (82%) who came to the hospital by their own means continued walking after treatment; 8 of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; and 41 patients (64%) had partial alleviation of pain after treatment. Functional improvement was also observed in those who came by their own means and whose status did not change. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  2. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  3. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cyclic shift operation controlled by the hyper-chaotic system. The cyclic shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution simultaneously, as a nonlinear encryption system. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
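
    A minimal numpy sketch, under assumptions, of the two ingredients described: (1) a 2D compressive measurement Y = Phi1 X Phi2^T compresses both image dimensions at once, and (2) a chaotic sequence (a logistic map here, standing in for the paper's hyper-chaotic system) drives cyclic shifts for re-encryption.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.random((128, 128))                  # stand-in for the plain image

    m = 64                                      # measurements per direction (2x each way)
    Phi1 = rng.standard_normal((m, 128)) / np.sqrt(m)   # key-dependent measurement matrices
    Phi2 = rng.standard_normal((m, 128)) / np.sqrt(m)
    Y = Phi1 @ X @ Phi2.T                       # simultaneous compression + encryption

    def chaotic_shifts(x0, n, r=3.99):
        x, out = x0, []
        for _ in range(n):
            x = r * x * (1 - x)                 # logistic map; hyper-chaos in the paper
            out.append(int(x * 1e6) % n)
        return out

    shifts = chaotic_shifts(0.3141, m)          # secret initial condition acts as the key
    C = np.stack([np.roll(row, s) for row, s in zip(Y, shifts)])   # cycle-shift rows

    # A receiver holding the key undoes the shifts, then runs CS reconstruction on Y.
    Y_rec = np.stack([np.roll(row, -s) for row, s in zip(C, shifts)])
    assert np.allclose(Y_rec, Y)
    ```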

  4. Combining nonlinear multiresolution system and vector quantization for still image compression

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, they cannot exploit nonlinear features in the signals within a single entity for compression. Linear filters are known to blur edges; thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used to generate a multiresolution system based on the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level, so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
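
    A sketch of the pyramid idea (our own toy construction, not the paper's code): replace the linear lowpass of a Laplacian pyramid with a median filter, so the detail images stay small and concentrated on edges, which is what makes them cheap to vector-quantize.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, zoom

    def median_pyramid(img, levels=2, size=5):
        details, current = [], img.astype(float)
        for _ in range(levels):
            low = median_filter(current, size=size)   # nonlinear, edge-preserving lowpass
            small = low[::2, ::2]                     # decimate
            up = zoom(small, 2, order=1)[:low.shape[0], :low.shape[1]]
            details.append(current - up)              # Laplacian-like detail image
            current = small
        return details, current                       # detail images + coarse base

    img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0  # toy image with sharp edges
    details, base = median_pyramid(img)
    for i, d in enumerate(details):
        print(f"level {i}: detail energy {np.sum(d**2):.2f}, nonzero {np.mean(d != 0):.1%}")
    ```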

  5. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations, and it has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium; we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.
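
    For orientation, the standard forms of the two frameworks can be sketched as follows (the notation is ours and only assumed to match the paper's):

    ```latex
    % Gradient dynamics: the irreversible evolution of a state variable x is
    % generated by a dissipation potential \Xi and the entropy S,
    \dot{x} \;=\; \left.\frac{\partial \Xi(x, x^{*})}{\partial x^{*}}\right|_{x^{*} = \frac{\partial S}{\partial x}} .
    % Entropy production maximization: choose the fluxes J to maximize the
    % entropy production \sigma subject to the entropy balance \sigma = J \cdot X;
    % with a quadratic \sigma(J) = J \cdot L^{-1} J this recovers the linear
    % Onsager relations J = L X with L = L^{T}.
    ```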

  6. Time Management for New Principals

    Science.gov (United States)

    Ruder, Robert

    2008-01-01

    Becoming a principal is a milestone in an educator's professional life. The principalship is an opportunity to provide leadership that will afford students opportunities to thrive in a nurturing and supportive environment. Despite the continuously expanding demands of being a new principal, effective time management will enable an individual to be…

  7. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and the ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology

  8. On Maximal Non-Disjoint Families of Subsets

    Directory of Open Access Journals (Sweden)

    Yu. A. Zuev

    2017-01-01

    Full Text Available The paper studies maximal non-disjoint families of subsets of a finite set. Non-disjointness means that any two subsets of the family have a nonempty intersection. Maximality means that no new subset can be added to the family without violating the non-disjointness condition. Studying the properties of such families is an important part of extremal set theory. Along with purely combinatorial interest, the problems considered here play an important role in informatics, error-correcting coding, and cryptography. The problem originated in the 1961 paper of Erdős, Ko and Rado, which established the maximum size of a non-disjoint family of subsets of equal size. A 1974 publication of Erdős and Kleitman estimated the number of maximal non-disjoint families of subsets without requiring their sizes to be equal. Those authors did not establish the asymptotics of the logarithm of the number of such families as the size of the basic finite set tends to infinity, but they conjectured such an asymptotics. A.D. Korshunov, in two publications in 2003 and 2005, established the asymptotics for the number of non-disjoint families of subsets of arbitrary sizes, without the maximality condition on these families. The approach used in the paper to study families of subsets rests on describing them in the language of Boolean functions: the characteristic vectors of the subsets in a family are taken to be the points where a Boolean function equals one. The main theoretical result of the paper is that the maximal non-disjoint families are in one-to-one correspondence with the monotone self-dual Boolean functions. When estimating the number of maximal non-disjoint families, this allowed us to use the result of A.A. Sapozhenko, who established the asymptotics of the number of the
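
    The stated correspondence can be checked by brute force for a small ground set. The sketch below (our own verification code) enumerates, for n = 3, all maximal non-disjoint families of nonempty subsets and all monotone self-dual Boolean functions, and confirms they coincide (there are 4 of each: three "dictator" functions and the majority function).

    ```python
    from itertools import combinations

    n = 3
    points = list(range(2 ** n))                     # subsets encoded as bitmasks

    def intersecting(family):
        return all(a & b for a, b in combinations(family, 2))

    # All maximal intersecting families of nonempty subsets.
    nonempty = [s for s in points if s]
    maximal_families = []
    for bits in range(2 ** len(nonempty)):
        fam = [s for i, s in enumerate(nonempty) if bits >> i & 1]
        if fam and intersecting(fam):
            if all(not intersecting(fam + [s]) for s in nonempty if s not in fam):
                maximal_families.append(frozenset(fam))

    # All monotone self-dual Boolean functions, encoded by their ON-sets.
    def monotone(on):
        return all(not (x in on and (x | y) not in on) for x in points for y in points)

    def self_dual(on):
        full = 2 ** n - 1
        return all((x in on) != ((full ^ x) in on) for x in points)

    msd = [frozenset(on) for bits in range(2 ** len(points))
           for on in [{s for i, s in enumerate(points) if bits >> i & 1}]
           if monotone(on) and self_dual(on)]

    print(len(maximal_families), len(msd))           # 4 and 4
    print(set(maximal_families) == set(msd))         # True: the bijection holds
    ```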

  9. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device, which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a minor radius of 17 cm, a toroidal field of 20 kG, and a current of 90 kA. The compression leads to a plasma with a major radius of 38 cm and a minor radius of 10 cm. Scaling laws imply a density increase by a factor of 6, a temperature increase by a factor of 3, and a current increase by a factor of 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data showing that the expected MHD behavior is largely observed are presented and discussed. (U.S.)
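
    A quick consistency check of the quoted factors, assuming the standard scaling exponents for adiabatic major-radius compression (density ~ C^2, temperature ~ C^(4/3), current ~ C) and a compression ratio C of about 2.4, as implied by the quoted current increase:

    ```python
    # Assumed exponents, not taken from the record itself.
    C = 2.4   # major-radius compression ratio implied by the current increase
    print(f"density x{C**2:.1f}, temperature x{C**(4/3):.2f}, current x{C:.1f}")
    # -> density x5.8, temperature x3.21, current x2.4, close to the quoted 6, 3, 2.4
    ```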

  10. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Combining the processes of compression and encryption must therefore be done in this order, that is, compression followed by encryption, because all compression techniques heavily rely on the redundancies which are inherently a part of regular text or speech. The aim of this research is to combine the process of compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
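
    The ordering argument is easy to demonstrate: ciphertext looks statistically random, so it no longer has the redundancy compressors need. The stdlib-only sketch below uses a toy XOR stream cipher built from SHA-256 in counter mode (an illustration only, not the paper's scheme and not production cryptography).

    ```python
    import hashlib
    import zlib

    def xor_stream(data: bytes, key: bytes) -> bytes:
        out, i = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()
            i += 1
        return bytes(a ^ b for a, b in zip(data, out))

    text = (b"all compression techniques heavily rely on the redundancies "
            b"inherent in regular text or speech. " * 50)
    key = b"secret key"

    compressed_then_encrypted = xor_stream(zlib.compress(text), key)
    encrypted_then_compressed = zlib.compress(xor_stream(text, key))

    print(len(text), "bytes plain")
    print(len(compressed_then_encrypted), "bytes: compress, then encrypt")
    print(len(encrypted_then_compressed), "bytes: encrypt, then compress (no gain)")
    ```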

  11. Inquiry in bibliography some of the bustan`s maxim

    Directory of Open Access Journals (Sweden)

    sajjad rahmatian

    2016-12-01

    Full Text Available Sa`di is one of those poets who gave a special place to preaching and guiding people, and among his works he devoted the entire text of the Bustan to advice and maxims on various legal and ethical subjects. Surely, in composing this work and expressing its moral points, Sa`di was directly or indirectly influenced by earlier sources, possibly drawing on their content. The main purpose of this article is to review the basis and sources of the Bustan's maxims and to show which texts and works influenced Sa`di when he expressed the maxims of this work. To this end, works devoted wholly or partly to aphorisms were searched and examined in order to discover and extract traces of Sa`di's borrowing from their moral and didactic content. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and Bozorgmehr's book of maxims), and Sa`di's direct influence from the moral and ethical works of poets and writers before him; of these, the influence of Abu-Shakur Balkhi's maxims, Ferdowsi, and Keikavus is remarkable and noteworthy.

  12. Can monkeys make investments based on maximized pay-off?

    Directory of Open Access Journals (Sweden)

    Sophie Steelandt

    2011-03-01

    Full Text Available Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules for each partner quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of 21 subjects learned to maximize benefits by adapting investment to experimenter quality indicates that such a task is difficult for monkeys, albeit not impossible.

  13. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  14. What Motivates Principals?

    Science.gov (United States)

    Iannone, Ron

    1973-01-01

    Achievement and recognition were mentioned as factors appearing with greater frequency among principals' job satisfactions; school district policy and interpersonal relations were mentioned as job dissatisfactions. (Editor)

  15. Systolic Compression of Epicardial Coronary and Intramural Arteries

    Science.gov (United States)

    Mohiddin, Saidi A.; Fananapazir, Lameh

    2002-01-01

    It has been suggested that systolic compression of epicardial coronary arteries is an important cause of myocardial ischemia and sudden death in children with hypertrophic cardiomyopathy. We examined the associations between sudden death, systolic coronary compression of intra- and epicardial arteries, myocardial perfusion abnormalities, and severity of hypertrophy in children with hypertrophic cardiomyopathy. We reviewed the angiograms from 57 children with hypertrophic cardiomyopathy for the presence of coronary and septal artery compression; coronary compression was present in 23 (40%). The left anterior descending artery was most often affected, and multiple sites were found in 4 children. Myocardial perfusion abnormalities were more frequently present in children with coronary compression than in those without (94% vs 47%, P = 0.002). Coronary compression was also associated with more severe septal hypertrophy and greater left ventricular outflow gradient. Septal branch compression was present in 65% of the children and was significantly associated with coronary compression, severity of septal hypertrophy, and outflow obstruction. Multivariate analysis showed that septal thickness and septal branch compression, but not coronary compression, were independent predictors of perfusion abnormalities. Coronary compression was not associated with symptom severity, ventricular tachycardia, or a worse prognosis. We conclude that compression of coronary arteries and their septal branches is common in children with hypertrophic cardiomyopathy and is related to the magnitude of left ventricular hypertrophy. Our findings suggest that coronary compression does not make an important contribution to myocardial ischemia in hypertrophic cardiomyopathy; however, left ventricular hypertrophy and compression of intramural arteries may contribute significantly. (Tex Heart Inst J 2002;29:290–8) PMID:12484613

  16. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression.

  17. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral matrix... The method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune.

  18. Disk Density Tuning of a Maximal Random Packing.

    Science.gov (United States)

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
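
    The framework's input is any maximal random packing. A common way to generate one is Bridson's dart-throwing algorithm for (approximately) maximal Poisson-disk sampling, sketched below for a constant radius on the unit square (our own illustration; the paper's relocate/inject/eject operations would then tune the density of such a packing).

    ```python
    import numpy as np

    def bridson(r, k=30, seed=0):
        rng = np.random.default_rng(seed)
        cell = r / np.sqrt(2)
        gw = gh = int(np.ceil(1.0 / cell))
        grid = -np.ones((gw, gh), dtype=int)       # index of the sample in each cell
        samples, active = [], []

        def fits(p):
            gx, gy = int(p[0] / cell), int(p[1] / cell)
            for i in range(max(gx - 2, 0), min(gx + 3, gw)):
                for j in range(max(gy - 2, 0), min(gy + 3, gh)):
                    s = grid[i, j]
                    if s != -1 and np.hypot(*(samples[s] - p)) < r:
                        return False
            return True

        def add(p):
            samples.append(p); active.append(len(samples) - 1)
            grid[int(p[0] / cell), int(p[1] / cell)] = len(samples) - 1

        add(rng.random(2))
        while active:
            idx = rng.integers(len(active))
            base = samples[active[idx]]
            for _ in range(k):                     # k candidate darts in the annulus [r, 2r]
                rad = r * (1 + rng.random())
                ang = 2 * np.pi * rng.random()
                p = base + rad * np.array([np.cos(ang), np.sin(ang)])
                if 0 <= p[0] < 1 and 0 <= p[1] < 1 and fits(p):
                    add(p)
                    break
            else:                                  # no dart fits: retire this disk
                active.pop(idx)
        return np.array(samples)

    pts = bridson(0.05)
    print(len(pts), "disks in an approximately maximal packing at r = 0.05")
    ```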

  19. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying "air is free of charge" is not true for compressed air, most industries do not accord the utility's cost its rightful importance. The paper shows that the cost of one unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper also provides energy conservation tips for compressed air systems.
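
    A back-of-envelope version of the 5x claim, under an assumed wire-to-air efficiency (typical industrial systems deliver only roughly 10-20% of the electrical input as useful compressed-air energy once compressor heat, leaks and pressure drops are counted):

    ```python
    # Assumed efficiencies, for illustration only.
    for efficiency in (0.10, 0.15, 0.20):
        print(f"efficiency {efficiency:.0%}: compressed-air energy costs "
              f"{1 / efficiency:.1f}x the electricity used to make it")
    ```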

  20. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries...... on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...

  1. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and to describe its effect on edema, pain, ankle joint mobility, wound healing complications, length of stay (LOS) and time to surgery (TTS). The aim was to include studies of patients with ankle fractures undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking, and reporting effects on edema, pain, ankle joint mobility, wound healing complications, LOS and TTS. To draw conclusions from the data, a narrative synthesis was performed. RESULTS: The review included

  2. Bureaucratic Control and Principal Role.

    Science.gov (United States)

    Bezdek, Robert; And Others

    The purposes of this study were to determine the manner in which the imposition of increased bureaucratic control over principals influenced their allocation of time to tasks and to investigate principals' perceptions of the changes in their roles brought about by this increased control. The specific bureaucratic control system whose effects were…

  3. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    Science.gov (United States)

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64 and a combination of Kollidon VA®64 with Kollidon VA®64 Fine as excipients in a direct compression tableting process. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. The interparticulate void space is high for this excipient because of the hollow structure of the Kollidon VA®64 particles; during tablet compression, air remains trapped in the blend, resulting in poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine was evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 shows morphological differences between the coarse and fine grades. The tablet compression process was evaluated with a mix consisting entirely of Kollidon VA®64 and two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as the 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. This combination, compressed with the parameters predicted by the statistical modeling (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm and a compression speed of 45-49 rpm), produced tablets with hardness ranging between 19 and 21 kp and no friability, capping or lamination issues.

  4. A Fully Integrated Wireless Compressed Sensing Neural Signal Acquisition System for Chronic Recording and Brain Machine Interface.

    Science.gov (United States)

    Liu, Xilin; Zhang, Milin; Xiong, Tao; Richardson, Andrew G; Lucas, Timothy H; Chin, Peter S; Etienne-Cummings, Ralph; Tran, Trac D; Van der Spiegel, Jan

    2016-07-18

    Reliable, multi-channel neural recording is critical to neuroscience research and clinical treatment. However, most hardware development of fully integrated, multi-channel wireless neural recorders to date is still in the proof-of-concept stage. To be ready for practical use, the trade-offs between performance, power consumption, device size, robustness, and compatibility need to be carefully taken into account. This paper presents an optimized wireless compressed sensing neural signal recording system. The system takes advantage of both custom integrated circuits and universally compatible wireless solutions. The proposed system includes an implantable wireless system-on-chip (SoC) and an external wireless relay. The SoC integrates 16-channel low-noise neural amplifiers, programmable filters and gain stages, a SAR ADC, a real-time compressed sensing module, and a near field wireless power and data transmission link. The external relay integrates a 32 bit low-power microcontroller with a Bluetooth 4.0 wireless module, a programming interface, and an inductive charging unit. The SoC achieves high signal recording quality with minimized power consumption, while reducing the risk of infection from through-skin connectors. The external relay maximizes compatibility and programmability. The proposed compressed sensing module is highly configurable, featuring an SNDR of 9.78 dB with a compression ratio of 8×. The SoC has been fabricated in a 180 nm standard CMOS technology, occupying 2.1 mm × 0.6 mm of silicon area. A pre-implantable system has been assembled to demonstrate the proposed paradigm. The developed system has been successfully used for long-term wireless neural recording in a freely behaving rhesus monkey.
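
    A hedged sketch of the compressed-sensing idea behind such recorders (not the chip's actual algorithm): a signal sparse in the DCT domain is measured with a hardware-friendly random ±1 matrix at 8x compression and recovered off-chip with orthogonal matching pursuit.

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(7)
    n, m = 256, 32                                   # 8x compression ratio

    coef = np.zeros(n)
    coef[[10, 37, 80]] = [1.0, -0.7, 0.4]            # sparse DCT coefficients (toy signal)
    x = idct(coef, norm="ortho")                     # time-domain "neural" signal

    Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)   # on-chip-friendly +/-1 matrix
    y = Phi @ x                                      # compressed measurements

    Psi = idct(np.eye(n), axis=0, norm="ortho")      # DCT synthesis basis: x = Psi @ coef
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
    omp.fit(Phi @ Psi, y)
    x_hat = Psi @ omp.coef_

    snr = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
    print(f"reconstruction SNR ~ {snr:.1f} dB at CR = {n // m}x")
    ```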

  5. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  6. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
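
    A toy sketch of the fixation rule (our illustration, not the authors' code): keep a per-location uncertainty map, let each fixation reduce uncertainty with a foveal fall-off, and place the next fixation wherever the expected uncertainty reduction (information gain) is largest.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    h = w = 40
    uncertainty = rng.random((h, w)) + 1.0          # prior uncertainty about the stimulus
    yy, xx = np.mgrid[0:h, 0:w]

    def acuity(fy, fx, sigma=5.0):
        # resolution falls off with eccentricity from the fovea
        return np.exp(-((yy - fy) ** 2 + (xx - fx) ** 2) / (2 * sigma ** 2))

    fixations = []
    for _ in range(5):
        # expected gain of fixating each location = total uncertainty it would remove
        gain = np.array([[np.sum(uncertainty * acuity(fy, fx))
                          for fx in range(w)] for fy in range(h)])
        fy, fx = np.unravel_index(np.argmax(gain), gain.shape)
        uncertainty *= (1 - 0.9 * acuity(fy, fx))   # information gathered around the fovea
        fixations.append((fy, fx))

    print("fixation sequence:", fixations)
    ```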

  7. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scottt; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
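
    The scaling can be reproduced with a few lines of ballistic kinematics (our toy numbers, and no thermal spread, which in the paper caps the well-corrected core): slices receive an ideal linear tilt toward the focal plane plus a small cubic velocity error, and the achievable compression ends up set by the relative error amplitude.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, T = 1.0, 1.0                       # initial pulse half-length and drift time
    z0 = rng.uniform(-L, L, 200_000)      # slice positions within the pulse
    v_ideal = -z0 / T                     # perfect tilt: all slices reach z = 0 at t = T

    for eps in (1e-1, 1e-2, 1e-3):        # relative velocity-tilt error
        v = v_ideal + eps * (z0 / L) ** 3 * (L / T)   # cubic error along the pulse
        zf = z0 + v * T                   # position at the nominal focal time
        compression = z0.std() / zf.std()
        print(f"tilt error {eps:.0e}: compression ~ {compression:,.0f}x")
    ```

    One-percent errors give a compression ratio of order one hundred, matching the statement in the abstract.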

  8. Transmission of compressed tactical imagery by means of an rf link

    Science.gov (United States)

    Conners, Gary H.; Liou, C. S. J.; Muczynski, Joe

    1995-01-01

    The joint University of Rochester/Rochester Institute of Technology 'Center for Electronic Imaging Systems' (CEIS) is designed to focus on research problems of interest to industrial sponsors. A particular feature of the research is that it is organized in the 'triplet' mode: each project includes a faculty researcher, an industrial partner, and a doctoral or postdoctoral fellow. Compression of tactical images for transmission over an rf link is an example of this type of research project, being worked on in collaboration with one of the CEIS sponsors, Harris Corporation/RF Communications. The Harris Digital Video Imagery Transmission System (DVITS) is designed to fulfill the need to transmit secure imagery between unwired locations at real-time rates. DVITS specializes in transmission systems for users who rely on hf equipment operating at the low end of the frequency spectrum. However, the inherently low bandwidth of hf combined with transmission characteristics such as fading and dropout severely restricts the effective throughput. The problem is posed as one of maximizing the probability of reception of the most significant information in an m x n pixel image in the shortest possible time. Various design strategies combining image segmentation, compression, and error correction are evaluated using a realistic model for the communication channel. A recommended strategy is developed, and a test method using a variety of test images is described. The methodology established here can be employed for other image transmission designs.

  9. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  10. Confounding compression: the effects of posture, sizing and garment type on measured interface pressure in sports compression clothing.

    Science.gov (United States)

    Brophy-Williams, Ned; Driller, Matthew William; Shing, Cecilia Mary; Fell, James William; Halson, Shona Leigh

    2015-01-01

    The purpose of this investigation was to measure the interface pressure exerted by lower-body sports compression garments, in order to assess the effects of garment type, size and posture in athletes. Twelve national-level boxers were fitted with sports compression garments (tights and leggings), each in three different sizes (undersized, recommended size and oversized). Interface pressure was assessed across six landmarks on the lower limb (ranging from the medial malleolus to the upper thigh) as athletes assumed sitting, standing and supine postures. Sports compression leggings exerted a significantly higher mean pressure than sports compression tights (P < 0.05). The interface pressure exerted by sports compression garments is significantly affected by garment type, size and the posture assumed by the wearer.

  11. School Principals' Sources of Knowledge

    Science.gov (United States)

    Perkins, Arland Early

    2014-01-01

    The purpose of this study was to determine what sources of professional knowledge are available to principals in 1 rural East Tennessee school district. Qualitative research methods were applied to gain an understanding of what sources of knowledge are used by school principals in 1 rural East Tennessee school district and the barriers they face…

  12. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results asymptotically in the exponential stationary distribution for q ∈ (0,1], in contrast to the inverse-power-law stationary distribution obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
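
    For reference, the standard definitions involved and the contrast drawn in the abstract can be sketched as follows (the notation is assumed to match the paper's):

    ```latex
    % Renyi and Tsallis entropies:
    S^{R}_{q}[p] \;=\; \frac{1}{1-q}\,\ln\Big(\sum_{i} p_{i}^{\,q}\Big),
    \qquad
    S^{T}_{q}[p] \;=\; \frac{1-\sum_{i} p_{i}^{\,q}}{q-1}.
    % The ordinary maximum-entropy procedure for S^R_q under a mean-energy
    % constraint yields inverse-power-law stationary distributions, whereas
    % the generalized procedure gives, for q in (0,1], the exponential form
    p_{i} \;\propto\; e^{-\beta E_{i}} .
    ```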

  13. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state, but are somewhat close to these eigenvectors, is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information
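
    The Bell-operator method is easy to see in the simplest d = 2 instance (the paper treats general d): build the CHSH operator for optimal qubit settings and check that its largest eigenvalue is Tsirelson's bound 2*sqrt(2), above the classical value 2. The eigenvector of that top eigenvalue is the state achieving the maximal violation.

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    A0, A1 = Z, X                         # Alice's measurement settings
    B0 = (Z + X) / np.sqrt(2)             # Bob's settings, rotated by 45 degrees
    B1 = (Z - X) / np.sqrt(2)

    bell = (np.kron(A0, B0) + np.kron(A0, B1)
            + np.kron(A1, B0) - np.kron(A1, B1))   # CHSH Bell operator

    eigvals = np.linalg.eigvalsh(bell)
    print(f"max eigenvalue = {eigvals[-1]:.4f}  (2*sqrt(2) = {2 * np.sqrt(2):.4f})")
    ```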

  14. Reduction of symplectic principal R-bundles

    International Nuclear Information System (INIS)

    Lacirasella, Ignazio; Marrero, Juan Carlos; Padrón, Edith

    2012-01-01

    We describe a reduction process for symplectic principal R-bundles in the presence of a momentum map. These types of structures play an important role in the geometric formulation of non-autonomous Hamiltonian systems. We apply this procedure to the standard symplectic principal R-bundle associated with a fibration π:M→R. Moreover, we show a reduction process for non-autonomous Hamiltonian systems on symplectic principal R-bundles. We apply these reduction processes to several examples. (paper)

  15. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity, thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all requirements. The objective of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, an approach that would be applicable to other software packages with similar data compression needs.
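
    The kind of evaluation the paper describes is straightforward to sketch with today's standard-library codecs (zlib, bz2 and lzma stand in for the candidates of the era): compare compression ratio and speed on a byte stream, since all CDF data reduce to bytes anyway.

    ```python
    import bz2
    import lzma
    import time
    import zlib

    data = (b"CDF stores scalars, vectors and multidimensional arrays, "
            b"but everything breaks down to a sequence of bytes. " * 5000)

    for name, compress in (("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)):
        t0 = time.perf_counter()
        out = compress(data)
        dt = time.perf_counter() - t0
        print(f"{name:>4}: ratio {len(data) / len(out):5.1f}x, {dt * 1e3:6.1f} ms")
    ```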

  16. New Principal Coaching as a Safety Net

    Science.gov (United States)

    Celoria, Davide; Roberson, Ingrid

    2015-01-01

    This study examines new principal coaching as an induction process and explores the emotional dimensions of educational leadership. Twelve principal coaches and new principals--six of each--participated in this qualitative study that employed emergent coding (Creswell, 2008; Denzin, 2005; Glaser & Strauss, 1998; Spradley, 1979). The major…

  17. Modelling Monthly Mental Sickness Cases Using Principal ...

    African Journals Online (AJOL)

    The methodology was principal component analysis (PCA), using data obtained from the hospital to estimate regression coefficients and parameters. It was found that the derived principal component regression model was a good predictive tool. The principal component regression model obtained was adequate, and this ...
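
    A minimal sketch of principal component regression as described (synthetic data and variable names are ours): compress correlated predictors with PCA, then fit ordinary least squares on the leading components.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    n = 120                                        # e.g. monthly observations
    latent = rng.standard_normal((n, 2))           # two underlying drivers
    X = latent @ rng.standard_normal((2, 6)) + 0.1 * rng.standard_normal((n, 6))
    y = 3 * latent[:, 0] - 2 * latent[:, 1] + rng.standard_normal(n)

    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
    print(f"R^2 = {pcr.score(X, y):.2f}")
    print("regression coefficients on the PCs:",
          np.round(pcr.named_steps['linearregression'].coef_, 2))
    ```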

  18. Importance of an Effective Principal-Counselor Relationship

    Science.gov (United States)

    Edwards, LaWanda; Grace, Ronald; King, Gwendolyn

    2014-01-01

    An effective relationship between the principal and school counselor is essential when improving student achievement. To have an effective relationship, there must be communication, trust and respect, leadership, and collaborative planning between the principal and school counselor (College Board, 2011). Principals and school counselors are both…

  19. Compression force behaviours: An exploration of the beliefs and values influencing the application of breast compression during screening mammography

    International Nuclear Information System (INIS)

    Murphy, Fred; Nightingale, Julie; Hogg, Peter; Robinson, Leslie; Seddon, Doreen; Mackay, Stuart

    2015-01-01

    This research project investigated the compression behaviours of practitioners during screening mammography. The study sought to provide a qualitative understanding of ‘how’ and ‘why’ practitioners apply compression force. With a clear conflict in the existing literature and little scientific evidence base to support the reasoning behind the application of compression force, this research project investigated the application of compression using a phenomenological approach. Following ethical approval, six focus group interviews were conducted at six different breast screening centres in England. A sample of 41 practitioners were interviewed within the focus groups together with six one-to-one interviews of mammography educators or clinical placement co-ordinators. The findings revealed two broad humanistic and technological categories consisting of 10 themes. The themes included client empowerment, white-lies, time for interactions, uncertainty of own practice, culture, power, compression controls, digital technology, dose audit-safety nets, numerical scales. All of these themes were derived from 28 units of significant meaning (USM). The results demonstrate a wide variation in the application of compression force, thus offering a possible explanation for the difference between practitioner compression forces found in quantitative studies. Compression force was applied in many different ways due to individual practitioner experiences and behaviour. Furthermore, the culture and the practice of the units themselves influenced beliefs and attitudes of practitioners in compression force application. The strongest recommendation to emerge from this study was the need for peer observation to enable practitioners to observe and compare their own compression force practice to that of their colleagues. The findings are significant for clinical practice in order to understand how and why compression force is applied

  20. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

Purpose: To modify the previously proposed preprocessing technique, which improves the compressibility of computed tomography (CT) images, to cover the diversity of three-dimensional configurations of different body parts, and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value, thereby maximizing data redundancy. Since the technique was developed with only chest CT images in mind, the authors modified the segmentation method to cover the diversity of three-dimensional configurations of different body parts. The modified version was evaluated as follows. In 368 randomly selected CT examinations (352 787 images), each image was preprocessed using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covered the body region. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compression. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The median CR_I was 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
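
    The core of the preprocessing step — segment the body, then flatten everything outside it to one constant so a lossless coder sees long redundant runs — can be sketched in a few lines. The sketch below is illustrative only: the air threshold, hole filling, and largest-connected-component heuristic are assumptions standing in for the paper's actual segmentation, and zlib stands in for JPEG/JPEG2000 lossless coding.

```python
import zlib

import numpy as np
from scipy import ndimage

def preprocess_ct_slice(image, air_threshold=-500, fill_value=-1000):
    """Flatten everything outside the body region to one constant so a
    lossless coder sees long redundant runs (assumed segmentation: air
    threshold + largest connected component + hole filling)."""
    body = image > air_threshold               # crude air/body split (HU)
    labels, n = ndimage.label(body)
    if n == 0:
        return np.full_like(image, fill_value)
    sizes = ndimage.sum(body, labels, range(1, n + 1))
    mask = ndimage.binary_fill_holes(labels == np.argmax(sizes) + 1)
    out = image.copy()
    out[~mask] = fill_value
    return out

# Toy demonstration of the gain in reversible compression ratio.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:256, :256]
body_disk = (yy - 128) ** 2 + (xx - 128) ** 2 < 90 ** 2
slice_hu = np.where(body_disk,
                    rng.integers(-100, 200, (256, 256)),    # soft tissue
                    rng.integers(-1024, -900, (256, 256)))  # noisy air
slice_hu = slice_hu.astype(np.int16)

def cr(img):  # compression ratio under a generic lossless coder
    return img.nbytes / len(zlib.compress(img.tobytes(), 9))

print(f"CR without preprocessing: {cr(slice_hu):.2f}")
print(f"CR with preprocessing:    {cr(preprocess_ct_slice(slice_hu)):.2f}")
```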

  1. Management Of Indiscipline Among Teachers By Principals Of ...

    African Journals Online (AJOL)

    This study compared the management of indiscipline among teachers by public and private school principals in Akwa Ibom State. The sample comprised four hundred and fifty (450) principals/vice principals randomly selected from a population of one thousand, four hundred and twenty eight (1,428) principals. The null ...

  2. What Do Effective Principals Do?

    Science.gov (United States)

    Protheroe, Nancy

    2011-01-01

    Much has been written during the past decade about the changing role of the principal and the shift in emphasis from manager to instructional leader. Anyone in education, and especially principals themselves, could develop a mental list of responsibilities that fit within each of these realms. But research makes it clear that both those aspects of…

  3. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes under a given influence diffusion model is maximized. Topic-aware influence diffusion models have been proposed recently to address the issue that influence between a pair of users is often topic-dependent and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each topic mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
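
    For context, the baseline that such preprocessing tries to avoid re-running per topic mixture is greedy seed selection with Monte Carlo spread estimation. The sketch below shows that baseline under the independent cascade model; it is not the paper's preprocessing algorithm, and the toy graph, propagation probability `p`, and trial count are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=300, seed=7):
    """Monte Carlo estimate of influence spread under the independent
    cascade model with a uniform edge activation probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = [v for u in frontier for v in graph.get(u, [])
                   if v not in active and rng.random() < p]
            active.update(nxt)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k):
    """Greedy seed selection: repeatedly add the node with the largest
    marginal gain in estimated spread."""
    seeds = []
    for _ in range(k):
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: simulate_ic(graph, seeds + [v]))
        seeds.append(best)
    return seeds

# Tiny toy network as adjacency lists.
g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
print(greedy_seeds(g, 2))
```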

  4. The Principal as Academician: The Renewed Voice.

    Science.gov (United States)

    McAvoy, Brenda, Ed.

    This collection of essays was written by principals who participated in the 1986-87 Humanities Seminar sponsored by the Principals' Institute of Georgia State University. The focus was "The Evolution of Intellectual Leadership." The roles of the principal as philosopher, historian, ethnician, writer and team member are examined through…

  5. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
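
    The row layout the abstract describes — variably sized compressed data blocks packed into a row, with tag blocks recording where each one lives — can be illustrated with a small software model. This is a hypothetical sketch of the idea, not the patented hardware design; zlib merely stands in for the hardware compression logic.

```python
import zlib
from dataclasses import dataclass

@dataclass
class TagBlock:
    block_id: int   # which data block this tag describes
    offset: int     # byte offset of the compressed block within the row
    length: int     # compressed size (non-uniform across blocks)

class CompressedRow:
    """One cache row: a payload of variably sized compressed blocks
    plus a set of tag blocks locating each one."""
    def __init__(self):
        self.payload = bytearray()
        self.tags = []

    def store(self, block_id, data: bytes):
        comp = zlib.compress(data)
        self.tags.append(TagBlock(block_id, len(self.payload), len(comp)))
        self.payload += comp

    def load(self, block_id) -> bytes:
        tag = next(t for t in self.tags if t.block_id == block_id)
        comp = bytes(self.payload[tag.offset:tag.offset + tag.length])
        return zlib.decompress(comp)

row = CompressedRow()
row.store(0, b"A" * 64)          # highly compressible block
row.store(1, bytes(range(64)))   # less compressible block
assert row.load(0) == b"A" * 64
print([t.length for t in row.tags])  # non-uniform compressed sizes
```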

  6. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...
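
    The key claim — SNR in the compressed samples improves when the measurement rows are drawn from the column space of a rank-deficient dictionary, because noise components orthogonal to that subspace are annihilated — can be checked numerically. A minimal sketch, with dimensions and noise level chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, k, p = 64, 128, 20, 3, 16  # signal dim, atoms, rank, sparsity, samples

# Rank-deficient dictionary: its atoms span only an r-dimensional subspace.
D = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
x = np.zeros(m)                      # k-sparse coefficient vector
x[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
signal = D @ x
noise = 0.1 * rng.standard_normal(n)

def sample_snr(Phi):
    """SNR of the compressed samples y = Phi (signal + noise), in dB."""
    s, e = Phi @ signal, Phi @ noise
    return 10 * np.log10((s @ s) / (e @ e))

Phi_generic = rng.standard_normal((p, n))
# Rows drawn from the column space of D: random combinations of an
# orthonormal basis U of range(D).
U = np.linalg.svd(D, full_matrices=False)[0][:, :r]
Phi_aligned = rng.standard_normal((p, r)) @ U.T

print(f"SNR, generic rows:       {sample_snr(Phi_generic):.1f} dB")
print(f"SNR, rows from range(D): {sample_snr(Phi_aligned):.1f} dB")
```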

  7. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
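
    In symbols, the quoted scaling reads as follows (the notation is assumed here: U_0 is the bunching-module voltage and E_b the beam kinetic energy):

```latex
C_{\max} \;\propto\;
\left[\left(\frac{\delta U}{U_0}\right)
\left(\frac{\Delta E_b}{E_b}\right)\right]^{-1/2}
\qquad\text{for}\qquad \delta U \gg \Delta E_b ,
```

    i.e., the maximum compression ratio varies inversely with the geometric mean of the relative modulation error and the relative intrinsic energy spread.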

  8. On the characterisation of the dynamic compressive behaviour of silicon carbides subjected to isentropic compression experiments

    Directory of Open Access Journals (Sweden)

    Zinszner Jean-Luc

    2015-01-01

Ceramic materials are commonly used as protective materials, particularly due to their very high hardness and compressive strength. However, the microstructure of a ceramic has a great influence on its compressive strength and on its ballistic efficiency. To study the influence of microstructural parameters on the dynamic compressive behaviour of silicon carbides, isentropic compression experiments have been performed on two silicon carbide grades using a high pulsed power generator called GEPI. Contrary to plate impact experiments, the use of the GEPI device and of Lagrangian analysis allows determination of the whole loading path. The two SiC grades studied present different Hugoniot elastic limits (HEL) due to their different microstructures. For these materials, the experimental technique allowed evaluation of the evolution of the equivalent stress during dynamic compression. It was observed that the two grades present work hardening that is more or less pronounced beyond the HEL. The densification of the material seems to have more influence on the HEL than the grain size.

  9. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
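
    The DCT quantization matrix mentioned above is the object JPEG-style codecs divide each block's coefficients by before rounding. The sketch below demonstrates that mechanism with the standard JPEG luminance table; uniformly scaling the matrix is a crude stand-in for the viewing-distance/resolution/brightness formula, which is not reproduced here.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (ISO/IEC 10918-1, Annex K).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99]], dtype=float)

def quantize_block(block, Q):
    """DCT an 8x8 block, quantize the coefficients with matrix Q,
    then dequantize and reconstruct."""
    coeffs = dctn(block - 128.0, norm="ortho")
    q = np.round(coeffs / Q)            # most high-frequency terms -> 0
    return idctn(q * Q, norm="ortho") + 128.0

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)
# Scaling Q trades bit rate against distortion.
for scale in (0.5, 1.0, 2.0):
    rec = quantize_block(block, Q50 * scale)
    print(f"Q x {scale}: RMSE {np.sqrt(np.mean((rec - block) ** 2)):.1f}")
```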

  10. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

The objective of radiologic image compression is to reduce the data volume of and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  11. Leadership Coaching for Principals: A National Study

    Science.gov (United States)

    Wise, Donald; Cavazos, Blanca

    2017-01-01

    Surveys were sent to a large representative sample of public school principals in the United States asking if they had received leadership coaching. Comparison of responses to actual numbers of principals indicates that the sample represents the first national study of principal leadership coaching. Results indicate that approximately 50% of all…

  12. El culto de Maximón en Guatemala

    OpenAIRE

    Pédron‑Colombani, Sylvie

    2009-01-01

This article focuses on the figure of Maximón, a syncretic deity of Guatemala, in a context in which popular Catholicism is being displaced by Protestant churches. This hybrid divinity, onto which Catholic saints such as Judas Iscariot or the Mayan god Mam are grafted, allows Maximón to be appropriated by distinct segments of the population (indigenous as well as mestizo). It likewise allows him to serve as a symbol of masked social protest when Maximón is associated with figures…

  13. Riccati transformations and principal solutions of discrete linear systems

    International Nuclear Information System (INIS)

    Ahlbrandt, C.D.; Hooker, J.W.

    1984-01-01

Consider a second-order linear matrix difference equation. Definitions of principal and anti-principal, or recessive and dominant, solutions of the equation are given, and the existence of principal and anti-principal solutions, as well as the essential uniqueness of principal solutions, is proven
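
    For orientation, one standard setting for these objects (the conventions here are assumed, not necessarily the authors') is the self-adjoint second-order matrix difference equation together with the Riccati substitution that converts it into a first-order matrix Riccati equation:

```latex
\Delta\!\left(R_k\,\Delta X_k\right) + Q_k X_{k+1} = 0,
\qquad W_k = R_k\,(\Delta X_k)\,X_k^{-1},
\qquad\Longrightarrow\qquad
W_{k+1} = -\,Q_k + W_k\left(I + R_k^{-1} W_k\right)^{-1}.
```

    Roughly speaking, the principal (recessive) solution corresponds to the eventually minimal solution of the associated Riccati equation, which is one way the essential uniqueness statement is usually phrased.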

  14. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  15. Self-consistent collective-coordinate method for ''maximally-decoupled'' collective subspace and its boson mapping: Quantum theory of ''maximally-decoupled'' collective motion

    International Nuclear Information System (INIS)

    Marumori, T.; Sakata, F.; Maskawa, T.; Une, T.; Hashimoto, Y.

    1983-01-01

The main purpose of this paper is to develop a full quantum theory, which is capable by itself of determining a ''maximally-decoupled'' collective motion. The paper is divided into two parts. In the first part, the motivation and basic idea of the theory are explained, and the ''maximal-decoupling condition'' on the collective motion is formulated, within the framework of the time-dependent Hartree-Fock theory, in a general form called the invariance principle of the (time-dependent) Schrodinger equation. In the second part, it is shown that by positively utilizing the invariance principle, a full quantum theory of the ''maximally-decoupled'' collective motion can be constructed. This quantum theory is shown to be a generalization of the kinematical boson-mapping theories developed so far, in such a way that the dynamical ''maximal-decoupling condition'' on the collective motion is automatically satisfied

  16. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
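
    NCD approximates the uncomputable information distance by substituting a real compressor C(·) for Kolmogorov complexity: NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch with zlib, which also illustrates the paper's point: a real compressor such as zlib (with its 32 KB window) stops satisfying the idealized properties once inputs grow large.

```python
import zlib

def C(data: bytes) -> int:
    """Compressed size under a concrete compressor (zlib here)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance."""
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 20
b_ = a.replace(b"fox", b"cat")
c = bytes(range(256)) * 4
print(f"similar pair:   {ncd(a, b_):.3f}")   # close to 0
print(f"unrelated pair: {ncd(a, c):.3f}")    # close to 1
```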

  17. Principals' Collaborative Roles as Leaders for Learning

    Science.gov (United States)

    Kitchen, Margaret; Gray, Susan; Jeurissen, Maree

    2016-01-01

    This article draws on data from three multicultural New Zealand primary schools to reconceptualize principals' roles as leaders for learning. In doing so, the writers build on Sinnema and Robinson's (2012) article on goal setting in principal evaluation. Sinnema and Robinson found that even principals hand-picked for their experience fell short on…

  18. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Due to the numerous backup users, how to reduce the massive data load is a key problem for system designers. Data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, de-duplication eliminates only inter-file redundant data, and neither alone meets the compression efficiency needs of backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to realize intra-file de-duplication. Several compression algorithms were adopted to measure the compression ratio and CPU time. The adaptability of different algorithms to particular situations is also analyzed. The performance analysis shows that a great improvement is made through the hybrid compression policy.
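
    A toy model of the two-level policy — content-hash de-duplication across users (global level) on top of per-chunk stream compression (block level) — is sketched below. The chunk size, hashing scheme, and zlib are illustrative assumptions, not the paper's implementation.

```python
import hashlib
import zlib

class HybridBackupStore:
    """Global level: chunks shared across files/users are stored once,
    keyed by content hash. Block level: each new chunk is additionally
    stream-compressed before storage."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # sha256 digest -> compressed chunk

    def put(self, data: bytes) -> list:
        """Store a file; return its recipe (list of chunk digests)."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:                 # dedup miss
                self.chunks[digest] = zlib.compress(chunk)
            recipe.append(digest)
        return recipe

    def get(self, recipe: list) -> bytes:
        return b"".join(zlib.decompress(self.chunks[d]) for d in recipe)

store = HybridBackupStore()
user_a = b"shared operating system image " * 1000
user_b = user_a + b"plus one user's own documents"
r_a, r_b = store.put(user_a), store.put(user_b)
stored = sum(len(c) for c in store.chunks.values())
print(f"logical {len(user_a) + len(user_b)} B -> physical {stored} B")
assert store.get(r_a) == user_a and store.get(r_b) == user_b
```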

  19. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    Science.gov (United States)

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. Duty cycle represents the fraction of the compression-decompression cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast (p < 0.001 for both). Thus, induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  20. Principal component analysis for authorship attribution

    OpenAIRE

    Amir Jamak; Alen Savatic; Mehmet Can

    2012-01-01

Background: To recognize the authors of texts by the use of statistical tools, one first needs to decide which features to use as author characteristics, and then extract these features from the texts. The features extracted from texts are mostly the counts of so-called function words. Objectives: The extracted data are processed further and compressed into a representation with fewer features, in such a way that the compressed data still retain the power of effective discriminators. In this case...
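
    A minimal sketch of the compression step: project centered function-word counts onto their leading principal components via the SVD. The toy count matrix and the choice of two components are assumptions for illustration.

```python
import numpy as np

# Toy design matrix: rows = documents, columns = function-word counts
# (e.g., counts of "the", "of", "and", ...). Two "authors", 3 docs each.
X = np.array([
    [52, 30, 18, 9, 4],
    [55, 28, 20, 8, 5],
    [50, 31, 17, 10, 4],
    [35, 45, 25, 3, 12],
    [33, 47, 23, 4, 11],
    [36, 44, 26, 3, 13],
], dtype=float)

Xc = X - X.mean(axis=0)                    # center the features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                     # keep 2 principal components

explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"variance kept by 2 PCs: {explained:.1%}")
print(np.round(scores, 1))                 # docs 0-2 vs 3-5 separate on PC1
```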

  1. Compression Characteristics of Solid Wastes as Backfill Materials

    OpenAIRE

    Meng Li; Jixiong Zhang; Rui Gao

    2016-01-01

    A self-made large-diameter compression steel chamber and a SANS material testing machine were chosen to perform a series of compression tests in order to fully understand the compression characteristics of differently graded filling gangue samples. The relationship between the stress-deformation modulus and stress-compression degree was analyzed comparatively. The results showed that, during compression, the deformation modulus of gangue grew linearly with stress, the overall relationship bet...

  2. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiment, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
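
    The size-versus-CPU tradeoff described above is easy to reproduce with any DEFLATE implementation; the sketch below uses Python's zlib as a stand-in for ROOT's compression settings (it does not use ROOT itself, and the payload is an arbitrary repetitive byte stream).

```python
import time
import zlib

payload = open(__file__, "rb").read() * 200   # any repetitive byte stream

for level in (1, 6, 9):
    t0 = time.perf_counter()
    comp = zlib.compress(payload, level)
    t_comp = time.perf_counter() - t0
    t0 = time.perf_counter()
    zlib.decompress(comp)
    t_dec = time.perf_counter() - t0
    print(f"level {level}: ratio {len(payload) / len(comp):5.1f}, "
          f"compress {t_comp * 1e3:6.1f} ms, decompress {t_dec * 1e3:5.1f} ms")
```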

  3. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

Stress analysis has been made on glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental setup were analyzed, namely the parallel and series methods, in which the specimen is compressed by tilted jigs that enable the combined stresses to be applied to the specimen. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two setups. In the parallel system the shear strength first increased with compressive stress and then decreased. On the contrary, in the series system the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses will be discussed.

  4. Principals as Assessment Leaders in Rural Schools

    Science.gov (United States)

    Renihan, Patrick; Noonan, Brian

    2012-01-01

    This article reports a study of rural school principals' assessment leadership roles and the impact of rural context on their work. The study involved three focus groups of principals serving small rural schools of varied size and grade configuration in three systems. Principals viewed assessment as a matter of teacher accountability and as a…

  5. Principals: Learn P.R. Survival Skills.

    Science.gov (United States)

    Reep, Beverly B.

    1988-01-01

    School building level public relations depends on the principal or vice principal. Strategies designed to enhance school public relations programs include linking school and community, working with the press, and keeping morale high inside the school. (MLF)

  6. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.
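
    A minimal sketch of wavelet-domain embedding: hide payload bits in the diagonal detail band of a 2-D DWT by quantization-index modulation. The wavelet, band choice, and embedding strength are illustrative assumptions, not the authors' scheme, and the subsequent bit-plane/index/Huffman coding stages are omitted.

```python
import numpy as np
import pywt

def embed_bits(frame, bits, strength=8.0):
    """Hide payload bits in the diagonal detail band of a 2-D DWT
    by quantizing each coefficient to an even/odd multiple of strength."""
    cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
    flat = cD.ravel()                         # view: edits modify cD
    for i, bit in enumerate(bits):
        q = np.round(flat[i] / (2 * strength))
        flat[i] = strength * (2 * q + bit)
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")

def extract_bits(frame, n, strength=8.0):
    _, (_, _, cD) = pywt.dwt2(frame.astype(float), "haar")
    return [int(np.round(c / strength)) % 2 for c in cD.ravel()[:n]]

rng = np.random.default_rng(2)
video_frame = rng.integers(0, 256, (64, 64)).astype(float)
audio_bits = [1, 0, 1, 1, 0, 0, 1, 0]         # stand-in audio payload
marked = embed_bits(video_frame, audio_bits)
print(extract_bits(marked, len(audio_bits)) == audio_bits)  # True
```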

  7. Prevention of deep vein thrombosis in potential neurosurgical patients. A randomized trial comparing graduated compression stockings alone or graduated compression stockings plus intermittent pneumatic compression with control

    International Nuclear Information System (INIS)

    Turpie, A.G.; Hirsh, J.; Gent, M.; Julian, D.; Johnson, J.

    1989-01-01

    In a randomized trial of neurosurgical patients, groups wearing graduated compression stockings alone (group 1) or graduated compression stockings plus intermittent pneumatic compression (IPC) (group 2) were compared with an untreated control group in the prevention of deep vein thrombosis (DVT). In both active treatment groups, the graduated compression stockings were continued for 14 days or until hospital discharge, if earlier. In group 2, IPC was continued for seven days. All patients underwent DVT surveillance with iodine 125-labeled fibrinogen leg scanning and impedance plethysmography. Venography was carried out if either test became abnormal. Deep vein thrombosis occurred in seven (8.8%) of 80 patients in group 1, in seven (9.0%) of 78 patients in group 2, and in 16 (19.8%) of 81 patients in the control group. The observed differences among these rates are statistically significant. The results of this study indicate that graduated compression stockings alone or in combination with IPC are effective methods of preventing DVT in neurosurgical patients

  8. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O; Feidt, M; Benelmir, R [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1998-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  9. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O.; Feidt, M.; Benelmir, R. [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1997-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  10. Principal minors and rhombus tilings

    International Nuclear Information System (INIS)

    Kenyon, Richard; Pemantle, Robin

    2014-01-01

The algebraic relations between the principal minors of a generic n × n matrix are somewhat mysterious, see e.g. Lin and Sturmfels (2009 J. Algebra 322 4121–31). We show, however, that by adding in certain almost principal minors, the ideal of relations is generated by translations of a single relation, the so-called hexahedron relation, which is a composition of six cluster mutations. We give in particular a Laurent-polynomial parameterization of the space of n × n matrices, whose parameters consist of certain principal and almost principal minors. The parameters naturally live on vertices and faces of the tiles in a rhombus tiling of a convex 2n-gon. A matrix is associated to an equivalence class of tilings, all related to each other by Yang–Baxter-like transformations. By specializing the initial data we can similarly parameterize the space of Hermitian symmetric matrices over R, C or H, the quaternions. Moreover, by further specialization we can parametrize the space of positive definite matrices over these rings. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to ‘Cluster algebras in mathematical physics’. (paper)

  11. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
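
    The pipeline the presentation covers — transform, threshold away small coefficients, reconstruct — in miniature (the synthetic image and the kept fractions are arbitrary choices):

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
# A smooth synthetic image: low-frequency content dominates.
x = np.linspace(0, np.pi, 64)
img = 100 * np.outer(np.sin(x), np.cos(x)) + rng.normal(0, 2, (64, 64))

coeffs = dctn(img, norm="ortho")
for keep in (0.50, 0.10, 0.02):             # fraction of coefficients kept
    cutoff = np.quantile(np.abs(coeffs), 1 - keep)
    sparse = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)
    rec = idctn(sparse, norm="ortho")
    rmse = np.sqrt(np.mean((rec - img) ** 2))
    print(f"keep {keep:4.0%} of coefficients -> RMSE {rmse:5.2f}")
```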

  12. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    Science.gov (United States)

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
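
    Computing the coefficient is a brute-force search over splits. A sketch, assuming unit-weighted half scores and the Spearman-Brown step-up (the simulated data are invented for illustration):

```python
from itertools import combinations

import numpy as np

def max_split_half(scores: np.ndarray) -> float:
    """Try every way of splitting the items into two halves, correlate
    the half scores across persons, apply the Spearman-Brown correction,
    and keep the maximum."""
    n_items = scores.shape[1]
    items = range(n_items)
    best = -1.0
    for half in combinations(items, n_items // 2):
        a = scores[:, list(half)].sum(axis=1)
        b = scores[:, [i for i in items if i not in half]].sum(axis=1)
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, 2 * r / (1 + r))      # Spearman-Brown step-up
    return best

rng = np.random.default_rng(4)
true_score = rng.normal(size=(200, 1))                      # 200 persons
items = true_score + rng.normal(scale=1.0, size=(200, 6))   # 6 noisy items
print(f"maximal split-half: {max_split_half(items):.3f}")
```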

  13. Learning curves for mutual information maximization

    International Nuclear Information System (INIS)

    Urbanczik, R.

    2003-01-01

    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered

  14. Tokamak plasma variations under rapid compression

    International Nuclear Information System (INIS)

    Holmes, J.A.; Peng, Y.K.M.; Lynch, S.J.

    1980-04-01

Changes in plasmas undergoing large, rapid compressions are examined numerically over the following ranges of aspect ratio A: 3 ≥ A ≥ 1.5 for major radius compressions of circular, elliptical, and D-shaped cross sections; and 3 ≤ A ≤ 6 for minor radius compressions of circular and D-shaped cross sections. The numerical approach combines the computation of fixed boundary MHD equilibria with single-fluid, flux-surface-averaged energy balance, particle balance, and magnetic flux diffusion equations. It is found that the dependences of plasma current I_p and poloidal beta β̄_p on the compression ratio C differ significantly in major radius compressions from those proposed by Furth and Yoshikawa. The present interpretation is that compression to small A dramatically increases the plasma current, which lowers β̄_p and makes the plasma more paramagnetic. Despite large values of toroidal beta β̄_T (≥ 30% with q_axis ≈ 1, q_edge ≈ 3), this tends to concentrate more toroidal flux near the magnetic axis, which means that a reduced minor radius is required to preserve the continuity of the toroidal flux function F at the plasma edge. Minor radius compressions to large aspect ratio agree well with the Furth-Yoshikawa scaling laws

  15. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. T1-weighted MR images showed four patterns: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the body (8), and diffuse low signal throughout the vertebral body (1). The low signal portions changed to high signal intensity on T2-weighted images. In 7 of 15 patients (11 compression fractures) there was a history of trauma; the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine reveal variable signal intensities on MR images. These patterns of benign compression fractures may be useful in the interpretation of MR images of the spine.

  16. Statewide Data on Supply and Demand of Principals after Policy Changes to Principal Preparation in Illinois

    Science.gov (United States)

    Haller, Alicia; Hunt, Erika

    2016-01-01

    Research has demonstrated that principals have a powerful impact on school improvement and student learning. Principals play a vital role in recruiting, developing, and retaining effective teachers; creating a school-wide culture of learning; and implementing a continuous improvement plan aimed at increasing student achievement. Leithwood, Louis,…

  17. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, building on existing experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including its CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  18. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical images has grown rapidly. However, the commonly used compression methods cannot achieve satisfying results. Methods: In this paper, building on existing experiments and conclusions, the lifting-step approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS), and the main design points of the HVS model are presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including its CSF characteristics, to the decorrelating transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm achieves better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.
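
    The design point shared by the two records above — weight the quantization of each wavelet subband by visual sensitivity — can be caricatured as per-level step sizes. The step sizes below are invented for illustration and are not the authors' CSF model; they simply quantize the finest (least visually important) level hardest.

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)
x = np.linspace(0, 4 * np.pi, 256)
image = 60 * np.outer(np.sin(x), np.sin(x)) + rng.normal(0, 3, (256, 256))

# coeffs = [approximation, level-3 details, level-2 details, level-1 details]
coeffs = pywt.wavedec2(image, "db2", level=3)

# Crude CSF-inspired step sizes: coarse subbands get fine steps, the
# highest-frequency subbands get the coarsest steps.
steps = [2.0, 4.0, 8.0, 16.0]

quantized = [np.round(coeffs[0] / steps[0]) * steps[0]]
for bands, q in zip(coeffs[1:], steps[1:]):
    quantized.append(tuple(np.round(b / q) * q for b in bands))

rec = pywt.waverec2(quantized, "db2")
print(f"RMSE after HVS-weighted quantization: "
      f"{np.sqrt(np.mean((rec - image) ** 2)):.2f}")
```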

  19. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in the CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in the CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These results indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that the sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
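
    The word-aligned idea can be sketched compactly: cut the bitmap into word-sized groups, emit one fill word per run of identical all-0/all-1 groups and one literal word per mixed group. The tuple representation below is a readable stand-in for the actual packed 32-bit words (31 payload bits per word, as in WAH).

```python
def wah_compress(bits, w=31):
    """Word-aligned-hybrid-style encoding (sketch): runs of all-zero or
    all-one groups become a single fill word; mixed groups become
    literal words."""
    groups = [bits[i:i + w] for i in range(0, len(bits), w)]
    words, i = [], 0
    while i < len(groups):
        g = groups[i]
        if len(g) == w and len(set(g)) == 1:          # candidate fill
            j = i
            while j < len(groups) and groups[j] == g:
                j += 1
            words.append(("fill", g[0], j - i))       # bit value, run length
            i = j
        else:
            words.append(("literal", g))
            i += 1
    return words

bitmap = [0] * 310 + [1, 0, 1] + [1] * 62 + [0] * 93
for word in wah_compress(bitmap):
    print(word)
```

    Bitwise AND/OR/NOT can then walk two such word streams directly, which is where the CPU advantage over byte-aligned schemes comes from.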

  20. Breakdown of maximality conjecture in continuous phase transitions

    International Nuclear Information System (INIS)

    Mukamel, D.; Jaric, M.V.

    1983-04-01

    A Landau-Ginzburg-Wilson model associated with a single irreducible representation which exhibits an ordered phase whose symmetry group is not a maximal isotropy subgroup of the symmetry group of the disordered phase is constructed. This example disproves the maximality conjecture suggested in numerous previous studies. Below the (continuous) transition, the order parameter points along a direction which varies with the temperature and with the other parameters which define the model. An extension of the maximality conjecture to reducible representations was postulated in the context of Higgs symmetry breaking mechanism. Our model can also be extended to provide a counter example in these cases. (author)

  1. Maximizers versus satisficers: Decision-making styles, competence, and outcomes

    OpenAIRE

    Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...

  2. A comparative analysis of the cryo-compression and cryo-adsorption hydrogen storage methods

    Energy Technology Data Exchange (ETDEWEB)

    Petitpas, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benard, P [Universite du Quebec a Trois-Rivieres (Canada); Klebanoff, L E [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Xiao, J [Universite du Quebec a Trois-Rivieres (Canada); Aceves, S M [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-07-01

    While conventional low-pressure LH₂ dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30–100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H₂ density and dormancy. We start by reviewing some basic aspects of LH₂ properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5–8 kg H₂, typical of light duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially, powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined “hybrid” system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition the relationships found between onboard H₂ capacity, pressure vessel and/or sorbent mass and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general designing guidelines in future engineering efforts using these two hydrogen storage approaches.

  3. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly for the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  4. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

A biological compression model, the expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  5. Dietary Fiber Extraction from Defatted Corn Hull by Hot-Compressed Water

    Directory of Open Access Journals (Sweden)

    Wang Li

    2018-06-01

Corn hulls are abundant and inexpensive byproducts of the corn dry- or wet-milling processes, but most of them are discarded as agro-wastes. The aim of this study was to extract dietary fiber by hot-compressed water (HCW) from defatted corn hull and to determine its chemical properties. Results showed that temperature and time played critical roles in extraction efficiency; the maximal yield of dietary fiber A (DFA) extracted by HCW reached 33.0% at 150°C for 60 min. The yield of dietary fiber B (DFB) increased from 2.0% to 56.9% as the temperature increased from 110 to 180°C, while the yield of solid residue (SR) decreased from 88.7% to 27.7%. Fourier transform infrared spectroscopy (FT-IR) results demonstrated that C-H, O-H, C=O, and COO- groups occurred in the DFA, SR and DFB. The dietary fiber polysaccharides consisted of arabinose, galactose, glucose, xylose and uronic acid.

  6. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases

  7. Important role of vertical migration of compressed gas, oil and water in formation of AVPD (abnormally high pressure gradient) zones

    Energy Technology Data Exchange (ETDEWEB)

    Anikiyev, K.A.

    1980-01-01

    The principal role of vertical migration of compressed gases, gas-saturated petroleum and water during formation of abnormally high pressure gradients (AVPD) is confirmed by extensive factual data on gas production, grifons, blowouts and gushers that accompany drilling formations with AVPD from early history to the present time; the sources of vertical migration of compressed fluids, in accordance with geodynamic AVPD theory, are the deep degasified centers of the earth mantle. Among the various types of AVPD zones especially notable are the large (often massive or massive-layer) deposits and the intrusion aureoles that top them in the overlapping covering layers. Prediction of AVPD zones and determining their field and energy potential must be based on field-baric simulation of the formations being drilled in light of laws regarding the important role of the vertical migration of compressed fluids. When developing field-baric models, it is necessary to utilize the extensive and valuable data on grifons, gas production and blowouts that has been collected and categorized by drilling engineers and production geologists. To further develop data on field-baric conditions of the earth, it is necessary to collect and study signals of AVPD. First of all, there is a need to evaluate potential elastic resources of compressed fluids which can move from the bed into the well. Thus it is necessary to study and standardize intrusion aureoles and other AVPD zones within the aspect of fieldbaric modeling.

  8. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
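
    The flavor of the exact formulation can be sketched as a dynamic program: keep k samples (endpoints included) so that linear interpolation through them minimizes total squared error. This O(n²k) sketch follows the spirit of the paper's network formulation but is not the authors' exact algorithm; the test signal is synthetic.

```python
import numpy as np

def best_subset(signal, k):
    """Pick k sample indices (keeping both endpoints) minimizing the
    total squared error of piecewise-linear interpolation."""
    n = len(signal)
    # err[i, j]: squared error of interpolating strictly between i and j
    err = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 2, n):
            t = np.arange(i + 1, j)
            interp = signal[i] + (signal[j] - signal[i]) * (t - i) / (j - i)
            err[i, j] = np.sum((signal[t] - interp) ** 2)
    INF = float("inf")
    cost = np.full((n, k), INF)   # cost[j, m]: m+1 kept samples ending at j
    prev = np.zeros((n, k), dtype=int)
    cost[0, 0] = 0.0
    for j in range(1, n):
        for m in range(1, k):
            for i in range(j):
                c = cost[i, m - 1] + err[i, j]
                if c < cost[j, m]:
                    cost[j, m], prev[j, m] = c, i
    path, j, m = [n - 1], n - 1, k - 1        # backtrack kept indices
    while m > 0:
        j = prev[j, m]
        path.append(j)
        m -= 1
    return path[::-1], cost[n - 1, k - 1]

t = np.linspace(0, 2 * np.pi, 60)
ecg_like = np.sin(t) + 0.3 * np.sin(7 * t)    # stand-in for an ECG trace
idx, e = best_subset(ecg_like, k=12)
print(idx, f"total squared error {e:.3f}")
```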

  9. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research, which makes the book especially appealing to students looking for a currently active research topic at the intersection of group theory and computer science. The word problem, introduced in 1910 by Max Dehn, is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression has been developed to provide more efficient algorithms for word problems, by representing long words over group generators in a compres...

10. Comparison of changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction

    OpenAIRE

    Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho

    2016-01-01

    [Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility...

  11. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
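
    The essence of reference-based compression is to encode a target sequence as copy operations against a reference plus literal inserts, then entropy-code the operation stream. A sketch of that idea (not the ERGC algorithm itself; difflib and zlib are stand-ins for its matching and back-end coding stages):

```python
import difflib
import random
import zlib

def ref_compress(target: str, reference: str) -> bytes:
    """Encode the target as copy/insert operations against a reference,
    then pack the operation stream with a generic coder."""
    ops = []
    sm = difflib.SequenceMatcher(a=reference, b=target, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(f"C{i1},{i2 - i1}")        # copy from reference
        else:
            ops.append("I" + target[j1:j2])       # literal insert
    return zlib.compress(";".join(ops).encode())

random.seed(0)
reference = "".join(random.choice("ACGT") for _ in range(20000))
target = reference[:12000] + "TTAGGC" + reference[12000:]  # one insertion
comp = ref_compress(target, reference)
plain = len(zlib.compress(target.encode()))
print(f"referential {len(comp)} B vs direct {plain} B")
```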

  12. Nonpainful wide-area compression inhibits experimental pain.

    Science.gov (United States)

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  13. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ is not dependent on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by radiation power of 10^16 W from 6 for a square pulse power with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with absorbed energy of 0.4 MJ, assuming perfect coupling efficiency. (author)
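    As a consistency check added here (not from the paper): for a uniformly compressed spherical pellet, mass conservation ties the density compression ratio Γ to the radius compression ratio K, so the square-pulse value Γ = 27 corresponds to a radius compression ratio K = 3.

```latex
% Mass conservation for a uniformly compressed spherical pellet:
\Gamma = \frac{\rho}{\rho_0}
       = \left(\frac{r_0}{r}\right)^{3}
       = K^{3},
\qquad \Gamma = 27 \;\Rightarrow\; K = 3 .
```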

  14. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, newer formats such as high efficiency video coding (HEVC) can provide better compression efficiency than JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery; using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with a negligible increase in file size.

  15. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  16. The Deputy Principal Instructional Leadership Role and Professional Learning: Perceptions of Secondary Principals, Deputies and Teachers

    Science.gov (United States)

    Leaf, Ann; Odhiambo, George

    2017-01-01

    Purpose: The purpose of this paper is to report on a study examining the perceptions of secondary principals, deputies and teachers, of deputy principal (DP) instructional leadership (IL), as well as deputies' professional learning (PL) needs. Framed within an interpretivist approach, the specific objectives of this study were: to explore the…

  17. Measuring Principals' Effectiveness: Results from New Jersey's First Year of Statewide Principal Evaluation. REL 2016-156

    Science.gov (United States)

    Herrmann, Mariesa; Ross, Christine

    2016-01-01

    States and districts across the country are implementing new principal evaluation systems that include measures of the quality of principals' school leadership practices and measures of student achievement growth. Because these evaluation systems will be used for high-stakes decisions, it is important that the component measures of the evaluation…

  18. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
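    As a rough illustration of idea (i) above (not the SNPack file format), the sketch below difference-encodes a linkage-disequilibrium block of genotypes against the block's first SNP. Within a block, neighbouring SNPs are highly correlated, so the difference lists stay short; the genotype layout and function names are assumptions.

```python
import numpy as np

def diff_encode_block(block: np.ndarray):
    """Difference-encode one LD block (rows = SNPs, columns = subjects,
    genotype values in {0, 1, 2}) against the block's first SNP.
    The sparse differences can then be fed to any entropy coder."""
    reference = block[0]
    encoded = []
    for row in block[1:]:
        idx = np.flatnonzero(row != reference)   # positions that differ
        encoded.append((idx, row[idx]))          # sparse differences
    return reference, encoded

def diff_decode_block(reference, encoded):
    rows = [reference]
    for idx, vals in encoded:
        row = reference.copy()
        row[idx] = vals
        rows.append(row)
    return np.vstack(rows)
```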

  19. Principals' Perceptions of School Public Relations

    Science.gov (United States)

    Morris, Robert C.; Chan, Tak Cheung; Patterson, Judith

    2009-01-01

    This study was designed to investigate school principals' perceptions on school public relations in five areas: community demographics, parental involvement, internal and external communications, school council issues, and community resources. Findings indicated that principals' concerns were as follows: rapid population growth, change of…

  20. Relationship between medical compression and intramuscular pressure as an explanation of a compression paradox.

    Science.gov (United States)

    Uhl, J-F; Benigni, J-P; Cornu-Thenard, A; Fournier, J; Blin, E

    2015-06-01

    Using standing magnetic resonance imaging (MRI), we recently showed that medical compression providing an interface pressure (IP) of 22 mmHg significantly compressed the deep veins of the leg but, paradoxically, not superficial varicose veins. To provide an explanation for this compression paradox, we studied the correlation between the IP exerted by medical compression and intramuscular pressure (IMP). In 10 legs of five healthy subjects, we studied the effects of different IPs on the IMP of the medial gastrocnemius muscle. The IP produced by a cuff manometer was verified by a Picopress® device. The IMP was measured with a 21G needle connected to a manometer. Pressure data were recorded in the prone and standing positions with cuff manometer pressures from 0 to 50 mmHg. In the prone position, an IP of less than 20 mmHg did not significantly change the IMP; on the contrary, a perfect linear correlation with the IMP (r = 0.99) was observed for IPs from 20 to 50 mmHg. We found the same correlation in the standing position, where an IP of 22 mmHg produced a significant IMP increase from 32 to 54 mmHg. At the same time, in healthy subjects the subcutaneous pressure is provided only by the compression device. In other words, the subcutaneous pressure plus the IP is only slightly higher than 22 mmHg, a pressure too low to reduce the caliber of the superficial veins. This is in accordance with our standing 3D anatomical MRI study, which showed that, paradoxically, when low pressures (IP) are applied, the deep veins are compressed while the superficial veins are not. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  1. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
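    A minimal stand-in for the predictive pipeline the abstract describes, with assumed names throughout: a fixed polynomial predictor (the order-th difference) whitens each channel and a general-purpose entropy stage compresses the residuals. The paper's methods replace both parts with adaptive models (MVAR, context-based error modeling) and exploit inter-channel correlation, which this per-channel sketch does not.

```python
import numpy as np
import zlib

def compress_channel(samples: np.ndarray, order: int = 2) -> bytes:
    """Losslessly pack one integer EEG channel: order-th differences as a
    fixed polynomial predictor, then a generic entropy stage. The stored
    header (first `order` samples) lets a decoder invert np.diff by
    repeated cumulative summation."""
    x = samples.astype(np.int32)
    residual = np.diff(x, n=order)        # small, near-white residuals
    packed = np.concatenate([x[:order], residual]).tobytes()
    return zlib.compress(packed, 9)
```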

  2. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing is then re-compressed for further transfer or storage. This change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied directly to JPEG-compressed images. The compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain, and is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a JPEG-compressed low-quality image is also described; it de-noises the image and enhances its contours.
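    For readers unfamiliar with screening, the sketch below shows the classical pixel-domain operation that the paper transplants into the DCT domain: binarize by comparing each pixel against a tiled threshold mask (here an assumed 4x4 Bayer mask). The compressed-domain version performs the equivalent comparison on DCT coefficients and is not reproduced here.

```python
import numpy as np

def screen_halftone(gray: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Classical screening: tile the threshold mask over the image and
    binarize by per-pixel comparison."""
    h, w = gray.shape
    reps = (h // mask.shape[0] + 1, w // mask.shape[1] + 1)
    tiled = np.tile(mask, reps)[:h, :w]
    return np.where(gray > tiled, 255, 0).astype(np.uint8)

# A 4x4 Bayer dither matrix scaled to the 0-255 range as threshold mask.
bayer4 = (np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) + 0.5) * 16
```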

  3. Principal-vector-directed fringe-tracking technique.

    Science.gov (United States)

    Zhang, Zhihui; Guo, Hongwei

    2014-11-01

    Fringe tracking is one of the most straightforward techniques for analyzing a single fringe pattern. This work presents a principal-vector-directed fringe-tracking technique. It uses Gaussian derivatives for estimating fringe gradients and uses hysteresis thresholding for segmenting singular points, thus improving the principal component analysis method. Using it allows us to estimate the principal vectors of fringes from a pattern with high noise. The fringe-tracking procedure is directed by these principal vectors, so that erroneous results induced by noise and other error-inducing factors are avoided. At the same time, the singular point regions of the fringe pattern are identified automatically. Using them allows us to determine paths through which the "seed" point for each fringe skeleton is easy to find, thus alleviating the computational burden in processing the fringe pattern. The results of a numerical simulation and experiment demonstrate this method to be valid.
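    A hedged sketch of one common way to estimate principal vectors of a fringe pattern, using Gaussian-derivative gradients and a smoothed structure tensor; the paper's exact estimator and its hysteresis-thresholded singular-point segmentation are not reproduced, and the function name and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fringe_principal_vectors(fringe: np.ndarray, sigma: float = 2.0):
    """Orientation field for fringe tracking: Gaussian-derivative
    gradients feed a smoothed structure tensor whose dominant
    eigenvector gives the local principal direction."""
    gx = gaussian_filter(fringe, sigma, order=(0, 1))   # d/dx
    gy = gaussian_filter(fringe, sigma, order=(1, 0))   # d/dy
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)      # principal angle
    return np.cos(theta), np.sin(theta)
```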

  4. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
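    One classical way to exploit the unordered-set structure mentioned above (a simple variant of known techniques, not necessarily the paper's algorithm): sort the fixed-length bitstrings and encode gaps between consecutive values, reclaiming roughly log2(n!) bits of ordering redundancy before a generic entropy stage.

```python
import zlib

def compress_bitstring_set(strings) -> bytes:
    """Sort the members (the set carries no ordering information), encode
    gaps between consecutive values, then apply a generic entropy stage."""
    values = sorted(int(s, 2) for s in strings)
    if not values:
        return zlib.compress(b"")
    gaps = [values[0]] + [b - a for a, b in zip(values, values[1:])]
    return zlib.compress(",".join(map(str, gaps)).encode())

blob = compress_bitstring_set({"0011", "1100", "0101", "1111"})
```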

  5. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    Science.gov (United States)

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

    Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  6. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  7. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the-art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  8. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or a set of patterns) inside the body of compressed text, where the pattern itself may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications, e.g. in computational biology, where it is useful for finding similar trends in DNA sequences, as well as in network intrusion detection and big data analytics. Many existing solutions match the pattern directly over the uncompressed text; such solutions require a great deal of space and time when handling big data. Various researchers have proposed efficient solutions for compression, but very few exist for pattern matching over the compressed text. Considering that data sizes are increasing exponentially day by day, CPM has become a desirable capability. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
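    To make the idea concrete, here is a toy byte-aligned word code in the spirit of the word-based codes surveyed above: the text and the pattern are compressed with the same codebook, and the search runs directly over compressed bytes. Single-byte codewords sidestep false matches at codeword boundaries; real schemes use multi-byte codewords with explicit tag bits and much larger vocabularies. Everything here (names, the 255-word limit) is illustrative.

```python
def build_code(words):
    """Toy byte-aligned word code; works for vocabularies of <= 255 words."""
    return {w: bytes([i + 1]) for i, w in enumerate(dict.fromkeys(words))}

def compressed_match(text: str, pattern: str) -> bool:
    """Search the compressed text directly, never decompressing it."""
    words = text.split()
    code = build_code(words)
    ctext = b"".join(code[w] for w in words)
    try:
        cpattern = b"".join(code[w] for w in pattern.split())
    except KeyError:
        return False                     # a pattern word never occurs
    return ctext.find(cpattern) != -1

assert compressed_match("the gene codes for a protein", "codes for")
```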

  9. Whose Perception of Principal Instructional Leadership? Principal-Teacher Perceptual (Dis)agreement and Its Influence on Teacher Collaboration

    Science.gov (United States)

    Park, Joo-Ho; Ham, Seung-Hwan

    2016-01-01

    This study examines teacher collaboration across three Asia-Pacific countries (Australia, Malaysia, and South Korea), focusing on the possibility that principal-teacher perceptual disagreement regarding principal instructional leadership performance may impede progress toward a school organizational condition conducive to collaborative teacher…

  10. 30 CFR 57.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  11. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  12. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    … background and reconstruct the image portions losslessly. The compressed image can … If the image is compressed by 8:1 compression without any perceptual distortion, the … [Figure 2: cross-sectional view of medical image (statistical representation)] … The Integer Wavelet Transform (IWT) is used for lossless processing.

  13. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    Science.gov (United States)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

    A major experimental research area in material equation-of-state today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10^11 Pa without any complex target designs.

  14. Phase Transitions in Aluminum Under Shockless Compression at the Z Machine

    Science.gov (United States)

    Davis, Jean-Paul; Brown, Justin; Shulenburger, Luke; Knudson, Marcus

    2017-06-01

    Aluminum 6061 alloy has been used extensively as an electrode material in shockless ramp-wave experiments at the Z Machine. Previous theoretical work suggests that the principal quasi-isentrope in aluminum should pass through two phase transitions at multi-megabar pressures, first from the ambient fcc phase to hcp at around 200 GPa, then to bcc at around 320 GPa. Previous static measurements in a diamond-anvil cell have detected the hcp phase above 200 GPa along the room-temperature isotherm. Recent laser-based dynamic compression experiments have observed both the hcp and bcc phases using X-ray diffraction. Here we present high-accuracy velocity waveform data taken on pure and alloy aluminum materials at the Z Machine under shockless compression with 200-ns rise-time to 400 GPa using copper electrodes and lithium-fluoride windows. These are compared to recent EOS tables developed at Los Alamos National Laboratory, to our own results from diffusion quantum Monte-Carlo calculations, and to multi-phase EOS models with phase-transition kinetics. We find clear evidence of a fast transition around 200 GPa as expected, and a possible suggestion of a slower transition at higher pressure. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE AC04-94AL85000.

  15. Maximally-localized position, Euclidean path-integral, and thermodynamics in GUP quantum mechanics

    Science.gov (United States)

    Bernardo, Reginald Christian S.; Esguerra, Jose Perico H.

    2018-04-01

    In dealing with quantum mechanics at very high energies, it is essential to adapt to a quasiposition representation using the maximally-localized states because of the generalized uncertainty principle. In this paper, we look at maximally-localized states as eigenstates of the operator ξ = X + iβP that we refer to as the maximally-localized position. We calculate the overlap between maximally-localized states and show that the identity operator can be expressed in terms of the maximally-localized states. Furthermore, we show that the maximally-localized position is diagonal in momentum space and that the maximally-localized position and its adjoint satisfy commutation and anti-commutation relations reminiscent of the harmonic oscillator commutation and anti-commutation relations. As an application, we use the maximally-localized position in developing the Euclidean path-integral and introduce the compact form of the propagator for maximal localization. The free-particle momentum-space propagator and the propagator for maximal localization are analytically evaluated up to quadratic order in β. Finally, we obtain a path-integral expression for the partition function of a thermodynamic system using the maximally-localized states. The partition function of a gas of noninteracting particles is evaluated. At temperatures exceeding the Planck energy, we obtain the gas's maximum internal energy N/(2β) and recover the zero heat capacity of an ideal gas.
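    For orientation, an addition not taken from the record: work in this line usually assumes a one-dimensional GUP algebra of the Kempf-Mangano-Mann type, in which a minimal length makes exact position eigenstates unavailable and maximally-localized states take their place. The conventions below are an assumption; papers differ in the details of how β enters.

```latex
% Assumed GUP conventions (Kempf-Mangano-Mann type); details vary by paper.
[X, P] = i\hbar\left(1 + \beta P^{2}\right),
\qquad
\Delta X\,\Delta P \;\ge\; \frac{\hbar}{2}\left(1 + \beta(\Delta P)^{2}\right),
\qquad
\xi = X + i\beta P .
```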

  16. Why firms should not always maximize profits

    OpenAIRE

    Kolstad, Ivar

    2006-01-01

    Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...

  17. Eccentric crank variable compression ratio mechanism

    Science.gov (United States)

    Lawrence, Keith Edward [Kobe, JP; Moser, William Elliott [Peoria, IL; Roozenboom, Stephan Donald [Washington, IL; Knox, Kevin Jay [Peoria, IL

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  18. How Wage Compression Affects Job Turnover

    OpenAIRE

    Heyman, Fredrik

    2008-01-01

    I use Swedish establishment-level panel data to test Bertola and Rogerson’s (1997) hypothesis of a positive relation between the degree of wage compression and job reallocation. Results indicate that the effect of wage compression on job turnover is positive and significant in the manufacturing sector. The wage compression effect is stronger on job destruction than on job creation, consistent with downward wage rigidity. Further results include a strong positive relationship between the fract...

  19. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
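    The first CoGI step (sequence to bitmap) is easy to make concrete; a minimal sketch follows, assuming a 2-bits-per-base mapping. The rectangular partition coding stage and the entropy-based reference selection are not reproduced, and the particular bit assignment is an assumption.

```python
import numpy as np

BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}  # assumed map

def genome_to_bitmap(seq: str, width: int = 1024) -> np.ndarray:
    """Map each base to two bits and fold the bit stream into a binary
    image; CoGI then compresses this bitmap with rectangular partition
    coding (not reproduced here)."""
    bits = np.fromiter((b for base in seq for b in BASE_BITS[base]),
                       dtype=np.uint8)
    rows = -(-bits.size // width)                # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:bits.size] = bits
    return padded.reshape(rows, width)
```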

  20. 30 CFR 56.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  1. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  2. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding algorithm. By combining an adaptive probability model with predictive coding, the algorithm increases the coding compression rate while ensuring the quality of the decoded image. An adaptive model for each encoded image block dynamically estimates that block's symbol probabilities, and the decoder accurately recovers the encoded image from the code-book information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate, and the results show that it is an effective compression technology.
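    The adaptive-model half of such a coder is what distinguishes it from static arithmetic coding, so the sketch below shows only that part: an order-0 frequency model whose cumulative counts define each symbol's current probability interval, updated after every coded symbol. The interval-narrowing bit machinery of the arithmetic coder itself is omitted; names are illustrative.

```python
class AdaptiveModel:
    """Order-0 adaptive frequency model of the kind an adaptive arithmetic
    coder consumes: cumulative counts give each symbol's current probability
    interval, and counts are updated after every coded symbol."""

    def __init__(self, nsymbols: int = 256):
        self.freq = [1] * nsymbols       # Laplace-smoothed counts

    def interval(self, symbol: int):
        total = sum(self.freq)
        low = sum(self.freq[:symbol])
        return low / total, (low + self.freq[symbol]) / total

    def update(self, symbol: int):
        self.freq[symbol] += 1           # adapt as the block is coded
```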

  3. On the electrical contact and long-term behavior of compression-type connections with conventional and high-temperature conductor ropes with low sag

    International Nuclear Information System (INIS)

    Hildmann, Christian

    2016-01-01

    In Germany and in Europe, the 'Energiewende' makes it necessary to transmit more electrical energy over existing overhead transmission lines. One possible technical solution is the use of high-temperature low-sag conductors (HTLS conductors). Compared to the common Aluminium Conductor Steel Reinforced (ACSR), HTLS conductors have higher rated currents and rated temperatures, so the electrical connections for HTLS conductors are stressed to higher temperatures as well. These components are critical for the safe and reliable operation of an overhead transmission line. Among other connection technologies, hexagonal compression connections with ordinary transmission line conductors have proven themselves for decades. The literature mostly reports empirical studies with electrical tests of compression connections; the electrical contact behaviour of these connections, i.e. the quality of the electrical contact after assembly, has been investigated insufficiently. This work presents and enhances an electrical model of compression connections, so that the electrical contact behaviour can be determined more accurately. Based on this, principal considerations on the current distribution in the compression connection and its influence on the connection resistance are presented. As a result of the theoretical and experimental work, recommendations for the design of hexagonal compression connections for transmission line conductors were developed. Furthermore, it is known from the functional principle of compression-type connections that the electrical contact behaviour can be influenced by their form fit, force fit and cold welding. In particular, the forces in compression connections have up to now been calculated only approximately: the known analytical calculations simplify the geometry and material behaviour and do not consider the correct mechanical load during assembly. For these reasons the joining process

  4. Modeling the mechanical and compression properties of polyamide/elastane knitted fabrics used in compression sportswear

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2016-01-01

    A compression sportswear fabric should have excellent stretch and recovery properties in order to improve the performance of the sportsman. The objective of this study was to investigate the effect of elastane linear density and loop length on the stretch, recovery, and compression properties of the

  5. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression and ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation during compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio, in random order, on a child-sized manikin. The subjects had been previously taught paediatric CPR within the last 3 or 5 months. The efficacy of ventilation was assessed by measurement of the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measurement of the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the ratio of 5:1. The percentage of effective chest compressions was equal with all three methods but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  6. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

    Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.

  7. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z) where z is the number of reported pairs. If the upper bound is removed the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.

  8. A new integrated planning model for gas compression and transmission through a complex pipeline network; Um novo modelo de planejamento integrado de compressao e escoamento de gas para uma rede complexa

    Energy Technology Data Exchange (ETDEWEB)

    Iamashita, Edson K. [PETROBRAS, Rio de Janeiro, RJ (Brazil); Galaxe, Frederico; Arica, Jose [Universidade Estadual do Norte Fluminense (UENF), Campos dos Goytacases, RJ (Brazil)

    2005-07-01

    The aim of this paper is to present a new approach for solving integrated gas balance planning problems, defining the best compression and transmission strategy for a system with a large number of platforms or compression units interlinked with the delivery points through a complex gas pipeline network. A genetic meta-heuristic technique is used to solve the proposed optimization problem: the fitness function of the algorithm is the profit function of the gas balance, considering incomes and costs as well as the pipeline network constraints, so that the compression system and transmission network are represented close to real operational conditions. The Newton-Raphson method is used to solve the nonlinear system describing the pressure drop in the gas pipeline network, which can contain various cycles. This model could be used for the design and optimization of gas pipeline networks, as well as for the gas balance planning of an existing network aiming at profit maximization. (author)
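    The Newton-Raphson step mentioned above is generic enough to sketch; the network equations themselves (pressure-drop relations and flow balances) are problem-specific and left as user-supplied callables. A minimal sketch, with assumed names:

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-8, max_iter=50):
    """Solve f(x) = 0 for the vector of nodal pressures; f returns the
    flow-balance residuals and jac its Jacobian matrix."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(jac(x), -f(x))
        x = x + step
        if np.linalg.norm(step) < tol:
            return x
    raise RuntimeError("Newton-Raphson did not converge")

# Toy one-node balance: f(p) = p^2 - 4, so the solver should return p = 2.
root = newton_raphson(lambda p: np.array([p[0] ** 2 - 4.0]),
                      lambda p: np.array([[2.0 * p[0]]]),
                      x0=[3.0])
```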

  9. The key kinematic determinants of undulatory underwater swimming at maximal velocity.

    Science.gov (United States)

    Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross

    2016-01-01

    The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r² = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r² = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.

  10. Kinetic theory in maximal-acceleration invariant phase space

    International Nuclear Information System (INIS)

    Brandt, H.E.

    1989-01-01

    A vanishing directional derivative of a scalar field along particle trajectories in maximal acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)

  11. Half-maximal supersymmetry from exceptional field theory

    Energy Technology Data Exchange (ETDEWEB)

    Malek, Emanuel [Arnold Sommerfeld Center for Theoretical Physics, Department fuer Physik, Ludwig-Maximilians-Universitaet Muenchen (Germany)

    2017-10-15

    We study D ≥ 4-dimensional half-maximal flux backgrounds using exceptional field theory. We define the relevant generalised structures and also find the integrability conditions which give warped half-maximal Minkowski_D and AdS_D vacua. We then show how to obtain consistent truncations of type II / 11-dimensional SUGRA which break half the supersymmetry. Such truncations can be defined on backgrounds admitting exceptional generalised SO(d - 1 - N) structures, where d = 11 - D, and N is the number of vector multiplets obtained in the lower-dimensional theory. Our procedure yields the most general embedding tensors satisfying the linear constraint of half-maximal gauged SUGRA. We use this to prove that all D ≥ 4 half-maximal warped AdS_D and Minkowski_D vacua of type II / 11-dimensional SUGRA admit a consistent truncation keeping only the gravitational supermultiplet. We also show how to obtain heterotic double field theory from exceptional field theory and comment on the M-theory / heterotic duality. In five dimensions, we find a new SO(5, N) double field theory with a (6 + N)-dimensional extended space. Its section condition has one solution corresponding to 10-dimensional N = 1 supergravity and another yielding six-dimensional N = (2, 0) SUGRA. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  12. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  13. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archive. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm, LW-FQZip, to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state of art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
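    The stream-splitting idea is easy to illustrate. The sketch below parses FASTQ into metadata, read and quality streams and packs each with LZMA, as in the final stage described above; LW-FQZip's incremental metadata encoding, run-length-limited quality encoding and reference mapping are not reproduced. Names and the preset are assumptions.

```python
import lzma

def split_and_pack(fastq_path: str) -> list:
    """Parse FASTQ into metadata, read and quality streams and pack each
    stream separately with LZMA."""
    streams = {"meta": [], "read": [], "qual": []}
    with open(fastq_path) as fh:
        for i, line in enumerate(fh):
            slot = ("meta", "read", None, "qual")[i % 4]   # line 3 is '+'
            if slot is not None:
                streams[slot].append(line)
    return [lzma.compress("".join(streams[k]).encode(), preset=9)
            for k in ("meta", "read", "qual")]
```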

  14. Characterization of the Lateral Distribution of Fluorescent Lipid in Binary-Constituent Lipid Monolayers by Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    István P. Sugár

    2010-01-01

    Lipid lateral organization in binary-constituent monolayers consisting of fluorescent and nonfluorescent lipids has been investigated by acquiring multiple emission spectra during measurement of each force-area isotherm. The emission spectra reflect BODIPY-labeled lipid surface concentration and lateral mixing with different nonfluorescent lipid species. Using principal component analysis (PCA), each spectrum could be approximated as the linear combination of only two principal vectors. One point on a plane could be associated with each spectrum, where the coordinates of the point are the coefficients of the linear combination. Points belonging to the same lipid constituents and experimental conditions form a curve on the plane, where each point belongs to a different mole fraction. The location and shape of the curve reflect the lateral organization of the fluorescent lipid mixed with a specific nonfluorescent lipid. The method provides massive data compression that preserves and emphasizes key information pertaining to lipid distribution in different lipid monolayer phases. Collectively, the capacity of PCA for handling large spectral data sets, the nanoscale resolution afforded by the fluorescence signal, and the inherent versatility of monolayers for characterization of lipid lateral interactions enable significantly enhanced resolution of lipid lateral organizational changes induced by different lipid compositions.
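    A minimal NumPy rendering of the PCA step described above, assuming the spectra arrive as rows of a matrix: project mean-centered spectra onto the two leading principal vectors and return the coefficient pairs, the per-spectrum points on a plane. Function name and return convention are assumptions.

```python
import numpy as np

def two_component_plane(spectra: np.ndarray):
    """Return the (PC1, PC2) coefficient pair for every spectrum (row),
    plus the two principal vectors and the mean spectrum, so that each
    spectrum is approximated as mean + c1*v1 + c2*v2."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:2].T          # points on the PCA plane
    return coords, vt[:2], mean
```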

  15. Stability analysis and numerical simulation of a hard-core diffuse z pinch during compression with Atlas facility liner parameters

    Science.gov (United States)

    Siemon, R. E.; Atchison, W. L.; Awe, T.; Bauer, B. S.; Buyko, A. M.; Chernyshev, V. K.; Cowan, T. E.; Degnan, J. H.; Faehl, R. J.; Fuelling, S.; Garanin, S. F.; Goodrich, T.; Ivanovsky, A. V.; Lindemuth, I. R.; Makhin, V.; Mokhov, V. N.; Reinovsky, R. E.; Ryutov, D. D.; Scudder, D. W.; Taylor, T.; Yakubov, V. B.

    2005-09-01

    In the 'metal liner' approach to magnetized target fusion (MTF), a preheated magnetized plasma target is compressed to thermonuclear temperature and high density by externally driving the implosion of a flux conserving metal enclosure, or liner, which contains the plasma target. As in inertial confinement fusion, the principal fusion fuel heating mechanism is pdV work by the imploding enclosure, called a pusher in ICF. One possible MTF target, the hard-core diffuse z pinch, has been studied in MAGO experiments at VNIIEF and is one possible target being considered for experiments on the Atlas pulsed power facility. Numerical MHD simulations show two intriguing and helpful features of the diffuse z pinch with respect to compressional heating. First, in two-dimensional simulations the m = 0 interchange modes, arising from an unstable pressure profile, result in turbulent motions and self-organization into a stable pressure profile. The turbulence also gives rise to convective thermal transport, but the level of turbulence saturates at a finite level, and simulations show substantial heating during liner compression despite the turbulence. The second helpful feature is that pressure profile evolution during compression tends towards improved stability rather than instability when analysed according to the Kadomtsev criteria. A liner experiment is planned for Atlas to study compression of magnetic flux without plasma, as a first step. The Atlas geometry is compatible with a diffuse z pinch, and simulations of possible future experiments show that kiloelectronvolt temperatures and useful neutron production for diagnostic purposes should be possible if a suitable plasma injector is added to the Atlas facility.

  16. Inductively Driven, 3D Liner Compression of a Magnetized Plasma to Megabar Energy Densities

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John [MSNW LLC, Redmond, WA (United States)

    2015-02-01

    To take advantage of the smaller scale, higher density regime of fusion an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. What is proposed is a more flexible metallic liner compression scheme that minimizes the kinetic energy required to reach fusion. It is believed that it is possible to accomplish this at sub-megajoule energies. This however will require operation at very small scale. To have a realistic hope of inexpensive, repetitive operation, it is essential to have the liner kinetic energy under a megajoule which allows for the survivability of the vacuum and power systems. At small scale the implosion speed must be reasonably fast to maintain the magnetized plasma (FRC) equilibrium during compression. For limited liner kinetic energy, it becomes clear that the thinnest liner imploded to the smallest radius consistent with the requirements for FRC equilibrium lifetime is desired. The proposed work is directed toward accomplishing this goal. Typically an axial (Z) current is employed for liner compression. There are however several advantages to using a θ-pinch coil. With the θ-pinch the liner currents are inductively driven which greatly simplifies the apparatus and vacuum system, and avoids difficulties with the post implosion vacuum integrity. With fractional flux leakage, the foil liner automatically provides for the seed axial compression field. To achieve it with optimal switching techniques, and at an accelerated pace however will require additional funding. This extra expense is well justified as the compression technique that will be enabled by this funding is unique in the ability to implode individual segments of the liner at different times. This is highly advantageous as the liner can be imploded in a manner that maximizes the energy transfer to the FRC. Production of shaped liner implosions for additional axial compression can thus be readily accomplished with the modified power

  17. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345. Physics-Based Modeling of Compressible Turbulence. Parviz Moin, Leland Stanford Junior Univ., CA. Final report (09/13/2016) on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011 ...

  18. Comparison of compression properties of stretchable knitted fabrics and bi-stretch woven fabrics for compression garments

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2017-01-01

    Stretchable fabrics have diverse applications ranging from casual apparel to performance sportswear and compression therapy. Compression therapy is the universally accepted treatment for the management of hypertrophic scarring after severe burns. Mostly stretchable knitted fabrics are used in

  19. Compressed Air/Vacuum Transportation Techniques

    Science.gov (United States)

    Guha, Shyamal

    2011-03-01

    A general theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a 'c-shaped' plastic or ceramic pipe lies buried a few inches under the ground surface and carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum); this mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe, and a set of vertical and horizontal steel bars transmits this force to the automobile on the road (or to a rail car on the rail track). The proposed transportation system has the following merits: it is virtually accident-free, highly energy efficient, and pollution-free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.

  20. Maximal slicing of D-dimensional spherically symmetric vacuum spacetime

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru

    2009-01-01

    We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D≥5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.

  1. New pulser for principal PO power

    International Nuclear Information System (INIS)

    Coudert, G.

    1984-01-01

    The pulser of the principal power supply of the PS is the unit that generates the reference function for the voltage of the principal magnet. This function depends on time and on the magnetic field of the magnet. The pulser also generates various synchronization and reference pulses

  2. An Examination of Principal Job Satisfaction

    Science.gov (United States)

    Pengilly, Michelle M.

    2010-01-01

    As education continues to succumb to budget deficits while facing increasingly high levels of student performance required to meet federal and state mandates, the quest to sustain and retain successful principals is imperative. The National Association of School Boards (1999) portrays effective principals as "linchpins" of school improvement and…

  3. The Succession of a School Principal.

    Science.gov (United States)

    Fauske, Janice R.; Ogawa, Rodney T.

    Applying theory from organizational and cultural perspectives to succession of principals, this study observes and records the language and culture of a small suburban elementary school. The study's procedures included analyses of shared organizational understandings as well as identification of the principal's influence on the school. Analyses of…

  4. Social Media Strategies for School Principals

    Science.gov (United States)

    Cox, Dan; McLeod, Scott

    2014-01-01

    The purpose of this qualitative study was to describe, analyze, and interpret the experiences of school principals who use multiple social media tools with stakeholders as part of their comprehensive communications practices. Additionally, it examined why school principals have chosen to communicate with their stakeholders through social media.…

  5. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed axisymmetric configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.
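
    The "theoretical maximum absorbed power" mentioned above is, for an axisymmetric body oscillating in heave, the classical capture-width limit of λ/2π. A sketch in standard wave-energy notation (deep water, wave amplitude A, angular frequency ω; this is textbook theory, not derived in the record itself):

```latex
% Wave power transported per unit crest width (deep water):
J = \frac{\rho g^2 A^2}{4\omega}
% Maximum power absorbable by an axisymmetric heaving body:
P_{\max} = \frac{\lambda}{2\pi}\, J = \frac{\rho g^3 A^2}{4\omega^3},
\qquad \lambda = \frac{2\pi g}{\omega^2}
```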

  6. Left ventricle expands maximally preceding end-diastole. Radionuclide ventriculography study

    International Nuclear Information System (INIS)

    Horinouchi, Osamu

    2002-01-01

    It has been considered that the left ventricle (LV) expands maximally at end-diastole. However, is this exactly the case? This study aimed to determine whether the maximal expansion of the LV coincides with the peak of the R wave on the electrocardiogram. Thirty-three angina pectoris patients with normal LV motion were examined using radionuclide ventriculography. Data were obtained from every 30 ms backward frame from the peak of the R wave. In all patients, the time of maximal expansion preceded the peak of the R wave. The intervals from the peak of the R wave and from the onset of the P wave to maximal expansion of the LV were 105±29 ms and 88±25 ms, respectively. This period corresponds to the timing of maximal excursion of the mitral valve due to atrial contraction, and the centripetal motion of the LV without loss of volume before end-diastole may be explained by the movement of the mitral valve toward closure. These findings suggest that the LV expands maximally between the P and R waves, after atrial contraction and preceding the peak of the R wave, which is conventionally regarded as end-diastole. (author)

  7. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using the isostatic compression method was studied at small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as stepwise compression of the blocks. The development of manufacturing techniques continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed at testing and examining the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), with the aim of continuing in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest cylindrical. It is currently not possible to manufacture full-scale blocks, because no sufficiently large isostatic press is available; however, such a press is expected to become available in the near future. The test results of bentonite blocks, produced with the isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are largely independent of the size of the block, and that the blocks have fairly homogeneous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks of a desired density. The compression pressure commonly used in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  8. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using the isostatic compression method was studied at small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as stepwise compression of the blocks. The development of manufacturing techniques continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed at testing and examining the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), with the aim of continuing in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest cylindrical. It is currently not possible to manufacture full-scale blocks, because no sufficiently large isostatic press is available; however, such a press is expected to become available in the near future. The test results of bentonite blocks, produced with the isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are largely independent of the size of the block, and that the blocks have fairly homogeneous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks of a desired density. The compression pressure commonly used in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  9. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
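
    To make the encode/decode idea concrete, here is a minimal sketch of the single-filter core: reads are hashed into a Bloom filter, and decoding queries the same filter with read-length windows of the reference. The real BARCODE tool adds a cascade of filters to resolve false positives; all names, sizes, and data below are illustrative, not taken from the tool:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over a bit array of size m."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

# Encode: insert every read into the filter (the stored filter is the archive).
reads = ["ACGTACGTAC", "GTACGTACGT"]
bf = BloomFilter(m=10_000, k=4)
for r in reads:
    bf.add(r)

# Decode: slide a read-length window along the reference and query the filter;
# hits are candidate reads (false positives would be caught by cascade levels).
reference = "ACGTACGTACGTACGT"
L = 10
recovered = {reference[i:i + L] for i in range(len(reference) - L + 1)
             if reference[i:i + L] in bf}
print(recovered)
```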

  10. An unusual case: right proximal ureteral compression by the ovarian vein and distal ureteral compression by the external iliac vein

    Directory of Open Access Journals (Sweden)

    Halil Ibrahim Serin

    2015-12-01

    A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Multidetector computed tomography (MDCT) showed compression of the proximal ureter by the right ovarian vein and compression of the right distal ureter by the right external iliac vein. To the best of our knowledge, right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein has not been reported in the literature. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system.

  11. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme: the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
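
    A minimal sketch of the generic block-transform model described above: blockwise 2-D DCT, quantization as the sole lossy step, then inverse transform to expose the quantization distortion. A uniform quantizer stands in for JPEG's quantization tables, and the function and parameter names are illustrative:

```python
import numpy as np
from scipy.fftpack import dct, idct

def blockproc_quantize(img, q=20, B=8):
    """Transform BxB blocks with a 2-D DCT, quantize coefficients with a
    uniform step q, and reconstruct the image."""
    h, w = (s - s % B for s in img.shape)        # crop to whole blocks
    out = np.empty((h, w))
    for i in range(0, h, B):
        for j in range(0, w, B):
            block = img[i:i+B, j:j+B].astype(float)
            coeffs = dct(dct(block, axis=0, norm='ortho'),
                         axis=1, norm='ortho')
            coeffs = q * np.round(coeffs / q)    # quantization: the lossy step
            out[i:i+B, j:j+B] = idct(idct(coeffs, axis=0, norm='ortho'),
                                     axis=1, norm='ortho')
    return out

img = np.random.rand(64, 64) * 255
rec = blockproc_quantize(img)
print("mean abs quantization distortion:", np.abs(img - rec).mean())
```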

  12. The Interdependence of Principal School Leadership and Student Achievement

    Science.gov (United States)

    Soehner, David; Ryan, Thomas

    2011-01-01

    This review illuminated principal school leadership as a variable that impacted achievement. The principal as school leader and manager was explored because these roles were thought to impact student achievement both directly and indirectly. Specific principal leadership behaviors and principal effectiveness were explored as variables potentially…

  13. District Leadership for Effective Principal Evaluation and Support

    Science.gov (United States)

    Kimball, Steven M.; Arrigoni, Jessica; Clifford, Matthew; Yoder, Maureen; Milanowski, Anthony

    2015-01-01

    Research demonstrating principals' impact on student learning outcomes has fueled the shift from principals as facilities managers to an emphasis on instructional leadership (Hallinger & Heck, 1996; Leithwood, Louis, Anderson, & Wahlstrom, 2004; Marzano, Waters, & McNulty, 2005). Principals are under increasing pressure to carry out…

  14. School Restructuring and the Dilemmas of Principals' Work.

    Science.gov (United States)

    Wildy, Helen; Louden, William

    2000-01-01

    The complexity of principals' work may be characterized according to three dilemmas: accountability, autonomy, and efficiency. Narrative vignettes of 74 Australian principals revealed that principals were fair and inclusive. When faced with restructuring dilemmas, however, they favored strong over shared leadership, efficiency over collaboration,…

  15. Do Principals Fire the Worst Teachers?

    Science.gov (United States)

    Jacob, Brian A.

    2011-01-01

    This article takes advantage of a unique policy change to examine how principals make decisions regarding teacher dismissal. In 2004, the Chicago Public Schools (CPS) and Chicago Teachers Union signed a new collective bargaining agreement that gave principals the flexibility to dismiss probationary teachers for any reason and without the…

  16. Revising the Role of Principal Supervisor

    Science.gov (United States)

    Saltzman, Amy

    2016-01-01

    In Washington, D.C., and Tulsa, Okla., districts whose efforts are supported by the Wallace Foundation, principal supervisors concentrate on bolstering their principals' work to improve instruction, as opposed to focusing on the managerial or operational aspects of running a school. Supervisors oversee fewer schools, which enables them to provide…

  17. The Principal's Guide to Grant Success.

    Science.gov (United States)

    Bauer, David G.

    This book provides principals of public and private elementary and middle schools with a step-by-step approach for developing a system that empowers faculty, staff, and the school community in attracting grant funds. Following the introduction, chapter 1 discusses the principal's role in supporting grantseeking. Chapter 2 describes how to…

  18. Principals, agents and research programmes

    OpenAIRE

    Elizabeth Shove

    2003-01-01

    Research programmes appear to represent one of the more powerful instruments through which research funders (principals) steer and shape what researchers (agents) do. The fact that agents navigate between different sources and styles of programme funding and that they use programmes to their own ends is readily accommodated within principal-agent theory with the help of concepts such as shirking and defection. Taking a different route, I use three examples of research programming (by the UK, ...

  19. Compressed Air Production Using Vehicle Suspension

    OpenAIRE

    Ninad Arun Malpure; Sanket Nandlal Bhansali

    2015-01-01

    Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of the suspension force into compressed air is carried out by a mechanism consisting of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. We are co...

  20. Maximization of regional probabilities using Optimal Surface Graphs

    DEFF Research Database (Denmark)

    Arias Lorza, Andres M.; Van Engelen, Arna; Petersen, Jens

    2018-01-01

    Purpose: We present a segmentation method that maximizes regional probabilities enclosed by coupled surfaces using an Optimal Surface Graph (OSG) cut approach. This OSG cut determines the globally optimal solution given a graph constructed around an initial surface. While most methods for vessel wall segmentation only use edge information, we show that maximizing regional probabilities using an OSG improves the segmentation results. We applied this to automatically segment the vessel wall of the carotid artery in magnetic resonance images. Methods: First, voxel-wise regional probability maps were obtained using a Support Vector Machine classifier trained on local image features. Then, the OSG segments the regions that maximize the regional probabilities, considering smoothness and topological constraints. Results: The method was evaluated on 49 carotid arteries from 30 subjects...
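
    A minimal sketch of the first stage of the pipeline above, producing voxel-wise probability maps from an SVM with probability outputs; the graph construction and optimal-surface cut are beyond a short example, and all data, features, and names below are synthetic stand-ins:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-ins for the paper's local image features: one feature
# vector per voxel, with binary labels (1 = vessel wall, 0 = background).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

# An SVM classifier with probability outputs yields the voxel-wise regional
# probability map that the OSG cut would then maximize over.
clf = SVC(kernel='rbf', probability=True).fit(X_train, y_train)

X_voxels = rng.normal(size=(1000, 6))          # features of voxels to label
prob_map = clf.predict_proba(X_voxels)[:, 1]   # P(vessel wall) per voxel
print(prob_map[:5])
```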