Compressive Principal Component Pursuit
Wright, John; Min, Kerui; Ma, Yi
2012-01-01
We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation invariant low-rank recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
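As a concrete sketch of the convex heuristic analyzed above (minimizing a nuclear norm plus a weighted l1 norm subject to random linear measurements), the following cvxpy program is one possible setup; the dimensions, the weight 1/sqrt(n), and the Gaussian measurement operator are illustrative assumptions rather than the paper's exact construction.

import numpy as np
import cvxpy as cp

np.random.seed(0)
n, r, m = 30, 2, 600                                         # matrix size, rank, number of measurements

L0 = np.random.randn(n, r) @ np.random.randn(r, n)           # low-rank component
S0 = np.random.randn(n, n) * (np.random.rand(n, n) < 0.05)   # sparse component
A = np.random.randn(m, n * n) / np.sqrt(m)                   # random measurement operator
y = A @ (L0 + S0).flatten(order="F")                         # column-major, matching cvxpy's vec convention

L, S = cp.Variable((n, n)), cp.Variable((n, n))
objective = cp.Minimize(cp.normNuc(L) + (1.0 / np.sqrt(n)) * cp.sum(cp.abs(S)))
problem = cp.Problem(objective, [A @ cp.vec(L + S) == y])
problem.solve()
print("relative error in L:", np.linalg.norm(L.value - L0) / np.linalg.norm(L0))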
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in patient monitoring. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method for single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely bit rate control (BRC) or error control (EC), was applied to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 records from the MIT-BIH Arrhythmia database (mitdb) and with 60 normal and 30 diagnostic ECG records from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), normalized percentage root mean squared difference (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
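A minimal numpy sketch of the error-control idea described above (PCA over extracted beats, keeping the fewest principal components that meet a PRDN target); the random beat matrix, the 5% target and the helper name are illustrative, and the quantization and entropy-coding stages are omitted.

import numpy as np

def compress_beats(beats, prdn_limit=5.0):
    # beats: (n_beats x n_samples) matrix of aligned ECG beats.
    # Keep the smallest number of principal components whose reconstruction
    # stays below the PRDN (normalized percent RMS difference) limit.
    mean = beats.mean(axis=0)
    U, s, Vt = np.linalg.svd(beats - mean, full_matrices=False)
    for k in range(1, len(s) + 1):
        recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]
        prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
        if prdn <= prdn_limit:
            break
    # what would be quantized and entropy-coded: the mean beat, k scores per beat, k eigenvectors
    return mean, U[:, :k] * s[:k], Vt[:k], prdn

beats = np.cumsum(np.random.randn(32, 400), axis=1)   # stand-in for extracted beats
mean_beat, scores, basis, prdn = compress_beats(beats)
print(scores.shape, basis.shape, prdn)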
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by the column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
High precision Hugoniot measurements of D2 near maximum compression
Benage, John; Knudson, Marcus; Desjarlais, Michael
2015-11-01
The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~ 30-40 GPa near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot and take advantage of advancements in the platform and standards, resulting in data with significantly higher precision than that obtained in previous studies. These new data may prove to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparison to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve the reconstruction precision and better reproduce the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with weighted principal component space is presented in this paper, and the principal component with weighted visual features is the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on weighted principal component space is superior in performance to that based on traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better consistency of the reconstructed color with human vision is achieved.
Kirkpatrick Mark
2005-01-01
Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.
Kernel principal component and maximum autocorrelation factor analyses for change detection
Nielsen, Allan Aasbjerg; Canty, Morton John
2009-01-01
Principal component analysis (PCA) has often been used to detect change over time in remotely sensed images. A commonly used technique consists of finding the projections along the eigenvectors for data consisting of pair-wise (perhaps generalized) differences between corresponding spectral bands covering the same geographical region acquired at two different time points. In this paper kernel versions of the principal component and maximum autocorrelation factor (MAF) transformations are used to carry out the analysis. An example is based on bi-temporal Landsat-5 TM imagery over irrigation fields in Nevada acquired on successive passes of the Landsat-5 satellite in August-September 1991. The six-band images (the thermal band is omitted) with 1,000 by 1,000 28.5 m pixels were first processed with the iteratively re-weighted MAD (IR-MAD) algorithm in order to discriminate change. Then the MAD image...
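A small synthetic sketch of the kernel-PCA half of such an analysis, applied to pixel-wise band differences between two co-registered acquisitions; scikit-learn's KernelPCA is used as a stand-in for the authors' implementation (the kernel MAF variant is not covered), and the images are random.

import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
t1 = rng.normal(size=(60 * 60, 6))                    # 6-band image at time 1, flattened to pixels x bands
t2 = t1 + rng.normal(scale=0.1, size=t1.shape)        # time 2: mostly no change
t2[:300] += 2.0                                       # a patch of genuine change

diff = t2 - t1                                        # pair-wise band differences
train = diff[rng.choice(len(diff), 800, replace=False)]   # subsample to keep the kernel matrix small

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.5).fit(train)
change_components = kpca.transform(diff)              # project every pixel
print(change_components.shape)                        # (3600, 3); extreme scores flag change pixels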
Design of reinforced concrete walls casted in place for the maximum normal stress of compression
T. C. Braguim
It is important to evaluate which design models are safe and appropriate for the structural analysis of buildings constructed with the concrete wall system. In this work, a simple numerical model that represents the walls with frame elements is compared, in terms of the maximum normal compressive stress, with a much more robust and refined model that represents the walls with shell elements. The design check for the normal compressive stress is carried out for both cases, based on NBR 16055, in order to conclude whether the wall thickness initially adopted is sufficient.
Tanabe, Yuki; Kido, Teruhito; Kurata, Akira; Sawada, Shun; Suekuni, Hiroshi; Kido, Tomoyuki; Yokoi, Takahiro; Miyagawa, Masao; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon City, Ehime (Japan); Uetani, Teruyoshi; Inoue, Katsuji [Ehime University Graduate School of Medicine, Department of Cardiology, Pulmonology, Hypertension and Nephrology, Toon City, Ehime (Japan)
2017-04-15
To evaluate the feasibility of three-dimensional (3D) maximum principal strain (MP-strain) derived from cardiac computed tomography (CT) for detecting myocardial infarction (MI). Forty-three patients who underwent cardiac CT and magnetic resonance imaging (MRI) were retrospectively selected. Using the voxel tracking of motion coherence algorithm, the peak CT MP-strain was measured using the 16-segment model. Based on the trans-mural extent of late gadolinium enhancement (LGE) and the distance from the MI, all segments were classified into four groups (infarcted, border, adjacent, and remote segments); infarcted and border segments were defined as MI with LGE positive. Diagnostic performance of MP-strain for detecting MI was compared with per cent systolic wall thickening (%SWT) assessed by MRI using receiver-operating characteristic curve analysis at a segment level. Of the 672 segments remaining after excluding 16 segments influenced by artefacts, 193 were diagnosed as MI. Sensitivity and specificity of peak MP-strain to identify MI were 81% [95% confidence interval (95% CI): 74-88%] and 86% (81-92%), compared with 76% (60-95%) and 68% (48-84%) for %SWT, respectively. The area under the curve of peak MP-strain was superior to %SWT [0.90 (0.87-0.93) vs. 0.80 (0.76-0.83), p < 0.05]. CT MP-strain has the potential to provide incremental value to coronary CT angiography for detecting MI. (orig.)
Roopwani, Rahul; Buckner, Ira S
2011-10-14
Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SF(c/d)) and a mechanical work parameter (W(c/d)) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures.
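A small sketch of the multivariate step described above: each material is represented by its solid-fraction and work parameters at several pressures, and the first principal component score is read off as a relative plasticity index; the numbers below are placeholders, not the paper's measurements.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows = materials, columns = SF(c/d) and W(c/d) values at three applied loads (placeholder data)
X = np.array([
    [0.62, 0.71, 0.80, 11.2, 14.5, 17.1],
    [0.55, 0.63, 0.70,  8.4, 10.9, 13.0],
    [0.58, 0.67, 0.75,  9.8, 12.6, 15.2],
])
pc1_scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(X)).ravel()
print(pc1_scores)   # the abstract uses the rank order of such scores as a plasticity ranking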
Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity
Ortiz, A; Puso, M A; Sukumar, N
2009-09-04
Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
Kaganovich, Igor D., E-mail: ikaganov@pppl.gov [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Vay, Jean-Luc [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)
2012-06-21
Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
Gu, Fei; Wu, Hao
2016-09-01
The specifications of state space model for some principal component-related models are described, including the independent-group common principal component (CPC) model, the dependent-group CPC model, and principal component-based multivariate analysis of variance. Some derivations are provided to show the equivalence of the state space approach and the existing Wishart-likelihood approach. For each model, a numeric example is used to illustrate the state space approach. In addition, a simulation study is conducted to evaluate the standard error estimates under the normality and nonnormality conditions. In order to cope with the nonnormality conditions, the robust standard errors are also computed. Finally, other possible applications of the state space approach are discussed at the end.
Shen, Hua
2016-10-19
A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter that enforces this sufficient condition and is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
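The abstract does not reproduce the limiter itself; as a generic illustration of how a slope limiter can enforce a maximum principle (here in the style of the Zhang-Shu scaling limiter, which may differ from the authors' construction), the reconstruction u_j(x) in cell j is rescaled toward its cell average:

\[
\tilde{u}_j(x) = \bar{u}_j + \theta_j\bigl(u_j(x) - \bar{u}_j\bigr),
\qquad
\theta_j = \min\left\{1,\;
\frac{M - \bar{u}_j}{\max_x u_j(x) - \bar{u}_j},\;
\frac{m - \bar{u}_j}{\min_x u_j(x) - \bar{u}_j}\right\},
\]

so that m <= \tilde{u}_j(x) <= M holds whenever the cell averages \bar{u}_j lie within [m, M], the bounds of the initial data.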
Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex
2012-06-01
Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔΕb. In the presence of large voltage errors, δU≫ΔEb, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
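Written out, the scaling stated in this abstract reads as follows (symbols follow the abstract; the exact normalizations are an assumption):

\[
C_{\max} \;\propto\; \left(\frac{\delta U}{U}\cdot\frac{\Delta E_b}{E_b}\right)^{-1/2},
\]

i.e. the maximum compression ratio falls off as the inverse geometric mean of the relative velocity-modulation error and the relative intrinsic energy spread.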
Yihang Yin
2015-08-01
Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inter-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission and prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing, using a novel similarity measure metric. Next, sensor data in one cluster are aggregated at the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data while retaining the specified variance. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
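A compact sketch of the cluster-head aggregation step with an error-bound guarantee, along the lines described above; the readings, the 5% bound and the payload layout are illustrative, and the spatial clustering and cluster-head selection steps are omitted.

import numpy as np

def aggregate(window, err_bound=0.05):
    # window: (n_sensors x n_samples) readings from one spatial cluster.
    # Keep the fewest principal components whose relative reconstruction
    # error stays under err_bound, then transmit scores plus basis.
    mean = window.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(window - mean, full_matrices=False)
    for k in range(1, len(s) + 1):
        err = np.sqrt(np.sum(s[k:] ** 2)) / np.linalg.norm(window - mean)
        if err <= err_bound:
            break
    payload = {"mean": mean, "scores": U[:, :k] * s[:k], "basis": Vt[:k]}
    sent = mean.size + payload["scores"].size + payload["basis"].size
    return payload, sent / window.size              # fraction of the raw data actually transmitted

readings = np.sin(np.linspace(0, 6, 200)) + 0.01 * np.random.randn(8, 200)   # strongly correlated cluster
payload, ratio = aggregate(readings)
print(ratio)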
Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan
2016-08-01
This paper proposes two weight functions for use with principal component analysis (PCA) to preserve more colorimetric information in the spectral data compression process. One weight function consists of the CIE XYZ color-matching functions, representing the characteristics of the human visual system, while the other combines the CIE XYZ color-matching functions with the relative spectral power distribution of the CIE standard illuminant D65. The improvements obtained with the two proposed methods were tested by compressing and reconstructing the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips as well as six multispectral images. The performance was evaluated by the mean color difference under the CIE 1931 standard colorimetric observer and the CIE standard illuminants D65 and A. The mean root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the two proposed methods significantly outperform standard PCA and two other weighted PCA methods in colorimetric reconstruction accuracy, with only a very slight degradation in spectral reconstruction accuracy. In addition, weight functions that include the CIE standard illuminant D65 improve the colorimetric reconstruction accuracy compared to weight functions without it.
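A minimal sketch of the weighting mechanism the abstract describes (reflectance spectra are multiplied by a visual-response weight before PCA, and the weight is divided out after reconstruction); the Gaussian weight below is only a placeholder for the CIE colour-matching-function and illuminant weights, and the reflectance data are random.

import numpy as np

wavelengths = np.arange(400, 701, 10)                       # nm
weight = np.exp(-0.5 * ((wavelengths - 550) / 60) ** 2)     # placeholder weight, not CIE data

rng = np.random.default_rng(0)
reflectances = np.clip(rng.normal(0.5, 0.15, (500, wavelengths.size)), 0, 1)

Xw = reflectances * weight                                  # weighted spectra
mean = Xw.mean(axis=0)
U, s, Vt = np.linalg.svd(Xw - mean, full_matrices=False)
k = 3                                                       # number of retained basis vectors
recon = (mean + (U[:, :k] * s[:k]) @ Vt[:k]) / weight       # reconstruct, then undo the weighting
rmse = np.sqrt(np.mean((recon - reflectances) ** 2))
print(rmse)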
Wu, Dufan; Li, Liang; Zhang, Li
2013-06-21
In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both the phantom and real clinical patient images were taken to validate our algorithm. Promising results are shown for limited angle problems.
Lemofouet, Sylvain; Rufer, Alfred
This paper presents a hybrid energy storage system mainly based on Compressed Air, where the storage and withdrawal of energy are done within maximum efficiency conditions. As these maximum efficiency conditions impose the level of converted power, an intermittent time-modulated operation mode is applied to the thermodynamic converter to obtain a variable converted power. A smoothly variable output power is achieved with the help of a supercapacitive auxiliary storage device used as a filter. The paper describes the concept of the system, the power-electronic interfaces and especially the Maximum Efficiency Point Tracking (MEPT) algorithm and the strategy used to vary the output power. In addition, the paper introduces more efficient hybrid storage systems where the volumetric air machine is replaced by an oil-hydraulics and pneumatics converter, used under isothermal conditions. Practical results are also presented, recorded from a low-power air motor coupled to a small DC generator, as well as from a first prototype of the hydro-pneumatic system. Some economical considerations are also made, through a comparative cost evaluation of the presented hydro-pneumatic systems and a lead acid batteries system, in the context of a stand alone photovoltaic home application. This evaluation confirms the cost effectiveness of the presented hybrid storage systems.
Held, Louis F.; Pritchard, Ernest I.
1946-01-01
An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.
Lee, S. M. C.; Laurie, S. S.; Macias, B. R.; Willig, M.; Johnson, K.; Stenger, M. B.
2017-01-01
Astronauts and cosmonauts may experience symptoms of orthostatic intolerance during re-entry, landing, and for several days post-landing following short- and long-duration spaceflight. Presyncopal symptoms have been documented in approximately 20% of short-duration and greater than 60% of long-duration flyers on landing day specifically during 5-10 min of controlled (no countermeasures employed at the time of testing) stand tests or 80 deg head-up tilt tests. Current operational countermeasures to orthostatic intolerance include fluid loading prior to and whole body cooling during re-entry as well as compression garments that are worn during and for up to several days after landing. While both NASA and the Russian space program have utilized compression garments to protect astronauts and cosmonauts traveling on their respective vehicles, a "next-generation" gradient compression garment (GCG) has been developed and tested in collaboration with a commercial partner to support future space flight missions. Unlike previous compression garments used operationally by NASA that provide a single level of compression across only the calves, thighs, and lower abdomen, the GCG provides continuous coverage from the feet to below the pectoral muscles in a gradient fashion (from approximately 55 mm Hg at the feet to approximately 16 mmHg across the abdomen). The efficacy of the GCG has been demonstrated previously after a 14-d bed rest study without other countermeasures and after short-duration Space Shuttle missions. Currently the GCG is being tested during a stand test following long-duration missions (6 months) to the International Space Station. While results to date have been promising, interactions of the GCG with other space suit components have not been examined. Specifically, it is unknown whether wearing the GCG over NASA's Maximum Absorbency Garment (MAG; absorbent briefs worn for the collection of urine and feces while suited during re-entry and landing) will
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method. 2013 John Wiley & Sons, Ltd.
Davis, Jean-Paul; Martin, Matthew; Knudson, Marcus
2011-06-01
Quasi-isentropic ramp-wave experiments promise accurate equation-of-state (EOS) data in the solid phase at relatively low temperatures and multimegabar pressures. In this range of pressure, isothermal diamond-anvil techniques have limited pressure accuracy due to reliance on theoretical EOS of calibration standards, thus accurate quasi-isentropic compression data would help immensely in constraining EOS models. Multi-megabar ramp compression experiments using the Z Machine at Sandia as a magnetic drive with stripline targets have been performed on tantalum, copper, gold, beryllium, molybdenum, and aluminum metals as well as lithium fluoride crystal. Much of the data from these experiments are analyzed using a single-sample inverse Lagrangian approach. This technique, and the quantification of its uncertainties, will be described in detail. Results will be presented for selected materials, with comparisons to independently developed EOS. *Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Lee, H.; Haimson, B.
2007-12-01
drillhole wall conditions is drastically different from that conventionally expected, but is compatible with the breakout formation mechanism in granite (Haimson, Int. J. Rock Mech., 2007). All the 'unjacketed' true triaxial strength data can be fitted by a simple function in the octahedral shear stress versus octahedral normal stress domain, yielding a Nadai-type true triaxial strength criterion. The criterion can be used in conjunction with breakouts that have been located within the cored zone to yield the maximum horizontal in situ stress σH when the other two principal stresses are known. Assuming that the state of stress at breakout-drillhole intersections (located for example by BHTV logging) is sufficient to bring about brittle failure (Vernik and Zoback, 1992), one can substitute the known principal stresses there (obtained from the Kirsch solution) for the corresponding values in the criterion. The in situ σv is given by the overburden density, σh is typically obtained from hydrofrac shut-in pressures, breakout width is extracted from BHTV logs, borehole fluid pressure is a function of its density, and the Poisson's ratio is obtained from mechanical lab testing. The only unknown, σH, is thus readily computed. An actual computation was not carried out because data on hydrofrac pressures and breakout dimensions were not available at the time of this submission.
Jang, Hyun-jeong; Kim, Suhn-yeop; Oh, Duck-won
2015-04-01
The aim of the present study was to investigate the effects of augmented trunk stabilization with external compression support (ECS) on the electromyography (EMG) activity of shoulder and scapular muscles and on shoulder abductor strength during isometric shoulder abduction. Twenty-six women volunteered for the study. Surface EMG was used to monitor the activity of the upper trapezius (UT), lower trapezius (LT), serratus anterior (SA), and middle deltoid (MD), and shoulder abductor strength was measured using a dynamometer during three experimental conditions: (1) no external support (condition-1), (2) pelvic support (condition-2), and (3) pelvic and thoracic supports (condition-3) in an active therapeutic movement device. EMG activities were significantly lower for the UT and higher for the MD during condition 3 than during condition 1 (p < 0.05). Shoulder abductor strength was significantly higher during condition 3 than during condition 1 (p < 0.05), suggesting that augmented trunk stabilization with ECS may help reduce the muscle effort of the UT during isometric shoulder abduction while increasing shoulder abductor strength.
N. A. Zolotukhina; I.P. Kharchenko
2005-01-01
We investigate the properties of interplanetary inhomogeneities generating long-lasting mid-latitude Pc1, 2 geomagnetic pulsations. The data from the Wind and IMP 8 spacecraft, and from the Mondy and Borok mid-latitude magnetic observatories, are used in this study. The pulsations under investigation develop in the maximum and early recovery phase of magnetic storms. The pulsations have amplitudes from a few tens to several hundred pT and last more than seven hours. A close association of the increase (decrease) in solar wind dynamic pressure (Psw) with the onset or enhancement (attenuation or decay) of these pulsations has been established. Contrary to high-latitude phenomena, there is a distinctive feature of the interplanetary inhomogeneities that are responsible for the generation of long-lasting mid-latitude Pc1, 2: it is essential that the effect of the quasi-stationary negative Bz-component of the interplanetary magnetic field on the magnetosphere extends over 4 hours. Only then are the Psw pulses able to excite the above-mentioned type of mid-latitude geomagnetic pulsations. Model calculations show that in the cases under study the plasmapause can form in the vicinity of the magnetic observatory. This implies that the existence of an intense ring current resulting from the enhanced magnetospheric convection is necessary for the Pc1, 2 excitation. Further, the existence of the plasmapause above the observation point (as a waveguide) is necessary for long-lasting Pc1 waves to arrive at the ground.
Carlos Augusto de Miranda Gomide
2002-11-01
To evaluate morphophysiological aspects of Mombaça grass regrowth, four defoliation treatments were imposed on the main tiller, applied when the number of green, fully expanded leaves on the main tiller had stabilized at around three. Plant behaviour was studied in terms of the leaf area expansion rate, root system growth, the level of total non-structural carbohydrates (TNC) in the roots and stem base, the relative growth rate (RGR), net assimilation rate (NAR) and leaf area ratio (LAR) at 2, 5, 9 and 16 days after defoliation, as well as the maximum photosynthetic rate of the leaves remaining after defoliation at 2, 6 and 13 days. The defoliation treatments were: removal of all leaf blades (total defoliation), removal of the blade of the youngest adult leaf (upper defoliation), removal of the blades of the two oldest adult leaves (lower defoliation), and a control (no defoliation), all combined with cutting of the remaining tillers at 8 cm above the soil. Five replicates per treatment were observed in a completely randomized design. The adult leaves did not differ in their maximum photosynthetic rates, which increased during the first days after defoliation and declined by day 13. Defoliation reduced the TNC content at the base of the stem, especially in plants under total defoliation. Impaired root system growth and reduced root TNC content were observed in plants under total defoliation, which also had their RGR reduced during the first days of regrowth. However, the increase in LAR allowed these plants to recover their RGR and maintain a high leaf area expansion rate, matching the leaf area of the other plants at 16 days of regrowth.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Radim Uhlář
2009-09-01
BACKGROUND: There are several factors (the initial ski jumper's body position and its changes at the transition to the flight phase, the magnitude and direction of the velocity vector of the jumper's center of mass, the magnitude of the aerodynamic drag and lift forces, etc.) which determine the trajectory of the jumper-ski system along with the total distance of the jump. OBJECTIVE: The objective of this paper is to present a method based on Pontryagin's maximum principle, which allows us to obtain a solution of the optimization problem for flight style control with three constrained control variables: the angle of attack (a), body-ski angle (b), and ski opening angle (V). METHODS: The flight distance was used as the optimality criterion. A borrowed regression function was taken as the source of information about the dependence of the drag (D) and lift (L) areas on the control variables, with tabulated regression coefficients. The trajectories of the reference and optimized jumps were compared for the K = 125 m jumping hill profile in Frenštát pod Radhoštěm (Czech Republic), and the corresponding lengths of the jumps, aerodynamic drag and lift forces, and the magnitude of the ski jumper system's center-of-mass velocity vector and its vertical and horizontal components were evaluated. Admissible control variables were taken at each time from a bounded set so as to respect a realistic posture of the ski jumper system in flight. RESULTS: It was found that a ski jumper should, within the bounded set of admissible control variables, minimize the angles (a) and (b), whereas angle (V) should be maximized. The length increment due to optimization is 17%. CONCLUSIONS: For future work it is necessary to determine the dependence of the aerodynamic forces acting on the ski jumper system in flight via regression analysis of experimental data, as well as to account for the control variables' relation to the ski jumper's mental and motor abilities.
"Compressed" Compressed Sensing
Reeves, Galen
2010-01-01
The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g. discrete valued vectors or large distortions) the number of samples can be decreased. Interestingly though, it is also shown that in many cases no reduction is possible.
Ultrasound beamforming using compressed data.
Li, Yen-Feng; Li, Pai-Chi
2012-05-01
The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event are separated into 8 × 8 blocks and several tiles before JPEG and JPEG2000 data compression is applied, respectively. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression to produce an average error of lower than 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (compared with the original RF data size), which was limited by the data size of uncompressed phase data, was lower than 12, the average error in this case was lower than 1 dB when the compression ratio was lower than 8.
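A rough sketch of the kind of block compression described above, using Pillow's JPEG encoder on 8-bit quantized channel data; the synthetic RF data, the quality setting and the error metric are illustrative assumptions, not the study's processing chain.

import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
rf = rng.normal(size=(128, 2048))                      # channels x samples, stand-in for RF data

lo, hi = rf.min(), rf.max()
quantized = np.round(255 * (rf - lo) / (hi - lo)).astype(np.uint8)   # 8-bit quantization

buf = io.BytesIO()
Image.fromarray(quantized).save(buf, format="JPEG", quality=50)
buf.seek(0)
decoded = np.asarray(Image.open(buf), dtype=np.float64) / 255 * (hi - lo) + lo

ratio = rf.size * 2 / len(buf.getvalue())              # compression ratio versus 16-bit raw samples
err_db = 20 * np.log10(np.linalg.norm(rf - decoded) / np.linalg.norm(rf))
print(f"compression ratio ~{ratio:.1f}, error {err_db:.1f} dB")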
Bro, R.; Smilde, A.K.
2014-01-01
Principal component analysis is one of the most important and powerful methods in chemometrics as well as in a wealth of other areas. This paper provides a description of how to understand, use, and interpret principal component analysis. The paper focuses on the use of principal component analysis
Elementary School Principal Effectiveness.
Cross, Ray
A review of research linking elementary principal "antecedents" (defined as traits), behaviors, school conditions, and student outcomes furnishes few supportable generalizations. The studies relating principal antecedents to behavior and principal antecedents to organizational variables reveal that the trait theory of leadership has…
Principal Component Analysis in ECG Signal Processing
Andreas Bollmann
2007-01-01
This paper reviews the current status of principal component analysis in the area of ECG signal processing. The fundamentals of PCA are briefly described and the relationship between PCA and the Karhunen-Loève transform is explained. Aspects of PCA related to data with temporal and spatial correlations are considered, as is adaptive estimation of principal components. Several ECG applications are reviewed where PCA techniques have been successfully employed, including data compression, ST-T segment analysis for the detection of myocardial ischemia and abnormalities in ventricular repolarization, extraction of atrial fibrillatory waves for detailed characterization of atrial fibrillation, and analysis of body surface potential maps.
Principal Ports and Facilities
California Department of Resources — The Principal Port file contains USACE port codes, geographic locations (longitude, latitude), names, and commodity tonnage summaries (total tons, domestic, foreign,...
Murphy, Lee Ann
2006-01-01
Some principals have personalities that can drive teachers around the bend and back again. Sure, most are wonderful bosses who support teachers in any way, but woe betide teachers if they are unlucky enough to run across one of the six dreaded "problem principals" identified in this article. Teachers do not have to be held hostage by difficult…
Principal Preparation Programs
Butler, Kevin
2008-01-01
A school principal's job has never been tougher. The accountability movement--culminating with the federal No Child Left Behind law in 2001--has put pressure on principals to improve student performance, resulting in school leaders' transitioning from a more administrative role to becoming more heavily involved in assessment, instruction,…
Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)
2012-07-01
Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)
Xenaki, Angeliki; Mosegaard, Klaus
2014-01-01
Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...
Principal noncommutative torus bundles
Echterhoff, Siegfried; Nest, Ryszard; Oyono-Oyono, Herve
2008-01-01
In this paper we study continuous bundles of C*-algebras which are non-commutative analogues of principal torus bundles. We show that all such bundles, although in general being very far away from being locally trivial bundles, are at least locally trivial with respect to a suitable bundle version of bivariant K-theory (denoted RKK-theory) due to Kasparov. Using earlier results of Echterhoff and Williams, we shall give a complete classification of principal non-commutative torus bundles up to equivariant Morita equivalence. We then study these bundles as topological fibrations (forgetting the group action) and give necessary and sufficient conditions for any non-commutative principal torus bundle being RKK-equivalent to a commutative one. As an application of our methods we shall also give a K-theoretic characterization of those principal torus-bundles with H-flux, as studied by Mathai...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Establishment of Maximum Voluntary Compressive Neck Tolerance Levels
Cote, Michael; Buhrman, John; Bridges, Nathaniel; Pirnstill, Casey; Burneka, Chris; Plaga, John; Roush, Grant
2011-07-01
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10³⁰ kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10³⁰ kg.
Principal component analysis for authorship attribution
Amir Jamak
2012-01-01
Background: To recognize the authors of texts by the use of statistical tools, one first needs to decide which features to use as author characteristics, and then extract these features from the texts. The features extracted from texts are mostly the counts of so-called function words. Objectives: The extracted data are processed further to compress them into a representation with fewer features, in such a way that the compressed data still retain their discriminative power. In this case the feature space has lower dimensionality than the text itself. Methods/Approach: In this paper, the data collected by counting words and characters in around a thousand paragraphs of each sample book underwent a principal component analysis performed using neural networks. Once the analysis was complete, the first principal component was used to distinguish the books authored by a certain author. Results: The achieved results show that every author leaves a unique signature in written text that can be discovered by analyzing counts of short words per paragraph. Conclusions: This article demonstrates that authorship can be traced using principal component analysis of the counts of short words per paragraph. The methodology could be used for other purposes, such as fraud detection in auditing.
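A toy sketch of the pipeline described above (function-word frequencies per paragraph, followed by PCA and inspection of the first principal component); the word list, the texts and the use of scikit-learn are illustrative, not the article's neural-network implementation.

import numpy as np
from sklearn.decomposition import PCA

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "is", "it", "as"]

def features(paragraphs):
    # one row per paragraph: relative frequency of each function word
    rows = []
    for p in paragraphs:
        words = p.lower().split()
        rows.append([words.count(w) / max(len(words), 1) for w in FUNCTION_WORDS])
    return np.array(rows)

book_a = ["the cat sat on the mat and it was the best of times"] * 50
book_b = ["to be or not to be that is a question of importance to all"] * 50
X = features(book_a + book_b)

pc1 = PCA(n_components=1).fit_transform(X).ravel()
print(pc1[:50].mean(), pc1[50:].mean())   # paragraphs from the two books separate along PC1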
Dual compression is not an uncommon type of iliac vein compression syndrome.
Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu
2017-03-13
Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, and the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented in one of two forms: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.
Hollar, Charlie
2004-01-01
They may never grace the pages of The Wall Street Journal or Fortune magazine, but they might possibly be the most important CEOs in our country. They are elementary school principals. Each of them typically serves the learning needs of 350-400 clients (students) while overseeing a multimillion-dollar facility staffed by 20-25 teachers and 10-15…
Strategic Principal Communication
Henry, Jake; Woody, Aaron
2013-01-01
As communities become increasingly diverse and criticism of traditional public schools intensifies, some states, such as North Carolina, have enacted legislation that encourages alternative forms of schooling. This condition has resulted in new challenges for principals to communicate broadly and often with stakeholders in an effort to build…
Optimized Large-Scale CMB Likelihood And Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E; Eriksen, H K; Górski, K M; Gruppuso, A; Jewell, J B; Plaszczynski, S; Wehus, I K
2015-01-01
We revisit the problem of exact CMB likelihood and power spectrum estimation with the goal of minimizing computational cost through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al. (1997), and here we develop it into a fully working computational framework for large-scale polarization analysis, adopting WMAP as a worked example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at $\ell \le 32$, and a...
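As a toy sketch of compression in a signal-to-noise eigenvector basis (with random stand-in covariances rather than actual CMB signal and noise matrices), one can solve the generalized eigenproblem S v = λ N v and retain only the high signal-to-noise modes:

```python
import numpy as np
from scipy.linalg import eigh

# Toy illustration of compression in a signal-to-noise eigenvector basis:
# solve the generalized eigenproblem S v = lambda N v and keep only the
# high signal-to-noise modes. S and N here are random positive-definite
# stand-ins, not actual CMB signal/noise covariances.
rng = np.random.default_rng(1)
npix = 200
A = rng.standard_normal((npix, npix))
S = A @ A.T / npix                         # "signal" covariance (toy)
N = np.diag(rng.uniform(0.5, 2.0, npix))   # "noise" covariance (toy)

lam, V = eigh(S, N)                        # generalized S/N eigenmodes
keep = lam > 0.1 * lam.max()               # retain modes above an S/N cut
print(f"kept {keep.sum()} of {npix} modes")

d = rng.multivariate_normal(np.zeros(npix), S + N)  # simulated data vector
d_compressed = V[:, keep].T @ d            # compressed data in the retained basis
```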
Rodríguez-Ruiz, Alejandro; Agasthya, Greeshma A.; Sechopoulos, Ioannis
2017-09-01
To characterize and develop a patient-based 3D model of the compressed breast undergoing mammography and breast tomosynthesis. During this IRB-approved, HIPAA-compliant study, 50 women were recruited to undergo 3D breast surface imaging with structured light (SL) during breast compression, along with simultaneous acquisition of a tomosynthesis image. A pair of SL systems were used to acquire 3D surface images by projecting 24 different patterns onto the compressed breast and capturing their reflection off the breast surface in approximately 12-16 s. The 3D surface was characterized and modeled via principal component analysis. The resulting surface model was combined with a previously developed 2D model of projected compressed breast shapes to generate a full 3D model. Data from ten patients were discarded due to technical problems during image acquisition. The maximum breast thickness (found at the chest-wall) had an average value of 56 mm, and decreased 13% towards the nipple (breast tilt angle of 5.2°). The portion of the breast not in contact with the compression paddle or the support table extended on average 17 mm, 18% of the chest-wall to nipple distance. The outermost point along the breast surface lies below the midline of the total thickness. A complete 3D model of compressed breast shapes was created and implemented as a software application available for download, capable of generating new random realistic 3D shapes of breasts undergoing compression. Accurate characterization and modeling of the breast curvature and shape was achieved and will be used for various image processing and clinical tasks.
Improved forecasting with leading indicators: the principal covariate index
C. Heij (Christiaan)
2007-01-01
We propose a new method of leading index construction that combines the need for data compression with the objective of forecasting. This so-called principal covariate index is constructed to forecast growth rates of the Composite Coincident Index. The forecast performance is compared
Stable Principal Component Pursuit
Zhou, Zihan; Wright, John; Candes, Emmanuel; Ma, Yi
2010-01-01
In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entrywise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is...
Robust Principal Component Analysis?
Candes, Emmanuel J; Ma, Yi; Wright, John
2009-01-01
This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the L1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for th...
Real-Time Principal-Component Analysis
Duong, Vu; Duong, Tuan
2005-01-01
A recently written computer program implements dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN), which was described in Method of Real-Time Principal-Component Analysis (NPO-40034) NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 59. To recapitulate: DOGEDYN is a method of sequential principal-component analysis (PCA) suitable for such applications as data compression and extraction of features from sets of data. In DOGEDYN, input data are represented as a sequence of vectors acquired at sampling times. The learning algorithm in DOGEDYN involves sequential extraction of principal vectors by means of a gradient descent in which only the dominant element is used at each iteration. Each iteration includes updating of elements of a weight matrix by amounts proportional to a dynamic initial learning rate chosen to increase the rate of convergence by compensating for the energy lost through the previous extraction of principal components. In comparison with a prior method of gradient-descent-based sequential PCA, DOGEDYN involves less computation and offers a greater rate of learning convergence. The sequential DOGEDYN computations require less memory than would parallel computations for the same purpose. The DOGEDYN software can be executed on a personal computer.
Transforming Principal Preparation. ERIC Digest.
Lashway, Larry
In the current climate of accountability, the responsibilities of principals have increased. The new role of principals requires new forms of training, and standards-based reform is generating major changes in principal-preparation programs. This digest examines some of those changes. First, it looks at the effectiveness of principal-preparation…
Compressive Sensing Over Networks
Feizi, Soheil; Effros, Michelle
2010-01-01
In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information theoretic techniques in source coding and channel coding. Our results provide an explicit trade-off between the rate and the decoding complexity. The key difference between compressive sensing and traditional information theoretic approaches is at the decoding side: although optimal decoders for recovering the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Here, by using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multi...
Compression limits in cascaded quadratic soliton compression
Bache, Morten; Bang, Ole; Krolikowski, Wieslaw;
2008-01-01
Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.
Huang, Bormin
2011-01-01
Satellite Data Compression covers recent progress in compression techniques for multispectral, hyperspectral and ultra spectral data. A survey of recent advances in the fields of satellite communications, remote sensing and geographical information systems is included. Satellite Data Compression, contributed by leaders in this field, is the first book available on satellite data compression. It covers onboard compression methodology and hardware developments in several space agencies. Case studies are presented on recent advances in satellite data compression techniques via various prediction-
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Recursive principal components analysis.
Voegtlin, Thomas
2005-10-01
A recurrent linear network can be trained with Oja's constrained Hebbian learning rule. As a result, the network learns to represent the temporal context associated with its input sequence. The operation performed by the network is a generalization of Principal Components Analysis (PCA) to time-series, called Recursive PCA. The representations learned by the network are adapted to the temporal statistics of the input. Moreover, sequences stored in the network may be retrieved explicitly, in the reverse order of presentation, thus providing a straightforward neural implementation of a logical stack.
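A minimal sketch of Oja's rule itself, extracting the first principal component from a stream of samples (the recurrent context that makes the method "recursive" is omitted here, and the data are synthetic):

```python
import numpy as np

# Minimal sketch of Oja's constrained Hebbian rule extracting the first
# principal component from a stream of inputs. Recursive PCA additionally
# feeds a context (the network's own past output) back into the input;
# that recurrent part is omitted here for brevity.
rng = np.random.default_rng(2)
C = np.array([[3.0, 1.0], [1.0, 1.0]])           # true covariance of the stream
L = np.linalg.cholesky(C)

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
eta = 0.01                                        # learning rate

for _ in range(20000):
    x = L @ rng.standard_normal(2)                # one input sample
    y = w @ x                                     # network output
    w += eta * y * (x - y * w)                    # Oja's update keeps ||w|| ~ 1

# Compare with the leading eigenvector of the covariance.
eigvals, eigvecs = np.linalg.eigh(C)
print("learned w:", w)
print("leading eigenvector:", eigvecs[:, -1])
```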
Principal Contradictions and Changes
Wang Zaibang
2006-01-01
The September 11 terrorist attacks are the most notable events to have occurred since the end of the Cold War. They are not only a logical outcome of world development after the Cold War but also an important variable influencing world development. In order to evaluate the development of international relations over these five years, the international background after the Cold War must be taken into consideration, and the characteristics and changes of the three principal contradictions described below need to be understood.
Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben;
1999-01-01
An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square wave formed fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four...... frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested to be identified with damage, and a correlation...... is observed between stiffness reduction and accumulated creep. A failure model based on the total work during the fatigue life is rejected, and a modified work model based on elastic, viscous and non-recovered viscoelastic work is experimentally supported, and an explanation at a microstructural level...
Efficient compression of molecular dynamics trajectory files.
Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James
2012-10-15
We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps-typically used in fine grained water diffusion experiments-we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
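The following is a rough sketch (with synthetic trajectory data, not the authors' code) of a linear interframe predictor with uniform quantization of the residuals; the entropy-coding stage that produces the actual file-size savings is omitted:

```python
import numpy as np

# Rough sketch of a lossy interframe scheme: predict each frame from the two
# previous frames (linear extrapolation), uniformly quantize the residual,
# and reconstruct. The entropy-coding stage (which yields the actual file
# size reduction) is omitted; trajectory data here are synthetic.
rng = np.random.default_rng(3)
n_frames, n_atoms = 500, 100
traj = np.cumsum(0.01 * rng.standard_normal((n_frames, n_atoms, 3)), axis=0)

step = 1e-2                          # quantization step in coordinate units
recon = np.empty_like(traj)
recon[:2] = traj[:2]                 # first two frames stored as-is
residuals = []                       # these quantized residuals would be entropy-coded
for t in range(2, n_frames):
    pred = 2 * recon[t - 1] - recon[t - 2]        # linear extrapolation
    q = np.round((traj[t] - pred) / step)         # quantized residual
    residuals.append(q)
    recon[t] = pred + q * step                    # decoder-side reconstruction

max_err = np.abs(recon - traj).max()
print(f"max reconstruction error: {max_err:.4f} (bounded by step/2 = {step/2})")
```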
[The evolution of principal drugs in prescription compatibility].
Yuan, Bing; Shi, Dong-ping
2009-01-01
The principal drugs in the principal-adjuvant-auxiliary-conductant compatibility of prescriptions recorded in the ancient literature had different meanings and quantities. According to the current literature, Zhuangzi Xu Wugui took the drug that can cure disease as the principal drug; the principal, adjuvant, auxiliary and conductant drugs in Shennong Bencao Jing (Shennong's Classic of Materia Medica) can be used to differentiate the good and bad qualities of drugs; Yaoxing Lun (Treatise on Medicinal Properties) of Zheng Quan (Tang dynasty) stipulated certain drugs as principal drugs; Zazhu Bencao of Jiang Xiaowan (Tang dynasty) took the drug that can cure yin diseases as the principal drug; Yixue Qiyuan (The Origination of Medicine) of Zhang Yuansu (Jin dynasty) took the drug of maximum dosage as the principal drug; Piwei Lun (Treatise on Spleen and Stomach) of Li Gao (Jin dynasty) took the most powerful drug as the principal drug; the principal drugs in Yi Lun (Medicine Treatise) of Wang Kentang (Ming dynasty) changed according to different ages. The number of principal drugs could be two or three ingredients, and sometimes a whole prescription, rather than a single ingredient, was taken as the principal drug.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
The Skills of Exemplary Principals.
Walker, John E.
1990-01-01
NASSP's Assessment Center Project has identified 12 key skills for successful principals: problem analysis, judgment, organizational ability, decisiveness, leadership, sensitivity, stress tolerance, oral communication, written communication, wide-ranging interests, personal motivation, and educational values. Effective principals succeed by…
Compressive imaging system design using task-specific information.
Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A
2008-09-01
We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.
Compression of a bundle of light rays.
Marcuse, D
1971-03-01
The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses the input angle of the useful rays must not exceed the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution.
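In paraxial terms, the phase-space (Liouville) constraint invoked above can be restated as conservation of the ray bundle's area, which is what limits the useful input angle for a given compression ratio (a schematic restatement in my own notation, not a formula quoted from the paper):

\[
x_{\mathrm{in}}\,\theta_{\mathrm{in}} = x_{\mathrm{out}}\,\theta_{\mathrm{out}},
\qquad
C \equiv \frac{x_{\mathrm{in}}}{x_{\mathrm{out}}}
\;\Longrightarrow\;
\theta_{\mathrm{out}} = C\,\theta_{\mathrm{in}} \le \theta_{\max}
\;\Longrightarrow\;
\theta_{\mathrm{in}} \le \frac{\theta_{\max}}{C}.
\]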
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Deformation quantization of principal bundles
Aschieri, Paolo
2016-01-01
We outline how Drinfeld twist deformation techniques can be applied to the deformation quantization of principal bundles into noncommutative principal bundles, and more generally to the deformation of Hopf-Galois extensions. First we twist deform the structure group into a quantum group, and this leads to a deformation of the fibers of the principal bundle. Next we twist deform a subgroup of the group of automorphisms of the principal bundle, and this leads to a noncommutative base space. Considering both deformations we obtain noncommutative principal bundles with noncommutative fiber and base space as well.
Ferrari, Jérôme
2015-01-01
Fascinated by the figure of the German physicist Werner Heisenberg (1901-1976), founder of quantum mechanics, inventor of the famous "uncertainty principle" and winner of the Nobel Prize in Physics in 1932, a disenchanted young would-be philosopher strives, at the dawn of the twenty-first century, to weigh the incompleteness of his own existence against the work and destiny of this exceptional man of science, who embodies for him the meeting of scientific language and poetry; each, in its own way, by opening the door to the scandal of the unprecedented, opens our eyes to the world and reveals the mysterious beauty that the materialism at work in human history keeps confiscating.
Problems That Principals Face In School Administration
Engin Aslanargun
2012-12-01
Transformation is a key concept of this century that should be taken into consideration in any organization. Educational organizations are under the influence of such transformation as well, and the role of administrators has gained significance, leading them to apply emerging approaches in organizational life. The purpose of this study is to define the communicative problems that principals experience in school settings and their problem-solving behaviors towards teachers. Seven principals administering schools in the town of Akçakoca, in Düzce, were included in the study, which used a qualitative case design and a purposive maximum sampling procedure. Principals displayed in some cases similar and in other cases different administrative behaviors when dealing with communicative and problem-solving situations. The study revealed that principals tend to do their jobs within a limited scope and a structure-based administrative style rather than a consideration-based one. The school climate that is essential for an effective learning and teaching process appears to be ignored or given less significance than structural and material matters.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and....../or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior...... of the measurements is completely characterized by all moments up to second order....
Prediction of three dimensional maximum isometric neck strength.
Fice, Jason B; Siegmund, Gunter P; Blouin, Jean-Sébastien
2014-09-01
We measured maximum isometric neck strength under combinations of flexion/extension, lateral bending and axial rotation to determine whether neck strength in three dimensions (3D) can be predicted from principal axes strength. This would allow biomechanical modelers to validate their neck models across many directions using only principal axis strength data. Maximum isometric neck moments were measured in 9 male volunteers (29±9 years) for 17 directions. The 3D moments were normalized by the principal axis moments, and compared to unity for all directions tested. Finally, each subject's maximum principal axis moments were used to predict their resultant moment in the off-axis directions. Maximum moments were 30±6 N m in flexion, 32±9 N m in lateral bending, 51±11 N m in extension, and 13±5 N m in axial rotation. The normalized 3D moments were not significantly different from unity (95% confidence interval contained one), except for three directions that combined ipsilateral axial rotation and lateral bending; in these directions the normalized moments exceeded one. Predicted resultant moments compared well to the actual measured values (r^2 = 0.88). Despite exceeding unity, the normalized moments were consistent across subjects to allow prediction of maximum 3D neck strength using principal axes neck strength.
Focus on Compression Stockings
... the stocking every other day with a mild soap. Do not use Woolite™ detergent. Use warm water ... compression clothing will lose its elasticity and its effectiveness. Compression stockings last for about 4-6 months ...
A Compressive Superresolution Display
Heide, Felix
2014-06-22
In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.
Microbunching and RF Compression
Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.
2010-05-23
Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
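For the Mean Energy Model mentioned above, the standard maximum-entropy solution under a single moment ("energy") constraint is the Gibbs form (stated here for reference; the notation is mine, not the paper's):

\[
\max_{p}\; -\sum_i p_i \log p_i
\quad\text{s.t.}\quad \sum_i p_i E_i = \bar E,\;\; \sum_i p_i = 1
\;\;\Longrightarrow\;\;
p_i = \frac{e^{-\lambda E_i}}{\sum_j e^{-\lambda E_j}},
\]

with the Lagrange multiplier \(\lambda\) fixed by the mean-energy constraint.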
Hyperspectral data compression
Motta, Giovanni; Storer, James A
2006-01-01
Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.
Hildebrand, Richard J.; Wozniak, John J.
2001-01-01
A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.
Compressing Binary Decision Diagrams
Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter
2008-01-01
The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD and compression will in many cases reduce the size of the BDD to 1...
Compressing Binary Decision Diagrams
Rune Hansen, Esben; Srinivasa Rao, S.; Tiedemann, Peter
The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD and compression will in many cases reduce the size of the BDD to 1...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
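Schematically (this is a generic form with a Gaussian kernel, not necessarily the paper's exact objective), a regularized correntropy criterion for a linear predictor w can be written as

\[
\max_{\mathbf{w}}\;\; \sum_{i=1}^{n} \exp\!\left(-\frac{\left(y_i - \mathbf{w}^{\top}\mathbf{x}_i\right)^2}{2\sigma^2}\right) \;-\; \lambda\,\lVert\mathbf{w}\rVert^2 ,
\]

where the bounded kernel caps the influence of any single (possibly mislabeled) sample, unlike a squared loss.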
Principally Left Hereditary and Principally Left Strong Radicals
S. Tumurbat; R. Wiegandt
2001-01-01
A radical γ is normal if and only if γ is principally left hereditary and principally left strong (i.e., γ(L) = L for a left ideal L of A, and Lz ∈ γ for all z ∈ L, imply L ⊆ γ(A)). Let a radical γ satisfy that A° ∈ γ and S° ⊆ A° imply S° ∈ γ. Then γ is a hereditary normal radical if and only if γ is principally left strong and γ ⊆ {A | (A, +, ◇_a) ∈ γ for all a ∈ A}, where the multiplication ◇_a is defined by x ◇_a y = xay. The Behrens radical class B is the largest principally left hereditary subclass of the Brown-McCoy radical class G. Neither B nor G is principally left strong.
Issues in multiview autostereoscopic image compression
Shah, Druti; Dodgson, Neil A.
2001-06-01
Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
We Need Principals Who Understand.
Fineman, Sharon
1981-01-01
Describes the need for principals to have a greater understanding both of the needs of special education students and of effective ways of handling their problems. Special education survey courses and practical experiences for administrators might help close the gap between principals and special education teachers. (WD)
Time Management for New Principals
Ruder, Robert
2008-01-01
Becoming a principal is a milestone in an educator's professional life. The principalship is an opportunity to provide leadership that will afford students opportunities to thrive in a nurturing and supportive environment. Despite the continuously expanding demands of being a new principal, effective time management will enable an individual to be…
The Effective and Reflective Principal
Ritchie, John M.
2013-01-01
For a "Kappan" issue focusing on the job of the principal, this is an essay from a retired, longtime principal and superintendent. "I have notebooks full of advice that I've collected over the years: tips, mantras, cautions, and quotations appropriate for any occasion," the author says. "I learned to focus less on…
Burnout among Elementary School Principals
Combs, Julie; Edmonson, Stacey L.; Jackson, Sherion H.
2009-01-01
As the understanding of burnout continues to be refined, studies that examine school principals and burnout will be helpful to those who provide support to school leaders and are concerned about principal attrition and pending shortages. The purpose of this study was to examine the relationship between burnout and gender, age, and years experience…
What Principals Think Motivates Teachers
Diamantes, Thomas
2004-01-01
How did a graduate class of teachers and principals come to explore what was really important to teachers? They had an idea that they all shared the same values (both teachers and principals) and would agree on what rewards teachers prize. Would administrators rate the motivation rewards the same way the teachers would? To find out, five schools…
The Principal as Formative Coach
Nidus, Gabrielle; Sadder, Maya
2011-01-01
Formative coaching, an approach that uses student work as the foundation for mentoring and professional development, can help principals become more effective instructional leaders. In formative coaching, teachers and coaches analyze student work to determine next steps for instruction. This article shows how a principal can use the steps of the…
Principals and SRO's: Defining Roles.
Bond, Bill
2001-01-01
Many principals have recently acquired school resource officers, police officers who are stationed in schools and report to local sheriffs or police chiefs. Working effectively with a resource officer requires that principals and officers understand each other's role and express partnership details in a memorandum of understanding. (MLH)
The Principal as Adult Developer.
Levine, Sarah L.
1989-01-01
Restructuring of the principalship must include the principal's role as an adult developer aware of the inextricable link between teacher growth and student development. Principal and teacher should work together to learn how adults develop, to discover conditions fostering growth, and to encourage each other to face new challenges. (MLH)
Innovation Management Perceptions of Principals
Bakir, Asli Agiroglu
2016-01-01
This study is aimed to determine the perceptions of principals about innovation management and to investigate whether there is a significant difference in this perception according to various parameters. In the study, descriptive research model is used and universe is consisted from principals who participated in "Acquiring Formation Course…
Great Principals at Scale: Toolkit
Ikemoto, Gina; Taliaferro, Lori; Fenton, Benjamin; Davis, Jacquelyn
2014-01-01
School leaders are critical in the lives of students and to the development of their teachers. Unfortunately, in too many instances, principals are effective in spite of--rather than because of--district conditions. To truly improve student achievement for all students across the country, well-prepared principals need the tools, support, and…
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Principal G-bundles on Nodal Curves
Usha N Bhosle
2001-08-01
Let G be a connected semisimple affine algebraic group defined over the complex numbers. We study the relation between stable, semistable G-bundles on a nodal curve and representations of its fundamental group. This study is done by extending the notion of (generalized) parabolic vector bundles to principal G-bundles on the desingularization of the curve and using the correspondence between them and principal G-bundles on the curve itself. We give an isomorphism of the stack of generalized parabolic bundles on the desingularization with a quotient stack associated to loop groups. We show that if G is simple and simply connected then the Picard group of the stack of principal G-bundles on the curve is isomorphic to ℤ^m, m being the number of components of the curve.
Lossless Medical Image Compression
Nagashree G
2014-06-01
Image compression has become an important process in today's world of information exchange. Image compression helps in effective utilization of high speed network resources. Medical image compression is very important in the present world for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression are proposed. One uses the combination of 2D-DWT and the FELICS algorithm for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the Integer Wavelet Transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured and a comparison of both approaches is shown. We observed increased compression ratios and higher PSNR values.
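For reference, the two figures of merit quoted here can be computed as below (a small self-contained sketch with a synthetic 8-bit image; the compressed size is an assumed placeholder, not a result from the paper):

```python
import numpy as np

# Small helpers for the two quality/size metrics quoted in compression papers:
# compression ratio (original bits / compressed bits) and PSNR in dB.
# The 8-bit test image below is synthetic.
def compression_ratio(original_bits: int, compressed_bits: int) -> float:
    return original_bits / compressed_bits

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255).astype(np.uint8)

print("PSNR:", round(psnr(img, noisy), 2), "dB")
print("CR  :", compression_ratio(img.size * 8, 40_000))  # 40 kbit assumed compressed size
```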
Principal metals online property data
Principal Metals is a leading supplier of specialty metals. This database contains complete materials property data on more than 5000 ferrous and non-ferrous materials (chemistry, mechanicals, general description, applications, welding, machining, an
Celiac Artery Compression Syndrome
Mohammed Muqeetadnan
2013-01-01
Celiac artery compression syndrome is a rare disorder characterized by episodic abdominal pain and weight loss. It is the result of external compression of celiac artery by the median arcuate ligament. We present a case of celiac artery compression syndrome in a 57-year-old male with severe postprandial abdominal pain and 30-pound weight loss. The patient eventually responded well to surgical division of the median arcuate ligament by laparoscopy.
Principal modes in fiber amplifiers
Fridman, Moti; Dubinskii, Mark; Friesem, Asher A; Davidson, Nir
2010-01-01
The dynamics of the state of polarization in single mode and multimode fiber amplifiers are presented. The experimental results reveal that although the state of polarizations at the output can vary over a large range when changing the temperatures of the fiber amplifiers, the variations are significantly reduced when resorting to the principal states of polarization in single mode fiber amplifiers and principal modes in multimode fiber amplifiers.
Principal Fibrations from Noncommutative Spheres
Landi, Giovanni; Suijlekom, Walter Van
2005-11-01
We construct noncommutative principal fibrations $S^7_\theta \to S^4_\theta$ which are deformations of the classical SU(2) Hopf fibration over the four sphere. We realize the noncommutative vector bundles associated to the irreducible representations of SU(2) as modules of coequivariant maps and construct corresponding projections. The index of Dirac operators with coefficients in the associated bundles is computed with the Connes-Moscovici local index formula. The algebra inclusion is an example of a non-trivial quantum principal bundle.
Compressed sensing & sparse filtering
Carmi, Avishy Y; Godsill, Simon J
2013-01-01
This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app
Pearlman, William A
2013-01-01
This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone, uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. Then the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 S
Stiffness of compression devices
Giovanni Mosti
2013-03-01
This issue of Veins and Lymphatics collects papers coming from the International Compression Club (ICC) Meeting on Stiffness of Compression Devices, which took place in Vienna in May 2012. Several studies have demonstrated that the stiffness of compression products plays a major role in their hemodynamic efficacy. According to the European Committee for Standardization (CEN), stiffness is defined as the pressure increase produced by medical compression hosiery (MCH) per 1 cm of increase in leg circumference [1]. In other words, stiffness could be defined as the ability of the bandage/stocking to oppose the muscle expansion during contraction.
Early Career Principals: Working Productively with Difficult and Resistant Staff
Eller, John F.; Eller, Sheila A.
2012-01-01
Effective leaders must find ways to motivate their employees to provide maximum success for the organization. In today's world of high accountability, this ability is paramount to the success and survival of schools, and is an especially important skill for early career principals to master. Knowledge of the factors that might have contributed to…
Design Point for a Spheromak Compression Experiment
Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team
2015-11-01
Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3 and n_final = 10^21 m^-3, with a radial convergence of C = 3. This low convergence differentiates the concept from MTF with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and the design of the capacitor bank needed to both form the target plasma and compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. Work performed under DARPA grant N66001-14-1-4044.
Kowalska-Strzęciwilk, Ewa; Skrzeczanowski, Wojciech; Czarnecka, Agata; Kubkowska, Monika; Paduch, Marian; Zielińska, Ewa
2014-05-01
The paper presents the analysis of soft x-ray signals generated in the PF-1000 facility equipped with a modified inner electrode with a central tungsten insert of 50 mm diameter, in experiments with tungsten and carbon samples. The PF-1000 machine was operated with pure deuterium filling under an initial pressure of 1.3 hPa. The machine was powered by a condenser bank charged initially to 24 kV, corresponding to a stored energy of 380 kJ, with the maximum discharge current amounting to 1.8 MA. For the investigation of plasma stream-sample interactions, we applied 16-frame laser interferometry, optical spectroscopy and soft x-ray measurements with a system of four silicon pin-diodes. In this paper, we mainly focus on the principal component analysis (PCA) of the registered x-ray signals to find a correlation between the neutron yield and observed maxima in the signals. X-ray signals collected by the four pin-diodes covered a 9 cm range in front of the electrode ends. Each diode collected a signal from a circle of 3 cm diameter. The presented PCA analysis is based on 57 PF discharges, and 16 parameters are taken into account in the analysis. The study of signals from the pin-diode system showed good correlation between the neutron yield and the maximum in the x-ray signal, which appeared about 1000-1300 ns after the maximum of plasma compression.
Attracting Principals to the Superintendency
Aimee Howley
2002-10-01
Responding to a perceived shortage of school superintendents in Ohio as well as elsewhere in the nation, this study examined the conditions of the job that make it attractive or unattractive as a career move for principals. The researchers surveyed a random sample of Ohio principals, receiving usable responses from 508 of these administrators. Analysis of the data revealed that principals perceived the ability to make a difference and the extrinsic motivators (e.g., salary and benefits) associated with the superintendency as conditions salient to the decision to pursue such a job. Furthermore, they viewed the difficulties associated with the superintendency as extremely important. Among these difficulties, the most troubling were: (1) increased burden of responsibility for local, state, and federal mandates; (2) the need to be accountable for outcomes that are beyond an educator's control; (3) low levels of board support; and (4) excessive pressure to perform. The researchers also explored the personal and contextual characteristics that predisposed principals to see certain conditions of the superintendency as particularly attractive or particularly troublesome. Only two such characteristics, however, proved to be predictive: (1) principals with fewer years of teaching experience were more likely than their more experienced counterparts to rate the difficulty of the job as important to the decision to pursue a position as superintendent, and (2) principals who held cosmopolitan commitments were more likely than those who did not hold such commitments to view the salary and benefits associated with the superintendency as important. Findings from the study provided some guidance to those policy makers who are looking for ways to make the superintendency more attractive as a career move for principals. In particular, the study suggested that policy makers should work to design incentives that address school leaders' interest in making a difference at the
An Enhanced Static Data Compression Scheme Of Bengali Short Message
Arif, Abu Shamim Mohammod; Islam, Rashedul
2009-01-01
This paper concerns a modified approach to compressing short Bengali text messages for small devices. The prime objective of this research is to establish a low complexity compression scheme suitable for small devices having small memory and relatively low processing speed. The aim is not to compress text of any size to its maximum level without any constraint on space and time; rather, the main target is to compress short messages to an optimal level which needs minimum space, consumes less time, and has lower processor requirements. We have implemented character masking, dictionary matching, the association rule of data mining, and a hyphenation algorithm for syllable-based compression in hierarchical steps to achieve low complexity lossless compression of text messages for mobile devices. The choice of digrams is made on the basis of an extensive statistical model, and the static Huffman coding is done in the same context.
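As a minimal sketch of the final static Huffman stage only (the Bengali-specific character masking and dictionary stages are not shown, and the sample message is made up), one can build the code from symbol frequencies and compare bit counts:

```python
import heapq
from collections import Counter

# Minimal static Huffman coder of the kind used as the scheme's final stage.
# It builds a code from symbol frequencies and reports the compressed size;
# the Bengali-specific character masking and dictionary stages are not shown.
def huffman_code(text: str) -> dict:
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[n, i, sym] for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], i, (lo, hi)])
        i += 1
    code = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, tuple):       # internal node: recurse into children
            walk(payload[0], prefix + "0")
            walk(payload[1], prefix + "1")
        else:                                # leaf: assign the accumulated codeword
            code[payload] = prefix
    walk(heap[0], "")
    return code

msg = "ami bhalo achi, tumi kemon acho"      # hypothetical short message (romanized)
code = huffman_code(msg)
bits = sum(len(code[ch]) for ch in msg)
print(f"{len(msg) * 8} bits raw -> {bits} bits Huffman-coded")
```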
Performance Analysis of Multi Spectral Band Image Compression using Discrete Wavelet Transform
S. S. Ramakrishnan
2012-01-01
Problem statement: Efficient and effective utilization of transmission bandwidth and storage capacity has been a core area of research for remote sensing images. Hence image compression is required for multi-band satellite imagery. In addition, image quality is also an important factor after compression and reconstruction. Approach: In this investigation, the discrete wavelet transform is used to compress the Landsat 5 agriculture and forestry image using various wavelets, and the spectral signature graph is drawn. Results: The compressed image performance is analyzed using the Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR). The compressed image using the dmey wavelet is selected based on its Digital Number Minimum (DNmin) and Digital Number Maximum (DNmax). It is then classified using maximum likelihood classification and the accuracy is determined using the error matrix, kappa statistics and overall accuracy. Conclusion: Hence the proposed compression technique is well suited to compress the agriculture and forestry multi-band image.
Akkerman, J. W.
1982-01-01
New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
Equation-of-state model for shock compression of hot dense matter
Pain, J C
2007-01-01
A quantum equation-of-state model is presented and applied to the calculation of high-pressure shock Hugoniot curves beyond the asymptotic fourfold density, close to the maximum compression where quantum effects play a role. An analytical estimate for the maximum attainable compression is proposed. It gives a good agreement with the equation-of-state model.
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
Spectral Animation Compression
Chao Wang; Yang Liu; Xiaohu Guo; Zichun Zhong; Binh Le; Zhigang Deng
2015-01-01
This paper presents a spectral approach to compress dynamic animation consisting of a sequence of homeomorphic manifold meshes. Our new approach directly compresses the field of deformation gradient defined on the surface mesh, by decomposing it into rigid-body motion (rotation) and non-rigid-body deformation (stretching) through polar decomposition. It is known that the rotation group has the algebraic topology of a 3D ring, which is different from other operations like stretching. Thus we compress these two groups separately, by using the Manifold Harmonics Transform to drop out their high-frequency details. Our experimental result shows that the proposed method achieves a good balance between the reconstruction quality and the compression ratio. We compare our results quantitatively with other existing approaches on animation compression, using standard measurement criteria.
Compression of EMG Signals by Super imposing Methods: Case of WPT and DCT
Aimé Joseph Oyobé-Okassa
2016-04-01
The objective of this work is to apply a new compression approach to electromyographic (EMG) signals. The originality of this algorithm, which improves the compression ratio of EMG signals compared to the Modified Algorithm of Decomposition (MAD), is the association of the Discrete Wavelet Packet Transform (DWPT) with the Discrete Cosine Transform (DCT). Indeed, compression algorithms are intended principally to increase the compression ratio while maintaining the reconstructed signal quality. The results obtained by this method are interesting with regard to the evaluation criteria of compression.
Principal Curves on Riemannian Manifolds
Hauberg, Søren
2015-01-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only...... in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimize a criteria of interest. The requirements that the solution both is geodesic and must pass through the mean tend...... from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls...
Equation of state of Mo from shock compression experiments on preheated samples
Fat'yanov, O. V.; Asimow, P. D.
2017-03-01
We present a reanalysis of reported Hugoniot data for Mo, including both experiments shocked from ambient temperature (T) and those preheated to 1673 K, using the most general methods of least-squares fitting to constrain the Grüneisen model. This updated Mie-Grüneisen equation of state (EOS) is used to construct a family of maximum likelihood Hugoniots of Mo from initial temperatures of 298 to 2350 K and a parameterization valid over this range. We adopted a single linear function at each initial temperature over the entire range of particle velocities considered. Total uncertainties of all the EOS parameters and correlation coefficients for these uncertainties are given. The improved predictive capabilities of our EOS for Mo are confirmed by (1) better agreement between calculated bulk sound speeds and published measurements along the principal Hugoniot, (2) good agreement between our Grüneisen data and three reported high-pressure γ(V) functions obtained from shock compression of porous samples, and (3) very good agreement between our 1 bar Grüneisen values and γ(T) at ambient pressure recalculated from reported experimental data on the adiabatic bulk modulus K_s(T). Our analysis shows that an EOS constructed from shock compression data allows a much more accurate prediction of γ(T) values at 1 bar than those based on static compression measurements or first-principles calculations. Published calibrations of the Mie-Grüneisen EOS for Mo using static compression measurements only do not reproduce even low-pressure asymptotic values of γ(T) at 1 bar, where the most accurate experimental data are available.
Principal bundles the classical case
Sontz, Stephen Bruce
2015-01-01
This introductory graduate level text provides a relatively quick path to a special topic in classical differential geometry: principal bundles. While the topic of principal bundles in differential geometry has become classic, even standard, material in the modern graduate mathematics curriculum, the unique approach taken in this text presents the material in a way that is intuitive for both students of mathematics and of physics. The goal of this book is to present important, modern geometric ideas in a form readily accessible to students and researchers in both the physics and mathematics communities, providing each with an understanding and appreciation of the language and ideas of the other.
Surface analysis the principal techniques
Vickerman, John C
2009-01-01
This completely updated and revised second edition of Surface Analysis: The Principal Techniques, deals with the characterisation and understanding of the outer layers of substrates, how they react, look and function which are all of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi
2015-01-02
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
Vascular compression syndromes.
Czihal, Michael; Banafsche, Ramin; Hoffmann, Ulrich; Koeppel, Thomas
2015-11-01
Dealing with vascular compression syndromes is one of the most challenging tasks in Vascular Medicine practice. This heterogeneous group of disorders is characterised by external compression of primarily healthy arteries and/or veins as well as accompanying nerval structures, carrying the risk of subsequent structural vessel wall and nerve damage. Vascular compression syndromes may severely impair health-related quality of life in affected individuals who are typically young and otherwise healthy. The diagnostic approach has not been standardised for any of the vascular compression syndromes. Moreover, some degree of positional external compression of blood vessels such as the subclavian and popliteal vessels or the celiac trunk can be found in a significant proportion of healthy individuals. This implies important difficulties in differentiating physiological from pathological findings of clinical examination and diagnostic imaging with provocative manoeuvres. The level of evidence on which treatment decisions regarding surgical decompression with or without revascularisation can be relied on is generally poor, mostly coming from retrospective single centre studies. Proper patient selection is critical in order to avoid overtreatment in patients without a clear association between vascular compression and clinical symptoms. With a focus on the thoracic outlet-syndrome, the median arcuate ligament syndrome and the popliteal entrapment syndrome, the present article gives a selective literature review on compression syndromes from an interdisciplinary vascular point of view.
Scoville, John
2011-01-01
A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than an equal length of JPEG2000. Computer-generated images typically compress better using this method than through direct lossy coding, as do man...
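A toy version of the two-part idea sketched above is given below: the most significant bits are compressed losslessly (zlib standing in for the lossless coder) and the low-order bits are replaced by samples from a trivial uniform noise model; JPEG2000 and PAQ8l are not used here, and the image is a synthetic gradient.

```python
# Toy two-part code: losslessly compress the top bits, model the low bits as noise.
# zlib stands in for the lossless coder; the "noise model" is simply resampled uniform bits.
import numpy as np
import zlib

rng = np.random.default_rng(1)
# Smooth synthetic 8-bit image: a gradient plus mild noise.
image = (np.add.outer(np.arange(128), np.arange(128)) // 2
         + rng.integers(0, 8, (128, 128))).astype(np.uint8)

k = 4                                     # keep the 4 most significant bits exactly
msb = (image >> (8 - k)).astype(np.uint8)
payload = zlib.compress(msb.tobytes(), 9)

# Decode: restore the significant bits and add synthetic low-order "noise".
msb_dec = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(image.shape)
noise = rng.integers(0, 2 ** (8 - k), size=image.shape, dtype=np.uint8)
recon = (msb_dec << (8 - k)) | noise

print("compressed bytes:", len(payload), "of", image.nbytes)
print("max reconstruction error:", int(np.abs(image.astype(int) - recon.astype(int)).max()))
```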
School Uniforms: Guidelines for Principals.
Essex, Nathan L.
2001-01-01
Principals desiring to develop a school-uniform policy should involve parents, teachers, community leaders, and student representatives; beware restrictions on religious and political expression; provide flexibility and assistance for low-income families; implement a pilot program; align the policy with school-safety issues; and consider legal…
How Principals Support Teacher Effectiveness
Gallagher, Michael
2012-01-01
The current standards and accountability regime describes effective teaching as the ability to increase student achievement on standardized tests. This narrow definition of effectiveness can lead principals to create school cultures myopically focused on student achievement data. A "laser-like focus on academic achievement," if employed too…
Artificial Neural Network Model for Predicting Compressive
Salim T. Yousif
2013-05-01
Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort in applying neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20% and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results showed that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.
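The following sketch mimics the setup described above with scikit-learn's MLPRegressor as a stand-in for the back-propagation network; the mix-proportion data and the strength formula are synthetic placeholders, not the literature datasets used by the authors.

```python
# Sketch: a back-propagation network predicting compressive strength from mix data.
# Assumes scikit-learn; the data below are synthetic placeholders with hypothetical ranges.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n = 300
cement = rng.uniform(250, 500, n)        # kg/m^3 (hypothetical range)
water = rng.uniform(140, 220, n)         # kg/m^3
max_agg = rng.choice([10, 20, 40], n)    # MAS, mm
slump = rng.uniform(25, 150, n)          # mm
X = np.column_stack([cement, water, max_agg, slump])
# Toy ground truth: strength falls with the water/cement ratio, plus noise.
y = 120 * np.exp(-2.0 * water / cement) + rng.normal(0, 2, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
model.fit(X[:250], y[:250])
pred = model.predict(X[250:])
print("max absolute relative error: %.1f%%" % (100 * np.max(np.abs(pred - y[250:]) / y[250:])))
```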
Performance Improvement Of Bengali Text Compression Using Transliteration And Huffman Principle
Md. Mamun Hossain
2016-09-01
In this paper, we propose a new compression technique based on transliteration of Bengali text to English. Compared to Bengali, English is a less symbolic language, so transliteration of Bengali text to English reduces the number of characters to be coded. Huffman coding is well known for producing optimal compression. When the Huffman principle is applied to the transliterated text, significant performance improvement is achieved in terms of decoding speed and space requirement compared to Unicode compression.
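A minimal static Huffman coder, of the kind such schemes rely on, is sketched below; the transliteration step itself is language-specific and is represented here only by a plain ASCII sample string.

```python
# Minimal static Huffman coder (heapq-based). The transliterated text is represented
# by a plain ASCII sample, since the transliteration step itself is language-specific.
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix code from symbol frequencies."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], counter, merged])
        counter += 1
    return heap[0][2]

text = "amar sonar bangla ami tomay bhalobashi"   # transliterated sample string
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print("encoded bits:", len(encoded), "vs 8-bit baseline:", 8 * len(text))
```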
Engelder, Terry; Peacock, David C. P.
2001-02-01
Alpine inversion in the Bristol Channel Basin includes reverse-reactivated normal faults with hanging wall buttress anticlines. At Lilstock Beach, joint sets in Lower Jurassic limestone beds cluster about the trend of the hinge of the Lilstock buttress anticline. In horizontal and gently north-dipping beds, J3 joints (295-285° strike) are rare, while other joint sets indicate an anticlockwise sequence of development. In the steeper south-dipping beds, J3 joints are the most frequent in the vicinity of the reverse-reactivated normal fault responsible for the anticline. The J3 joints strike parallel to the fold hinge, and their poles tilt to the south when bedding is restored to horizontal. This southward tilt aims at the direction of σ1 for Alpine inversion. Finite-element analysis is used to explain the southward tilt of J3 joints that propagate under a local σ3 in the direction of σ1 for Alpine inversion. Tilted principal stresses are characteristic of limestone-shale sequences that are sheared during parallel (flexural-flow) folding. Shear tractions on the dipping beds generate a tensile stress in the stiffer limestone beds even when remote principal stresses are compressive. This situation favors the paradoxical opening of joints in the direction of the regional maximum horizontal stress. We conclude that J3 joints propagated during the Alpine compression that caused the growth of the Lilstock buttress anticline.
Virtually Lossless Compression of Astrophysical Images
Alparone Luciano
2005-01-01
We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on the user's requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Eventually, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established for the astronomers' community.
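The near-lossless principle mentioned above (a locally bounded maximum absolute error) can be illustrated with a simple causal DPCM predictor and a uniform quantizer; this sketch omits the authors' noise estimation and entropy coding stages.

```python
# Near-lossless DPCM sketch: causal (previous-pixel) prediction plus a uniform
# quantizer that bounds the per-pixel absolute error by `delta`.
import numpy as np

def dpcm_near_lossless(row, delta):
    step = 2 * delta + 1
    recon = np.empty_like(row, dtype=np.int64)
    labels = np.empty_like(row, dtype=np.int64)
    prev = 0                                   # causal predictor: previous reconstructed pixel
    for i, x in enumerate(row.astype(np.int64)):
        q = int(np.round((x - prev) / step))   # quantized residual (would be entropy coded)
        labels[i] = q
        prev = prev + q * step                 # decoder-side reconstruction
        recon[i] = prev
    return labels, recon

rng = np.random.default_rng(3)
row = np.cumsum(rng.integers(-3, 4, 512)) + 1000    # smooth synthetic scan line
labels, recon = dpcm_near_lossless(row, delta=2)
print("max abs error:", int(np.max(np.abs(recon - row))))   # guaranteed <= delta
```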
Wave energy devices with compressible volumes.
Kurniawan, Adi; Greaves, Deborah; Chaplin, John
2014-12-08
We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.
Compressed Adjacency Matrices: Untangling Gene Regulatory Networks.
Dinkla, K; Westenberg, M A; van Wijk, J J
2012-12-01
We present a novel technique, Compressed Adjacency Matrices, for visualizing gene regulatory networks. These directed networks have strong structural characteristics: out-degrees with a scale-free distribution, in-degrees bound by a low maximum, and few and small cycles. Standard visualization techniques, such as node-link diagrams and adjacency matrices, are impeded by these network characteristics. The scale-free distribution of out-degrees causes a high number of intersecting edges in node-link diagrams. Adjacency matrices become space-inefficient due to the low in-degrees and the resulting sparse network. Compressed adjacency matrices, however, exploit these structural characteristics. By cutting open and rearranging an adjacency matrix, we achieve a compact and neatly-arranged visualization. Compressed adjacency matrices allow for easy detection of subnetworks with a specific structure, so-called motifs, which provide important knowledge about gene regulatory networks to domain experts. We summarize motifs commonly referred to in the literature, and relate them to network analysis tasks common to the visualization domain. We show that a user can easily find the important motifs in compressed adjacency matrices, and that this is hard in standard adjacency matrix and node-link diagrams. We also demonstrate that interaction techniques for standard adjacency matrices can be used for our compressed variant. These techniques include rearrangement clustering, highlighting, and filtering.
Nonrepetitive Colouring via Entropy Compression
Dujmović, Vida; Wood, David R
2011-01-01
A vertex colouring of a graph is nonrepetitive if there is no path whose first half receives the same sequence of colours as the second half. A graph is nonrepetitively $k$-choosable if given lists of at least $k$ colours at each vertex, there is a nonrepetitive colouring such that each vertex is coloured from its own list. It is known that every graph with maximum degree $\Delta$ is $c\Delta^2$-choosable, for some constant $c$. We prove this result with $c=4$. We then prove that every subdivision of a graph with sufficiently many division vertices per edge is nonrepetitively 6-choosable. The proofs of both these results are based on the Moser-Tardos entropy-compression method, and a recent extension by Grytczuk, Kozik and Micek for the nonrepetitive choosability of paths. Finally, we prove that every graph with pathwidth $k$ is nonrepetitively ($2k^2+6k+1$)-colourable.
LDPC Codes for Compressed Sensing
Dimakis, Alexandros G; Vontobel, Pascal O
2010-01-01
We present a mathematical connection between channel coding and compressed sensing. In particular, we link, on the one hand, channel coding linear programming decoding (CC-LPD), which is a well-known relaxation of maximum-likelihood channel decoding for binary linear codes, and, on the other hand, compressed sensing linear programming decoding (CS-LPD), also known as basis pursuit, which is a widely used linear programming relaxation for the problem of finding the sparsest solution of an under-determined system of linear equations. More specifically, we establish a tight connection between CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the binary linear channel code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows the translation of performance guarantees from one setup to the other. The main message of this paper is that parity-check matrices of "good" channel codes can be used as provably "good" measurement ...
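For reference, CS-LPD (basis pursuit) itself can be written as a small linear program; the sketch below solves min ||x||_1 subject to Ax = y with scipy.optimize.linprog, using a random Gaussian measurement matrix rather than a parity-check matrix, and exact recovery is expected only when the number of measurements is large enough relative to the sparsity.

```python
# Basis pursuit (CS-LPD) as a linear program: minimise sum(u) subject to
# A x = y and -u <= x <= u, using the variable stack z = [x, u].
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, m, k = 60, 25, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

c = np.concatenate([np.zeros(n), np.ones(n)])            # objective: sum of u
A_eq = np.hstack([A, np.zeros((m, n))])                   # A x = y
A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),     #  x - u <= 0
                  np.hstack([-np.eye(n), -np.eye(n)])])   # -x - u <= 0
b_ub = np.zeros(2 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
x_hat = res.x[:n]
print("l2 recovery error:", np.linalg.norm(x_hat - x_true))
```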
Ratsaby, Joel
2010-01-01
It is well known that text compression can be achieved by predicting the next symbol in the stream of text data based on the history seen up to the current symbol. The better the prediction, the more skewed the conditional probability distribution of the next symbol and the shorter the codeword that needs to be assigned to represent this next symbol. What about the opposite direction? Suppose we have a black box that can compress a text stream; can it be used to predict the next symbol in the stream? We introduce a criterion based on the length of the compressed data and use it to predict the next symbol. We examine empirically the prediction error rate and its dependency on some compression parameters.
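A minimal version of this compressor-as-predictor criterion is easy to write with zlib standing in for the black-box compressor: the predicted next symbol is the candidate whose appended compressed length is smallest (ties are possible on short streams because deflate works at byte granularity).

```python
# Compression-based prediction: pick the symbol whose appended compressed length
# is smallest. zlib stands in for the black-box compressor.
import zlib

def predict_next(history: str, alphabet: str) -> str:
    """Return the candidate symbol that keeps the compressed stream shortest."""
    return min(alphabet, key=lambda s: len(zlib.compress((history + s).encode(), 9)))

history = "abcabc" * 50          # a highly repetitive stream ending in "...abc"
costs = {s: len(zlib.compress((history + s).encode(), 9)) for s in "abc"}
print(costs, "-> predicted:", predict_next(history, "abc"))
```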
Dheemanth H N
2016-07-01
Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. LZW compression is one of the adaptive dictionary techniques. The dictionary is created while the data are being encoded, so encoding can be done on the fly. The dictionary need not be transmitted; it can be built up at the receiving end on the fly. If the dictionary overflows, then we have to reinitialize the dictionary and add a bit to each one of the code words. Choosing a large dictionary size avoids overflow, but spoils compression. A codebook or dictionary containing the source symbols is constructed. For 8-bit monochrome images, the first 256 words of the dictionary are assigned to the gray levels 0-255; the remaining part of the dictionary is filled with sequences of the gray levels. LZW compression works best when applied to monochrome images and text files that contain repetitive text/patterns.
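A compact LZW encoder along the lines described above (initial dictionary of 256 single-byte symbols, grown on the fly) might look as follows; packing codes into a bitstream and reinitializing the dictionary on overflow are omitted.

```python
# Compact LZW encoder: the dictionary starts with the 256 single-byte symbols
# (as for 8-bit monochrome images) and grows while the data are scanned.
def lzw_encode(data: bytes):
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])      # emit code for the longest known prefix
            dictionary[wc] = next_code     # add the new string to the dictionary
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(data)
print(len(codes), "codes emitted for", len(data), "input bytes")
```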
Shocklets in compressible flows
袁湘江; 男俊武; 沈清; 李筠
2013-01-01
The mechanism of shocklets is studied theoretically and numerically for the stationary fluid, uniform compressible flow, and boundary layer flow. The conditions that trigger shock waves for sound wave, weak discontinuity, and Tollmien-Schlichting (T-S) wave in compressible flows are investigated. The relations between the three types of waves and shocklets are further analyzed and discussed. Different stages of the shocklet formation process are simulated. The results show that the three waves in compressible flows will transfer to shocklets only when the initial disturbance amplitudes are greater than the certain threshold values. In compressible boundary layers, the shocklets evolved from T-S wave exist only in a finite region near the surface instead of the whole wavefront.
Reference Based Genome Compression
Chern, Bobbie; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy
2012-01-01
DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
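The following sketch illustrates the general reference-based idea on toy strings: the target is described as copy/insert operations against the reference (via difflib), and the serialized mapping is squeezed with zlib, which stands in for the entropy coder; it is not the authors' algorithm.

```python
# Reference-based compression sketch: describe the target as copy/insert operations
# against the reference, then squeeze the serialised mapping with zlib
# (standing in for the entropy coder used in the paper).
import difflib
import json
import zlib

reference = "ACGT" * 600
target = reference[:1000] + "GGGTTT" + reference[1100:]   # a small edit of the reference

ops = []
matcher = difflib.SequenceMatcher(None, reference, target, autojunk=False)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "equal":
        ops.append(["copy", i1, i2])            # copy reference[i1:i2]
    else:
        ops.append(["ins", target[j1:j2]])      # literal bases not matched in the reference

blob = zlib.compress(json.dumps(ops).encode(), 9)
print(len(blob), "bytes to encode a", len(target), "base target given the reference")
```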
Singh, Shikha; Singhal, Vanika; Majumdar, Angshul
2016-01-01
This work addresses the problem of extracting deeply learned features directly from compressive measurements. There has been no work in this area. Existing deep learning tools only give good results when applied on the full signal, that too usually after preprocessing. These techniques require the signal to be reconstructed first. In this work we show that by learning directly from the compressed domain, considerably better results can be obtained. This work extends the recently proposed fram...
Reference Based Genome Compression
Chern, Bobbie; Ochoa, Idoia; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy
2012-01-01
DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target gen...
Alternative Compression Garments
Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.
2011-01-01
Orthostatic intolerance after spaceflight is still an issue for astronauts as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement to the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and 6 degree head-down tilt bed rest as a model of space flight, and to determine if they would impact recovery if worn for up to three days after bed rest.
Forecasting with Leading Indicators by means of the Principal Covariate Index
P.J.F. Groenen (Patrick); C. Heij (Christiaan); D.J.C. van Dijk (Dick)
2011-01-01
A new method of leading index construction is proposed, which explicitly takes into account the purpose of using the index for forecasting a coincident economic indicator. This so-called principal covariate index combines the need for compressing the information in a large number of
Using Telecommunications for Principals' Professional Development.
Long, Claudia A.; Terry, Patricia D.
This paper describes the development, operations, and effectiveness of the Principals' Computer Network (PCN)--an experimental program created (1) to allow principals to use their schools' microcomputers to access other principals' solutions to common instructional management problems; (2) to enable principals to request suggestions from their…
Three Principals Who Make a Difference.
Sagor, Richard D.
1992-01-01
Principals who are transformative leaders consistently use three building blocks to promote school success: a clear, unified purpose; a common cultural perspective; and a constant push for improvement. In one study, an opinionated, assertive middle school principal; a nurturing, supportive principal; and a high-energy, charismatic principal all…
Ovalization of Tubes Under Bending and Compression
Demer, L J; Kavanaugh, E S
1944-01-01
An empirical equation has been developed that gives the approximate amount of ovalization for tubes under bending loads. Tests were made on tubes in the d/t range from 6 to 14, the latter d/t ratio being in the normal landing gear range. Within the range of the series of tests conducted, the increase in ovalization due to a compression load in combination with a bending load was very small. The bending load, being the principal factor in producing the ovalization, is a rather complex function of the bending moment, d/t ratio, cantilever length, and distance between opposite bearing faces. (author)
Envera Variable Compression Ratio Engine
Charles Mendler
2011-03-15
the compression ratio can be raised (to as much as 18:1) providing high engine efficiency. It is important to recognize that for a well designed VCR engine cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings and other load bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle' and pivoting the eccentric carrier 30 degrees adjusts compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at minimum and maximum compression ratio settings. The control shaft method of pivoting the eccentric carrier was selected due to its lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. The greatest efficiency gains are realized when the right combination of advanced and new
Working Characteristics of Variable Intake Valve in Compressed Air Engine
Qihui Yu
2014-01-01
A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to the design of the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of compressed air engines.
Information preserving image compression for archiving NMR images.
Li, C C; Gokmen, M; Hirschman, A D; Wang, Y
1991-01-01
This paper presents a result on information preserving compression of NMR images for the archiving purpose. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, the Lynch-Davisson coding with a block size of 64 as applied to prediction error sequences in the Gray code bit planes of each image gave an average compression ratio of 2.3:1 for 14 testing images. The predictive coding with a third order linear predictor and the Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is a further step, albeit a small one, toward improving information-preserving image compression for medical applications.
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
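The generic PCA step mentioned above, extracting dominant modes from an estimated correlation matrix, reduces to an eigendecomposition; the NumPy sketch below uses a synthetic ensemble with two planted modes and is not the maximum-likelihood superposition procedure itself.

```python
# Dominant correlation modes via PCA of an estimated correlation matrix:
# the leading eigenvectors are the principal modes, and their eigenvalues give
# the variance each mode explains. (Generic PCA step only, not the maximum-
# likelihood superposition itself.)
import numpy as np

rng = np.random.default_rng(5)
# Stand-in "ensemble": 50 structures x 30 coordinates with two planted modes.
modes = rng.normal(size=(2, 30))
ensemble = rng.normal(size=(50, 2)) @ modes + 0.1 * rng.normal(size=(50, 30))

corr = np.corrcoef(ensemble, rowvar=False)       # 30 x 30 correlation matrix
evals, evecs = np.linalg.eigh(corr)              # ascending eigenvalues
order = np.argsort(evals)[::-1]
top2 = evecs[:, order[:2]]                       # two dominant modes (for colour-coding)
explained = evals[order[:2]] / evals.sum()
print("variance explained by top two modes:", np.round(explained, 2))
```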
A visual basic program for principal components transformation of digital images
Carr, James R.
1998-04-01
Principal components transformation of multispectral and hyperspectral digital imagery is useful for: (1) reducing the number of useful bands, a distinct advantage when using hyperspectral imagery; (2) obtaining image bands that are orthogonal (statistically independent); (3) improving supervised and unsupervised classification; and (4) image compression. A Visual Basic program is presented for principal components transformation of digital images using principal components analysis or correspondence analysis. Principal components analysis is well known for this application. Correspondence analysis is only recently applied for such transformation. The program can import raw digital images, with or without header records; or, the program can accept Windows bitmap (BMP) files. After transformation, output, transformed images can be exported in raw or BMP format. The program can be used as a simple file format conversion program (raw to BMP or BMP to raw) without performing principal components transformation. An application demonstrates the use of the program.
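A NumPy sketch of the same transformation (not the Visual Basic program described) is shown below: the band covariance matrix is eigendecomposed, pixels are projected onto the leading components, and the image is reconstructed from those components to illustrate the compression use case.

```python
# Principal components transformation of a multispectral image (NumPy sketch):
# stack bands as variables, decorrelate them via the eigenvectors of the band
# covariance matrix, and reconstruct from the leading components.
import numpy as np

rng = np.random.default_rng(6)
rows, cols, bands = 64, 64, 6
base = rng.normal(size=(rows, cols))
image = np.stack([base * (i + 1) + 0.05 * rng.normal(size=(rows, cols))
                  for i in range(bands)], axis=-1)       # highly correlated bands

pixels = image.reshape(-1, bands)
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
evecs = evecs[:, ::-1]                                    # descending variance order

k = 2                                                     # keep the first two PC bands
pcs = (pixels - mean) @ evecs[:, :k]
recon = (pcs @ evecs[:, :k].T + mean).reshape(rows, cols, bands)
print("relative reconstruction error:",
      np.linalg.norm(recon - image) / np.linalg.norm(image))
```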
Transverse Compression of Tendons.
Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B
2016-04-01
A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon.
Existence of a principal eigenvalue for the Tricomi problem
Daniela Lupo
2000-10-01
The existence of a principal eigenvalue is established for the Tricomi problem in normal domains; that is, the existence of a positive eigenvalue of minimum modulus with an associated positive eigenfunction. The argument here uses prior results of the authors on the generalized solvability in weighted Sobolev spaces and associated maximum/minimum principles [LP2], coupled with known results of Krein-Rutman type.
SYMBOLIC VERSOR COMPRESSION ALGORITHM
Li Hongbo
2009-01-01
In an inner-product space, an invertible vector generates a reflection with respect to a hyperplane, and the Clifford product of several invertible vectors, called a versor in Clifford algebra, generates the composition of the corresponding reflections, which is an orthogonal transformation. Given a versor in a Clifford algebra, finding another sequence of invertible vectors of strictly shorter length but whose Clifford product still equals the input versor is called versor compression. Geometrically, versor compression is equivalent to decomposing an orthogonal transformation into a shorter sequence of reflections. This paper proposes a simple algorithm for compressing versors of symbolic form in Clifford algebra. The algorithm is based on computing the intersections of lines with planes in the corresponding Grassmann-Cayley algebra, and is complete in the case of a Euclidean or Minkowski inner-product space.
Image compression for dermatology
Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.
1990-07-01
Color 35mm photographic slides are commonly used in dermatology for education, and patient records. An electronic storage and retrieval system for digitized slide images may offer some advantages such as preservation and random access. We have integrated a system based on a personal computer (PC) for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved. Methods to reduce storage requirements and access time through image compression are therefore of interest. This paper contains an evaluation of one such compression method that uses the Hadamard transform implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.
Fast Steerable Principal Component Analysis
Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit
2016-01-01
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2D images as large as a few hundred pixels in each direction. Here we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of two-dimensional images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of $n$ images of size $L \times L$ pixels, the computational complexity of our a...
Principal chiral model on superspheres
Mitev, V.; Schomerus, V. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Quella, T. [Amsterdam Univ. (Netherlands). Inst. for Theoretical Physics
2008-09-15
We investigate the spectrum of the principal chiral model (PCM) on odd-dimensional superspheres as a function of the curvature radius R. For volume-filling branes on S^{3|2}, we compute the exact boundary spectrum as a function of R. The extension to higher dimensional superspheres is discussed, but not carried out in detail. Our results provide very convincing evidence in favor of the strong-weak coupling duality between supersphere PCMs and OSP(2S+2|2S) Gross-Neveu models that was recently conjectured by Candu and Saleur.
ON THE ORIENTATION OF BUCKLING DIRECTION OF ANISOTROPIC ELASTIC PLATE UNDER UNIAXIAL COMPRESSION
Zhang Yitong
2001-01-01
The theory of small deformation superimposed on a large deformation of an elastic solid is used to investigate the buckling of an anisotropic elastic plate under uniaxial compression. The buckling direction (the direction of the buckling wave) is generally not aligned with the compression direction. The equation for determining the buckling direction is obtained. It is found that out-of-plane buckling of an anisotropic elastic plate is possible, and buckling conditions for both flexural and extensional modes are presented. As a specific case of buckling of an anisotropic elastic plate, the buckling of an orthotropic elastic plate subjected to compression in a direction that forms an arbitrary angle with an elastic principal axis of the material is analyzed. It is found that the buckling direction depends on the angle between the compression direction and the principal axis of the material, the critical compressive force, and plate-thickness parameters. In the case that the compression direction is aligned with the principal axis of the material, the buckling direction will be aligned with the compression direction irrespective of the critical compressive force and plate-thickness.
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally, or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar
2014-08-01
The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
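The classical baseline mentioned above, estimating the shift as the argmax of the circular cross-correlation, can be computed with FFTs as sketched below; the paper's contribution, recovering the shift from compressed measurements or a single Fourier coefficient, is not reproduced here.

```python
# Classical shift retrieval baseline: the circular shift is the argmax of the
# cross-correlation, computed here with FFTs. (Uncompressed baseline only.)
import numpy as np

rng = np.random.default_rng(7)
n, true_shift = 256, 37
x = rng.normal(size=n)
y = np.roll(x, true_shift) + 0.05 * rng.normal(size=n)   # shifted, lightly noisy copy

xcorr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x))).real
print("estimated shift:", int(np.argmax(xcorr)), "true shift:", true_shift)
```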
Alberto Apostolico
2009-08-01
The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on several datasets in use achieve space savings of about 10% over existing methods.
Image data compression investigation
Myrie, Carlos
1989-01-01
The continuous growth of NASA communications systems has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques, or approaches used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression utilizing two techniques, pulse code modulation (PCM) and differential PCM (DPCM), are presented along with an application utilizing these two coding techniques.
Image compression in local helioseismology
Löptien, Björn; Gizon, Laurent; Schou, Jesper
2014-01-01
Context. Several upcoming helioseismology space missions are very limited in telemetry and will have to perform extensive data compression. This requires the development of new methods of data compression. Aims. We give an overview of the influence of lossy data compression on local helioseismology. We investigate the effects of several lossy compression methods (quantization, JPEG compression, and smoothing and subsampling) on power spectra and time-distance measurements of supergranulation flows at disk center. Methods. We applied different compression methods to tracked and remapped Dopplergrams obtained by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We determined the signal-to-noise ratio of the travel times computed from the compressed data as a function of the compression efficiency. Results. The basic helioseismic measurements that we consider are very robust to lossy data compression. Even if only the sign of the velocity is used, time-distance helioseismology is still...
Using Kernel Principal Components for Color Image Segmentation
Wesolkowski, Slawo
2002-11-01
Distinguishing objects on the basis of color is fundamental to humans. In this paper, a clustering approach is used to segment color images. Clustering is usually done using a single point or vector as a cluster prototype. The data can be clustered in the input or feature space, where the feature space is some nonlinear transformation of the input space. The idea of kernel principal component analysis (KPCA) was introduced to align data along principal components in the kernel or feature space. KPCA is a nonlinear transformation of the input data that finds the eigenvectors along which this data has maximum information content (or variation). The principal components resulting from KPCA are nonlinear in the input space and represent principal curves. This is a necessary step, as colors in RGB are not linearly correlated, especially considering illumination effects such as shading or highlights. The performance of the k-means (Euclidean distance-based) and Mixture of Principal Components (vector angle-based) algorithms is analyzed in the context of the input space and the feature space obtained using KPCA. Results are presented on a color image segmentation task. The results are discussed and further extensions are suggested.
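A small scikit-learn sketch of the KPCA-plus-clustering pipeline analysed above is given below, using an RBF kernel on synthetic shaded two-colour pixel data; the kernel, its parameter, and the toy data are illustrative assumptions.

```python
# Kernel PCA + k-means on RGB pixels (sketch of the pipeline the abstract analyses;
# the "image" is a synthetic two-colour pixel set with random shading).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
n = 400
shade = rng.uniform(0.4, 1.0, size=(n, 1))               # per-pixel shading factor
red_ish = shade[:200] * np.array([200, 60, 50]) + rng.normal(0, 5, (200, 3))
blue_ish = shade[200:] * np.array([40, 70, 210]) + rng.normal(0, 5, (200, 3))
pixels = np.vstack([red_ish, blue_ish]) / 255.0

features = KernelPCA(n_components=2, kernel="rbf", gamma=2.0).fit_transform(pixels)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster sizes:", np.bincount(labels))
```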
Parametric functional principal component analysis.
Sang, Peijun; Wang, Liangliang; Cao, Jiguo
2017-03-10
Functional principal component analysis (FPCA) is a popular approach in functional data analysis to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). Most existing FPCA approaches use a set of flexible basis functions such as B-spline basis to represent the FPCs, and control the smoothness of the FPCs by adding roughness penalties. However, the flexible representations pose difficulties for users to understand and interpret the FPCs. In this article, we consider a variety of applications of FPCA and find that, in many situations, the shapes of top FPCs are simple enough to be approximated using simple parametric functions. We propose a parametric approach to estimate the top FPCs to enhance their interpretability for users. Our parametric approach can also circumvent the smoothing parameter selecting process in conventional nonparametric FPCA methods. In addition, our simulation study shows that the proposed parametric FPCA is more robust when outlier curves exist. The parametric FPCA method is demonstrated by analyzing several datasets from a variety of applications.
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data.
Great Assistant Principals and the (Great) Principals Who Mentor Them: A Practical Guide
Goodman, Carole C.; Berry, Christopher S.
2011-01-01
Written for principals and assistant principals to read and reflect on together, this book describes the most common challenges facing today's assistant principals--and provides practical solutions. Authors Carole Goodman and Christopher Berry examine how principals and assistant principals can develop the kinds of relationships that serve to meet…
Female Traditional Principals and Co-Principals: Experiences of Role Conflict and Job Satisfaction
Eckman, Ellen Wexler; Kelber, Sheryl Talcott
2010-01-01
This paper presents a secondary analysis of survey data focusing on role conflict and job satisfaction of 102 female principals. Data were collected from 51 female traditional principals and 51 female co-principals. By examining the traditional and co-principal leadership models as experienced by female principals, this paper addresses the impact…
Spectral compression algorithms for the analysis of very large multivariate images
Keenan, Michael R.
2007-10-16
A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
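As a rough illustration of the factored-representation idea (not the patented block algorithm itself), the following Python sketch compresses a toy multivariate image cube by keeping only its leading principal components; the cube size and the number of retained components are arbitrary assumptions.

# Illustrative sketch (not the patented algorithm): compress a multivariate
# image cube by keeping only the leading principal components of its spectra.
import numpy as np

def pca_compress(cube, n_components):
    """Factor an (rows, cols, bands) image into spatial scores and spectral loadings."""
    rows, cols, bands = cube.shape
    X = cube.reshape(rows * cols, bands).astype(float)
    mean = X.mean(axis=0)
    # SVD of the mean-centred data gives the principal components.
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]      # spatial factors
    loadings = Vt[:n_components]                          # spectral factors
    return scores, loadings, mean

def pca_reconstruct(scores, loadings, mean, shape):
    return (scores @ loadings + mean).reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.random((64, 64, 100))                      # toy 100-band image
    scores, loadings, mean = pca_compress(cube, n_components=10)
    approx = pca_reconstruct(scores, loadings, mean, cube.shape)
    print("relative error:", np.linalg.norm(approx - cube) / np.linalg.norm(cube))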
Chronic nerve root entrapment: compression and degeneration
Vanhoestenberghe, A.
2013-02-01
Electrode mounts are being developed to improve electrical stimulation and recording. Some are tight-fitting, or even re-shape the nervous structure they interact with, for a more selective, fascicular, access. If these are to be successfully used chronically with human nerve roots, we need to know more about the possible damage caused by the long-term entrapment and possible compression of the roots following electrode implantation. As there are, to date, no such data published, this paper presents a review of the relevant literature on alternative causes of nerve root compression, and a discussion of the degeneration mechanisms observed. A chronic compression below 40 mmHg would not compromise the functionality of the root as far as electrical stimulation and recording applications are concerned. Additionally, any temporary increase in pressure, due for example to post-operative swelling, should be limited to 20 mmHg below the patient’s mean arterial pressure, with a maximum of 100 mmHg. Connective tissue growth may cause a slower, but sustained, pressure increase. Therefore, mounts large enough to accommodate the root initially without compressing it, or compliant, elastic, mounts, that may stretch to free a larger cross-sectional area in the weeks after implantation, are recommended.
Negative linear compressibility in common materials
Miller, W.; Evans, K. E.; Marmier, A., E-mail: A.S.H.Marmier@exeter.ac.uk [College of Engineering Mathematics and Physical Science, University of Exeter, Exeter EX4 4QF (United Kingdom)
2015-06-08
Negative linear compressibility (NLC) is still considered an exotic property, only observed in a few obscure crystals. The vast majority of materials compress axially in all directions when loaded in hydrostatic compression. However, a few materials have been observed which expand in one or two directions under hydrostatic compression. At present, the list of materials demonstrating this unusual behaviour is confined to a small number of relatively rare crystal phases, biological materials, and designed structures, and the lack of widespread availability hinders promising technological applications. Using improved representations of elastic properties, this study revisits existing databases of elastic constants and identifies several crystals missed by previous reviews. More importantly, several common materials (drawn polymers, certain types of paper and wood, and carbon fibre laminates) are found to display NLC. We show that NLC in these materials originates from the misalignment of polymers/fibres. Using a beam model, we propose that maximum NLC is obtained for misalignment of 26°. The existence of such widely available materials increases significantly the prospects for applications of NLC.
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC coefficient level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that, if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of space is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Fingerprints in Compressed Strings
Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li
2013-01-01
The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries...
Multiple snapshot compressive beamforming
Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.
2015-01-01
For sound fields observed on an array, compressive sensing (CS) reconstructs the multiple source signals at unknown directions-of-arrival (DOAs) using a sparsity constraint. The DOA estimation is posed as an underdetermined problem expressing the field at each sensor as a phase-lagged superposition...
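The following Python toy is a hedged sketch of that sparsity-constrained DOA idea, not the paper's multiple-snapshot formulation: a single real-amplitude snapshot on an assumed uniform line array is reconstructed on a grid of candidate angles with an off-the-shelf Lasso solver. The array geometry, angle grid, and regularisation weight are all assumptions.

# Hedged single-snapshot toy of sparse DOA estimation (not the paper's method).
# Complex data are split into real/imaginary parts so a real-valued Lasso can
# be used; this relies on the simplifying assumption of real source amplitudes.
import numpy as np
from sklearn.linear_model import Lasso

n_sensors, d_over_lambda = 20, 0.5
angles = np.linspace(-90, 90, 181)                 # candidate DOA grid (deg)
m = np.arange(n_sensors)

# Steering matrix: phase-lagged superposition of plane waves at each sensor.
A_c = np.exp(2j * np.pi * d_over_lambda * np.outer(m, np.sin(np.deg2rad(angles))))
A = np.vstack([A_c.real, A_c.imag])

true_idx = [int(np.argmin(np.abs(angles - a))) for a in (-20.0, 35.0)]
x_true = np.zeros(len(angles))
x_true[true_idx] = 1.0
noise = 0.05 * (np.random.randn(n_sensors) + 1j * np.random.randn(n_sensors))
y_c = A_c @ x_true + noise
y = np.concatenate([y_c.real, y_c.imag])

est = Lasso(alpha=0.05, fit_intercept=False, max_iter=10000).fit(A, y).coef_
print("estimated DOAs (deg):", angles[np.argsort(np.abs(est))[-2:]])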
Compressive CFAR radar detection
Anitori, L.; Otten, M.P.G.; Rossum, W.L. van; Maleki, A.; Baraniuk, R.
2012-01-01
In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate Messag
Compressive CFAR Radar Processing
Anitori, L.; Rossum, W.L. van; Otten, M.P.G.; Maleki, A.; Baraniuk, R.
2013-01-01
In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Mess
Beamforming Using Compressive Sensing
2011-10-01
dB to align the peak at 7.3°. Comparing peaks to valleys, compressive sensing provides a greater main-to-interference (and noise) ratio...elements.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Da̧browski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
EVALUATION OF REGISTRATION, COMPRESSION AND CLASSIFICATION ALGORITHMS
Jayroe, R. R.
1994-01-01
Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available
A PDF closure model for compressible turbulent chemically reacting flows
Kollmann, W.
1992-01-01
The objective of the proposed research project was the analysis of single point closures based on probability density function (pdf) and characteristic functions and the development of a prediction method for the joint velocity-scalar pdf in turbulent reacting flows. Turbulent flows of boundary layer type and stagnation point flows with and without chemical reactions were calculated as principal applications. Pdf methods for compressible reacting flows were developed and tested in comparison with available experimental data. The research work carried out in this project concentrated on the closure of pdf equations for incompressible and compressible turbulent flows with and without chemical reactions.
Comparative compressibility of hydrous wadsleyite
Chang, Y.; Jacobsen, S. D.; Thomas, S.; Bina, C. R.; Smyth, J. R.; Frost, D. J.; Hauri, E. H.; Meng, Y.; Dera, P. K.
2010-12-01
Determining the effects of hydration on the density and elastic properties of wadsleyite, β-Mg2SiO4, is critical to constraining Earth’s global geochemical water cycle. Whereas previous studies of the bulk modulus (KT) have studied either hydrous Mg-wadsleyite, or anhydrous Fe-bearing wadsleyite, the combined effects of hydration and iron are under investigation. Also, whereas KT from compressibility studies is relatively well constrained by equation of state fitting to P-V data, the pressure derivative of the bulk modulus (K’) is usually not well constrained either because of poor data resolution, uncertainty in pressure calibrations, or narrow pressure ranges of previous single-crystal studies. Here we report the comparative compressibility of dry versus hydrous wadsleyite with Fo90 composition containing 1.9(2) wt% H2O, nearly the maximum water storage capacity of this phase. The composition was characterized by EMPA and nanoSIMS. The experiments were carried out using high-pressure, single-crystal diffraction up to 30 GPa at HPCAT, Advanced Photon Source. By loading three crystals each of hydrous and anhydrous wadsleyite together in the same diamond-anvil cell, we achieve good hkl coverage and eliminate the pressure scale as a variable in comparing the relative value of K’ between the dry and hydrous samples. We used MgO as an internal diffraction standard, in addition to recording ruby fluorescence pressures. By using neon as a pressure medium and about 1 GPa pressure steps up to 30 GPa, we obtain high-quality diffraction data for constraining the effect of hydration on the density and K’ of hydrous wadsleyite. Due to hydration, the initial volume of hydrous Fo90 wadsleyite is larger than anhydrous Fo90 wadsleyite, however the higher compressibility of hydrous wadsleyite leads to a volume crossover at 6 GPa. Hydration to 2 wt% H2O reduces the bulk modulus of Fo90 wadsleyite from 170(2) to 157(2) GPa, or about 7.6% reduction. In contrast to previous
Magellan: Principal Venus science findings
Saunders, R. Stephen
1993-01-01
This is a brief summary of the science findings of the Magellan mission, principally based on data from the radar system. Future plans for Magellan include acquisition of high resolution gravity data from a nearly circular orbit and atmospheric drag and occultation experiments. The Magellan science results represent the combined effort of more than 100 Magellan investigators and their students and colleagues. More extensive discussions can be found in the August and October, 1992 issues of the Journal of Geophysical Research, Planets. The Magellan mission's scientific objectives were to provide a global characterization of landforms and tectonic features; to distinguish and understand impact processes; to define and explain erosion, deposition, and chemical processes; and to model the interior density distribution. All but the last objective, which requires new global gravity data, have been accomplished, or we have acquired the data that are required to accomplish them.
Fast Steerable Principal Component Analysis.
Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit
2016-03-01
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier-Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA.
Hu, Wayne; Okamoto, Takemi
2004-01-01
We study the physical limitations placed on CMB temperature and polarization measurements of the initial power spectrum by geometric projection, acoustic physics, gravitational lensing and the joint fitting of cosmological parameters. Detailed information on the spectrum is greatly assisted by polarization information and localized to the acoustic regime k = 0.02-0.2 Mpc^{-1} with a fundamental resolution of Δk/k > 0.05. From this study we construct principal component based statistics, which are orthogonal to cosmological parameters including the initial amplitude and tilt of the spectrum, that best probe deviations from scale-free initial conditions. These statistics resemble Fourier modes confined to the acoustic regime and ultimately can yield ~50 independent measurements of the power spectrum features to percent level precision. They are straightforwardly related to more traditional parameterizations such as the running of the tilt and in the future can provide many statistically independent measu...
Mortar constituent of concrete under cyclic compression
Maher, A.; Darwin, D.
1980-10-01
The behavior of the mortar constituent of concrete under cyclic compression was studied and a simple analytic model was developed to represent its cyclic behavior. Experimental work consisted of monotonic and cyclic compressive loading of mortar. Two mixes were used, with proportions corresponding to concretes having water-cement ratios of 0.5 and 0.6. Forty-four groups of specimens were tested at ages ranging from 5 to 70 days. Complete monotonic and cyclic stress-strain envelopes were obtained. A number of loading regimes were investigated, including cycles to a constant maximum strain. Major emphasis was placed on tests using relatively high stress cycles. Degradation was shown to be a continuous process and a function of both total strain and load history. No stability or fatigue limit was apparent.
Randomness Testing of Compressed Data
Chang, Weiling; Yun, Xiaochun; Wang, Shupeng; Yu, Xiangzhan
2010-01-01
Random Number Generators play a critical role in a number of important applications. In practice, statistical testing is employed to gather evidence that a generator indeed produces numbers that appear to be random. In this paper, we reports on the studies that were conducted on the compressed data using 8 compression algorithms or compressors. The test results suggest that the output of compression algorithms or compressors has bad randomness, the compression algorithms or compressors are not suitable as random number generator. We also found that, for the same compression algorithm, there exists positive correlation relationship between compression ratio and randomness, increasing the compression ratio increases randomness of compressed data. As time permits, additional randomness testing efforts will be conducted.
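As a minimal illustration of the kind of check involved (not the test suite used in the paper), the sketch below compresses a repetitive payload with zlib and applies a monobit frequency test to the compressed bytes; the payload and the choice of this single test are assumptions made for brevity.

# Minimal illustration only: compress a payload and run a monobit frequency
# test on the compressed bytes (a p-value near 0 indicates non-randomness).
import math
import zlib

def monobit_pvalue(data: bytes) -> float:
    """Monobit frequency test in the style of NIST SP 800-22."""
    bits = "".join(f"{byte:08b}" for byte in data)
    s = sum(1 if b == "1" else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))

payload = b"the quick brown fox jumps over the lazy dog " * 2000
compressed = zlib.compress(payload, level=9)
print("compression ratio:", len(payload) / len(compressed))
print("monobit p-value of compressed bytes:", monobit_pvalue(compressed))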
2010-01-01
5 CFR 919.995 (Administrative Personnel, Office of Personnel Management (Continued), Civil Service Regulations (Continued), Governmentwide Debarment and Suspension (Nonprocurement), Definitions), § 919.995 Principal. Principal means— (a) An...
12 CFR 561.39 - Principal office.
2010-01-01
12 CFR 561.39 (Banks and Banking, Office of Thrift Supervision, Department of the Treasury, Definitions for Regulations Affecting All Savings Associations), § 561.39 Principal office. The term principal office means the...
New Principal Coaching as a Safety Net
Celoria, Davide; Roberson, Ingrid
2015-01-01
This study examines new principal coaching as an induction process and explores the emotional dimensions of educational leadership. Twelve principal coaches and new principals--six of each--participated in this qualitative study that employed emergent coding (Creswell, 2008; Denzin, 2005; Glaser & Strauss, 1998; Spradley, 1979). The major…
Teacher Supervision Practices and Principals' Characteristics
April, Daniel; Bouchamma, Yamina
2015-01-01
A questionnaire was used to determine the individual and collective teacher supervision practices of school principals and vice-principals in Québec (n = 39) who participated in a research-action study on pedagogical supervision. These practices were then analyzed in terms of the principals' sociodemographic and socioprofessional characteristics…
A principal-agent model of corruption
Groenendijk, Nico
1997-01-01
One of the new avenues in the study of political corruption is that of neo-institutional economics, of which the principal-agent theory is a part. In this article a principal-agent model of corruption is presented, in which there are two principals (one of which is corrupting), and one agent (who is
Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.
2015-08-02
One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and the development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
Maximum Work of Free-Piston Stirling Engine Generators
Kojima, Shinji
2017-04-01
Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
Tree compression with top trees
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;
2015-01-01
We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
Tree compression with top trees
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.
2013-01-01
We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
Reinterpreting Compression in Infinitary Rewriting
Ketema, J.; Tiwari, Ashish
2012-01-01
Departing from a computational interpretation of compression in infinitary rewriting, we view compression as a degenerate case of standardisation. The change in perspective comes about via two observations: (a) no compression property can be recovered for non-left-linear systems and (b) some standar
Lossless Compression of Broadcast Video
Martins, Bo; Eriksen, N.; Faber, E.
1998-01-01
We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented as well. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or must find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step solves the optimization problem without considering the equal margin posteriors from the two views, and the second step then imposes the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Algorithm for Compressing Time-Series Data
Hawkins, S. Edward, III; Darlington, Edward Hugo
2012-01-01
An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
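A minimal sketch of the block-wise Chebyshev fitting described above is given below, using numpy's Chebyshev routines; the block length and polynomial degree are illustrative choices, not the flight parameters.

# Minimal sketch of the idea: fit a low-degree Chebyshev series to each block
# of a data stream and keep only the coefficients (lossy compression).
import numpy as np
from numpy.polynomial import chebyshev as C

def compress(stream, block=64, degree=7):
    """Return one coefficient vector per block."""
    x = np.linspace(-1.0, 1.0, block)          # fitting interval mapped to [-1, 1]
    blocks = stream.reshape(-1, block)
    return np.array([C.chebfit(x, b, degree) for b in blocks])

def decompress(coeffs, block=64):
    x = np.linspace(-1.0, 1.0, block)
    return np.concatenate([C.chebval(x, c) for c in coeffs])

if __name__ == "__main__":
    t = np.linspace(0, 8 * np.pi, 1024)
    stream = np.sin(t) + 0.2 * np.sin(5 * t)   # toy telemetry-like signal
    coeffs = compress(stream)
    restored = decompress(coeffs)
    print("compression factor:", stream.size / coeffs.size)   # 64 samples -> 8 coefficients
    print("max abs error:", np.max(np.abs(restored - stream)))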
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices used to diagnose multicollinearity, the basic principle of principal component regression, and a method for determining the 'best' equation. An example is used to describe how to carry out principal component regression analysis with SPSS 10.0, including all calculation steps of the principal component regression and the operation of the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and performing it with SPSS simplifies and speeds up the analysis while keeping it accurate.
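The paper works through SPSS 10.0 procedures; as a language-neutral illustration of the same idea, the hedged Python sketch below regresses the response on a few leading principal components of collinear predictors and maps the coefficients back. The data are synthetic and the number of retained components is an assumption.

# Hedged sketch of principal component regression (PCR), not the SPSS workflow:
# standardise predictors, keep the top-k components, regress, map back.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n), rng.normal(size=n)])  # collinear
y = 2.0 * x1 + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n)

Xs = (X - X.mean(0)) / X.std(0)                  # standardise predictors
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2                                            # assumed number of components
Z = Xs @ Vt[:k].T                                # component scores
gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
beta_std = Vt[:k].T @ gamma                      # back to standardised predictors
print("PCR coefficients (standardised):", beta_std)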
Building indifferentiable compression functions from the PGV compression functions
Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde
2016-01-01
Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black...... cipher is ideal. We address the problem of building indifferentiable compression functions from the PGV compression functions. We consider a general form of 64 PGV compression functions and replace the linear feed-forward operation in this generic PGV compression function with an ideal block cipher...... independent of the one used in the generic PGV construction. This modified construction is called a generic modified PGV (MPGV). We analyse indifferentiability of the generic MPGV construction in the ideal cipher model and show that 12 out of 64 MPGV compression functions in this framework...
On Network Functional Compression
Feizi, Soheil
2010-01-01
In this paper, we consider different aspects of the network functional compression problem where computation of a function (or, some functions) of sources located at certain nodes in a network is desired at receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions, particularly in terms of the network topology, the functions and the characteristics of the sources. In this paper, we present results that significantly relax these assumptions. Firstly, we consider this problem for an arbitrary tree network and asymptotically lossless computation. We show that, for depth one trees with correlated sources, or for general trees with independent sources, a modularized coding scheme based on graph colorings and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds. For a general tree network with independent sources, optimal computation to be performed at intermediate nodes is derived. We introduce a necessary and sufficient condition...
Zhou, Tianyi
2011-01-01
Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce Hamming compressed sensing (HCS), which directly recovers a k-bit quantized signal of dimension $n$ from its 1-bit measurements by invoking $n$ Kullback-Leibler divergence based nearest neighbor searches. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time, and requires substantially fewer measurements ($\mathcal O(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and an "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.
Compressive Spectral Renormalization Method
Bayindir, Cihan
2016-01-01
In this paper a novel numerical scheme for finding the sparse self-localized states of a nonlinear system of equations with missing spectral data is introduced. As in Petviashvili's method and the spectral renormalization method, the governing equation is transformed into the Fourier domain, but the iterations are performed for a far smaller number of spectral components (M) than in the classical versions of these methods, which use a higher number of spectral components (N). After the convergence criterion is met for the M components, the N-component signal is reconstructed from the M components using the l1 minimization technique of compressive sampling. This method can be named the compressive spectral renormalization (CSRM) method. The main advantage of the CSRM is that it is capable of finding the sparse self-localized states of the evolution equation(s) with many spectral data missing.
Speech Compression and Synthesis
1980-10-01
Phonological rules combined with diphone synthesis improved the algorithms used by the phonetic synthesis program for gain normalization and time... phonetic vocoder, spectral template. This report describes our work for the past two years on speech compression and synthesis. Since there was an...from Block 19: speech recognition, phoneme recognition. Initial design for a phonetic recognition program. We also recorded and partially labeled a
Shock compression of nitrobenzene
Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi
1999-06-01
The Hugoniot (4 - 30 GPa) and the isotherm (1 - 7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitro aromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode despite the fact that its calculated heat of detonation is similar to that of TNT, about 1 kcal/g. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments, with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so nitrobenzene is expected to detonate in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparing the Hugoniot and the isotherm, nitrobenzene is in the liquid phase under the shock conditions of the experiments. From the expected phase diagram, shocked nitrobenzene appears to remain a metastable liquid in the solid-phase region of that diagram.
Compressed sensing electron tomography
Leary, Rowan, E-mail: rkl26@cam.ac.uk [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ (United Kingdom); Saghi, Zineb; Midgley, Paul A. [Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ (United Kingdom); Holland, Daniel J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)
2013-08-15
The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform.
Ultraspectral sounder data compression review
Bormin HUANG; Hunglung HUANG
2008-01-01
Ultraspectral sounders provide an enormous number of measurements to advance our knowledge of weather and climate applications. The use of robust data compression techniques will be beneficial for ultraspectral data transfer and archiving. This paper reviews the progress in lossless compression of ultraspectral sounder data. Various transform-based, prediction-based, and clustering-based compression methods are covered. Also studied is a preprocessing scheme for data reordering to improve compression gains. All the coding experiments are performed on the ultraspectral compression benchmark dataset collected from the NASA Atmospheric Infrared Sounder (AIRS) observations.
Engineering Relative Compression of Genomes
Grabowski, Szymon
2011-01-01
Technology progress in DNA sequencing boosts genomic database growth at an ever faster rate. Compression, accompanied by random access capabilities, is the key to maintaining those huge amounts of data. In this paper we present an LZ77-style compression scheme for relative compression of multiple genomes of the same species. While the solution bears similarity to known algorithms, it offers significantly higher compression ratios at a compression speed over an order of magnitude greater. One of the new successful ideas is augmenting the reference sequence with phrases from the other sequences, making more LZ-matches available.
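As a toy illustration of reference-based LZ matching (not the engineered scheme of the paper, and without its reference-augmentation step), the sketch below encodes a target sequence as (position, length) matches against a reference via a k-mer index, falling back to literals; the k-mer length and the sequences are assumptions.

# Toy reference-based LZ matcher: emit ("match", pos, len) against a reference
# when at least k symbols match, otherwise emit a literal symbol.
def build_index(ref, k=8):
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], []).append(i)
    return index

def relative_compress(target, ref, k=8):
    index, out, i = build_index(ref, k), [], 0
    while i < len(target):
        best_len, best_pos = 0, -1
        for pos in index.get(target[i:i + k], []):
            length = 0
            while (i + length < len(target) and pos + length < len(ref)
                   and target[i + length] == ref[pos + length]):
                length += 1
            if length > best_len:
                best_len, best_pos = length, pos
        if best_len >= k:
            out.append(("match", best_pos, best_len))
            i += best_len
        else:
            out.append(("literal", target[i]))
            i += 1
    return out

ref = "ACGTACGTTTGACCAGT" * 50
target = ref[:300] + "GGGG" + ref[300:600]          # a variant of the reference
ops = relative_compress(target, ref)
print(len(target), "bases ->", len(ops), "ops")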
Determination of Optimum Compression Ratio: A Tribological Aspect
L. Yüksek
2013-12-01
Internal combustion engines are the primary energy conversion machines both in industry and in transportation. Modern technologies are being implemented in engines to fulfill today's low fuel consumption demand. The friction energy consumed by the rubbing parts of an engine is becoming an important parameter for higher fuel efficiency. The rate of friction loss is primarily affected by sliding speed and the load acting upon the rubbing surfaces. Compression ratio is the main parameter that increases the peak cylinder pressure and hence the normal load on components. The aim of this study is to investigate the effect of compression ratio on the total friction loss of a diesel engine. A variable compression ratio diesel engine was operated at four different compression ratios: 12.96, 15.59, 18.03, and 20.17. Brake power and speed were kept constant at predefined values while measuring the in-cylinder pressure. Friction mean effective pressure (FMEP) data were obtained from the in-cylinder pressure curves for each compression ratio. The ratio of friction power to indicated power of the engine increased from 22.83% to 37.06% as the compression ratio was varied from 12.96 to 20.17. Considering the thermal efficiency, FMEP, and maximum in-cylinder pressure, the optimum compression ratio interval of the test engine was determined as 18.8 to 19.6.
T.R. Neelakantan; S. Ramasundaram; Shanmugavel, R.; R. Vinoth
2013-01-01
Predicting the 28-day compressive strength of concrete has been an important research task for many years. In this study, concrete specimens were cured in two phases: initially at room temperature for a maximum of 30 h, and later at a higher temperature for accelerated curing for a maximum of 3 h. Using the early strength obtained after the two-phase curing and the curing parameters, regression equations were developed to predict the 28-day compressive strength. For the accelerated curing (higher temper...
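A hedged sketch of the modelling step might look like the following: fit a linear regression that predicts 28-day strength from the early strength and the curing parameters. The data below are synthetic and the variable names are hypothetical, not the study's measurements.

# Hedged sketch (synthetic data, hypothetical variables): regression of 28-day
# compressive strength on early strength and curing parameters.
import numpy as np

rng = np.random.default_rng(2)
n = 120
early_strength = rng.uniform(8, 25, n)          # MPa after two-phase curing
curing_temp = rng.uniform(40, 80, n)            # degC of accelerated phase
curing_time = rng.uniform(1, 3, n)              # hours of accelerated phase
strength_28d = (1.8 * early_strength + 0.05 * curing_temp
                + 1.2 * curing_time + rng.normal(0, 1.0, n))

X = np.column_stack([np.ones(n), early_strength, curing_temp, curing_time])
coef, *_ = np.linalg.lstsq(X, strength_28d, rcond=None)
pred = X @ coef
print("coefficients:", np.round(coef, 3))
print("RMSE (MPa):", round(float(np.sqrt(np.mean((pred - strength_28d) ** 2))), 3))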
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of the resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the cacti with maximum Kirchhoff index are characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus...... on second order moments of multiple measurements outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders....
Stress analysis of single joint rock mass under triaxial compression
LIU Xin-rong(刘新荣); JIANG Shu-ping(蒋树屏); LI Xiao-hong(李晓红); BAO Tai(包太)
2004-01-01
Based on the fundamental principles of rock mechanics, the stresses of a single-joint rock mass under three-dimensional compression were analyzed. The effect of the intermediate principal stress on the strength of a single-joint rock mass is discussed in particular. It is found that the strength of single-joint rock is affected by the intermediate principal stress, which may be the main factor in some conditions.
Fast algorithm for exploring and compressing of large hyperspectral images
Kucheryavskiy, Sergey
2011-01-01
A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels’ structure in feature (variable) space. To achieve this, in...... can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques....
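The general idea of estimating the component space from a subset of pixels can be sketched as follows. Note that this toy uses plain random subsampling, whereas the proposed method downsamples so as to preserve the pixels' structure in feature space; the image size and component count are assumptions.

# Hedged sketch: learn principal-component loadings from a pixel subsample,
# then project every pixel of the large image onto them.
import numpy as np

rng = np.random.default_rng(3)
rows, cols, bands = 512, 512, 64
cube = rng.random((rows, cols, bands))          # stand-in for a hyperspectral image
X = cube.reshape(-1, bands)

sample = X[rng.choice(X.shape[0], size=5000, replace=False)]   # downsampled pixels
mean = sample.mean(axis=0)
_, _, Vt = np.linalg.svd(sample - mean, full_matrices=False)

k = 5
scores = (X - mean) @ Vt[:k].T                  # project the full image onto k loadings
print("score image shape:", scores.reshape(rows, cols, k).shape)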
Mathematical Theory of Compressible Viscous Fluids: Analysis and Numerics
Feireisl, E. (Eduard); Karper, T.; Pokorný, M.
2016-01-01
This book offers an essential introduction to the mathematical theory of compressible viscous fluids. The main goal is to present analytical methods from the perspective of their numerical applications. Accordingly, we introduce the principal theoretical tools needed to handle well-posedness of the underlying Navier-Stokes system, study the problems of sequential stability, and, lastly, construct solutions by means of an implicit numerical scheme. Offering a unique contribution – by exploring...
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices is not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Method of Real-Time Principal-Component Analysis
Duong, Tuan; Duong, Vu
2005-01-01
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
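The abstract does not spell out the DOGEDYN update rule, so the sketch below only shows the flavour of gradient-descent sequential PCA using the classical Oja-style update; it is not the NASA method, and the learning rate, epoch count, and data are assumptions.

# Generic gradient-descent sequential PCA sketch (Oja's rule), shown only to
# illustrate sample-by-sample estimation of the leading principal component.
import numpy as np

def sequential_pc1(samples, lr=0.01, epochs=20):
    """Estimate the first principal component one sample at a time."""
    w = np.random.default_rng(4).normal(size=samples.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in samples:
            y = w @ x
            w += lr * y * (x - y * w)        # Oja's rule update
        w /= np.linalg.norm(w)
    return w

rng = np.random.default_rng(5)
data = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=500)
data -= data.mean(axis=0)
w = sequential_pc1(data)
exact = np.linalg.svd(data, full_matrices=False)[2][0]
print("alignment with exact PC1:", abs(w @ exact))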
Sparse Principal Component Analysis with missing observations
Lounici, Karim
2012-01-01
In this paper, we study the problem of sparse Principal Component Analysis (PCA) in the high-dimensional setting with missing observations. Our goal is to estimate the first principal component when we only have access to partial observations. Existing estimation techniques are usually derived for fully observed data sets and require a prior knowledge of the sparsity of the first principal component in order to achieve good statistical guarantees. Our contribution is threefold. First, we establish the first information-theoretic lower bound for the sparse PCA problem with missing observations. Second, we propose a simple procedure that does not require any prior knowledge on the sparsity of the unknown first principal component or any imputation of the missing observations, adapts to the unknown sparsity of the first principal component and achieves the optimal rate of estimation up to a logarithmic factor. Third, if the covariance matrix of interest admits a sparse first principal component and is in additi...
Whalley, E.
The compression of liquids can be measured either directly, by applying a pressure and noting the volume change, or indirectly, by measuring the magnitude of the fluctuations of the local volume. The methods used in Ottawa for the direct measurement of the compression are reviewed. The mean-square deviation of the volume from the mean at constant temperature can be measured by X-ray and neutron scattering at low angles, and the mean-square deviation at constant entropy can be measured by measuring the speed of sound. The speed of sound can be measured either acoustically, using an acoustic transducer, or by Brillouin spectroscopy. Brillouin spectroscopy can also be used to study the shear waves in liquids if the shear relaxation time is > ∼ 10 ps. The relaxation time of water is too short for the shear waves to be studied in this way, but they do occur in the low-frequency Raman and infrared spectra. The response of the structure of liquids to pressure can be studied by neutron scattering, and recently experiments have been done at Atomic Energy of Canada Ltd, Chalk River, on liquid D2O up to 15.6 kbar. They show that the near-neighbor intermolecular O-D and D-D distances are less spread out and at shorter distances at high pressure. Raman spectroscopy can also provide information on the structural response. It seems that the O-O distance in water decreases much less with pressure than it does in ice. Presumably, the bending of O-O-O angles tends to increase the O-O distance, and so largely compensates for the compression due to the direct effect of pressure.
Sun, Qilin
2017-04-01
High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capture process of a few seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns and later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that is able to denoise at the same time, for measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high resolution image with only a single sensor, whereas an array only needs to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.
Statistical Mechanical Analysis of Compressed Sensing Utilizing Correlated Compression Matrix
Takeda, Koujin
2010-01-01
We investigate a reconstruction limit of compressed sensing for a reconstruction scheme based on the L1-norm minimization utilizing a correlated compression matrix with a statistical mechanics method. We focus on the compression matrix modeled as the Kronecker-type random matrix studied in research on multi-input multi-output wireless communication systems. We found that strong one-dimensional correlations between expansion bases of original information slightly degrade reconstruction performance.
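The paper itself is a statistical-mechanics (replica) analysis; the toy below merely illustrates the reconstruction scheme it studies, L1-norm minimization with a Kronecker-structured sensing matrix, solved here with plain iterative soft-thresholding (ISTA). The matrix sizes, sparsity, and regularization weight are assumptions.

# Toy L1-minimisation reconstruction with a Kronecker-type sensing matrix,
# solved by ISTA; illustrates the scheme analysed in the paper, not its theory.
import numpy as np

def ista(A, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(6)
A = np.kron(rng.normal(size=(8, 16)), rng.normal(size=(8, 16))) / 16.0   # 64 x 256
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.normal(size=10)
y = A @ x_true
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))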
Osmotic compressibility of soft colloidal systems.
Tan, Beng H; Tam, Kam C; Lam, Yee C; Tan, Chee B
2005-05-10
A turbidimetric analysis of particle interaction of model pH-responsive microgel systems consisting of methacrylic acid-ethyl acrylate cross-linked with diallyl phthalate in colloidal suspensions is described. The structure factor at zero scattering angle, S(0), can be determined with good precision for wavelengths greater than 500 nm, and it measures the dispersion's resistance to particle compression. The structure factor of microgels at various cross-linked densities and ionic strengths falls onto a master curve when plotted against the effective volume fraction, phi(eff) = kc, which clearly suggests that particle interaction potential and osmotic compressibility is a function of effective volume fraction. In addition, the deviation of the structure factor, S(0), of our microgel systems with the structure factor of hard spheres, S(PY)(0), exhibits a maximum at phi(eff) approximately 0.2. Beyond this point the osmotic de-swelling force exceeds the osmotic pressure inside the soft particles resulting in particle shrinkage. Good agreement was obtained when the structural properties of our microgel systems obtained from turbidimetric analysis and rheology measurements were compared. Therefore, a simple turbidimetric analysis of these model pH-responsive microgel systems permits a quantitative evaluation of factors governing particle osmotic compressibility.
Compressive full waveform lidar
Yang, Weiyi; Ke, Jun
2017-05-01
To avoid a high-bandwidth detector, a high-speed A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in the low-illumination condition.
Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)
2017-07-01
Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and seal failures, leads to breakdowns in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is the on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely dispersed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows greater compression ratios to be achieved using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
Beamforming using compressive sensing.
Edelmann, Geoffrey F; Gaumond, Charles F
2011-10-01
Compressive sensing (CS) is compared with conventional beamforming using horizontal beamforming of at-sea, towed-array data. They are compared qualitatively using bearing time records and quantitatively using signal-to-interference ratio. Qualitatively, CS exhibits lower levels of background interference than conventional beamforming. Furthermore, bearing time records show increasing, but tolerable, levels of background interference when the number of elements is decreased. For the full array, CS generates a signal-to-interference ratio of 12 dB, whereas conventional beamforming yields only 8 dB. The superiority of CS over conventional beamforming is much more pronounced with undersampling.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, in addition to minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Compressive sensing in medical imaging.
Graff, Christian G; Sidky, Emil Y
2015-03-10
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
Speech Compression Using Multecirculerletet Transform
Sulaiman Murtadha
2012-01-01
Full Text Available Compressing speech reduces data storage requirements and thereby the time needed to transmit digitized speech over long-haul links such as the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the current transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and the MCT (in one and two dimensions) on speech compression. DWT and MCT performance is assessed in terms of compression ratio (CR), mean square error (MSE) and peak signal to noise ratio (PSNR). Computer simulation results indicate that the two-dimensional MCT offers a better compression ratio, MSE and PSNR than the DWT.
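As a rough illustration of how CR, MSE and PSNR are computed for a transform-based speech coder, here is a minimal sketch using a single-level Haar DWT as a stand-in; the MCT/GHM transform of the record is not reproduced, and the synthetic signal, threshold and bookkeeping are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """Single-level Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Synthetic "speech" frame: a few tones plus a little noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(0, 0.25, 1 / 8000)                       # 2000 samples at 8 kHz
speech = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 440 * t) \
         + 0.02 * rng.standard_normal(t.size)

a, d = haar_dwt(speech)
d[np.abs(d) < 0.05] = 0.0                              # drop small detail coefficients

recon = haar_idwt(a, d)
kept = np.count_nonzero(a) + np.count_nonzero(d)
cr = speech.size / kept                                # crude compression ratio
mse = np.mean((speech - recon) ** 2)
psnr = 10 * np.log10(np.max(np.abs(speech)) ** 2 / mse)
print(f"CR = {cr:.2f}, MSE = {mse:.2e}, PSNR = {psnr:.1f} dB")
```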
libpolycomp: Compression/decompression library
Tomasi, Maurizio
2016-04-01
Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
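The "polynomial compression" idea, fitting low-degree polynomials to chunks of a smooth, noise-free timeline, can be sketched as follows. This is a simplification under assumed chunk sizes, degrees and error tolerances, not libpolycomp's actual on-disk format or API.

```python
import numpy as np

def poly_compress(samples, chunk=64, max_err=1e-4, max_deg=6):
    """Fit each chunk with the lowest-degree polynomial meeting the error bound."""
    blocks = []
    for i in range(0, len(samples), chunk):
        y = samples[i:i + chunk]
        x = np.linspace(-1.0, 1.0, len(y))
        for deg in range(max_deg + 1):
            coeffs = np.polyfit(x, y, deg)
            if np.max(np.abs(np.polyval(coeffs, x) - y)) <= max_err:
                break
        blocks.append((len(y), coeffs))
    return blocks

def poly_decompress(blocks):
    out = []
    for n, coeffs in blocks:
        x = np.linspace(-1.0, 1.0, n)
        out.append(np.polyval(coeffs, x))
    return np.concatenate(out)

# Smooth, noise-free timeline (e.g. an ephemeris-like quantity).
t = np.linspace(0, 10, 4096)
signal = np.sin(0.3 * t) + 0.1 * t

blocks = poly_compress(signal)
stored = sum(len(c) for _, c in blocks)
print("compression ratio:", signal.size / stored)
print("max abs error:", np.max(np.abs(poly_decompress(blocks) - signal)))
```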
Image Compression using GSOM Algorithm
SHABBIR AHMAD
2015-10-01
Full Text Available
Data compression on the sphere
McEwen, J D; Eyers, D M; 10.1051/0004-6361/201015728
2011-01-01
Large data-sets defined on the sphere arise in many fields. In particular, recent and forthcoming observations of the anisotropies of the cosmic microwave background (CMB) made on the celestial sphere contain approximately three and fifty mega-pixels respectively. The compression of such data is therefore becoming increasingly important. We develop algorithms to compress data defined on the sphere. A Haar wavelet transform on the sphere is used as an energy compression stage to reduce the entropy of the data, followed by Huffman and run-length encoding stages. Lossless and lossy compression algorithms are developed. We evaluate compression performance on simulated CMB data, Earth topography data and environmental illumination maps used in computer graphics. The CMB data can be compressed to approximately 40% of its original size for essentially no loss to the cosmological information content of the data, and to approximately 20% if a small cosmological information loss is tolerated. For the topographic and il...
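Of the encoding stages mentioned, quantization and run-length encoding are easy to sketch; the Haar transform on the sphere and the Huffman stage are omitted. The stand-in coefficient data and quantization step below are assumptions.

```python
import numpy as np

def run_length_encode(values):
    """Encode a 1-D integer array as (value, run_length) pairs."""
    runs = []
    prev, count = values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def run_length_decode(runs):
    return np.concatenate([np.full(n, v) for v, n in runs])

# Stand-in for wavelet coefficients of smooth data: most values are (near) zero.
rng = np.random.default_rng(1)
coeffs = rng.standard_normal(10_000) * (rng.random(10_000) < 0.05)

step = 0.1
quantized = np.round(coeffs / step).astype(int)   # lossy quantization stage
runs = run_length_encode(quantized)

print("values:", quantized.size, "runs:", len(runs))
print("max reconstruction error:", np.max(np.abs(run_length_decode(runs) * step - coeffs)))
```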
Energy transfer in compressible turbulence
Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre
1995-01-01
This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of a weak compressible turbulence based on Eddy-Damped-Quasi-Normal-Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both inertial and energy containing ranges.
Perceptually Lossless Wavelet Compression
Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John
1996-01-01
The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r·2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.
Compressive Sensing DNA Microarrays
Richard G. Baraniuk
2009-01-01
Full Text Available Compressive sensing microarrays (CSMs are DNA-based sensors that operate using group testing and compressive sensing (CS principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM, each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed.
Compressive light field sensing.
Babacan, S Derin; Ansorge, Reto; Luessi, Martin; Matarán, Pablo Ruiz; Molina, Rafael; Katsaggelos, Aggelos K
2012-12-01
We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise-ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.
S. Abhishek
2016-07-01
Full Text Available It is well understood that, in any data acquisition system, reducing the amount of data reduces time and energy, but the major trade-off is the quality of the outcome: normally, the less data sensed, the lower the quality. Compressed Sensing (CS) offers a solution for sampling below the Nyquist rate. The challenging problem of increasing the reconstruction quality with fewer samples from an unprocessed data set is addressed here through representative coordinates selected from splines of different orders. We have made a detailed comparison with 10 orthogonal and 6 biorthogonal wavelets on two sets of data from the MIT Arrhythmia database, and our results show that the spline coordinates work better than the wavelets. The generation of two new types of splines, exponential and double exponential, is also described here. We believe that this is one of the very first attempts at Compressed Sensing based ECG reconstruction using raw data.
Maximum twin shear stress factor criterion for sliding mode fracture initiation
黎振兹; 李慧剑; 黎晓峰; 周洪彬; 郝圣旺
2002-01-01
Previous researches on the mixed mode fracture initiation criteria were mostly focused on opening mode fracture. In this study, the authors proposed a new criterion for mixed mode sliding fracture initiation, which is the maximum twin shear stress factor criterion. The authors studied a finite width plate with central slant crack, subject to a far-field uniform uniaxial tensile or compressive stress.
Mroueh, Youssef; Rosasco, Lorenzo
2013-01-01
We introduce q-ary compressive sensing, an extension of 1-bit compressive sensing. We propose a novel sensing mechanism and a corresponding recovery procedure. The recovery properties of the proposed approach are analyzed both theoretically and empirically. Results in 1-bit compressive sensing are recovered as a special case. Our theoretical results suggest a tradeoff between the quantization parameter q, and the number of measurements m in the control of the error of the resulting recovery a...
Introduction to compressible fluid flow
Oosthuizen, Patrick H
2013-01-01
Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices
PHYSICAL MODELING OF ODOMETRIC COMPRESSION OF SAND
Lyashenko P. A.
2016-10-01
Full Text Available The odometric compression of sand at a constant rate of loading (CRL) or a constant rate of deformation (CRD), with continuous registration of the corresponding reaction, makes it possible to identify the effect of stepwise changes of deformation (under CRL) or of the force reaction (under CRD). Physical modeling of compression on a sandy model showed the same effect. The physical model was made of fine sand with markers mimicking large inclusions. Compression of the soil under CRD was uneven and stepwise, and the strain rate of the upper boundary of the sandy model changed cyclically; the maximum amplitudes of the cycles passed through a maximum. Inside the sand model, the uneven strain resulted in mutual displacement of adjacent parts located at the same depth. As the external pressure grew, the markers showed increasing or decreasing displacements, and some even moved opposite to the direction of movement (settlement) of the upper boundary of the model, a "floating" of the markers. Markers at different depths underwent different, sometimes mutually contradictory, movements at the same time. The marker settlements grew suddenly once the pressure was sufficiently large, and these increments in settlement, decreasing with depth, remained until the end of loading. They confirm the hypothesis of total destruction of the soil sample at a pressure equal to its "structural strength". The hypothesized cause of the "floating" rests on the obvious assumption that the markers move together with the surrounding sand, and the explanation of the "floating" effect is supported by the fact that the deeper the marker, the greater the "floating".
Watermarking Based on Principal Component Analysis
王朔中
2000-01-01
A new watermarking scheme using principal component analysis (PCA) is described. The proposed method inserts highly robust watermarks into still images without degrading their visual quality. Experimental results are presented, showing that the PCA-based watermarks can resist malicious attacks including lowpass filtering, re-scaling, and compression coding.
Compressive sensing of sparse tensors.
Friedland, Shmuel; Li, Qun; Schonfeld, Dan
2014-10-01
Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than that offered by KCS.
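The contrast between vectorized (Kronecker) sensing and mode-wise sensing that underlies GTCS versus KCS can be checked numerically with the identity vec(A X Bᵀ) = (B ⊗ A) vec(X) for column-major vectorization; the sketch below uses small illustrative dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 6, 5          # signal (matrix) dimensions
m1, m2 = 3, 4          # per-mode measurement counts

X = rng.standard_normal((n1, n2))      # the matrix-valued signal
A = rng.standard_normal((m1, n1))      # mode-1 sensing matrix
B = rng.standard_normal((m2, n2))      # mode-2 sensing matrix

# Mode-wise acquisition keeps the data in matrix form (m1 x m2 measurements) ...
Y_modewise = A @ X @ B.T

# ... and is equivalent to sensing vec(X) with the much larger Kronecker matrix.
vec = lambda M: M.flatten(order="F")   # column-major vectorization
y_kron = np.kron(B, A) @ vec(X)

print("equivalent:", np.allclose(vec(Y_modewise), y_kron))
print("Kronecker matrix size:", (m1 * m2, n1 * n2), "vs factor sizes:", A.shape, B.shape)
```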
Uncommon upper extremity compression neuropathies.
Knutsen, Elisa J; Calfee, Ryan P
2013-08-01
Hand surgeons routinely treat carpal and cubital tunnel syndromes, which are the most common upper extremity nerve compression syndromes. However, more infrequent nerve compression syndromes of the upper extremity may be encountered. Because they are unusual, the diagnosis of these nerve compression syndromes is often missed or delayed. This article reviews the causes, proposed treatments, and surgical outcomes for syndromes involving compression of the posterior interosseous nerve, the superficial branch of the radial nerve, the ulnar nerve at the wrist, and the median nerve proximal to the wrist.
Image Compression Algorithms Using Dct
Er. Abhishek Kaushik
2014-04-01
Full Text Available Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented using MATLAB code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyse and study the results of image compression using the DCT, and varying numbers of coefficients were used to show the resulting image and the error image relative to the original images. Image compression is studied using the 2-D discrete cosine transform: the original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image, with the inverse DCT performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed images) is displayed. The error value for every image is calculated over the various numbers of DCT coefficients selected by the user and is displayed at the end to assess the accuracy and compression of the resulting image; the resulting performance parameter is reported in terms of MSE, i.e., mean square error.
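A minimal sketch of the block-DCT scheme described, implemented here with a hand-built orthonormal DCT matrix rather than the MATLAB IMAP/IMAQ blocks; the test image, block size and number of retained coefficients are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix: C @ x computes the 1-D DCT of x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def compress_block(block, C, keep):
    coeffs = C @ block @ C.T                 # 2-D DCT of an 8x8 block
    mask = np.zeros_like(coeffs, dtype=bool)
    top = np.argsort(np.abs(coeffs), axis=None)[::-1][:keep]
    mask[np.unravel_index(top, coeffs.shape)] = True
    return C.T @ (coeffs * mask) @ C         # inverse DCT of the kept coefficients

# Synthetic smooth test image (stand-in for a natural image).
x, y = np.meshgrid(np.arange(64), np.arange(64))
img = 128 + 60 * np.sin(x / 9.0) * np.cos(y / 13.0)

C = dct_matrix(8)
keep = 10                                     # coefficients kept per 8x8 block
recon = np.zeros_like(img)
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        recon[r:r+8, c:c+8] = compress_block(img[r:r+8, c:c+8], C, keep)

mse = np.mean((img - recon) ** 2)
print(f"kept {keep}/64 coefficients per block, MSE = {mse:.3f}")
```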
Maximum-entropy probability distributions under Lp-norm constraints
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L sub p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L sub p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L sub p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
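For the unconstrained continuous case, the maximizing density is the generalized Gaussian, a standard result consistent with the straight-line relationship stated above. The short sketch below takes the Lp norm to mean (E|X|^p)^(1/p) and is included only as a consistency check, not as a restatement of the report's tabulations.

```latex
% Maximum differential entropy with a fixed p-th absolute moment (unconstrained real support).
% The maximizer is the generalized Gaussian f(x) ~ exp(-|x/a|^p); stated here as a
% consistency check with the abstract's "straight line" claim.
\[
  f^\star(x) = \frac{\exp\!\left(-|x/a|^{p}\right)}{2\,a\,\Gamma\!\left(1+\tfrac1p\right)},
  \qquad
  \mathbb{E}\,|X|^{p} = \frac{a^{p}}{p}
  \;\Longrightarrow\;
  \|X\|_{p} = a\,p^{-1/p},
\]
\[
  h_{\max} = \ln\!\big(2\,a\,\Gamma(1+\tfrac1p)\big) + \frac1p
           = \ln\|X\|_{p} + \ln\!\big(2\,\Gamma(1+\tfrac1p)\big) + \frac{1+\ln p}{p},
\]
% i.e. the maximum differential entropy is linear in ln ||X||_p with unit slope.
```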
Compression and texture in socks enhance football kicking performance.
Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham
2016-08-01
The purpose of this study was to observe effects of wearing textured insoles and clinical compression socks on organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six completely novice football players (15.4 ± 0.9 years) performed 20 instep kicks with maximum velocity, in four randomly organised insole and sock conditions: (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI); and (d) Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and increased initial ball velocity, among advanced learners compared to the use of non-textured and compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance, captured by these important measures. This improvement in kicking performance is likely to have occurred through enhanced somatosensory system feedback utilised for foot placement and movement organisation of the lower limbs. Data suggested that advanced learners were better at harnessing the augmented feedback information from compression and texture to regulate emerging movement patterns compared to novices.
Acceleration of dynamic fluorescence molecular tomography with principal component analysis.
Zhang, Guanglei; He, Wei; Pu, Huangsheng; Liu, Fei; Chen, Maomao; Bai, Jing; Luo, Jianwen
2015-06-01
Dynamic fluorescence molecular tomography (FMT) is an attractive imaging technique for three-dimensionally resolving the metabolic process of fluorescent biomarkers in small animals. When combined with compartmental modeling, dynamic FMT can be used to obtain parametric images which can provide quantitative pharmacokinetic information for drug development and metabolic research. However, the computational burden of dynamic FMT is extremely heavy due to the large data sets arising from the long measurement process and the dense sampling of the device. In this work, we propose to accelerate the reconstruction process of dynamic FMT based on principal component analysis (PCA). Taking advantage of the compression property of PCA, the dimension of the sub weight matrix used for solving the inverse problem is reduced by retaining only a few principal components that retain most of the effective information of the sub weight matrix. Therefore, the reconstruction process of dynamic FMT can be accelerated by solving the smaller-scale inverse problem. Numerical simulations and a mouse experiment are performed to validate the performance of the proposed method. Results show that the proposed method can greatly accelerate the reconstruction of parametric images in dynamic FMT almost without degradation in image quality.
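A minimal sketch of the general idea, compressing a large weight matrix to its leading principal components and solving the reduced inverse problem; the matrix construction, component count and plain least-squares solver are illustrative assumptions, not the paper's dynamic FMT pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 4000, 500, 40            # measurements, unknowns, retained principal components

# A large weight matrix whose information is concentrated in a few components
# (illustrative construction; dynamic FMT sub weight matrices behave similarly).
W = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
x_true = rng.standard_normal(n)
y = W @ x_true + 0.01 * rng.standard_normal(m)

# Full-size least-squares solve.
x_full, *_ = np.linalg.lstsq(W, y, rcond=None)

# PCA compression of W: keep the leading k components, then solve the small system.
# In a dynamic setting the decomposition is computed once and reused for every time frame.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
Uk, sk, Vkt = U[:, :k], s[:k], Vt[:k, :]
x_pca = Vkt.T @ ((Uk.T @ y) / sk)                 # k-dimensional data, k x n effective system

rel_diff = np.linalg.norm(x_full - x_pca) / np.linalg.norm(x_full)
print("relative difference between full and PCA-reduced solutions:", rel_diff)
```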
Untapped Resources: Assistant Principals as Instructional Leaders
Bartholomew, Selma K.; Melendez-Delaney, Genis; Orta, Awilda; White, Sharon
2005-01-01
Assistant principals are often overlooked as a resource for creating, advancing, and sustaining a compelling vision for mathematics. The Math Collaborative Project developed in New York City examined the process of developing and implementing programs designed to help assistant principals network and strengthen their instructional leadership…
New Principals' Perspectives of Their Multifaceted Roles
Gentilucci, James L.; Denti, Lou; Guaglianone, Curtis L.
2013-01-01
This study utilizes Symbolic Interactionism to explore perspectives of neophyte principals. Findings explain how these perspectives are modified through complex interactions throughout the school year, and they also suggest preparation programs can help new principals most effectively by teaching "soft" skills such as active listening…
Perceived Educational Values of Omani School Principals
Al-Ani, Wajeha Thabit; Al-Harthi, Aisha Salim
2017-01-01
This qualitative study investigated the perceived educational values of Omani school principals. Data were collected using a semi-structured interview form which focused on the core values of school administration as perceived by a sample of 44 school principals; a focus group interview was also held. Data were analysed using Nvivo software. The…
Principals: Human Capital Managers at Every School
Kimball, Steven M.
2011-01-01
Being a principal is more than just being an instructional leader. Principals also must manage their schools' teaching talent in a strategic way so that it is linked to school instructional improvement strategies, to the competencies needed to enact the strategies, and to success in boosting student learning. Teacher acquisition and performance…
Assistant Principals: Their Readiness as Instructional Leaders
Searby, Linda; Browne-Ferrigno, Tricia; Wang, Chih-hsuan
2017-01-01
This article reports findings from a study investigating the capacity of assistant principals to be instructional leaders. Analyses of survey responses yielded four interesting findings: (a) years of experience as a teacher and age had no significance on assistant principals' perceived readiness as an instructional leader; (b) those completing…
Career Paths of Female Elementary Assistant Principals
Baier, Hope C.
2013-01-01
The purpose of this research was to explore the worklife experiences and personal issues of female elementary assistant principals and examine the influence of these factors on their intent to remain in their position or leave. The worklife experiences and perceptions of female elementary assistant principals were categorized as institutional or…
Assistant Principals and Reform: A Socialization Paradox?
Best, Marguerita L.
2013-01-01
Framed in the critical race theory of structuration (CRTS), this sequential explanatory mixed methods study seeks to identify the socialization practices by examining the realities of practices of assistant principals and the ways in which they impact the disciplinary actions of assistant principals at middle and high schools. The mixed methods…
Principals as Maverick Leaders: Rethinking Democratic Schools
Walker, Sharron Goldman; Chirichello, Michael
2011-01-01
After her school wins the coveted United States National Secondary Education Award, a school principal embarks upon an educational odyssey. The principal discovers that the reasons for winning the award are a sham! As her school falls apart, she begins to reflect on the stagnant school organization and the ineffective prescriptions for…
Principals' Perceptions of School Public Relations
Morris, Robert C.; Chan, Tak Cheung; Patterson, Judith
2009-01-01
This study was designed to investigate school principals' perceptions on school public relations in five areas: community demographics, parental involvement, internal and external communications, school council issues, and community resources. Findings indicated that principals' concerns were as follows: rapid population growth, change of…
Principals' Pupil Control Behavior and School Robustness.
Smedley, Stanley R.; Willower, Donald J.
1981-01-01
A survey of 3,100 students, teachers, and principals in 47 elementary and secondary schools in the Middle Atlantic region, using the Pupil Control Behavior Form, revealed a positive association between principals' humanistic pupil control behavior and schools'"robustness" (the degree of meaning and excitement students find in school).…
Geographical Distribution of Principals in Israeli Schools
Lebental, Dana M.
2015-01-01
This quantitative investigation focuses on women high school principals at Jewish secular schools throughout Israel. Despite challenges, Israeli women have succeeded in obtaining over half of the principal positions at Jewish secular high schools, but the degree to which there is equal gender access to leadership roles in the school system remains…
A Latina Principal Leading for Social Justice
Hernandez, Frank; Murakami, Elizabeth T.; Cerecer, Patricia Quijada
2014-01-01
In this study, the role that racial identity plays among Latina school principals is examined through a case study of a principal in a K-3 elementary school. Based on a Latina/o critical race framework and a phenomenological research approach, the study explores the degree to which having a strong understanding of one's racial identity…
Instructional Leadership: Are Women Principals Better?
Andrews, Richard L.; Basom, Margaret R.
1990-01-01
A 1984 study found that female elementary school principals spent 38.4 percent of their time on instructional leadership activities, while their male counterparts spent only 21.8 percent. A 1989 follow-up study found that women principals were more likely to be seen by their staffs as instructional leaders. A sidebar examines sex discrimination in…
Evaluation of Principals; Leadership Excellence Achievement Plan.
Redfern, George B.; Hersey, Paul W.
1981-01-01
The Leadership Excellence Achievement Plan (LEAP) presented here is a way for principals to improve leadership ability with an emphasis on evaluation. It first recommends formulating a precise definition of the principal's job divided into three areas: technical competencies, administrative skills, and performance goals. Among the technical…
Micropolitics: Empowering Principals to Accomplish Goals.
Cilo, Daniel C.
1994-01-01
Examines how principals manage in situations that defy conventional administrative authority and methods. Fully 80% of the 30 Pennsylvania high school principals interviewed admitted using at least 1 micropolitical strategy, such as exchange theory, divide and conquer, information control, cooptation, displacement, and discretionary behavior. Most…
Elementary Principals' Role in Science Instruction
Casey, Patricia; Dunlap, Karen; Brown, Kristen; Davison, Michele
2012-01-01
This study explores the role elementary school principals play in science education. Specifically, the study employed an online survey of 16 elementary school principals at high-performing campuses in North Texas to explore their perceptions of how they influenced science education on their campuses. The survey used a combination of Likert-type…
Permutation Tests in Principal Component Analysis.
Pohlmann, John T.; Perkins, Kyle; Brutten, Shelia
Structural changes in an English as a Second Language (ESL) 30-item reading comprehension test were examined through principal components analysis on a small sample (n=31) of students. Tests were administered on three occasions during intensive ESL training. Principal components analysis of the items was performed for each test occasion.…
Social Media Strategies for School Principals
Cox, Dan; McLeod, Scott
2014-01-01
The purpose of this qualitative study was to describe, analyze, and interpret the experiences of school principals who use multiple social media tools with stakeholders as part of their comprehensive communications practices. Additionally, it examined why school principals have chosen to communicate with their stakeholders through social media.…
Women Principals Leading Learning at "Poverty's Edge"
Lyman, Linda L.
2008-01-01
The author profiles two women principals of color who have successfully enhanced student learning in high-poverty schools. In their leadership narratives, the principals address how the complexity of poverty affects their work, how they affirm the worth and dignity of all, how they influence beliefs and attitudes of staff, why they think their…
Instructional or Managerial Leadership: The Principal Role!
Jazzar, Michael
2004-01-01
"Instructional Or Managerial Leadership: The Principal Role" is a case study written to challenge the beliefs of graduate students preparing for educational leadership roles and educational leaders already in these positions as to the importance of the principal as an instructional leader. This case explores communication between superintendents…
Superintendents' Perceptions of the Principal Shortage
Pijanowski, John C.; Hewitt, Paul M.; Brady, Kevin P.
2009-01-01
The research literature on the principal shortage is inconsistent regarding the actual scope of the shortage and a clear articulation of factors contributing to the successful recruitment and retention of today's school leaders. Often, critical data related to the principal shortage are ignored, including the number of younger principals…
Exploring Principals' Perceptions of Supervised Agricultural Experience
Rayfield, John; Wilson, Elizabeth
2009-01-01
This study explored the perceptions of principals at high schools with agricultural education programs in regard to Supervised Agricultural Experience (SAE). There is evidence that suggests that high school principals' attitudes may both directly and indirectly affect factors that influence school climate and student achievement. In this study,…
Investigating Roles of Online School Principals
Quilici, Sarah B.; Joki, Russell
2012-01-01
This study explores the instructional leadership skills required from online principals, as defined by one state's (Idaho) adaptation of the Interstate School Leaders Licensure Consortium (ISLLC) standards (1996) as a requirement for professional certification. Specifically, this qualitative study examined six sets of paired online principals and…
School Principal Speech about Fiscal Mismanagement
Hassenpflug, Ann
2015-01-01
A review of two recent federal court cases concerning school principals who experienced adverse job actions after they engaged in speech about fiscal misconduct by other employees indicates that the courts found that the principal's speech was made as part of his or her job duties and was not protected by the First Amendment.
Estimating Principal Effectiveness. Working Paper 32
Branch, Gregory; Hanushek, Eric; Rivkin, Steven
2009-01-01
Much has been written about the importance of school leadership, but there is surprisingly little systematic evidence on this topic. This paper presents preliminary estimates of key elements of the market for school principals, employing rich panel data on principals from Texas State. The consideration of teacher movements across schools suggests…
Framing Research on School Principals' Identities
Crow, Gary; Day, Christopher; Møller, Jorunn
2017-01-01
This paper provides a basis for a tentative framework for guiding future research into principals' identity construction and development. It is situated in the context of persisting emphases placed by government policies on the need for technocratic competencies in principals as a means of demonstrating success defined largely as compliance with…
Should Principals Know More about Law?
Doctor, Tyrus L.
2013-01-01
Educational law is a critical piece of the education conundrum. Principals reference law books on a daily basis in order to address the wide range of complex problems in the school system. A principal's knowledge of law issues and legal decision-making are essential to provide effective feedback for a successful school.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem till date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algori...
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
An Approach Towards Lossless Compression Through Artificial Neural Network Technique
Mayur Prakash
2015-07-01
Full Text Available An image contains significant information and demands a large amount of memory. This large amount of information also leads to longer transmission times from transmitter to receiver. The transmission time can be reduced by using data compression techniques, which remove the redundant information in an image. The compressed image requires less storage and less time to transmit from transmitter to receiver. An artificial neural network with feed-forward back propagation can be used for image compression. In this paper, the Bipolar Coding method is proposed and implemented for image compression and obtains better results than the Principal Component Analysis (PCA) method. In addition, the LM algorithm is proposed and implemented, and acts as a powerful method for image compression. It is observed that Bipolar Coding and the LM algorithm fit best for image compression and processing applications.
Axial Compressive Strength of Foamcrete with Different Profiles and Dimensions
Othuman Mydin M.A.
2014-01-01
Full Text Available Lightweight foamcrete is a versatile material; it primarily consists of a cement-based mortar mixed with at least 20% air by volume. High flowability, low self-weight, minimal aggregate requirements, controlled low strength and good thermal insulation properties are a few characteristics of foamcrete. Its dry density is typically below 1600 kg/m3, with compressive strengths up to a maximum of 15 MPa. The ASTM standard provision specifies a correction factor for concrete strengths between 14 and 42 MPa to compensate for the reduced strength when the height-to-diameter aspect ratio of the specimen is less than 2.0, while the CEB-FIP provision specifically refers to the ratio of 150 x 300 mm cylinder strength to 150 mm cube strength. However, neither provision specifically clarifies the applicability or modification of these correction factors for the compressive strength of foamcrete. The laboratory work proposed here is intended to study the effect of different dimensions and profiles on the axial compressive strength of concrete. Specimens of various dimensions and profiles are cast with square and circular cross-sections, i.e., cubes, prisms and cylinders, and their behavior in compressive strength is investigated at 7 and 28 days. Hypothetically, compressive strength will decrease as specimen dimensions increase, and a cube specimen should yield compressive strength comparable to that of a cylinder (100 x 100 x 100 mm cube versus 100 mm diameter x 200 mm cylinder).
Compressive sensing by learning a Gaussian mixture model from measurements.
Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence
2015-01-01
Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
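The closed-form reconstruction step referred to above (the MMSE estimate under a known GMM prior) can be sketched as follows; the in-situ MMLE learning of the GMM from measurements is not reproduced, and the signal model, dimensions and noise level are illustrative assumptions.

```python
import numpy as np

def gmm_mmse(y, Phi, weights, means, covs, sigma2):
    """Closed-form MMSE estimate of x from y = Phi @ x + noise, with a GMM prior on x."""
    m = len(y)
    log_w, cond_means = [], []
    for pi_k, mu_k, Sig_k in zip(weights, means, covs):
        S = Phi @ Sig_k @ Phi.T + sigma2 * np.eye(m)      # marginal covariance of y | component k
        r = y - Phi @ mu_k
        sol = np.linalg.solve(S, r)
        _, logdet = np.linalg.slogdet(S)
        log_w.append(np.log(pi_k) - 0.5 * (logdet + r @ sol + m * np.log(2 * np.pi)))
        cond_means.append(mu_k + Sig_k @ Phi.T @ sol)      # posterior mean given component k
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                           # component responsibilities
    return sum(wk * mk for wk, mk in zip(w, cond_means))

rng = np.random.default_rng(0)
n, m, K = 64, 20, 3
weights = np.full(K, 1.0 / K)
means = [rng.standard_normal(n) for _ in range(K)]
covs = []
for _ in range(K):
    B = rng.standard_normal((n, 4))
    covs.append(B @ B.T + 0.01 * np.eye(n))                # low-rank-plus-identity covariance

x = rng.multivariate_normal(means[1], covs[1])             # signal drawn from component 1
Phi = rng.standard_normal((m, n)) / np.sqrt(m)             # compressive measurement matrix
sigma2 = 1e-4
y = Phi @ x + np.sqrt(sigma2) * rng.standard_normal(m)

x_hat = gmm_mmse(y, Phi, weights, means, covs, sigma2)
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```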
Conceptual design of heavy ion beam compression using a wedge
Jonathan C. Wong
2015-10-01
Full Text Available Heavy ion beams are a useful tool for conducting high energy density physics (HEDP) experiments. Target heating can be enhanced by beam compression, because a shorter pulse diminishes hydrodynamic expansion during irradiation. A conceptual design is introduced to compress ∼100 MeV/u to ∼GeV/u heavy ion beams using a wedge. By deflecting the beam with a time-varying field and placing a tailor-made wedge amid its path downstream, each transverse slice passes through matter of different thickness. The resulting energy loss creates a head-to-tail velocity gradient, and the wedge shape can be designed by using stopping power models to give maximum compression at the target. The compression ratio at the target was found to vary linearly with the ratio of head-to-tail centroid offset to spot radius at the wedge. This ratio should be approximately 10 to attain tenfold compression. The decline in beam quality due to projectile ionization, energy straggling, fragmentation, and scattering is shown to be acceptable for well-chosen wedge materials. A test experiment is proposed to verify the compression scheme and to study the beam-wedge interaction and its associated beam dynamics, which will facilitate further efforts towards a HEDP facility.
An underwater acoustic data compression method based on compressed sensing
郭晓乐; 杨坤德; 史阳; 段睿
2016-01-01
The use of underwater acoustic data has rapidly expanded with the application of multichannel, large-aperture underwater detection arrays. This study presents an underwater acoustic data compression method based on compressed sensing. Underwater acoustic signals are transformed into the sparse domain for data storage at a receiving terminal, and the improved orthogonal matching pursuit (IOMP) algorithm is used to reconstruct the original underwater acoustic signals at a data processing terminal. Even when an increase in sidelobe level occasionally causes a direction-of-arrival estimation error, the proposed compression method can achieve 10 times stronger compression for narrowband signals and 5 times stronger compression for wideband signals than the orthogonal matching pursuit (OMP) algorithm. The IOMP algorithm also reduces the computing time by about 20% compared with the original OMP algorithm. The simulation and experimental results are discussed.
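The paper's IOMP modifications are not spelled out in the abstract, so the sketch below shows only baseline orthogonal matching pursuit for the sparse-domain reconstruction step, with illustrative dimensions and sparsity.

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Orthogonal matching pursuit: greedily select k atoms of A to explain y."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x                          # re-orthogonalize against chosen atoms
    return x

rng = np.random.default_rng(0)
n, m, k = 512, 128, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)          # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

x_hat = omp(A, y, k)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```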
Non-random structures in universal compression and the Fermi paradox
Gurzadyan, A. V.; Allahverdyan, A. E.
2016-02-01
We study the hypothesis of information panspermia, recently proposed among possible solutions of the Fermi paradox ("where are the aliens?"). It suggests that the expense of alien signaling can be significantly reduced if their messages contain compressed information. To this end we consider universal compression and decoding mechanisms (e.g. the Lempel-Ziv-Welch algorithm) that can reveal non-random structures in compressed bit strings. The efficiency of the Kolmogorov stochasticity parameter for detection of non-randomness is illustrated, along with Zipf's law. The universality of these methods, i.e. their independence from data details, can be a principal advantage in searching for intelligent messages.
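As a toy illustration of the underlying idea, that structured messages compress well while random strings do not, the following sketch uses zlib's DEFLATE as an off-the-shelf stand-in for the LZW and Kolmogorov-stochasticity machinery discussed in the record.

```python
import os
import zlib

def compressed_fraction(data: bytes) -> float:
    """Size after DEFLATE compression relative to the original size."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"the quick brown fox jumps over the lazy dog. " * 500
random_bytes = os.urandom(len(structured))

print("structured message :", round(compressed_fraction(structured), 3))
print("random bytes       :", round(compressed_fraction(random_bytes), 3))
# A ratio well below 1 flags non-random structure; incompressible strings look random.
```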
Berger, Jens; Frankenfeld, Ulrich; Lindenstruth, Volker; Plamper, Patrick; Roehrich, Dieter; Schaefer, Erich; W. Schulz, Markus; M. Steinbeck, Timm; Stock, Reinhard; Sulimma, Kolja; Vestboe, Anders; Wiebalck, Arne E-mail: wiebalck@kip.uni-heidelberg.de
2002-08-21
In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several ten thousand charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques applied on real data from the CERN SPS experiment NA49 and on simulated data from the future CERN LHC experiment ALICE.
Ockendon, Hilary
2016-01-01
Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications. New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises. Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science. Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...
Central cooling: compressive chillers
Christian, J.E.
1978-03-01
Representative cost and performance data are provided in a concise, usable form for three types of compressive liquid packaged chillers: reciprocating, centrifugal, and screw. The data are represented in graphical form as well as in empirical equations. Reciprocating chillers are available from 2.5 to 240 tons with full-load COPs ranging from 2.85 to 3.87. Centrifugal chillers are available from 80 to 2,000 tons with full-load COPs ranging from 4.1 to 4.9. Field-assembled centrifugal chillers have been installed with capacities up to 10,000 tons. Screw-type chillers are available from 100 to 750 tons with full-load COPs ranging from 3.3 to 4.5.
Vitanyi, Paul M B
2011-01-01
First we consider pair-wise distances for literal objects consisting of finite binary files. These files are taken to contain all of their meaning, like genomes or books. The distances are based on compression of the objects concerned, normalized, and can be viewed as similarity distances. Second, we consider pair-wise distances between names of objects, like "red" or "christianity." In this case the distances are based on searches of the Internet. Such a search can be performed by any search engine that returns aggregate page counts. We can extract a code length from the numbers returned, use the same formula as before, and derive a similarity or relative semantics between names for objects. The theory is based on Kolmogorov complexity. We test both similarities extensively experimentally.
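The compression-based similarity described in this line of work is usually computed as the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch, assuming a general-purpose compressor (zlib here) stands in for the ideal Kolmogorov-complexity code length; the helper name and test strings are illustrative:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, using zlib as the compressor."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two related strings should come out closer than two unrelated ones.
a = b"the quick brown fox jumps over the lazy dog" * 10
b = b"the quick brown fox leaps over the lazy dog" * 10
c = b"completely different content with no shared phrases" * 10
print(ncd(a, b), ncd(a, c))
```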
Adaptively Compressed Exchange Operator
Lin, Lin
2016-01-01
The Fock exchange operator plays a central role in modern quantum chemistry. The large computational cost associated with the Fock exchange operator hinders Hartree-Fock calculations and Kohn-Sham density functional theory calculations with hybrid exchange-correlation functionals, even for systems consisting of hundreds of atoms. We develop the adaptively compressed exchange operator (ACE) formulation, which greatly reduces the computational cost associated with the Fock exchange operator without loss of accuracy. The ACE formulation does not depend on the size of the band gap, and thus can be applied to insulating, semiconducting as well as metallic systems. In an iterative framework for solving Hartree-Fock-like systems, the ACE formulation only requires moderate modification of the code, and can be potentially beneficial for all electronic structure software packages involving exchange calculations. Numerical results indicate that the ACE formulation can become advantageous even for small systems with tens...
Time series analysis by the Maximum Entropy method
Kirk, B.L.; Rust, B.W.; Van Winkle, W.
1979-01-01
The principal subject of this report is the use of the Maximum Entropy method for spectral analysis of time series. The classical Fourier method is also discussed, mainly as a standard for comparison with the Maximum Entropy method. Examples are given which clearly demonstrate the superiority of the latter method over the former when the time series is short. The report also includes a chapter outlining the theory of the method, a discussion of the effects of noise in the data, a chapter on significance tests, a discussion of the problem of choosing the prediction filter length, and, most importantly, a description of a package of FORTRAN subroutines for making the various calculations. Cross-referenced program listings are given in the appendices. The report also includes a chapter demonstrating the use of the programs by means of an example. Real time series like the lynx data and sunspot numbers are also analyzed. 22 figures, 21 tables, 53 references.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
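The background-only versus background-plus-source comparison can be illustrated with a toy Poisson likelihood ratio. This is not the Sherpa/MLE implementation; the pixel counts, the injected source, and the helper names below are assumptions made for illustration only:

```python
import numpy as np

def poisson_loglike(counts, model):
    """Poisson log-likelihood, dropping the data-only log(n!) term."""
    model = np.clip(model, 1e-12, None)
    return float(np.sum(counts * np.log(model) - model))

rng = np.random.default_rng(0)
npix = 25                                   # pixels in the candidate source region
bkg_rate = 0.2                              # assumed background counts per pixel
counts = rng.poisson(bkg_rate, npix)
counts[12] += 8                             # inject a point-like source of 8 counts

bkg_est = counts[np.arange(npix) != 12].mean()   # background fitted from surrounding pixels
bkg_model = np.full(npix, bkg_est)               # background-only hypothesis
src_model = bkg_model.copy()
src_model[12] += counts[12] - bkg_est            # background-plus-source hypothesis

delta = poisson_loglike(counts, src_model) - poisson_loglike(counts, bkg_model)
print(f"log-likelihood gain of the source hypothesis: {delta:.1f}")
```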
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
Adaptive compressive sensing camera
Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold
2013-05-01
We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple concept that each pixel is a charge bucket whose charges come from the Einstein photoelectric conversion effect. Applying a manufacturing design principle, we allow each working component to be altered by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The savings in data storage are immense, and the order of magnitude of the saving is inversely proportional to the target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that decides whether to skip or admit a frame at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at each bucket pixel, biasing the charge transport voltage toward neighboring buckets or, if not, to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression, the Huffman entropy codec, or the powerful WaveNet wrapper at the sensor level. We compare (i) pre-processing by FFT, thresholding of the significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the degree of information (d.o.i.) K(t), determined during new-frame selection by the SAH circuitry, dictates the purely random linear sparse combination of measurement data via [Φ]_{M,N} with M(t) = K(t) log N(t).
The Rookie's Playbook: Insights and Dirt for New Principals.
Tooms, Autumn
2003-01-01
Principal shares lessons and insights with beginning principals. Discusses differences between principals and assistant principals, staff relationships, competition for resources, giving and receiving loyalty, identifying and following a moral compass. (PKP)
HE Zhen-jun; SONG Yu-pu
2008-01-01
Multiaxial compression tests were performed on 100 mm × 100 mm × 100 mm high-strength high-performance concrete (HSHPC) cubes and normal strength concrete (NSC) cubes. The failure modes of the specimens are presented, the static compressive strengths in the principal directions were measured, and the influence of the stress ratios was analyzed. The experimental results show that the ultimate strengths of HSHPC and NSC under multiaxial compression are greater than the uniaxial compressive strengths at all stress ratios, and that the multiaxial strength depends on the brittleness and stiffness of the concrete, the stress state, and the stress ratios. In addition, the Kupfer-Gerstle and Ottosen failure criteria for plain HSHPC and NSC under multiaxial compressive loading were modified.
Grissom, Jason A.; Loeb, Susanna; Mitani, Hajime
2015-01-01
Purpose: Time demands faced by school principals make principals' work increasingly difficult. Research outside education suggests that effective time management skills may help principals meet job demands, reduce job stress, and improve their performance. The purpose of this paper is to investigate these hypotheses. Design/methodology/approach:…
What Is a "Good" Principal? Perspectives of Aspiring Principals in Singapore
Ng, Pak Tee
2016-01-01
This paper presents the findings of an exploratory research project that examines what aspiring principals in Singapore think a good principal is, based on a framework of personal, interpersonal, and organizational dimensions of school leadership. According to the findings, a good principal has a moral purpose centered on personal values, a humble…
Principal Self-Efficacy and Work Engagement: Assessing a Norwegian Principal Self-Efficacy Scale
Federici, Roger A.; Skaalvik, Einar M.
2011-01-01
One purpose of the present study was to develop and test the factor structure of a multidimensional and hierarchical Norwegian Principal Self-Efficacy Scale (NPSES). Another purpose of the study was to investigate the relationship between principal self-efficacy and work engagement. Principal self-efficacy was measured by the 22-item NPSES. Work…
Principals and Blogs: In What Ways Does Blogging Support the Practices of School Principals?
Engebritson, Reggie Marie
2011-01-01
This study explores the factors that motivate school principals to blog and the effectiveness of those blogs in terms of instructional and technology leadership. Participants were school principals who blog and were sent a web-based survey. Fifty responded. Results indicate that principals blog to communicate to others, including parents,…
Application specific compression : final report.
Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.
2008-12-01
With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
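The zero-the-small-coefficients idea described above can be sketched with a single-level Haar transform; the signal, the threshold value, and the helper names are illustrative stand-ins, not the wavelet or settings used in the report:

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar wavelet transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass) coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
signal = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.02 * rng.standard_normal(t.size)

a, d = haar_forward(signal)
threshold = 0.05                               # assumed noise-level threshold
d_compressed = np.where(np.abs(d) > threshold, d, 0.0)

reconstructed = haar_inverse(a, d_compressed)
zeroed = 1.0 - np.count_nonzero(d_compressed) / d.size
rms = np.sqrt(np.mean((signal - reconstructed) ** 2))
print(f"detail coefficients zeroed: {zeroed:.1%}, rms error: {rms:.4f}")
```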
Streaming Compression of Hexahedral Meshes
Isenburg, M; Courbet, C
2010-02-03
We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
Data Compression with Linear Algebra
Etler, David
2015-01-01
A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
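A compact illustration of the DCT, thresholding, and reconstruction pipeline covered in that presentation, using SciPy's multidimensional DCT on a synthetic image; the 5% keep fraction and the image itself are arbitrary choices, not material from the presentation:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# Synthetic 64x64 "image": a smooth ramp plus mild noise.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
image = 128 * (x + y) + 4 * rng.standard_normal((64, 64))

coeffs = dctn(image, norm='ortho')            # 2-D DCT of the whole image
keep = 0.05                                   # keep the largest 5% of coefficients
cutoff = np.quantile(np.abs(coeffs), 1 - keep)
compressed = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

restored = idctn(compressed, norm='ortho')
psnr = 10 * np.log10(255 ** 2 / np.mean((image - restored) ** 2))
print(f"nonzero coefficients kept: {np.count_nonzero(compressed)}, PSNR: {psnr:.1f} dB")
```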
Compressed sensing for body MRI.
Feng, Li; Benkert, Thomas; Block, Kai Tobias; Sodickson, Daniel K; Otazo, Ricardo; Chandarana, Hersh
2017-04-01
The introduction of compressed sensing for increasing imaging speed in magnetic resonance imaging (MRI) has raised significant interest among researchers and clinicians, and has initiated a large body of research across multiple clinical applications over the last decade. Compressed sensing aims to reconstruct unaliased images from fewer measurements than are traditionally required in MRI by exploiting image compressibility or sparsity. Moreover, appropriate combinations of compressed sensing with previously introduced fast imaging approaches, such as parallel imaging, have demonstrated further improved performance. The advent of compressed sensing marks the prelude to a new era of rapid MRI, where the focus of data acquisition has changed from sampling based on the nominal number of voxels and/or frames to sampling based on the desired information content. This article presents a brief overview of the application of compressed sensing techniques in body MRI, where imaging speed is crucial due to the presence of respiratory motion along with stringent constraints on spatial and temporal resolution. The first section provides an overview of the basic compressed sensing methodology, including the notion of sparsity, incoherence, and nonlinear reconstruction. The second section reviews state-of-the-art compressed sensing techniques that have been demonstrated for various clinical body MRI applications. In the final section, the article discusses current challenges and future opportunities. 5 J. Magn. Reson. Imaging 2017;45:966-987. © 2016 International Society for Magnetic Resonance in Medicine.
Compression Maps and Stable Relations
Price, Kenneth L
2011-01-01
Balanced relations were defined by G. Abrams to extend the convolution product used in the construction of incidence rings. We define stable relations, which form a class between balanced relations and preorders. We also define a compression map to be a surjective function between two sets which preserves order, preserves off-diagonal relations, and has the additional property that every transitive triple is the image of a transitive triple. We show that a compression map preserves the balanced and stable properties, but the compression of a preorder may be stable and not transitive. We also cover an example of a stable relation which is not the compression of a preorder. In our main theorem we provide necessary and sufficient conditions for a finite stable relation to be the compression of a preorder.
Finite-element analysis of masonry under local compressive load
杨卫忠; 王博
2011-01-01
Using the finite element method, the variation of the principal stress and Mises stress along the section width and depth of typical masonry under local compression is analyzed, with the position of the local compression as the main variable. The results show that the position of the maximum principal tensile stress agrees broadly with existing experimental results. The stress-dispersion effect under local compression can be seen clearly from the stress contour plots of various sections, and the main dispersion range is about one side length of the local bearing area around that area. The maximum Mises stress reflects the influence of different local compression positions on the local compressive strength. These findings help clarify the mechanism of masonry under local compression and provide a reference for revising the masonry design code.
Deng, Xingli; Yang, Zhiyong; Liu, Ruen; Yi, Meiying; Lei, Deqiang; Wang, Zhi; Zhao, Hongyang
2013-01-01
The safety of gamma knife radiosurgery should be considered when treating pituitary adenomas. To determine the maximum tolerated dose of radiation delivered by gamma knife radiosurgery to optic nerves. An animal model designed to establish prolonged balloon compression of the optic chiasm and parasellar region was developed to mimic the optic nerve compression caused by pituitary adenomas. Twenty cats underwent surgery to place a balloon for compression effect and 20 cats in a sham operation group received microsurgery without any treatment. The effects of gamma knife irradiation at 10-13 Gy on normal (sham operation group) and compressed (optic nerve compression group) optic nerves were investigated by pattern visual evoked potential examination and histopathology. Gamma knife radiosurgery at 10 Gy had almost no effect. At 11 Gy, P100 latency was significantly prolonged and P100 amplitude was significantly decreased in compressed optic nerves, but there was little change in the normal optic nerves. Doses of 11 Gy and higher induced significant electrophysiological variations and degeneration of the myelin sheath and axons in both normal and compressed optic nerves. Compressed optic nerves are more sensitive to gamma knife radiosurgery than normal optic nerves. The minimum dose of gamma knife radiosurgery that causes radiation injury in normal optic nerves is 12 Gy; however, the minimum dose is 11 Gy in compressed optic nerves. Copyright © 2013 S. Karger AG, Basel.
Compressive Sensing for Quantum Imaging
Howland, Gregory A.
This thesis describes the application of compressive sensing to several challenging problems in quantum imaging with practical and fundamental implications. Compressive sensing is a measurement technique that compresses a signal during measurement such that it can be dramatically undersampled. Compressive sensing has been shown to be an extremely efficient measurement technique for imaging, particularly when detector arrays are not available. The thesis first reviews compressive sensing through the lens of quantum imaging and quantum measurement. Four important applications and their corresponding experiments are then described in detail. The first application is a compressive sensing, photon-counting lidar system. A novel depth mapping technique that uses standard, linear compressive sensing is described. Depth maps up to 256 x 256 pixel transverse resolution are recovered with depth resolution less than 2.54 cm. The first three-dimensional, photon counting video is recorded at 32 x 32 pixel resolution and 14 frames-per-second. The second application is the use of compressive sensing for complementary imaging---simultaneously imaging the transverse-position and transverse-momentum distributions of optical photons. This is accomplished by taking random, partial projections of position followed by imaging the momentum distribution on a cooled CCD camera. The projections are shown to not significantly perturb the photons' momenta while allowing high resolution position images to be reconstructed using compressive sensing. A variety of objects and their diffraction patterns are imaged including the double slit, triple slit, alphanumeric characters, and the University of Rochester logo. The third application is the use of compressive sensing to characterize spatial entanglement of photon pairs produced by spontaneous parametric downconversion. The technique gives a theoretical speedup N2/log N for N-dimensional entanglement over the standard raster scanning technique
TURBULENT RECONNECTION IN RELATIVISTIC PLASMAS AND EFFECTS OF COMPRESSIBILITY
Takamoto, Makoto [Max-Planck-Institut für Kernphysik, Heidelberg (Germany); Inoue, Tsuyoshi [Division of Theoretical Astronomy, National Astronomical Observatory of Japan (Japan); Lazarian, Alexandre, E-mail: mtakamoto@eps.s.u-tokyo.ac.jp, E-mail: tsuyoshi.inoue@nao.ac.jp, E-mail: alazarian@facstaff.wisc.edu [Department of Astronomy, University of Wisconsin, 475 North Charter Street, Madison, WI 53706 (United States)
2015-12-10
We report on turbulence effects on magnetic reconnection in relativistic plasmas using three-dimensional relativistic resistive magnetohydrodynamics simulations. We found that the reconnection rate became independent of the plasma resistivity due to turbulence effects, similarly to non-relativistic cases. We also found that compressible turbulence effects modified the turbulent reconnection rate predicted for non-relativistic incompressible plasmas; the reconnection rate saturates, and even decays, as the injected velocity approaches the Alfvén velocity. Our results indicate that compressibility cannot be neglected once the compressible component becomes about half of the incompressible mode, which occurs when the Alfvén Mach number reaches about 0.3. The obtained maximum reconnection rate is around 0.05–0.1, and it can reach around 0.1–0.2 if the injection scales are comparable to the sheet length.
Prediction of Concrete Compressive Strength by Evolutionary Artificial Neural Networks
Mehdi Nikoo
2015-01-01
Full Text Available Compressive strength of concrete has been predicted using evolutionary artificial neural networks (EANNs), a combination of artificial neural networks (ANN) and evolutionary search procedures such as genetic algorithms (GA). In this paper, samples of cylindrical concrete specimens with different characteristics, comprising 173 experimental data patterns, were used to construct the models. Water-cement ratio, maximum sand size, amount of gravel, cement, 3/4 sand, 3/8 sand, and the coefficient of soft sand were considered as inputs, and the ANN models were used to calculate the compressive strength of concrete. Moreover, GA was used to optimize the number of layers, the number of nodes, and the weights in the ANN models. In order to evaluate the accuracy of the model, the optimized ANN model is compared with a multiple linear regression (MLR) model. The simulation results verify that the recommended ANN model offers more flexibility, capability, and accuracy in predicting the compressive strength of concrete.
Test Method for Compression Resilience Evaluation of Textiles
Shui-yuan Hong
2013-02-01
Full Text Available A test method was proposed and a measurement system was developed to characterize the compression resilience properties of textiles based on the mechanical device, microelectronics, sensors and control system. Derived from the typical pressure-displacement curve and test data, four indices were defined to characterize the compression performance of textiles. The test principle and the evaluation method for compression resilience of textiles were introduced. Twelve types of textile fabrics with different structural features and made from different textile materials were tested. The one-way ANOVA analysis was carried out to identify the significance of the differences of the evaluation indices among the fabrics. The results show that each index is significantly different among different fabrics. The denim has the maximum compressional resilience and the polar fleece has the minimum compressional resilience.
Families in Crisis: What Principals Say.
National Elementary Principal, 1979
1979-01-01
Of principals responding to a survey, 95 percent agreed that children show behavioral and academic problems when their parents are undergoing separation or divorce. Offered are specific suggestions to help one-parent families adjust. (Author/LD)
Principal Hawaiian Islands Geoid Heights (GEOID96)
National Oceanic and Atmospheric Administration, Department of Commerce — This 2' geoid height grid for the Principal Hawaiian Islands is distributed as a GEOID96 model. The computation used 61,000 terrestrial and marine gravity data held...
A Comparative Study of Principals' Administrative Behaviour.
Chung, Kyung Ae
1989-01-01
Compared are the managerial behaviors and beliefs of Korean and American secondary school principals. Generalizations are proposed in the areas of work hours, work pace, communication skills, organizational style, instructional leadership, and other managerial behaviors. (16 references) (SI)
Principal Leader Behaviour and Shared Decision Making.
Moyle, Colin R. J.
1979-01-01
The leadership of the principal is a crucial factor in the functioning of the Instructional Improvement Committees (IICs) in the multiunit schools studied. IICs are representative cabinet-type leadership committees. (Author/IRT)
Noncommutative principal bundles through twist deformation
Aschieri, Paolo; Pagani, Chiara; Schenkel, Alexander
2016-01-01
We construct noncommutative principal bundles deforming principal bundles with a Drinfeld twist (2-cocycle). If the twist is associated with the structure group then we have a deformation of the fibers. If the twist is associated with the automorphism group of the principal bundle, then we obtain noncommutative deformations of the base space as well. Combining the two twist deformations we obtain noncommutative principal bundles with both noncommutative fibers and base space. More generally, the natural isomorphisms proving the equivalence of a closed monoidal category of modules and its twist-related one are used to obtain new Hopf-Galois extensions as twists of Hopf-Galois extensions. A sheaf approach is also considered, and examples are presented.
Check List: Are You a Gifted Principal?
Taylor, Vicki L.
1984-01-01
An 18-item check list is provided for principals to evaluate themselves relative to encouraging gifts and talents of their most able students. Suggestions are given in the areas of educational needs, specialized materials, and counseling. (MC)
Paranoia: Perceptions of Public School Principals.
Salmon, Daniel A.
1980-01-01
Examines forces which are undermining the principal's leadership role and ability to effectively administer the school: teachers and unions; the competency movement; political and community interest groups; and media pundits. (SJL)
General frame structures on quantum principal bundles
Durdevic, M
1996-01-01
A noncommutative-geometric generalization of the classical formalism of frame bundles is developed, incorporating into the theory of quantum principal bundles the concept of the Levi-Civita connection. The construction of a natural differential calculus on quantum principal frame bundles is presented, including the construction of the associated differential calculus on the structure group. General torsion operators are defined and analyzed. Illustrative examples are presented.
A Geometric Approach to Noncommutative Principal Bundles
Wagner, Stefan
2011-01-01
From a geometrical point of view it is, so far, not sufficiently well understood what should be a "noncommutative principal bundle". Still, there is a well-developed abstract algebraic approach using the theory of Hopf algebras. An important handicap of this approach is the ignorance of topological and geometrical aspects. The aim of this thesis is to develop a geometrically oriented approach to the noncommutative geometry of principal bundles based on dynamical systems and the representation theory of the corresponding transformation group.
Advances in compressible turbulent mixing
Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.
1992-01-01
This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.
Karaarslan, Ahmet Adnan; Karakaşli, Ahmet; Karci, Tolga; Aycan, Hakan; Yildirim, Serhat; Sesli, Erhan
2015-06-01
The aim is to present our new method of compression, a compression tube instead of the conventional compression screw, and to investigate the difference in proximal locking screw bending resistance between compression screw application (6 mm wide contact) and compression tube application (two contact points with a 13 mm gap). We formed six groups, each consisting of 10 proximal locking screws. On a metal cylinder representing the lesser trochanter level, we performed 3-point bending tests with the compression screw and with the compression tube. We determined the yield points of the screws in 3-point bending tests using an axial compression testing machine. The yield point of the 5 mm screws was 1963±53 N (mean±SD) with the compression screw and 2929±140 N with the compression tube. We found 51% more locking screw bending resistance with the compression tube than with the compression screw (p=0.000). Compression tubes should therefore be preferred over compression screws in femoral compression nails.
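The reported group difference can be checked from the summary statistics alone. The sketch below recomputes the percentage increase and a Welch t statistic from the quoted means and standard deviations (n = 10 per group); the small gap to the quoted 51% presumably reflects rounding of the reported means:

```python
import math

# Summary statistics reported for 5 mm proximal locking screws (n = 10 per group).
mean_screw, sd_screw, n_screw = 1963.0, 53.0, 10
mean_tube, sd_tube, n_tube = 2929.0, 140.0, 10

percent_increase = 100.0 * (mean_tube - mean_screw) / mean_screw
# Welch's t statistic computed from the summary data (the paper reports p = 0.000).
t = (mean_tube - mean_screw) / math.sqrt(sd_tube ** 2 / n_tube + sd_screw ** 2 / n_screw)
print(f"bending resistance increase: {percent_increase:.0f}%, Welch t = {t:.1f}")
```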
Compressed Submanifold Multifactor Analysis.
Luu, Khoa; Savvides, Marios; Bui, Tien; Suen, Ching
2016-04-14
Although widely used, Multilinear PCA (MPCA), one of the leading multilinear analysis methods, still suffers from four major drawbacks. First, it is very sensitive to outliers and noise. Second, it is unable to cope with missing values. Third, it is computationally expensive since MPCA deals with large multi-dimensional datasets. Finally, it is unable to maintain the local geometrical structures due to the averaging process. This paper proposes a novel approach named Compressed Submanifold Multifactor Analysis (CSMA) to solve the four problems mentioned above. Our approach can deal with the problem of missing values and outliers via SVD-L1. The Random Projection method is used to obtain the fast low-rank approximation of a given multifactor dataset. In addition, it is able to preserve the geometry of the original data. Our CSMA method can be used efficiently for multiple purposes, e.g. noise and outlier removal, estimation of missing values, biometric applications. We show that CSMA method can achieve good results and is very efficient in the inpainting problem as compared to [1], [2]. Our method also achieves higher face recognition rates compared to LRTC, SPMA, MPCA and some other methods, i.e. PCA, LDA and LPP, on three challenging face databases, i.e. CMU-MPIE, CMU-PIE and Extended YALE-B.
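The "Random Projection ... fast low-rank approximation" step mentioned above can be illustrated with a standard randomized range-finder. This is a generic sketch rather than the CSMA algorithm itself, and the rank and oversampling parameters are illustrative:

```python
import numpy as np

def randomized_lowrank(A, rank, oversample=10, seed=0):
    """Fast low-rank approximation of A via a Gaussian random projection."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((A.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(A @ omega)            # orthonormal basis for the range of A
    B = Q.T @ A                               # small projected matrix
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank]

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 400))  # rank-20 test matrix
U, s, Vt = randomized_lowrank(A, rank=20)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative approximation error: {err:.2e}")
```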
Immorally obtained principal increases investors’ risk preference
Chen, Jiaxin; He, Guibing
2017-01-01
Capital derived from immoral sources is increasingly circulated in today’s financial markets. The moral associations of capital are important, although their impact on investment remains unknown. This research aims to explore the influence of principal source morality on investors’ risk preferences. Three studies were conducted in this regard. Study 1 finds that investors are more risk-seeking when their principal is earned immorally (through lying), whereas their risk preferences do not change when they invest money earned from neutral sources after engaging in immoral behavior. Study 2 reveals that guilt fully mediates the relationship between principal source morality and investors’ risk preferences. Studies 3a and 3b introduce a new immoral principal source and a new manipulation method to improve external validity. Guilt is shown to decrease the subjective value of morally flawed principal, leading to higher risk preference. The findings show the influence of morality-related features of principal on people’s investment behavior and further support mental account theory. The results also predict the potential threats of “grey principal” to market stability. PMID:28369117
Nutrition education for adolescents: principals' views.
Lai-Yeung, Wai-Ling Theresa
2011-01-01
This study aimed to examine school principals' perceptions of the school environment in Hong Kong as a context for the dissemination of food knowledge and inculcation of healthy eating habits. A questionnaire survey was administered in secondary schools in Hong Kong to survey Principals' views of students' food choices, operation of the school tuck shop, and promotion of healthy eating at school. Questionnaires were disseminated to all the secondary schools offering Home Economics (300 out of 466), and 188 schools responded, making up a response rate of 63%. Collected data were analyzed using SPSS. Most of the schools (82%) claimed to have a food policy to monitor the operation of the school canteen, and about half (52%) asserted there were insufficient resources to promote healthy eating at school. Principals (88%) generally considered it not acceptable for the school tuck shop to sell junk food; however, 45% thought that banning junk food at school would not help students develop good eating habits. Only 4% of the principals believed nutrition education influenced eating habits; whereas the majority (94%) felt that even with acquisition of food knowledge, students may not be able to put theory into practice. Cooking skills were considered important but principals (92%) considered transmission of cooking skills the responsibility of the students' families. Most of the principals (94%) believed that school-family collaboration is important in promoting healthy eating. Further efforts should be made to enhance the effectiveness of school food policies and to construct healthy school environments in secondary schools.
Compressibility effects on the flow past a rotating cylinder
Teymourtash, A. R.; Salimipour, S. E.
2017-01-01
In this paper, laminar flow past a rotating circular cylinder placed in a compressible uniform stream is investigated via a two-dimensional numerical simulation, and the compressibility effects due to the combination of the free stream and cylinder rotation on the flow pattern, such as the formation, shedding, and removal of vortices, as well as on the lift and drag coefficients, are studied. The numerical simulation of the flow is based on the discretization of the convective fluxes of the unsteady Navier-Stokes equations by the second-order Roe scheme and an explicit finite volume method. Because of the importance of the time-dependent parameters in the solution, second-order time accuracy is achieved by a dual time stepping approach. In order to validate the computer program, some results are compared with previous experimental and numerical data. The results of this study show that flow-compressibility effects, such as a normal shock wave, cause interesting variations in the flow around the cylinder even at a free stream with a low Mach number. For incompressible flow around the rotating cylinder, increasing the speed ratio α (the ratio of the surface speed to the free-stream velocity) causes a continuing increase in the lift coefficient, but in compressible flow, for each free-stream Mach number, increasing the speed ratio yields a limited lift coefficient (a maximum mean lift coefficient). In addition, the compressible-flow results indicate that increasing the free-stream Mach number decreases the maximum mean lift coefficient while increasing the mean drag coefficient. It is also found that increasing the Reynolds number at low Mach numbers decreases the maximum mean lift coefficient and the critical speed ratio and increases the mean drag coefficient and the Strouhal number. At higher Mach numbers, however, these parameters become independent of the Reynolds number.
The OMV Data Compression System Science Data Compression Workshop
Lewis, Garton H., Jr.
1989-01-01
The Video Compression Unit (VCU), Video Reconstruction Unit (VRU), theory and algorithms for implementation of Orbital Maneuvering Vehicle (OMV) source coding, docking mode, channel coding, error containment, and video tape preprocessed space imagery are presented in viewgraph format.
Wearable EEG via lossless compression.
Dufort, Guillermo; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo
2016-08-01
This work presents a wearable multi-channel EEG recording system featuring a lossless compression algorithm. The algorithm, based on a previously reported algorithm by the authors, exploits the temporal correlation between samples at different sampling times and the spatial correlation between different electrodes across the scalp. The low-power platform is able to compress, by a factor between 2.3 and 3.6, up to 300 sps from 64 channels with a power consumption of 176 μW/ch. The performance of the algorithm compares favorably with the best compression rates reported to date in the literature.
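A minimal illustration of how exploiting temporal correlation lowers the entropy a lossless coder has to encode; the synthetic EEG-like signal and the simple previous-sample predictor below are assumptions, not the authors' algorithm:

```python
import numpy as np

def empirical_entropy_bits(samples):
    """Empirical zero-order entropy (bits per sample) of an integer sequence."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Synthetic EEG-like channel: slow oscillation plus small noise, integer-quantized.
t = np.arange(3000)
eeg = np.round(2000 * np.sin(2 * np.pi * t / 250)
               + 20 * rng.standard_normal(t.size)).astype(int)

residual = np.diff(eeg, prepend=eeg[0])        # temporal predictor: previous sample
raw_bits = empirical_entropy_bits(eeg)
res_bits = empirical_entropy_bits(residual)
print(f"entropy raw: {raw_bits:.2f} bits, after delta prediction: {res_bits:.2f} bits, "
      f"ideal ratio: {raw_bits / res_bits:.2f}")
```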
Context-Aware Image Compression.
Jacky C K Chan
Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.
Compressive sensing for urban radar
Amin, Moeness
2014-01-01
With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki
Designing experiments through compressed sensing.
Young, Joseph G.; Ridzal, Denis
2013-06-01
In the following paper, we discuss how to design an ensemble of experiments through the use of compressed sensing. Specifically, we show how to conduct a small number of physical experiments and then use compressed sensing to reconstruct a larger set of data. In order to accomplish this, we organize our results into four sections. We begin by extending the theory of compressed sensing to a finite product of Hilbert spaces. Then, we show how these results apply to experiment design. Next, we develop an efficient reconstruction algorithm that allows us to reconstruct experimental data projected onto a finite element basis. Finally, we verify our approach with two computational experiments.
Compressive myelopathy in fluorosis: MRI
Gupta, R.K. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Agarwal, P. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Kumar, S. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Surana, P.K. [Department of Neurology, SGPGIMS, Lucknow-226014 (India); Lal, J.H. [MR Section, Department of Radiology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow-226014 (India); Misra, U.K. [Department of Neurology, SGPGIMS, Lucknow-226014 (India)
1996-05-01
We examined four patients with fluorosis, presenting with compressive myelopathy, by MRI, using spin-echo and fast low-angle shot sequences. Cord compression due to ossification of the posterior longitudinal ligament (PLL) and ligamentum flavum (LF) was demonstrated in one and ossification of only the LF in one. Marrow signal was observed in the PLL and LF in all the patients on all pulse sequences. In patients with compressive myelopathy secondary to ossification of PLL and/or LF, fluorosis should be considered as a possible cause, especially in endemic regions. (orig.). With 2 figs., 1 tab.
Partial transparency of compressed wood
Sugimoto, Hiroyuki; Sugimori, Masatoshi
2016-05-01
We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path in the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed so as to close the lumina exhibited optical transparency. Because compression of wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.
A high-dynamic range transimpedance amplifier with compression
Mičušík, D.; Zimmermann, H.
2007-02-01
This paper presents a transimpedance amplifier (TIA) with logarithmic compression of the input current signal. The presented TIA has two regions of operation: a linear one for small input current signals and a compression one for high input currents that could otherwise saturate the TIA. The measured -3dB bandwidth in the linear region of operation is 102MHz. The measured maximum input current overdrive is 20.5mA. However, the maximum of the monotonic compression is approx. 8mA. Using the compression technique we could achieve a low rms equivalent input noise current (~20.2nA) within the measured bandwidth and with approx. 2pF capacitance at the input. Thus the dynamic range at the input of the TIA is approx. 120dB considering the maximal current overdrive. The proposed TIA represents the input stage of an optical receiver with an integrated differential 50Ω output driver. The optical receiver occupies approx. 1.24mm2 in 0.35 μm SiGe BiCMOS technology and consumes 78mA from a 5V supply.
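A toy model of the two-region (linear, then logarithmic) transfer characteristic described above; the knee current and transimpedance gain are placeholder values for illustration, not the parameters of the reported 0.35 μm design:

```python
import numpy as np

def tia_output(i_in, i_knee=50e-6, gain=1e4):
    """Idealized TIA transfer: linear below the knee current, logarithmic above.

    i_knee and gain are illustrative values, not those of the reported circuit.
    """
    i_in = np.asarray(i_in, dtype=float)
    linear = gain * i_in
    compressed = gain * i_knee * (1.0 + np.log(i_in / i_knee))  # continuous at the knee
    return np.where(i_in <= i_knee, linear, compressed)

currents = np.logspace(-8, np.log10(20e-3), 7)     # 10 nA .. 20 mA input range
for i, v in zip(currents, tia_output(currents)):
    print(f"{i:10.3e} A -> {v:8.3f} V")
```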
Spinal meningioma: relationship between degree of cord compression and outcome.
Davies, Simon; Gregson, Barbara; Mitchell, Patrick
2017-04-01
The aim of this study was to find the relationships between the degree of cord compression as seen on MRIs with persisting cord atrophy after decompression and patient outcomes in spinal meningiomas. We undertook a retrospective analysis of 31 patients' pre- and postoperative MRIs, preoperative functional status and their outcomes at follow-up. The following metrics were analysed; percentage cord area at maximum compression, percentage tumour occupancy and percentage cord occupancy. These were then compared with outcome as measured by the Nurick scale. Of the 31 patients, 27 (87%) had thoracic meningiomas, 3 (10%) cervical and 1 (3%) cervicothoracic. The meningiomas were pathologically classified as grade 1 (29) or grade 2 (2) according to the WHO classification. The average remaining cord cross-sectional area was 61% of the estimated original value. The average tumour occupancy of the canal was 72%. The average cord occupancy of the spinal canal at maximum compression was 20%. No correlation between cord cross-section area and Nurick Scale was seen. On the postoperative scan, the average cord area had increased to 84%. No correlation was seen between this value and outcome. We found that cross-section area measurements on MRI scans have no obvious relationship with function before or after surgery. This is a base for future research into the mechanism of cord recovery and other compressive cord conditions.
Secondary instability of compressible boundary layer to subharmonic three-dimensional disturbances
El Hady, Nabil M.
1989-01-01
Three-dimensional linear secondary instability theory is extended for compressible boundary layers on a flat plate in the presence of finite amplitude Tollmien-Schlichting (T-S) waves. The focus is on principal parametric resonance responsible for the strong growth of harmonics in a low disturbance environment.
Compressive phase-only filtering at extreme compression rates
Pastor-Calle, David; Pastuszczak, Anna; Mikołajczyk, Michał; Kotyński, Rafał
2017-01-01
We introduce an efficient method for the reconstruction of the correlation between a compressively measured image and a phase-only filter. The proposed method is based on two properties of phase-only filtering: such filtering is a unitary circulant transform, and the correlation plane it produces is usually sparse. Thanks to these properties, phase-only filters are perfectly compatible with the framework of compressive sensing. Moreover, the lasso-based recovery algorithm is very fast when phase-only filtering is used as the compression matrix. The proposed method can be seen as a generalization of the correlation-based pattern recognition technique, which is hereby applied directly to non-adaptively acquired compressed data. At the time of measurement, no prior knowledge is required of the target object for which the data will later be scanned. We show that images measured at extremely high compression rates may still contain sufficient information for target classification and localization, even if the compression rate is so high that visual recognition of the target in the reconstructed image is no longer possible. We have applied the method to highly undersampled measurements obtained from a single-pixel camera, with sampling based on randomly chosen Walsh-Hadamard patterns.
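The core phase-only filtering step (a unit-modulus, hence unitary circulant, transform whose correlation plane is sharply peaked) can be sketched as follows; the compressive-measurement and lasso-recovery stages of the paper are omitted, and the 1-D signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
target = np.zeros(n)
target[100:120] = 1.0                       # reference pattern
scene = 0.05 * rng.standard_normal(n)
scene[40:60] += 1.0                         # same pattern embedded at offset 40

# Phase-only filter: keep only the (conjugate) phase of the reference spectrum.
T = np.fft.fft(target)
pof = np.exp(-1j * np.angle(T))             # unit-modulus, so a unitary circulant transform

correlation = np.fft.ifft(np.fft.fft(scene) * pof)
peak = int(np.argmax(np.abs(correlation)))
print(f"correlation peak at shift {peak} (expected {(40 - 100) % n})")
```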
M. E. Usanova; I. R. Mann; Z. C. Kale; I. J. Rae; R. D. Sydora; M. Sandanger; F. Søraas; K.-H. Glassmeier; K.-H. Fornacon; H. Matsui; P. A. Puhl-Quinn; A. Masson; X. Vallières
2010-01-01
...) waves from 25 September 2005. On the ground, dayside structured EMIC wave activity was observed by the CARISMA and STEP magnetometer arrays for several hours during the period of maximum compression...
Strategies for high-performance resource-efficient compression of neural spike recordings.
Thorbergsson, Palmi Thor; Garwicz, Martin; Schouenborg, Jens; Johansson, Anders J
2014-01-01
Brain-machine interfaces (BMIs) based on extracellular recordings with microelectrodes provide means of observing the activities of neurons that orchestrate fundamental brain function, and are therefore powerful tools for exploring the function of the brain. Due to physical restrictions and risks for post-surgical complications, wired BMIs are not suitable for long-term studies in freely behaving animals. Wireless BMIs ideally solve these problems, but they call for low-complexity techniques for data compression that ensure maximum utilization of the wireless link and energy resources, as well as minimum heat dissipation in the surrounding tissues. In this paper, we analyze the performances of various system architectures that involve spike detection, spike alignment and spike compression. Performance is analyzed in terms of spike reconstruction and spike sorting performance after wireless transmission of the compressed spike waveforms. Compression is performed with transform coding, using five different compression bases, one of which we pay special attention to. That basis is a fixed basis derived, by singular value decomposition, from a large assembly of experimentally obtained spike waveforms, and therefore represents a generic basis specially suitable for compressing spike waveforms. Our results show that a compression factor of 99.8%, compared to transmitting the raw acquired data, can be achieved using the fixed generic compression basis without compromising performance in spike reconstruction and spike sorting. Besides illustrating the relative performances of various system architectures and compression bases, our findings show that compression of spikes with a fixed generic compression basis derived from spike data provides better performance than compression with downsampling or the Haar basis, given that no optimization procedures are implemented for compression coefficients, and the performance is similar to that obtained when the optimal SVD based
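A sketch of the fixed, SVD-derived compression-basis idea on toy spike waveforms; the waveform generator, the basis size, and all parameters are assumptions rather than the experimentally derived basis used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 48)

def spike(width, amp, noise=0.05):
    """Toy biphasic spike waveform (stand-in for recorded action potentials)."""
    shape = np.exp(-(t / width) ** 2) - 0.6 * np.exp(-((t - 0.4) / width) ** 2)
    return amp * shape + noise * rng.standard_normal(t.size)

# "Large assembly" of training spikes used to derive the fixed compression basis.
train = np.stack([spike(rng.uniform(0.1, 0.3), rng.uniform(0.5, 2.0)) for _ in range(500)])
_, _, Vt = np.linalg.svd(train - train.mean(0), full_matrices=False)
basis = Vt[:4]                                    # keep 4 generic basis waveforms

new_spike = spike(0.2, 1.2)
coeffs = basis @ (new_spike - train.mean(0))      # transform coding: 4 numbers per spike
restored = basis.T @ coeffs + train.mean(0)
err = np.linalg.norm(new_spike - restored) / np.linalg.norm(new_spike)
print(f"48 samples -> {coeffs.size} coefficients, relative error {err:.3f}")
```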
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
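The Toeplitz/Levinson machinery referred to above is the standard Levinson-Durbin recursion; the generic sketch below runs it on a synthetic AR(2) process and is not the receiver-function code itself:

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error filter coefficients."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for k in range(1, order + 1):
        reflection = -np.dot(a[:k], r[k:0:-1]) / err   # reflection coefficient, |.| < 1
        a[1:k + 1] += reflection * a[k - 1::-1][:k]
        err *= (1.0 - reflection ** 2)
    return a, err

rng = np.random.default_rng(0)
# AR(2) test process: x[n] = 1.5 x[n-1] - 0.75 x[n-2] + w[n]
x = np.zeros(5000)
w = rng.standard_normal(x.size)
for n in range(2, x.size):
    x[n] = 1.5 * x[n - 1] - 0.75 * x[n - 2] + w[n]

r = np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(3)])
a, err = levinson_durbin(r, order=2)
print("estimated AR coefficients:", -a[1:])    # should be close to [1.5, -0.75]
```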
13 CFR 107.845 - Maximum rate of amortization on Loans and Debt Securities.
2010-01-01
Title 13 (Business Credit and Assistance), 2010 edition: SMALL BUSINESS ADMINISTRATION, SMALL BUSINESS INVESTMENT COMPANIES, Financing of Small Businesses by Licensees, Structuring... Maximum rate of amortization on Loans and Debt Securities. The principal of any Loan (or the loan portion...
Li, Xiangwei; Lan, Xuguang; Yang, Meng; Xue, Jianru; Zheng, Nanning
2014-12-05
Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.
When 'exact recovery' is exact recovery in compressed sensing simulation
Sturm, Bob L.
2012-01-01
In a simulation of compressed sensing (CS), one must test whether the recovered solution \(\hat{\mathbf{x}}\) is the true solution \(\mathbf{x}\), i.e., "exact recovery." Most CS simulations employ one of two criteria: 1) the recovered support is the true support; or 2) the normalized squared error is less than...... for a given distribution of \(\mathbf{x}\)? We show that, in a best case scenario, \(\epsilon^2\) sets a maximum allowed missed detection rate in a majority sense....
Buckling localization in a cylindrical panel under axial compression
Tvergaard, Viggo; Needleman, A.
2000-01-01
Localization of an initially periodic buckling pattern is investigated for an axially compressed elastic-plastic cylindrical panel of the type occurring between axial stiffeners on cylindrical shells. The phenomenon of buckling localization and its analogy with plastic flow localization in tensile...... test specimens is discussed in general. For the cylindrical panel, it is shown that buckling localization develops shortly after a maximum load has been attained, and this occurs for a purely elastic panel as well as for elastic-plastic panels. In a case where localization occurs after a load maximum...
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
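A minimal numerical sketch of that maximization (Python; the single-diode I-V model and the parameter values below are illustrative assumptions, not the article's measured panel data):

import numpy as np

# Illustrative single-diode model: I(V) = I_ph - I_0*(exp(V/(n*V_T)) - 1)
I_ph, I_0, n, V_T = 5.0, 1e-9, 1.3, 0.02585   # photocurrent [A], saturation current [A], ideality factor, thermal voltage [V]

def current(V):
    return I_ph - I_0 * np.expm1(V / (n * V_T))

def power(V):
    return V * current(V)

def dP_dV(V):
    # Differentiating P = V * I(V): dP/dV = I(V) + V * dI/dV, zero at the maximum power point
    dI_dV = -I_0 / (n * V_T) * np.exp(V / (n * V_T))
    return current(V) + V * dI_dV

lo, hi = 0.0, 0.8                 # bracket in volts: dP/dV > 0 at V = 0 and < 0 near open circuit
for _ in range(60):               # bisection on dP/dV
    mid = 0.5 * (lo + hi)
    if dP_dV(mid) > 0:
        lo = mid
    else:
        hi = mid

V_mp = 0.5 * (lo + hi)
print(f"V_mp = {V_mp:.3f} V, I_mp = {current(V_mp):.3f} A, P_max = {power(V_mp):.3f} W")

Repeating this for the measured I-V data at each time of day gives the voltage of maximum power, current of maximum power, and maximum power curves the article plots.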
Shock compression response of poly(4-methyl-1-pentene) plastic to 985 GPa
Root, Seth, E-mail: sroot@sandia.gov; Mattsson, Thomas R.; Cochrane, Kyle; Lemke, Raymond W. [Sandia National Laboratories, Albuquerque, New Mexico 87125 (United States); Knudson, Marcus D. [Sandia National Laboratories, Albuquerque, New Mexico 87125 (United States); Institute for Shock Physics and Department of Physics, Washington State University, Pullman, Washington 99164 (United States)
2015-11-28
Poly(4-methyl-1-pentene) plastic (PMP) is a hydrocarbon polymer with potential applications to inertial confinement fusion experiments and as a Hugoniot impedance matching standard for equation of state experiments. Using Sandia's Z-machine, we performed a series of flyer plate experiments to measure the principal Hugoniot and reshock states of PMP up to 985 GPa. The principal Hugoniot measurements validate density functional theory (DFT) calculations along the Hugoniot. The DFT calculations are further analyzed using a bond tracking method to understand the dissociation pathway under shock compression. Complete dissociation occurs at a compression factor similar to other sp3-hybridized, C-C bonded systems, which suggests a limiting compression for C-C bonds. The combined experimental and DFT results provide a solid basis for constructing an equation of state model for PMP.
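For background (textbook shock physics, not a result of this paper), the principal Hugoniot referred to here is the locus of end states reachable from the initial state \((\rho_0, P_0, E_0)\) through the Rankine-Hugoniot jump conditions, written in the usual notation:
\[
\rho_0 u_s = \rho\,(u_s - u_p), \qquad
P - P_0 = \rho_0\, u_s\, u_p, \qquad
E - E_0 = \tfrac{1}{2}\,(P + P_0)\!\left(\frac{1}{\rho_0} - \frac{1}{\rho}\right),
\]
where \(u_s\) is the shock velocity and \(u_p\) the particle velocity. Impedance matching combines these relations with measured flyer and shock velocities to infer the pressure and particle velocity in the sample.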
SEED BANKS FOR MAGNETIC FLUX COMPRESSION GENERATORS
Fulkerson, E S
2008-05-14
In recent years the Lawrence Livermore National Laboratory (LLNL) has been conducting experiments that require pulsed high currents to be delivered into inductive loads. The loads fall into two categories: (1) pulsed high field magnets and (2) the input stage of Magnetic Flux Compression Generators (MFCG). Three capacitor banks of increasing energy storage and controls sophistication have been designed and constructed to drive these loads. One bank was developed for the magnet driving application (20 kV, ~30 kJ maximum stored energy). Two banks were constructed as MFCG seed banks (12 kV, ~43 kJ and 26 kV, ~450 kJ). This paper will describe the design of each bank including switching, controls, circuit protection and safety.
Transfer induced compressive strain in graphene
Larsen, Martin Benjamin Barbour Spanget; Mackenzie, David; Caridad, Jose
2014-01-01
We have used spatially resolved micro Raman spectroscopy to map the full width at half maximum (FWHM) of the graphene G-band and the 2D and G peak positions, for as-grown graphene on copper catalyst layers, for transferred CVD graphene and for micromechanically exfoliated graphene, in order...... to characterize the effects of a transfer process on graphene properties. Here we use the FWHM(G) as an indicator of the doping level of graphene, and the ratio of the shifts in the 2D and G bands as an indicator of strain. We find that the transfer process introduces an isotropic, spatially uniform, compressive...... strain in graphene, and increases the carrier concentration....
Full-frame compression of discrete wavelet and cosine transforms
Lo, Shih-Chung B.; Li, Huai; Krasner, Brian; Freedman, Matthew T.; Mun, Seong K.
1995-04-01
At the foreground of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on full-frame discrete wavelet (FFDWT) and full-frame discrete cosine transforms (FFDCT) for medical image compression. Prior to the coding, it is important to evaluate the global entropy in the decomposed space, because it is at minimum entropy that maximum compression efficiency can be achieved. In this study, each image was split into the top three most significant bit (3MSB) and the remaining remapped least significant bit (RLSB) images. The 3MSB image was compressed by an error-free contour coding and achieved an average of 0.1 bit/pixel. The RLSB image was transformed to either a multi-channel wavelet or the cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and were used for the study. Our results indicated that the coding scheme in the FFDCT domain performed better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that decomposition efficiency in the DCT domain for relatively smooth images is higher than that in the DWT. However, both schemes worked just as well for low-resolution digital images. We also found that the image characteristics of the 'Lena' image commonly used in the compression literature are very different from those of radiological images. The compression outcome of radiological images cannot be extrapolated from compression results based on the 'Lena' image.
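A minimal sketch of the kind of transform-domain entropy evaluation described (Python; the random stand-in image, the quantization step and the use of a full-frame 2-D DCT are assumptions for illustration, so the numbers are not comparable to the paper's):

import numpy as np
from scipy.fft import dctn

def empirical_entropy(symbols):
    # First-order entropy in bits/symbol of a discrete symbol array
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256)).astype(float)   # stand-in for an RLSB image

coeffs = dctn(image, norm='ortho')          # full-frame 2-D DCT of the whole image
q = 8.0                                     # illustrative uniform quantization step
quantized = np.round(coeffs / q).astype(np.int64)

# A real radiograph, being much smoother than random noise, would give a far lower
# entropy here; the lower the entropy of the decomposed space, the higher the
# achievable compression efficiency.
print("entropy of quantized full-frame DCT coefficients:",
      round(float(empirical_entropy(quantized.ravel())), 3), "bits/pixel")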
Compressive Acquisition of Dynamic Scenes
Sankaranarayanan, Aswin C; Chellappa, Rama; Baraniuk, Richard G
2012-01-01
Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the...
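As a sketch in standard linear-dynamical-system notation (the symbols below are generic, not necessarily the paper's), the dynamic-texture model and the compressive measurements can be written as
\[
x_{t+1} = A x_t + w_t, \qquad y_t = C x_t + v_t, \qquad z_t = \Phi_t\, y_t,
\]
where \(x_t\) is the low-dimensional state sequence, \(C\) the high-dimensional observation matrix, \(y_t\) the image frame, and \(z_t\) the compressive measurements taken with sensing matrices \(\Phi_t\); recovery proceeds by estimating \((A, C, \{x_t\})\) from \(\{z_t\}\) and then reconstructing \(y_t \approx C x_t\).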
Normalized Compression Distance of Multiples
Cohen, Andrew R
2012-01-01
Normalized compression distance (NCD) is a parameter-free similarity measure based on compression. The NCD between pairs of objects is not sufficient for all applications. We propose an NCD of finite multisets (multiples) of objects that is metric and is better for many applications. Previously, attempts to obtain such an NCD failed. We use the theoretical notion of Kolmogorov complexity that for practical purposes is approximated from above by the length of the compressed version of the file involved, using a real-world compression program. We applied the new NCD for multiples to retinal progenitor cell questions that were earlier treated with the pairwise NCD. Here we get significantly better results. We also applied the NCD for multiples to synthetic time sequence data. The preliminary results are as good as the nearest neighbor Euclidean classifier.
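A minimal sketch of the pairwise NCD that this work generalizes to multisets (Python, using zlib as the real-world compressor; the example byte strings are illustrative):

import zlib

def clen(data: bytes) -> int:
    # Compressed length, a practical upper-bound stand-in for Kolmogorov complexity
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

s1 = b"ACGTACGTACGTACGT" * 50
s2 = b"ACGTACGAACGTACGT" * 50      # nearly identical sequence
s3 = bytes(range(256)) * 3         # unrelated data
print(ncd(s1, s2), ncd(s1, s3))    # the first distance should be much smaller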
Compression fractures of the back
Taking steps to prevent and treat osteoporosis is the most effective way to prevent compression or insufficiency fractures. Getting regular load-bearing exercise (such as walking) can help you avoid bone loss.
Compressed sensing for distributed systems
Coluccia, Giulio; Magli, Enrico
2015-01-01
This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...
Preprocessing of compressed digital video
Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.
2000-12-01
Pre-processing algorithms improve the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image and the target bit rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.
Compressed gas fuel storage system
Wozniak, John J. (Columbia, MD); Tiller, Dale B. (Lincoln, NE); Wienhold, Paul D. (Baltimore, MD); Hildebrand, Richard J. (Edgemere, MD)
2001-01-01
A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.
Shock compression of polyvinyl chloride
Neogi, Anupam; Mitra, Nilanjan
2016-04-01
This study presents shock compression simulation of atactic polyvinyl chloride (PVC) using ab-initio and classical molecular dynamics. The manuscript also identifies the limits of applicability of classical molecular dynamics based shock compression simulation for PVC. The mechanism of bond dissociation under shock loading and its progression is demonstrated in this manuscript using the density functional theory based molecular dynamics simulations. The rate of dissociation of different bonds at different shock velocities is also presented in this manuscript.
Thermal reservoir sizing for adiabatic compressed air energy storage
Kere, Amelie; Goetz, Vincent; Py, Xavier; Olives, Regis; Sadiki, Najim [Perpignan Univ. (France). PROMES CNRS UPR 8521; Mercier-Allart, Eric [EDF R et D, Chatou (France)
2012-07-01
Despite the operation of two existing industrial facilities, at McIntosh (Alabama) and, for more than thirty years, at Huntorf (Germany), electricity storage in the form of compressed air in underground caverns (CAES) has not seen the development that was expected in the 1980s. With first-generation CAES, the efficiency of this form of storage was less than 50%. The evolving technical context could significantly alter this situation. The new generation, so-called Adiabatic CAES (A-CAES), recovers the heat produced by the compression via thermal storage, thus eliminating the need to burn gas and allowing an overall energy efficiency of the order of 70%. To date, there is no existing installation of A-CAES. Many studies describe the principle and the general working mode of storage systems based on adiabatic compression of air, and the efficiencies of different configurations of the adiabatic compression process have been analyzed. The aim of this paper is to simulate and analyze the performance of a thermal storage reservoir integrated in the system and adapted to the working conditions of a CAES.
Bridgman's concern (shock compression science)
Graham, R. A.
1994-07-01
In 1956 P. W. Bridgman published a letter to the editor in the Journal of Applied Physics reporting results of electrical resistance measurements on iron under static high pressure. The work was undertaken to verify the existence of a polymorphic phase transition at 130 kbar (13 GPa) reported in the same journal and year by the Los Alamos authors, Bancroft, Peterson, and Minshall for high pressure, shock-compression loading. In his letter, Bridgman reported that he failed to find any evidence for the transition. Further, he raised some fundamental concerns as to the state of knowledge of shock-compression processes in solids. Later it was determined that Bridgman's static pressure scale was in error, and the shock observations became the basis for calibration of pressure values in static high pressure apparatuses. In spite of the error in pressure scales, Bridgman's concerns on descriptions of shock-compression processes were perceptive and have provided the basis for subsequent fundamental studies of shock-compressed solids. The present paper, written in response to receipt of the 1993 American Physical Society Shock-Compression Science Award, provides a brief contemporary assessment of those shock-compression issues which were the basis of Bridgman's 1956 concerns.
Hidden force opposing ice compression
Sun, Chang Q; Zheng, Weitao
2012-01-01
Coulomb repulsion between the unevenly-bound bonding and nonbonding electron pairs in the O:H-O hydrogen-bond is shown to be the origin of the anomalies of ice under compression. Consistency between experimental observations, density functional theory and molecular dynamics calculations confirmed that the resultant force of the compression, the repulsion, and the recovery of electron-pair dislocations differentiates ice from other materials in response to pressure. The compression shortens and strengthens the longer-and-softer intermolecular O:H lone-pair virtual-bond; the repulsion pushes the bonding electron pair away from the H+/p and hence lengthens and weakens the intramolecular H-O real-bond. The virtual-bond compression and the real-bond elongation symmetrize the O:H-O as observed at ~60 GPa and result in the abnormally low compressibility of ice. The virtual-bond stretching phonons ( 3000 cm-1) softened upon compression. The cohesive energy of the real-bond dominates and its loss lowers the critical temperat...
Shock compression experiments on Lithium Deuteride single crystals.
Knudson, Marcus D.; Desjarlais, Michael Paul; Lemke, Raymond W.
2014-10-01
Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride (LiD) single crystals. This study utilized the high velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ~200-600 GPa along the Principal Hugoniot - the locus of end states achievable through compression by large amplitude shock waves - as well as pressure and density of re-shock states up to ~900 GPa. The experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Labs.
Aspects of forward scattering from the compression paddle in the dosimetry of mammography.
Toroi, Paula; Könönen, Niina; Timonen, Marjut; Kortesniemi, Mika
2013-05-01
The best compression paddle position during air kerma measurement in mammography dosimetry was studied. The amount of forward scattering as a function of the compression paddle distance was measured with different X-ray spectra and different types of paddles and dose meters. The contribution of forward scattering to the air kerma did not show a significant dependence on the beam quality or the compression paddle type. The tested dose meter types detected different amounts of forward scattering due to different internal collimation. When the paddle was adjusted to its maximum clinical distance, the proportion of the detected forward scattering was only 1% for all dose meter types. The most consistent way of performing air kerma measurements is to position the compression paddle at the maximum distance from the dose meter and use a constant forward scattering factor for all dose meters. Thus, the dosimetric uncertainty due to forward scatter can be minimised.
Matsuoka, R.
2014-05-01
This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of target center by the intensity-weighted centroid method. I utilized six images of a white sheet with 30 by 20 black filled circles in the experiment. The images were acquired by a digital camera Canon EOS 20D. The image data were compressed by using two compression parameter sets (a downsampling ratio, a quantization table and a Huffman code table) utilized in the EOS 20D. The experiment results clearly indicate that lossy JPEG compression of an image with chromatic aberrations can produce a significant effect on the measurement accuracy of target center by the intensity-weighted centroid method. The maximum displacements of the red, green and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression would produce displacements between uncompressed image data and compressed image data. In conclusion, since the author considers that it would not be possible to correct the displacements caused by lossy JPEG compression, the author recommends that lossy JPEG compression before recording an image in a digital camera should not be executed when highly precise image measurement using color images acquired by a non-metric digital camera is required.
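A minimal sketch of the intensity-weighted centroid measurement discussed (Python; the synthetic patch and the convention of a bright target on a dark background are assumptions, so a dark circle on white paper would be computed on the inverted intensities):

import numpy as np

def intensity_weighted_centroid(patch):
    # Centroid of a single-channel image patch weighted by pixel intensity,
    # returned as (row, col) with sub-pixel precision; it is exactly this
    # sub-pixel estimate that small per-channel JPEG displacements perturb.
    patch = np.asarray(patch, dtype=float)
    total = patch.sum()
    rows, cols = np.indices(patch.shape)
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Comparing the centroid of the same target in the decompressed red, green and
# blue channels gives the per-channel displacement caused by the compression.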
Choreographies with Secure Boxes and Compromised Principals
Carbone, Marco; 10.4204/EPTCS.12.1
2009-01-01
We equip choreography-level session descriptions with a simple abstraction of a security infrastructure. Message components may be enclosed within (possibly nested) "boxes" annotated with the intended source and destination of those components. The boxes are to be implemented with cryptography. Strand spaces provide a semantics for these choreographies, in which some roles may be played by compromised principals. A skeleton is a partially ordered structure containing local behaviors (strands) executed by regular (non-compromised) principals. A skeleton is realized if it contains enough regular strands so that it could actually occur, in combination with any possible activity of compromised principals. It is delivery guaranteed (DG) realized if, in addition, every message transmitted to a regular participant is also delivered. We define a novel transition system on skeletons, in which the steps add regular strands. These steps solve tests, i.e. parts of the skeleton that could not occur without additional regu...
Principal component regression for crop yield estimation
Suryanarayana, T M V
2016-01-01
This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as a dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables into a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to students and researchers starting their work on climate and agriculture, mainly focusing on estimation models. The flow of chapters takes readers along a smooth path, from understanding climate and weather and the impact of climate change, gradually proceeding towards downscaling techniques and finally towards the development of ...
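A minimal sketch of the PCR pipeline described (Python with scikit-learn; the synthetic climate-variable matrix, the yield vector and the number of retained components are placeholders, not the book's data):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 6))                                   # e.g. 40 seasons x 6 climatological parameters
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=40)    # synthetic crop yield

# Principal component regression: standardize, keep a few principal components,
# then regress the yield on those components instead of on the raw correlated variables.
pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("in-sample R^2 of the PCR model:", round(pcr.score(X, y), 3))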
Effect of intermediate principal stress on strength of soft rock under complex stress states
马宗源; 廖红建; 党发宁
2014-01-01
A series of numerical simulations of conventional and true triaxial tests for soft rock materials using the three-dimensional finite difference code FLAC3D were presented. A hexahedral element and a strain hardening/softening constitutive model based on the unified strength theory (UST) were used to simulate both the consolidated-undrained (CU) triaxial and the consolidated-drained (CD) true triaxial tests. Based on the results of the true triaxial tests simulation, the effect of the intermediate principal stress on the strength of soft rock was investigated. Finally, an example of an axial compression test for a hard rock pillar with a soft rock interlayer was analyzed using the two-dimensional finite difference code FLAC. The CD true triaxial test simulations for diatomaceous soft rock suggest the peak and residual strengths increase by 30% when the effect of the intermediate principal stress is taken into account. The axial compression for a rock pillar indicated the peak and residual strengths increase six-fold when the soft rock interlayer approached the vertical and the effect of the intermediate principal stress is taken into account.
Comparing image compression methods in biomedical applications
Libor Hargas
2004-01-01
Full Text Available Compression methods suitable for image processing in biomedical applications are described in this article. Compression is often realized by reducing irrelevance or redundancy. Lossless and lossy compression methods that can be used to compress images in biomedical applications are described, and these methods are compared based on fidelity criteria.
29 CFR 1917.154 - Compressed air.
2010-07-01
... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed...
Outlier Mining Based on Principal Component Estimation
Hu Yang; Ting Yang
2005-01-01
Outlier mining is an important aspect of data mining, and outlier mining based on the Cook distance is most commonly used. But we know that when the data have multicollinearity, the traditional Cook method is no longer effective. Considering the merits of the principal component estimation, we use it in place of the least squares estimation, and then give the Cook distance measure based on the principal component estimation, which can be used in outlier mining. At the same time, we have done some research on related theories and application problems.
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
Musatenko, Yurij S.; Kurashov, Vitalij N.
1998-10-01
The paper presents an improved version of our new method for compression of correlated image sets, Optimal Image Coding using the Karhunen-Loeve transform (OICKL). It is known that the Karhunen-Loeve (KL) transform is the optimal representation for such a purpose. The approach is based on the fact that every KL basis function gives the maximum possible average contribution in every image and this contribution decreases most quickly among all possible bases. So, we lossily compress every KL basis function by Embedded Zerotree Wavelet (EZW) coding with essentially different losses that depend on each function's contribution to the images. The paper presents a new fast, low-memory algorithm of KL basis construction for compression of correlated image ensembles that enables our OICKL system to work on common hardware. We also present a procedure for determining the optimal losses of the KL basis functions caused by compression. It uses a modified EZW coder which produces the whole PSNR (bit rate) curve during a single compression pass.
Lossless compression of hyperspectral images based on the prediction error block
Li, Yongjun; Li, Yunsong; Song, Juan; Liu, Weijia; Li, Jiaojiao
2016-05-01
A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed, which is used to compress spaceborne hyperspectral data effectively. In order to make full use of the intra-frame correlation and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, which makes it suitable for on-board compression of hyperspectral images.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) can reduce short circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the quenching duration is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CC used in the design of an SFCL can be determined.
The Principals' Center Movement: When School Leaders Become Learners.
Levine, Sarah L.
1986-01-01
This article describes principals' centers and the principals' center movement, identifying key issues raised by them. Future directions and persistent questions for the principals' center movement are developed. (MT)
Haller, Alicia; Hunt, Erika
2016-01-01
Research has demonstrated that principals have a powerful impact on school improvement and student learning. Principals play a vital role in recruiting, developing, and retaining effective teachers; creating a school-wide culture of learning; and implementing a continuous improvement plan aimed at increasing student achievement. Leithwood, Louis,…
Knuth, Richard K.
2004-01-01
Probably no effort has been more successful than the Interstate School Leaders Licensure Consortium (ISLLC) in capturing the current complexity of the principal's role and in providing direction for the professional development and selection of principals. As more states require universities to restructure training programs to align with the ISLLC…
Principal Self-Efficacy, Teacher Perceptions of Principal Performance, and Teacher Job Satisfaction
Evans, Molly Lynn
2016-01-01
In public schools, the principal's role is of paramount importance in influencing teachers to excel and to keep their job satisfaction high. The self-efficacy of leaders is an important characteristic of leadership, but this issue has not been extensively explored in school principals. Using internet-based questionnaires, this study obtained…
Talbot, Danny; Crow, Gary M.
Researchers who have focused on issues of interpersonal communication in organizations have concluded that it is an essential component of organizational life. This paper presents findings of a study that examined the role conceptions of principals in the Centennial Schools Program (CSP) and those of principals in non-CSP schools. Communicator…
Great Principals at Scale: Creating District Conditions That Enable All Principals to Be Effective
Ikemoto, Gina; Taliaferro, Lori; Fenton, Benjamin; Davis, Jacquelyn
2014-01-01
School leaders are critical in the lives of students and to the development of their teachers. Unfortunately, in too many instances, principals are effective in spite of--rather than because of--district conditions. To truly improve student achievement for all students across the country, well-prepared principals need the tools, support, and…
Compression or tension? The stress distribution in the proximal femur
Meakin JR
2006-02-01
Full Text Available Abstract Background Questions regarding the distribution of stress in the proximal human femur have never been adequately resolved. Traditionally, by considering the femur in isolation, it has been believed that the effect of body weight on the projecting neck and head places the superior aspect of the neck in tension. A minority view has proposed that this region is in compression because of muscular forces pulling the femur into the pelvis. Little has been done to study stress distributions in the proximal femur. We hypothesise that under physiological loading the majority of the proximal femur is in compression and that the internal trabecular structure functions as an arch, transferring compressive stresses to the femoral shaft. Methods To demonstrate the principle, we have developed a 2D finite element model of the femur in which body weight, a representation of the pelvis, and ligamentous forces were included. The regions of higher trabecular bone density in the proximal femur (the principal trabecular systems) were assigned a higher modulus than the surrounding trabecular bone. Two-legged and one-legged stances, the latter including an abductor force, were investigated. Results The inclusion of ligamentous forces in two-legged stance generated compressive stresses in the proximal femur. The increased modulus in areas of greater structural density focuses the stresses through the arch-like internal structure. Including an abductor muscle force in simulated one-legged stance also produced compression, but with a different distribution. Conclusion This 2D model shows, in principle, that including ligamentous and muscular forces has the effect of generating compressive stresses across most of the proximal femur. The arch-like trabecular structure transmits the compressive loads to the shaft. The greater strength of bone in compression than in tension is then used to advantage. These results support the hypothesis presented. If correct, a
Compressibility, turbulence and high speed flow
Gatski, Thomas B
2013-01-01
Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided and
30 CFR 75.1730 - Compressed air; general; compressed air systems.
2010-07-01
... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... pressure has been relieved from that part of the system to be repaired. (d) At no time shall compressed air... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems...
Imperfection analysis of flexible pipe armor wires in compression and bending
Østergaard, Niels Højen; Lyckegaard, Anders; Andreasen, Jens H.
2012-01-01
The work presented in this paper is motivated by a specific failure mode known as lateral wire buckling occurring in the tensile armor layers of flexible pipes. The tensile armor is usually constituted by two layers of initially helically wound steel wires with opposite lay directions. During pipe...... laying in ultra deep waters, a flexible pipe experiences repeated bending cycles and longitudinal compression. These loading conditions are known to impose a danger to the structural integrity of the armoring layers, if the compressive load on the pipe exceeds the total maximum compressive load carrying...
Keith Walker
2003-09-01
Full Text Available This paper reports the findings related to the International Beginning Principals study, which examined factors perceived by first year principals to both complicate, and account for, first year principalship successes in rural jurisdictions. Specifically, for this paper we deal with factors seen as significant in establishing oneself as a first time principal in a rural Canadian school. The general findings from this study centred on training and experience related to administration of schools. Many first time principals in rural schools had limited specific preparation for the principalship, or other related administrative roles such as the vice principalship. Such findings have taken on more importance in the last several years as school districts find it increasingly difficult to recruit principals for smaller rural schools.
Authentication Scheme Based on Principal Component Analysis for Satellite Images
Ashraf. K. Helmy
2009-09-01
Full Text Available This paper presents a multi-band wavelet image content authentication scheme for satellite images by incorporating principal component analysis (PCA). The proposed scheme achieves higher perceptual transparency and stronger robustness. Specifically, the developed watermarking scheme can successfully resist common signal processing such as JPEG compression and geometric distortions such as cropping. In addition, the proposed scheme can be parameterized, thus resulting in more security; that is, an attacker may not be able to extract the embedded watermark if the attacker does not know the parameter. In order to meet these requirements, the host image is transformed to YIQ to decrease the correlation between different bands. Then a multi-band wavelet transform (M-WT) is applied to each channel separately, obtaining one approximate sub-band and fifteen detail sub-bands. PCA is then applied to the coefficients corresponding to the same spatial location in all detail sub-bands. The last principal component band represents an excellent domain for inserting the watermark, since it represents the least correlated features in the high frequency area of the host image. One of the most important aspects of satellite images is the spectral signature, the behavior of different features in different spectral bands; the results of the proposed algorithm show that the spectral stamp for different features is not tainted after inserting the watermark.
Sex Education: The Principle and the Principal.
Wayne, Joseph E.
1981-01-01
The school principal is in a propitious position to offer leadership in developing a sex education program. His position of leadership and respect can facilitate the development of a Citizens Advisory Committee which, in turn, can ensure cooperation and leadership in setting the goals for developing a sex education program. (Author/CM)
Transformational Leadership Behaviors in Elementary School Principals
Ergle, Barbara
2012-01-01
School leaders face high expectations from society for leadership effectiveness. While it is commonly accepted that leadership practices contribute to school excellence, specific behaviors of effective elementary principals in the local context were not well understood. The purpose of this mixed methods study was to investigate self-reported…
Quantum principal bundles and corresponding gauge theories
Durdevic, M
1995-01-01
A generalization of classical gauge theory is presented, in the framework of a noncommutative-geometric formalism of quantum principal bundles over smooth manifolds. Quantum counterparts of classical gauge bundles, and classical gauge transformations, are introduced and investigated. A natural differential calculus on quantum gauge bundles is constructed and analyzed. Kinematical and dynamical properties of corresponding gauge theories are discussed.
Principals and the Power of Recommending Students
Anonymous
2009-01-01
Attending one of China’s most prestigious institutions of higher learning, Peking University (PKU), is a dream for the vast majority of China’s middle school graduates. On November 16, the university released a list of 39 senior middle schools across the country whose principals have been
Principals in Partnership with Math Coaches
Grant, Catherine Miles; Davenport, Linda Ruiz
2009-01-01
One of the most promising developments in math education is the fact that many districts are hiring math coaches--also called math resource teachers, math facilitators, math lead teachers, or math specialists--to assist elementary-level teachers with math instruction. What must not be lost, however, is that principals play an essential role in…
Principals' Transformational Leadership in School Improvement
Yang, Yingxiu
2013-01-01
Purpose: This paper aims to contribute experience and ideas on transformational leadership, not only for principals who want to improve their own leadership, but also for schools at a critical period of improvement, by summarizing the process through which such leadership is formed, the problems encountered during that process, and the key factors that affect it.…
The Relationship between Principals' Managerial Approaches and ...
Nekky Umera
Ultimately, student discipline may be affected. This paper focuses on findings of a study to establish the ... behaviour is an essential variable in enhancing school outcomes (Nasibi, .... Three questionnaires were used to collect data from principals, teachers and.
Principal component analysis implementation in Java
Wójtowicz, Sebastian; Belka, Radosław; Sławiński, Tomasz; Parian, Mahnaz
2015-09-01
In this paper we show how PCA (Principal Component Analysis) method can be implemented using Java programming language. We consider using PCA algorithm especially in analysed data obtained from Raman spectroscopy measurements, but other applications of developed software should also be possible. Our goal is to create a general purpose PCA application, ready to run on every platform which is supported by Java.
Principals' Leadership Styles and Student Achievement
Harnish, David Alan
2012-01-01
Many schools struggle to meet No Child Left Behind's stringent adequate yearly progress standards, although the benchmark has stimulated national creativity and reform. The purpose of this study was to explore teacher perceptions of principals' leadership styles, curriculum reform, and student achievement to ascertain possible factors to improve…
What Principals Should Know About Food Allergies.
Munoz-Furlong, Anne
2002-01-01
Describes what principals should know about recent research findings on food allergies (peanuts, tree nuts, milk, eggs, soy, wheat) that can produce severe or life-threatening reactions in children. Asserts that every school should have trained staff and written procedures for reacting quickly to allergic reactions. (PKP)
Teachers' Perceptions Regarding School Principals' Coaching Skills
Yirci, Ramazan; Özdemir, Tuncay Yavuz; Kartal, Seçil Eda; Kocabas, Ibrahim
2014-01-01
The purpose of this study was to find out teachers' perceptions about school principals' coaching skills. The study was carried out within qualitative research methods. The study group included 76 teachers in Elazig and 73 teachers in Kahramanmaras provinces of Turkey. All the data were processed using Nvivo 9 software. The results indicate that…
Principal Bundles on the Projective Line
V B Mehta; S Subramanian
2002-08-01
We classify principal G-bundles on the projective line over an arbitrary field k of characteristic ≠ 2 or 3, where G is a reductive group. If such a bundle is trivial at a k-rational point, then the structure group can be reduced to a maximal torus.
Managerial Leadership and the Effective Principal.
Yukl, Gary
To help relate management ideas and knowledge to educational administration, the author reviews the major theories and findings from the last 20 years on managerial leadership and discusses their relevance for school principals. He first summarizes findings from three approaches: the traits approach, emphasizing managerial motivation and skills;…
Principal Connection / Amazon and the Whole Teacher
Hoerr, Thomas R.
2015-01-01
A recent controversy over Amazon's culture has strong implications for the whole child approach, and it offers powerful lessons for principals. A significant difference between the culture of so many businesses today and the culture at good schools is that in good schools, the welfare of the employees is very important. Student success is the…
Conceptualizing Social Justice: Interviews with Principals
Wang, Fei
2015-01-01
Purpose: Today, as the understanding of diversity is further expanded, the meaning of social justice becomes even more complicated, if not confusing. The purpose of this paper is to explore how school principals with social justice commitment understand and perceive social justice in their leadership practices. Design/methodology/approach: A…
Principal component analysis of phenolic acid spectra
Phenolic acids are common plant metabolites that exhibit bioactive properties and have applications in functional food and animal feed formulations. The ultraviolet (UV) and infrared (IR) spectra of four closely related phenolic acid structures were evaluated by principal component analysis (PCA) to...
Principal component analysis of psoriasis lesions images
Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær
2003-01-01
A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seem to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...
An Exploration of Principal Instructional Technology Leadership
Townsend, LaTricia Walker
2013-01-01
Nationwide the demand for schools to incorporate technology into their educational programs is great. In response, North Carolina developed the IMPACT model in 2003 to provide a comprehensive model for technology integration in the state. The model is aligned to national educational technology standards for teachers, students, and principals.…
Principal component analysis of symmetric fuzzy data
Giordani, Paolo; Kiers, Henk A.L.
2004-01-01
Principal Component Analysis (PCA) is a well-known tool often used for the exploratory analysis of a numerical data set. Here an extension of classical PCA is proposed, which deals with fuzzy data (in short PCAF), where the elementary datum cannot be recognized exactly by a specific number but by a
The Role of Principals in Politics.
Yingling, Walter S.
This talk uses humor to draw attention to the principal's responsibility and potential in promoting adequate education legislation. Knowledge of the issues in current legislation, contact with legislators, and organized action by administrators' associations are among the topics commented on. (PGD)
The Principal Kids Love To Hug.
Collins, Patrick
2000-01-01
David Nufer, Alaska's National Distinguished Principal for 1999, uses collaboration to create a family atmosphere at Finger Lake (Alaska) Elementary School. He turned the scheduling process over to teachers, involved teachers and parents in implementing a two-track system of mixed and single-age classrooms, brought in senior citizens to supplement…
The Principal's Mind-Set for Data
Fox, Dennis
2013-01-01
Is there a school leader anywhere who hasn't been directed, or at least encouraged, to "analyze the data" and practice what has been termed "data-driven decision-making"? Today's principal is expected to be able to skillfully collect, organize, analyze, interpret and use a variety of data in order to improve instruction, services and programs for…
Principal normal indicatrices of closed space curves
Røgen, Peter
1999-01-01
A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...
Elementary Teachers' Perceptions of Elementary Principals' Effectiveness
Fridenvalds, Kriss R.
2012-01-01
This dissertation examined the beliefs of elementary teachers to determine if their perceptions of effective principal leadership align to transformational leadership theory vis-a-vis the Educational Leadership Policy Standards (ELPS). A phenomenological, single-case study approach was utilized by means of a mixed-methodological, Web-based survey,…
Autocrats, Bureaucrats, and Buffoons: Images of Principals.
Glanz, Jerry
1998-01-01
A content analysis of over 35 American motion pictures and television sitcoms since the 1950s showed principals most often portrayed as autocrats, bureaucrats, or buffoons. Sometimes, as in the TV movie "Kidz in the Woods," a single show depicts all three characteristics. Promoting instructional leadership and an ethic of caring among…
The Principal as Chief Executive Officer.
Dubin, Andrew E., Ed.
This book was predicated on the idea that an effective principal must be proactive in decision making and have available appropriate information sources to make good decisions. Each chapter analyzes decision making from the perspective of different professionals in education and business management. Through the use of personal accounts and case…
Burnout And Lifestyle Of Principals And Entrepreneurs
Jasna Lavrenčič
2014-12-01
Full Text Available Research Question (RQ): What kind of lifestyle do principals and entrepreneurs lead? Does the lifestyle of principals and entrepreneurs influence burnout? Purpose: To find out, based on the results of a questionnaire, what kind of lifestyle the two researched groups lead, and whether lifestyle has an influence on the occurrence of burnout. Method: We used the method of data collection by questionnaire. Acquired data were analyzed using SPSS, with descriptive and inferential statistics. Results: Results showed that both groups lead a similar lifestyle and that lifestyle influences burnout for principals as well as entrepreneurs. Organization: School principals and entrepreneurs are the heads of individual organizations or companies, the goal of which is success. To be successful in their work, they must adapt their lifestyle, which can be healthy or unhealthy. If their lifestyle is unhealthy, it can lead to burnout. Society: With the results of the questionnaire we aim to answer the question about the lifestyle of both groups and its influence on the occurrence of burnout. Originality: The study of lifestyle and the occurrence of burnout in these two groups is the first study in this area. Limitations/Future Research: In continuation, the research groups could be studied within the field of effort physiology, tracking certain haematological parameters such as cholesterol, blood sugar and stress hormones (adrenaline, noradrenaline, cortisol). Thus, an even more in-depth investigation of the connection between lifestyle and burnout could be carried out.
The Principal's Playbook: Tackling School Improvement
Protheroe, Nancy
2010-01-01
"The Principal's Playbook: Tackling School Improvement" brings together the best thinking on successful schools and classrooms to help school administrators engage their faculty in discussion about effective school improvement strategies. Designed to support both school improvement efforts and professional development, each chapter includes…
Primary School Principals' Experiences with Smartphone Apps
Çakir, Rahman; Aktay, Sayim
2016-01-01
Smartphones are not just pieces of hardware; they also incorporate software features such as communication systems. The aim of this study is to examine primary school principals' experiences with smartphone applications. To shed light on this subject, a qualitative research approach was used. Criterion sampling has been intentionally…
Probabilistic Principal Component Analysis for Metabolomic Data.
Nyamundanda, Gift
2010-11-23
Abstract Background Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results Here, probabilistic principal component analysis (PPCA) which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight to the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
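A minimal sketch of the probabilistic PCA model underlying PPCA/PPCCA, using the closed-form maximum-likelihood solution of Tipping and Bishop (Python/numpy; the synthetic data and the choice of q below are illustrative, and this is not the authors' MetabolAnalyze implementation):

import numpy as np

def ppca_ml(X, q):
    # Closed-form ML estimates for probabilistic PCA:
    #   x = W z + mu + eps,  z ~ N(0, I_q),  eps ~ N(0, sigma2 * I_d)
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)                 # d x d sample covariance
    evals, evecs = np.linalg.eigh(S)                 # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]       # reorder to descending
    sigma2 = evals[q:].mean()                        # ML noise variance: mean of the discarded eigenvalues
    W = evecs[:, :q] @ np.diag(np.sqrt(np.maximum(evals[:q] - sigma2, 0.0)))
    return W, mu, sigma2

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))                                    # two latent factors
X = Z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
W, mu, sigma2 = ppca_ml(X, q=2)
print("estimated noise variance:", round(float(sigma2), 4))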
Technology Leadership Conditions among Nebraska School Principals
Curnyn, Molly A.
2013-01-01
As visionary leaders, school administrators are responsible for leading their schools into the 21st century by integrating technology to enhance learning and teaching. As technology leaders, principals must apply rigorous thought into the overall role that technology plays in the enhancement of student learning. Leveraging technology will assist…
California Testing: How Principals Choose Priorities.
Bushman, James; Goodman, Greg; Brown-Welty, Sharon; Dorn, Shelly
2001-01-01
The Central Valley (California) Educational Research Consortium asked 118 area principals what they were doing to improve their lowest achieving students' education. Respondents are focusing on individualizing instruction, aligning curricula to standards, advocating teaching to standards and tests, promoting new curriculum methodologies, and…
Platos, Jan
2008-01-01
Today there are many universal compression algorithms, but in most cases specific data are better served by a specific algorithm: JPEG for images, MPEG for movies, etc. For textual documents there are special methods based on the PPM algorithm, or methods with non-character access, e.g. word-based compression. In the past, several papers describing variants of word-based compression using Huffman encoding or the LZW method were published. The subject of this paper is the description of a word-based compression variant based on the LZ77 algorithm. The LZ77 algorithm and its modifications are described, as are various ways of implementing the sliding window and various possibilities for output encoding. This paper also includes the implementation of an experimental application, testing of its efficiency, and finding the best combination of all parts of the LZ77 coder in order to achieve the best compression ratio. In conclusion there is a comparison of this implemented application wi...
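To make the idea of word-based LZ77 concrete, the following Python sketch tokenizes text into words and emits (offset, length, literal) triples over a word-level sliding window. The window size, the crude whitespace tokenizer, and the absence of any output encoding are simplifications for illustration, not the coder evaluated in the paper.

def lz77_words(text, window=4096, max_len=32):
    tokens = text.split()                        # crude word tokenizer
    out, i = [], 0
    while i < len(tokens):
        start = max(0, i - window)
        best_off, best_len = 0, 0
        for j in range(start, i):                # search the sliding window
            k = 0
            while (k < max_len and i + k < len(tokens)
                   and tokens[j + k] == tokens[i + k]):
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        nxt = tokens[i + best_len] if i + best_len < len(tokens) else ""
        out.append((best_off, best_len, nxt))    # (offset, length, literal word)
        i += best_len + 1
    return out

The triples would then be fed to an entropy coder (Huffman, arithmetic, or similar) to obtain the actual compressed stream.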
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
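As one illustration of projected-gradient maximum-likelihood reconstruction, the Python sketch below takes a plain gradient step on the log-likelihood and projects the iterate back onto the set of density matrices (Hermitian, positive semidefinite, unit trace). The step size, POVM and count data are placeholders, and the accelerated scheme of the paper is not reproduced.

import numpy as np

def project_to_density_matrix(H):
    """Eigen-project a Hermitian matrix onto {rho >= 0, tr(rho) = 1}."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]                          # project eigenvalues onto the probability simplex
    css = np.cumsum(u)
    rho_idx = np.nonzero(u - (css - 1.0) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho_idx] - 1.0) / (rho_idx + 1)
    w_proj = np.maximum(w - tau, 0.0)
    return (V * w_proj) @ V.conj().T

def mle_tomography(povm, counts, dim, n_iter=500, step=0.05):
    """povm: list of Hermitian measurement operators; counts: relative frequencies."""
    rho = np.eye(dim) / dim
    for _ in range(n_iter):
        probs = np.array([np.real(np.trace(E @ rho)) for E in povm])
        grad = sum(f / max(p, 1e-12) * E for f, p, E in zip(counts, probs, povm))
        rho = project_to_density_matrix(rho + step * grad)
    return rho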
Stochastic convex sparse principal component analysis.
Baytas, Inci M; Lin, Kaixiang; Wang, Fei; Jain, Anil K; Zhou, Jiayu
2016-12-01
Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, there is an obvious disadvantage of traditional PCA when it is applied to analyze data where interpretability is important. In applications, where the features have some physical meanings, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. The sparse PCA can be formulated as an ℓ1 regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. Stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to the high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
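A deterministic baseline helps illustrate what the stochastic scheme accelerates: the Python sketch below extracts one l1-sparsified leading component by soft-thresholded, power-style iterations. It is a generic sparse-PCA heuristic, not the variance-reduced Cvx-SPCA algorithm of the paper.

import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_pc(X, lam=0.1, n_iter=200):
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / len(Xc)                      # sample covariance
    v = np.linalg.eigh(S)[1][:, -1]              # dense leading PC as initialisation
    for _ in range(n_iter):
        v = soft_threshold(S @ v, lam)           # gradient-like step followed by the l1 prox
        nrm = np.linalg.norm(v)
        if nrm == 0:                             # lam too large: all loadings shrunk to zero
            break
        v /= nrm
    return v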
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sam
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously
Decomposition of spectra using maximum autocorrelation factors
Larsen, Rasmus
2001-01-01
This paper addresses the problem of generating a low dimensional representation of the variation present in a set of spectra, e.g. reflection spectra recorded from a series of objects. The resulting low dimensional description may subsequently be input through variable selection schemes into classification or regression type analyses. A featured method for low dimensional representation of multivariate datasets is Hotelling's principal components transform. We will extend the use of principal components analysis by incorporating new information into the algorithm. This new information consists… …Fourier decomposition these new variables are located in frequency as well as in wavelength. The proposed algorithm is tested on 100 samples of NIR spectra of wheat.
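For readers unfamiliar with maximum autocorrelation factors, the following Python sketch solves the generalised eigenproblem between the covariance of lag-one differences and the total covariance. The choice of ordering (here simply the row order of the data matrix) and the absence of regularisation are assumptions made for illustration, not details taken from the paper.

import numpy as np
from scipy.linalg import eigh

def maf(Z):
    """Z: (n_ordered_observations, n_variables); rows assumed to be in their natural order."""
    Zc = Z - Z.mean(axis=0)
    S = np.cov(Zc, rowvar=False)                      # total covariance (assumed nonsingular)
    Sd = np.cov(np.diff(Zc, axis=0), rowvar=False)    # covariance of lag-one differences
    evals, W = eigh(Sd, S)                            # generalised eigenproblem Sd w = lambda S w
    return Zc @ W, W                                  # factors and loadings

Small generalised eigenvalues correspond to directions of high autocorrelation, so the ascending order returned by eigh already places the most autocorrelated factors first.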
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class is associated with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for an arbitrarily given statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension of the classical theory of the maximum likelihood method under the maximum entropy model, in terms of the Boltzmann-Gibbs-Shannon entropy, is given. We discuss the duality in detail for Tsallis entropy as a typical example.
Morphological Transform for Image Compression
Luis Pastor Sanchez Fernandez
2008-05-01
A new method for image compression based on morphological associative memories (MAMs) is presented. We used the MAM to implement a new image transform and applied it at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered as a subclass of morphological neural networks. The morphological transform (MT) presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using some transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is the processing speed, whereas the compression rate and the signal-to-noise ratio are competitive to conventional transforms.
Compressive Sensing in Communication Systems
Fyhn, Karsten
2013-01-01
Wireless communication is omnipresent today, but this development has led to frequency spectrum becoming a limited resource. Furthermore, wireless devices become more and more energy-limited, due to the demand for continual wireless communication of higher and higher amounts of information. The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...
Compressive Sensing for MIMO Radar
Yu, Yao; Poor, H Vincent
2009-01-01
Multiple-input multiple-output (MIMO) radar systems have been shown to achieve superior resolution as compared to traditional radar systems with the same number of transmit and receive antennas. This paper considers a distributed MIMO radar scenario, in which each transmit element is a node in a wireless network, and investigates the use of compressive sampling for direction-of-arrival (DOA) estimation. According to the theory of compressive sampling, a signal that is sparse in some domain can be recovered based on far fewer samples than required by the Nyquist sampling theorem. The DOA of targets form a sparse vector in the angle space, and therefore, compressive sampling can be applied for DOA estimation. The proposed approach achieves the superior resolution of MIMO radar with far fewer samples than other approaches. This is particularly useful in a distributed scenario, in which the results at each receive node need to be transmitted to a fusion center for further processing.
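A toy illustration of sparse DOA recovery: the Python sketch below builds a steering dictionary over an angle grid for a single uniform linear array and recovers two sources with orthogonal matching pursuit. The distributed MIMO geometry, waveforms, and measurement model of the paper are not represented; array size, grid, and source angles are invented for the example.

import numpy as np

def steering_matrix(n_sensors, angles_deg, d=0.5):
    k = np.arange(n_sensors)[:, None]
    theta = np.deg2rad(angles_deg)[None, :]
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def omp(A, y, n_sources):
    residual, support = y.copy(), []
    for _ in range(n_sources):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)
        residual = y - As @ coef
    return sorted(support)

grid = np.arange(-90, 90.5, 0.5)                 # candidate DOAs in degrees
A = steering_matrix(16, grid)
true_idx = [np.argmin(np.abs(grid - a)) for a in (-20.0, 35.0)]
x = np.zeros(len(grid), complex); x[true_idx] = [1.0, 0.7]
y = A @ x + 0.01 * (np.random.randn(16) + 1j * np.random.randn(16))
print([grid[i] for i in omp(A, y, 2)])           # estimated DOAs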
Compressive Sensing with Optical Chaos
Rontani, D.; Choi, D.; Chang, C.-Y.; Locquet, A.; Citrin, D. S.
2016-12-01
Compressive sensing (CS) is a technique to sample a sparse signal below the Nyquist-Shannon limit, yet still enabling its reconstruction. As such, CS permits an extremely parsimonious way to store and transmit large and important classes of signals and images that would be far more data intensive should they be sampled following the prescription of the Nyquist-Shannon theorem. CS has found applications as diverse as seismology and biomedical imaging. In this work, we use actual optical signals generated from temporal intensity chaos from external-cavity semiconductor lasers (ECSL) to construct the sensing matrix that is employed to compress a sparse signal. The chaotic time series produced having their relevant dynamics on the 100 ps timescale, our results open the way to ultrahigh-speed compression of sparse signals.
Compressive behavior of fine sand.
Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)
2010-04-01
The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain-rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.
Instability of ties in compression
Buch-Hansen, Thomas Cornelius
2013-01-01
Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall in a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the loadbearing capacity is derived from instability equilibrium equations. Most of them are iterative, since… …-connectors in cavity walls was developed. The method takes into account constraint conditions limiting the free length of the wall tie, and the instability in case of pure compression which gives an optimal load bearing capacity. The model is illustrated with examples from practice.
徐文成; 陈伟成; 张书敏; 罗爱平; 刘颂豪
2002-01-01
In this paper, we report on the enhanced pulse compression due to the interaction between the positive third-order dispersion (TOD) and the nonlinear effect (cross-phase modulation effect) in birefringent fibres. Polarization soliton compression along the slow axis can be enhanced in a birefringent fibre with positive third-order dispersion, while the polarization soliton compression along the fast axis can be enhanced in the fibre with negative third-order dispersion. Moreover, there is an optimal third-order dispersion parameter for obtaining the optimal pulse compression. Redshifted initial chirp is helpful to the pulse compression, while blueshifted chirp is detrimental to the pulse compression. There is also an optimal chirp parameter to reach maximum pulse compression. The optimal pulse compression for TOD parameters under different N-order solitons is also found.
Fast, efficient lossless data compression
Ross, Douglas
1991-01-01
This paper presents lossless data compression and decompression algorithms which can be easily implemented in software. The algorithms can be partitioned into their fundamental parts which can be implemented at various stages within a data acquisition system. This allows for efficient integration of these functions into systems at the stage where they are most applicable. The algorithms were coded in Forth to run on a Silicon Composers Single Board Computer (SBC) using the Harris RTX2000 Forth processor. The algorithms require very few system resources and operate very fast. The performance of the algorithms with the RTX enables real time data compression and decompression to be implemented for a wide range of applications.
[Vascular compression of the duodenum].
Acosta, B; Guachalla, G; Martínez, C; Felce, S; Ledezma, G
1991-01-01
The acute vascular compression of the duodenum is a well-recognized clinical entity, characterized by recurrent vomiting, abdominal distention, weight loss, and post-prandial distress. The compression is considered to result from the angle formed between the superior mesenteric vessels (or sometimes one of their first two branches) and the vertebrae and paravertebral muscles; the syndrome is seen when the angle between the superior mesenteric vessels and the aorta is lower than 18 degrees. Duodenojejunostomy is the best treatment, as it was in our patient.
GPU-accelerated compressive holography.
Endo, Yutaka; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi
2016-04-18
In this paper, we show fast signal reconstruction for compressive holography using a graphics processing unit (GPU). We implemented a fast iterative shrinkage-thresholding algorithm on a GPU to solve the ℓ1 and total variation (TV) regularized problems that are typically used in compressive holography. Since the algorithm is highly parallel, GPUs can compute it efficiently by data-parallel computing. For better performance, our implementation exploits the structure of the measurement matrix to compute the matrix multiplications. The results show that GPU-based implementation is about 20 times faster than CPU-based implementation.
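The fast iterative shrinkage-thresholding algorithm mentioned in the abstract can be sketched generically as follows (Python/NumPy, with a plain dense matrix as the operator). The GPU data-parallel implementation, the total-variation variant, and the structured measurement matrix exploited in the paper are not reproduced here.

import numpy as np

def fista_l1(A, y, lam, n_iter=200):
    """Minimise 0.5*||A x - y||^2 + lam*||x||_1 by FISTA."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        step = z - grad / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)   # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)                         # momentum extrapolation
        x, t = x_new, t_new
    return x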
Compressing the Inert Doublet Model
Blinov, Nikita; Morrissey, David E; de la Puente, Alejandro
2015-01-01
The Inert Doublet Model relies on a discrete symmetry to prevent couplings of the new scalars to Standard Model fermions. This stabilizes the lightest inert state, which can then contribute to the observed dark matter density. In the presence of additional approximate symmetries, the resulting spectrum of exotic scalars can be compressed. Here, we study the phenomenological and cosmological implications of this scenario. We derive new limits on the compressed Inert Doublet Model from LEP, and outline the prospects for exclusion and discovery of this model at dark matter experiments, the LHC, and future colliders.
Weinwurm, Marcus; Appelbe, Brian; Skidmore, Jonathan; Bland, Simon; Chittenden, Jeremy
2012-10-01
Isentropic Compression Experiments on pulsed power machines in the field of High Energy Density Physics have gained interest in recent years. We describe a method of isentropically compressing cryogenic Deuterium inside a metal liner. Pulse shaping was performed by solving Kidder's homogeneous isentropic compression for cylindrical geometry and extending it to an arbitrary Equation of State. The obtained pulse shape enables us to simulate a cylindrically convergent ramp wave, which quasi-isentropically compresses the Deuterium fill to densities much higher than achievable by using a standard pulse. The effect of Rayleigh-Taylor instabilities upon the peak density achieved is evaluated using the resistive magneto-hydrodynamics code Gorgon for a maximum current of 25 MA. Therefore, isentropic liner implosions are a promising technique for recreating the conditions present in the interiors of gas giants. We applied this technique to the High-Gain Magnetized Liner Inertial Fusion (MAGLIF) scheme [1]. There a metal liner is filled with DT gas surrounded by a layer of DT ice. We show how the current pulse can be shaped in order to isentropically compress the DT ice layer. By doing so, we keep the fuel at low temperature. This maximises the compression of the DT ice layer, and increases rho-r at stagnation. Burn wave propagation in the isentropically compressed fuel is compared to propagation in fuel compressed by a standard current pulse. [1] S.A. Slutz and R. A. Vesey, Phys. Rev. Lett. 108, 025003 (2012)
Evaluation of adhesive and compressive strength of glass ionomer cements.
Ramashanker; Singh, Raghuwar D; Chand, Pooran; Jurel, Sunit Km; Tripathi, Shuchi
2011-12-01
The aim of the study was to assess, compare and evaluate the adhesive strength and compressive strength of different brands of glass ionomer cements to a ceramometal alloy. (A) Glass ionomer cements: GC Fuji II (GC Corporation, Tokyo), Chem Flex (Dentsply DeTrey, Germany), Glass ionomer FX (Shofu-11, Japan), MR dental (MR dental suppliers Pvt Ltd, England). (B) Ceramometal alloy (Ni-Cr: Wiron 99; Bego, Bremen, Germany). (C) Cold cure acrylic resin. (E) Temperature cum humidity control chamber. (F) Instron Universal Testing Machine. Four different types of Glass ionomer cements were used in the study. From each type of the Glass ionomer cements, 15 specimens each were made to evaluate the compressive strength and adhesive strength, respectively. The 15 specimens were further divided into three subgroups of five specimens. For compressive strength, specimens were tested at 2, 4 and 12 h by using Instron Universal Testing Machine. To evaluate the adhesive strength, specimens were surface treated with diamond bur, silicone carbide bur and sandblasting and tested under Instron Universal Testing Machine. It was concluded from the study that the compressive strength as well as the adhesive bond strength of MR dental glass ionomer cement with a ceramometal alloy was found to be the maximum compared with the other glass ionomer cements. Sandblasting surface treatment of ceramometal alloy was found to be comparatively more effective for adhesive bond strength between alloy and glass ionomer cement.
COPD phenotype description using principal components analysis
Roy, Kay; Smith, Jacky; Kolsum, Umme
2009-01-01
BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance) … associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi-dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...
Principals' transformational leadership and teachers' collective efficacy.
Dussault, Marc; Payette, Daniel; Leroux, Mathieu
2008-04-01
The study was designed to test the relationship of principals' transformational, transactional, and laissez-faire leadership with teachers' collective efficacy. Bandura's theory of efficacy applied to the group and Bass's transformational leadership theory were used as the theoretical framework. Participants included 487 French Canadian teachers from 40 public high schools. As expected, there were positive and significant correlations between principals' transformational and transactional leadership and teachers' collective efficacy. Also, there was a negative and significant correlation between laissez-faire leadership and teachers' collective efficacy. Moreover, regression analysis showed transformational leadership significantly enhanced the predictive capabilities of transactional leadership on teachers' collective efficacy. These results confirm the importance of leadership to predict collective efficacy and, by doing so, strengthen Bass's theory of leadership.
Wavelet and wavelet packet compression of electrocardiograms.
Hilton, M L
1997-05-01
Wavelets and wavelet packets have recently emerged as powerful tools for signal compression. Wavelet and wavelet packet-based compression algorithms based on embedded zerotree wavelet (EZW) coding are developed for electrocardiogram (ECG) signals, and eight different wavelets are evaluated for their ability to compress Holter ECG data. Pilot data from a blind evaluation of compressed ECG's by cardiologists suggest that the clinically useful information present in original ECG signals is preserved by 8:1 compression, and in most cases 16:1 compressed ECG's are clinically useful.
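A far simpler transform-and-threshold sketch conveys the flavour of wavelet ECG compression. It assumes the PyWavelets package and a hard-threshold retention rule; the embedded zerotree (EZW) coder evaluated in the paper is considerably more sophisticated, and the wavelet, level, and retention fraction below are arbitrary choices.

import numpy as np
import pywt

def compress_ecg(signal, wavelet="db4", level=5, keep=0.125):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate(coeffs)
    thresh = np.quantile(np.abs(flat), 1.0 - keep)      # keep roughly the largest 12.5% of coefficients
    coeffs_t = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
    recon = pywt.waverec(coeffs_t, wavelet)[: len(signal)]
    prd = 100 * np.linalg.norm(signal - recon) / np.linalg.norm(signal)  # percent RMS difference
    return coeffs_t, recon, prd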
D'Angelo, J. A.; Zodrow, E.L.; Mastalerz, Maria
2012-01-01
Nearly all of the spectrochemical studies involving Carboniferous foliage of seed-ferns are based on a limited number of pinnules, mainly compressions. In contrast, in this paper we illustrate working with a larger pinnate segment, i.e., a 22-cm long neuropteroid specimen, compression-preserved with cuticle, the compression map. The objective is to study preservation variability on a larger scale, where observation of transparency/opacity of constituent pinnules is used as a first approximation for assessing the degree of pinnule coalification/fossilization. Spectrochemical methods by Fourier transform infrared spectrometry furnish semi-quantitative data for principal component analysis. The compression map shows a high degree of preservation variability, which ranges from comparatively more coalified pinnules to less coalified pinnules that resemble fossilized-cuticles, noting that the pinnule midveins are preserved more like fossilized-cuticles. A general overall trend of coalified pinnules towards fossilized-cuticles, i.e., variable chemistry, is inferred from the semi-quantitative FTIR data as higher contents of aromatic compounds occur in the visually more opaque upper location of the compression map. The latter also shows a higher condensation of the aromatic nuclei along with some variation in both ring size and degree of aromatic substitution. From principal component analysis we infer correspondence between transparency/opacity observation and chemical information which correlates to a varying degree with fossilization/coalification among pinnules. © 2011 Elsevier B.V.
Shockwave compression of Ar gas at several initial densities
Dattelbaum, Dana M.; Goodwin, Peter M.; Garcia, Daniel B.; Gustavsen, Richard L.; Lang, John M.; Aslam, Tariq D.; Sheffield, Stephen A.; Gibson, Lloyd L.; Morris, John S.
2017-01-01
Experimental data of the principal Hugoniot locus of variable density gas-phase noble and molecular gases are rare. The majority of shock Hugoniot data is either from shock tube experiments on low-pressure gases or from plate impact experiments on cryogenic, liquefied gases. In both cases, physics regarding shock compressibility, thresholds for the on-set of shock-driven ionization, and even dissociation chemistry are difficult to infer for gases at intermediate densities. We have developed an experimental target design for gas gun-driven plate impact experiments on noble gases at initial pressures between 200-1000 psi. Using optical velocimetry, we are able to directly determine both the shock and particle velocities of the gas on the principal Hugoniot locus, as well as clearly differentiate ionization thresholds. The target design also results in multiply shocking the gas in a quasi-isentropic fashion yielding off-Hugoniot compression data. We describe the results of a series of plate impact experiments on Ar with starting densities between 0.02-0.05 g/cm3 at room temperature. Furthermore, by coupling optical fibers to the targets, we have measured the time-resolved optical emission from the shocked gas using a spectrometer coupled to an optical streak camera to spectrally-resolve the emission, and with a 5-color optical pyrometer for temperature determination.
Quantum principal bundles and their characteristic classes
Durdevic, M
1996-01-01
A brief exposition of the general theory of characteristic classes of quantum principal bundles is given. The theory of quantum characteristic classes incorporates ideas of classical Weil theory into the conceptual framework of non-commutative differential geometry. A purely cohomological interpretation of the Weil homomorphism is given, together with a standard geometrical interpretation via quantum invariant polynomials. A natural spectral sequence is described. Some quantum phenomena appearing in the formalism are discussed.
Principal Portfolios: Recasting the Efficient Frontier
M. Hossein Partovi; Michael Caputo
2004-01-01
A new method of analyzing the efficient portfolio problem under the assumption that short sales are allowed is presented. It is based on the remarkable finding that the original asset set can be reorganized as a set of uncorrelated portfolios, here named principal portfolios. The original problem of portfolio selection from the existing, correlated assets is thereby traded for the reduced problem of choosing from a set of uncorrelated portfolios. These portfolios constitute a new investment e...
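The core construction, uncorrelated portfolios obtained from the eigen-decomposition of the return covariance matrix, can be illustrated in a few lines of Python. The return data below are simulated and the efficient-frontier analysis of the paper is not reproduced.

import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=(1000, 6))    # hypothetical daily returns of 6 assets
Sigma = np.cov(returns, rowvar=False)
eigval, eigvec = np.linalg.eigh(Sigma)                # columns of eigvec are the principal portfolios

principal_returns = returns @ eigvec                  # returns of the principal portfolios
# Their sample covariance is (numerically) diagonal, i.e. the portfolios are uncorrelated:
print(np.round(np.cov(principal_returns, rowvar=False), 6))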
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
Maxwell's Demon and Data Compression
Hosoya, Akio; Shikano, Yutaka
2011-01-01
In an asymmetric Szilard engine model of Maxwell's demon, we show the equivalence between information theoretical and thermodynamic entropies when the demon erases information optimally. The work gain by the engine can be exactly canceled out by the work necessary to reset demon's memory after optimal data compression a la Shannon before the erasure.
Grid-free compressive beamforming
Xenaki, Angeliki; Gerstoft, Peter
2015-01-01
The direction-of-arrival (DOA) estimation problem involves the localization of a few sources from a limited number of observations on an array of sensors, thus it can be formulated as a sparse signal reconstruction problem and solved efficiently with compressive sensing (CS) to achieve high...
LIDAR data compression using wavelets
Pradhan, B.; Mansor, Shattri; Ramli, Abdul Rahman; Mohamed Sharif, Abdul Rashid B.; Sandeep, K.
2005-10-01
The lifting scheme has been found to be a flexible method for constructing scalar wavelets with desirable properties. In this paper, it is extended to LIDAR data compression. A newly developed data compression approach to approximate the LIDAR surface with a series of non-overlapping triangles has been presented. Generally, a Triangulated Irregular Network (TIN) is the most common form of digital surface model, consisting of elevation values with x, y coordinates that make up triangles. But over the years the TIN data representation has become a case in point for many researchers due to its large data size. Compression of TIN is needed for efficient management of large data and good surface visualization. This approach covers the following steps: First, by using a Delaunay triangulation, an efficient algorithm is developed to generate TIN, which forms the terrain from an arbitrary set of data. A new interpolation wavelet filter for TIN has been applied in two steps, namely splitting and elevation. In the splitting step, a triangle has been divided into several sub-triangles and the elevation step has been used to 'modify' the point values (point coordinates for geometry) after the splitting. Then, this data set is compressed at the desired locations by using second generation wavelets. The quality of geographical surface representation after using the proposed technique is compared with the original LIDAR data. The results show that this method can be used for a significant reduction of the data set.
Compressed Blind De-convolution
Saligrama, V
2009-01-01
Suppose the signal x is realized by driving a k-sparse signal u through an arbitrary unknown stable discrete-linear time invariant system H. These types of processes arise naturally in Reflection Seismology. In this paper we are interested in several problems: (a) Blind-Deconvolution: Can we recover both the filter $H$ and the sparse signal $u$ from noisy measurements? (b) Compressive Sensing: Is x compressible in the conventional sense of compressed sensing? Namely, can x, u and H be reconstructed from a sparse set of measurements. We develop novel L1 minimization methods to solve both cases and establish sufficient conditions for exact recovery for the case when the unknown system H is auto-regressive (i.e. all pole) of a known order. In the compressed sensing/sampling setting it turns out that both H and x can be reconstructed from O(k log(n)) measurements under certain technical conditions on the support structure of u. Our main idea is to pass x through a linear time invariant system G and collect O(k lo...
Compressing spatio-temporal trajectories
Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian
2009-01-01
A trajectory is a sequence of locations, each associated with a timestamp, describing the movement of a point. Trajectory data is becoming increasingly available and the size of recorded trajectories is getting larger. In this paper we study the problem of compressing planar trajectories such tha...
Range Compressed Holographic Aperture Ladar
2017-06-01
Keywords: digital holography, laser, active imaging, remote sensing, laser imaging. …slow speed tunable lasers, while relaxing the need to precisely track the transceiver or target motion. In the following section we describe a scenario… …contrast targets. As shown in Figure 28, augmenting holographic ladar with range compression relaxes the dependence of image reconstruction on…
Compressive passive millimeter wave imager
Gopalsami, Nachappa; Liao, Shaolin; Elmer, Thomas W; Koehl, Eugene R; Heifetz, Alexander; Raptis, Apostolos C
2015-01-27
A compressive scanning approach for millimeter wave imaging and sensing. A Hadamard mask is positioned to receive millimeter waves from an object to be imaged. A subset of the full set of Hadamard acquisitions is sampled. The subset is used to reconstruct an image representing the object.
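A toy version of the acquisition scheme: sample a subset of Hadamard projections of a sparse scene and form a crude least-norm estimate (Python with NumPy and SciPy). The scene and sampling ratio are invented for illustration, and a sparsity-promoting reconstruction would replace the last step in practice.

import numpy as np
from scipy.linalg import hadamard

n = 64                                            # 8x8 scene flattened to 64 pixels (power of 2)
scene = np.zeros(n); scene[[5, 20, 41]] = [1.0, 0.5, 0.8]   # hypothetical sparse scene

H = hadamard(n).astype(float)
rng = np.random.default_rng(0)
rows = rng.choice(n, size=n // 4, replace=False)  # keep 25% of the Hadamard patterns
A = H[rows]
y = A @ scene                                     # compressive measurements

x_backproj = A.T @ y / n                          # least-norm estimate (Hadamard rows are orthogonal)
print(np.argsort(np.abs(x_backproj))[-3:])        # strongest pixels in the crude estimate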
The Teaching Principal: An Untenable Position or a Promising Model?
Newton, Paul M.; Wallin, Dawn
2013-01-01
This paper reports on an interpretive study that examined the role of the teaching principal, particularly as it relates to principals' moral and legal requirement to work as instructional leaders for student learning. A teaching principal is defined as a principal who has a "double load" or dual roles in teaching and administration…
19 CFR 113.33 - Corporations as principals.
2010-04-01
19 Customs Duties; Treasury; Customs Bonds; Principals and Sureties; § 113.33 Corporations as principals. (a) Name of corporation on the bonds. The name of a corporation executing a Customs bond as a principal may be printed...
Principal Role Changes and Implications for Principalship Candidates.
Whitaker, Kathryn S.
1999-01-01
A principal who exchanged jobs with a university professor explores changing principal roles, using a case-study approach. Principals' working world is characterized by overwhelming responsibilities, information perplexity, and emotional anxiety. Principals would appreciate intrinsic and extrinsic rewards, support networks, university-school…
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to searching for the distribution functions of physical quantities. MENT naturally takes into account the requirement of maximum entropy, the characteristics of the system, and the connection conditions. This makes it possible to apply MENT to the statistical description of closed and open systems. Examples in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium, are considered.
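A small worked example of the principle MENT relies on: among all distributions on {1, ..., 6} with a prescribed mean, the maximum entropy distribution has the exponential-family form p_i proportional to exp(lambda * x_i), with lambda fixed by the mean constraint. The Python sketch below solves for lambda numerically with SciPy; the support and target mean are arbitrary choices for illustration.

import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)
target_mean = 4.5

def mean_given_lambda(lam):
    p = np.exp(lam * x); p /= p.sum()
    return p @ x

lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -10, 10)
p = np.exp(lam * x); p /= p.sum()
print(np.round(p, 4), p @ x)                     # maxent probabilities and their mean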
19 CFR 114.23 - Maximum period.
2010-04-01
19 Customs Duties; Carnets; Processing of Carnets; § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.
Anglo-American views of Gavrilo Princip
Markovich Slobodan G.
2015-01-01
The paper deals with Western (Anglo-American) views on the Sarajevo assassination/attentat and Gavrilo Princip. Articles on the assassination and Princip in two leading quality dailies (The Times and The New York Times) have particularly been analysed as well as the views of leading historians and journalists who covered the subject including: R. G. D. Laffan, R. W. Seton-Watson, Winston Churchill, Sidney Fay, Bernadotte Schmitt, Rebecca West, A. J. P. Taylor, Vladimir Dedijer, Christopher Clark and Tim Butcher. In the West, the original general condemnation of the assassination and its main culprits was challenged when Rebecca West published her famous travelogue on Yugoslavia in 1941. Another Brit, the remarkable historian A. J. P. Taylor, had a much more positive view on the Sarajevo conspirators and blamed Germany and Austria-Hungary for the outbreak of the Great War. A turning point in Anglo-American perceptions was the publication of Vladimir Dedijer’s monumental book The Road to Sarajevo (1966), which humanised the main conspirators, a process initiated by R. West. Dedijer’s book was translated from English into all major Western languages and had an immediate impact on the understanding of the Sarajevo assassination. The rise of national antagonisms in Bosnia gradually alienated Princip from Bosnian Muslims and Croats, a process that began in the 1980s and was completed during the wars of the Yugoslav succession. Although all available sources clearly show that Princip, an ethnic Serb, gradually developed a broader Serbo-Croat and Yugoslav identity, he was ethnified and seen exclusively as a Serb by Bosnian Croats and Bosniaks and Western journalists in the 1990s. In the past century imagining Princip in Serbia and the West involved a whole spectrum of views. In interwar Anglo-American perceptions he was a fanatic and lunatic. He became humanised by Rebecca West (1941), A. J. P. Taylor showed understanding for his act (1956), he was fully…
Normalization of a binary correlation function and the problem of compressibility
Bulavyin, L A; Malomuzh, N P
2002-01-01
The paper is devoted to a thorough investigation of the normalization condition for the correlation function and to the analysis of the relation of this problem to the definitions of entropy and isothermal compressibility. It is shown that, for a system of large but finite size, keeping the contributions inversely proportional to the system volume is of principal importance. Owing to this, it is possible to satisfy the normalization conditions and to generalize the definitions of entropy and isothermal compressibility so as to avoid the appearance of contradictions.
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male and 48 female) from the skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 mm and 417.48 mm for right male and female femora, and 453.35 mm and 420.44 mm for left male and female femora, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female, while for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
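The demarking-point logic can be sketched as follows in Python: bones longer than (female mean + 3 SD) are classed as definitely male, and bones shorter than (male mean - 3 SD) as definitely female. Only the right-femur means are taken from the abstract; the standard deviations below are invented placeholders, so the resulting cut-offs will not match the published 476.70 mm and 379.99 mm values.

male_mean, male_sd = 451.81, 24.0      # mm; the SD is an assumed value, not from the abstract
female_mean, female_sd = 417.48, 20.0  # mm; the SD is an assumed value, not from the abstract

dp_definitely_male = female_mean + 3 * female_sd
dp_definitely_female = male_mean - 3 * male_sd

def classify(length_mm):
    if length_mm > dp_definitely_male:
        return "definitely male"
    if length_mm < dp_definitely_female:
        return "definitely female"
    return "indeterminate by maximum length alone"

print(classify(480.0), classify(375.0), classify(430.0))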
Short-pulse, compressed ion beams at the Neutralized Drift Compression Experiment
Seidl, P. A.; Barnard, J. J.; Davidson, R. C.; Friedman, A.; Gilson, E. P.; Grote, D.; Ji, Q.; Kaganovich, I. D.; Persaud, A.; Waldron, W. L.; Schenkel, T.
2016-05-01
We have commenced experiments with intense short pulses of ion beams on the Neutralized Drift Compression Experiment (NDCX-II) at Lawrence Berkeley National Laboratory, with 1-mm beam spot size within 2.5 ns full-width at half maximum. The ion kinetic energy is 1.2 MeV. To enable the short pulse duration and mm-scale focal spot radius, the beam is neutralized in a 1.5-meter-long drift compression section following the last accelerator cell. A short-focal-length solenoid focuses the beam in the presence of the volumetric plasma that is near the target. In the accelerator, the line-charge density increases due to the velocity ramp imparted on the beam bunch. The scientific topics to be explored are warm dense matter, the dynamics of radiation damage in materials, and intense beam and beam-plasma physics including select topics of relevance to the development of heavy-ion drivers for inertial fusion energy. Below the transition to melting, the short beam pulses offer an opportunity to study the multi-scale dynamics of radiation-induced damage in materials with pump-probe experiments, and to stabilize novel metastable phases of materials when short-pulse heating is followed by rapid quenching. First experiments used a lithium ion source; a new plasma-based helium ion source shows much greater charge delivered to the target.
Short-Pulse, Compressed Ion Beams at the Neutralized Drift Compression Experiment
Seidl, Peter A; Davidson, Ronald C; Friedman, Alex; Gilson, Erik P; Grote, David; Ji, Qing; Kaganovich, I D; Persaud, Arun; Waldron, William L; Schenkel, Thomas
2016-01-01
We have commenced experiments with intense short pulses of ion beams on the Neutralized Drift Compression Experiment (NDCX-II) at Lawrence Berkeley National Laboratory, with 1-mm beam spot size within 2.5 ns full-width at half maximum. The ion kinetic energy is 1.2 MeV. To enable the short pulse duration and mm-scale focal spot radius, the beam is neutralized in a 1.5-meter-long drift compression section following the last accelerator cell. A short-focal-length solenoid focuses the beam in the presence of the volumetric plasma that is near the target. In the accelerator, the line-charge density increases due to the velocity ramp imparted on the beam bunch. The scientific topics to be explored are warm dense matter, the dynamics of radiation damage in materials, and intense beam and beam-plasma physics including select topics of relevance to the development of heavy-ion drivers for inertial fusion energy. Below the transition to melting, the short beam pulses offer an opportunity to study the multi-scale dynam...
Dan WU; Feng-ping WU; Yan-ping CHEN
2009-06-01
The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflict across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.
Semantic Source Coding for Flexible Lossy Image Compression
Phoha, Shashi; Schmiedekamp, Mendel
2007-01-01
Semantic Source Coding for Lossy Video Compression investigates methods for Mission-oriented lossy image compression, by developing methods to use different compression levels for different portions...