WorldWideScience

Sample records for maximal principal compression

  1. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in patient monitoring. Quality control in ECG compression ensures reconstruction quality and clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, bit rate control (BRC) or error control (EC), was used to select the optimal principal components, eigenvectors and their quantization levels to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data (mitdb) and 60 normal and 30 diagnostic sets from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22% and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13% and 0.049 mV, respectively, were obtained. For mitdb record 117, reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published work on quality-controlled ECG compression.
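
    To make the PCA step above concrete, here is a minimal sketch in Python of compressing a matrix of extracted beats by keeping a few principal components. It is not the paper's implementation: beat detection, the BRC/EC quality loop, quantization and the delta/Huffman stage are all omitted, and every name below is illustrative.

        import numpy as np

        def pca_compress_beats(beats, n_components):
            """beats: (n_beats, beat_len) array of aligned, resampled beats."""
            mean = beats.mean(axis=0)
            centered = beats - mean
            # Eigenvectors of the beat covariance via SVD.
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]            # (n_components, beat_len)
            scores = centered @ basis.T          # principal-component scores
            return mean, basis, scores           # this is what would be encoded

        def pca_reconstruct(mean, basis, scores):
            return scores @ basis + mean

        # Toy usage: 50 noisy copies of a synthetic "beat".
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 200)
        beats = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal((50, 200))
        mean, basis, scores = pca_compress_beats(beats, n_components=3)
        recon = pca_reconstruct(mean, basis, scores)
        prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
        print(f"PRDN = {prdn:.2f}%")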

  2. Approaching maximal performance of longitudinal beam compression in induction accelerator drivers

    International Nuclear Information System (INIS)

    Mark, J.W.K.; Ho, D.D.M.; Brandon, S.T.; Chang, C.L.; Drobot, A.T.; Faltens, A.; Lee, E.P.; Krafft, G.A.

    1986-01-01

    Longitudinal beam compression occurs before final focus and fusion chamber beam transport and is a key process determining initial conditions for final focus hardware. Determining the limits for maximal performance of key accelerator components is an essential element of the effort to reduce driver costs. Studies directed towards defining the limits of final beam compression including considerations such as maximal available compression, effects of longitudinal dispersion and beam emittance, combining pulse-shaping with beam compression to reduce the total number of beam manipulators, etc., are given. Several possible techniques are illustrated for utilizing the beam compression process to provide the pulse shapes required by a number of targets. Without such capabilities to shape the pulse, an additional factor of two or so of beam energy would be required by the targets

  3. Approaching maximal performance of longitudinal beam compression in induction accelerator drivers

    International Nuclear Information System (INIS)

    Mark, J.W.K.; Ho, D.D.M.; Brandon, S.T.; Chang, C.L.; Drobot, A.T.; Faltens, A.; Lee, E.P.; Krafft, G.A.

    1986-01-01

    Longitudinal beam compression is an integral part of the US induction accelerator development effort for heavy ion fusion. Producing maximal performance for key accelerator components is an essential element of the effort to reduce driver costs. We outline here initial studies directed towards defining the limits of final beam compression including considerations such as: maximal available compression, effects of longitudinal dispersion and beam emittance, combining pulse-shaping with beam compression to reduce the total number of beam manipulations, etc. The use of higher ion charge state Z greater than or equal to 3 is likely to test the limits of the previously envisaged beam compression and final focus hardware. A more conservative approach is to use additional beamlets in final compression and focus. On the other end of the spectrum of choices, alternate approaches might consider new final focus with greater tolerances for systematic momentum and current variations. Development of such final focus concepts would also allow more compact (and hopefully cheaper) hardware packages where the previously separate processes of beam compression, pulse-shaping and final focus occur as partially combined and nearly concurrent beam manipulations

  4. Maximal dissipation and well-posedness for the compressible Euler system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard

    2014-01-01

    Vol. 16, No. 3 (2014), pp. 447-461. ISSN 1422-6928. EU Projects: European Commission (XE) 320078 - MATHEF. Keywords: maximal dissipation; compressible Euler system; weak solution. Subject RIV: BA - General Mathematics. Impact factor: 1.186, year: 2014. http://link.springer.com/article/10.1007/s00021-014-0163-8

  5. Compressive Online Robust Principal Component Analysis with Multiple Prior Information

    DEFF Research Database (Denmark)

    Van Luong, Huynh; Deligiannis, Nikos; Seiler, Jürgen

    -rank components. Unlike conventional batch RPCA, which processes all the data directly, our method considers a small set of measurements taken per data vector (frame). Moreover, our method incorporates multiple prior information signals, namely previous reconstructed frames, to improve the separation...... and thereafter, update the prior information for the next frame. Using experiments on synthetic data, we evaluate the separation performance of the proposed algorithm. In addition, we apply the proposed algorithm to online video foreground and background separation from compressive measurements. The results show...

  6. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
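
    Dragonfly itself works with 3D rotations and sparse photon patterns, but the structure of one expand-maximize-compress (EMC) iteration can be illustrated with a toy 1D analogue in which the hidden "orientation" is a cyclic shift and the data are Poisson counts. This is only a sketch of the algorithmic skeleton; all names are illustrative.

        import numpy as np

        def emc_step(model, frames):
            n = model.size
            # Expand: candidate views are all cyclic shifts of the current model.
            views = np.stack([np.roll(model, s) for s in range(n)])          # (n, n)
            # Maximize: Poisson log-likelihood of each frame under each view.
            loglik = frames @ np.log(views.T + 1e-12) - views.sum(axis=1)    # (n_frames, n)
            prob = np.exp(loglik - loglik.max(axis=1, keepdims=True))
            prob /= prob.sum(axis=1, keepdims=True)                          # responsibilities
            # Compress: update each view as a responsibility-weighted average of
            # frames, rotate the views back, and average into a single model.
            new_views = (prob.T @ frames) / (prob.sum(axis=0)[:, None] + 1e-12)
            return np.mean([np.roll(new_views[s], -s) for s in range(n)], axis=0)

        rng = np.random.default_rng(1)
        truth = 5 * np.abs(np.sin(np.linspace(0, np.pi, 32)))
        frames = rng.poisson(np.stack([np.roll(truth, rng.integers(32)) for _ in range(500)]))
        model = rng.random(32) + 1.0
        for _ in range(50):
            model = emc_step(model, frames)      # converges up to a global shift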

  7. Understanding deformation mechanisms during powder compaction using principal component analysis of compression data.

    Science.gov (United States)

    Roopwani, Rahul; Buckner, Ira S

    2011-10-14

    Principal component analysis (PCA) was applied to pharmaceutical powder compaction. A solid fraction parameter (SF(c/d)) and a mechanical work parameter (W(c/d)) representing irreversible compression behavior were determined as functions of applied load. Multivariate analysis of the compression data was carried out using PCA. The first principal component (PC1) showed loadings for the solid fraction and work values that agreed with changes in the relative significance of plastic deformation to consolidation at different pressures. The PC1 scores showed the same rank order as the relative plasticity ranking derived from the literature for common pharmaceutical materials. The utility of PC1 in understanding deformation was extended to binary mixtures using a subset of the original materials. Combinations of brittle and plastic materials were characterized using the PCA method. The relationships between PC1 scores and the weight fractions of the mixtures were typically linear showing ideal mixing in their deformation behaviors. The mixture consisting of two plastic materials was the only combination to show a consistent positive deviation from ideality. The application of PCA to solid fraction and mechanical work data appears to be an effective means of predicting deformation behavior during compaction of simple powder mixtures. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Neural Network for Principal Component Analysis with Applications in Image Compression

    Directory of Open Access Journals (Sweden)

    Luminita State

    2007-04-01

    Classical feature extraction and data projection methods have been extensively investigated in the pattern recognition and exploratory data analysis literature. Feature extraction and multivariate data projection allow avoiding the "curse of dimensionality", improve the generalization ability of classifiers and significantly reduce the computational requirements of pattern classifiers. During the past decade a large number of artificial neural networks and learning algorithms have been proposed for solving feature extraction problems, most of them being adaptive in nature and well-suited for many real environments where an adaptive approach is required. Principal Component Analysis, also called the Karhunen-Loeve transform, is a well-known statistical method for feature extraction, data compression and multivariate data projection, and so far it has been broadly used in a large series of signal and image processing, pattern recognition and data analysis applications.

  9. Infrared and visible image fusion based on robust principal component analysis and compressed sensing

    Science.gov (United States)

    Li, Jun; Song, Minghui; Peng, Yuanxi

    2018-03-01

    Current infrared and visible image fusion methods do not achieve adequate information extraction, i.e., they cannot extract the target information from infrared images while retaining the background information from visible images. Moreover, most of them have high complexity and are time-consuming. This paper proposes an efficient image fusion framework for infrared and visible images on the basis of robust principal component analysis (RPCA) and compressed sensing (CS). The novel framework consists of three phases. First, RPCA decomposition is applied to the infrared and visible images to obtain their sparse and low-rank components, which represent the salient features and background information of the images, respectively. Second, the sparse and low-rank coefficients are fused by different strategies. On the one hand, the measurements of the sparse coefficients are obtained by the random Gaussian matrix, and they are then fused by the standard deviation (SD) based fusion rule. Next, the fused sparse component is obtained by reconstructing the result of the fused measurement using the fast continuous linearized augmented Lagrangian algorithm (FCLALM). On the other hand, the low-rank coefficients are fused using the max-absolute rule. Subsequently, the fused image is superposed by the fused sparse and low-rank components. For comparison, several popular fusion algorithms are tested experimentally. By comparing the fused results subjectively and objectively, we find that the proposed framework can extract the infrared targets while retaining the background information in the visible images. Thus, it exhibits state-of-the-art performance in terms of both fusion effects and timeliness.

  10. Maximal compression of the redshift-space galaxy power spectrum and bispectrum

    Science.gov (United States)

    Gualdi, Davide; Manera, Marc; Joachimi, Benjamin; Lahav, Ofer

    2018-05-01

    We explore two methods of compressing the redshift-space galaxy power spectrum and bispectrum with respect to a chosen set of cosmological parameters. Both methods involve reducing the dimension of the original data vector (e.g. 1000 elements) to the number of cosmological parameters considered (e.g. seven) using the Karhunen-Loève algorithm. In the first case, we run MCMC sampling on the compressed data vector in order to recover the 1D and 2D posterior distributions. The second option, approximately 2000 times faster, works by orthogonalizing the parameter space through diagonalization of the Fisher information matrix before the compression, obtaining the posterior distributions without the need of MCMC sampling. Using these methods for future spectroscopic redshift surveys like DESI, Euclid, and PFS would drastically reduce the number of simulations needed to compute accurate covariance matrices with minimal loss of constraining power. We consider a redshift bin of a DESI-like experiment. Using the power spectrum combined with the bispectrum as a data vector, both compression methods on average recover the 68 per cent credible regions to within 0.7 per cent and 2 per cent of those resulting from standard MCMC sampling, respectively. These confidence intervals are also smaller than the ones obtained using only the power spectrum by 81 per cent, 80 per cent, and 82 per cent, respectively, for the bias parameter b1, the growth rate f, and the scalar amplitude parameter As.
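
    The Karhunen-Loève step that both methods share can be sketched as a MOPED-like linear compression: under an assumed Gaussian likelihood with parameter-independent covariance C and model mean mu(theta), one compressed statistic is formed per parameter. This is a generic illustration, not the paper's pipeline (in particular the second method's Fisher-matrix orthogonalization is omitted), and all names are illustrative.

        import numpy as np

        def kl_compress(d, mu, dmu_dtheta, C):
            """d, mu: (n_data,); dmu_dtheta: (n_params, n_data); C: (n_data, n_data).
            Returns t with t_a = (dmu/dtheta_a)^T C^{-1} (d - mu)."""
            return dmu_dtheta @ np.linalg.solve(C, d - mu)

        # Toy usage: a 1000-element "data vector" compressed to 7 numbers.
        rng = np.random.default_rng(2)
        n_data, n_params = 1000, 7
        C = 0.1 * np.eye(n_data)
        dmu = rng.standard_normal((n_params, n_data))        # model derivatives
        mu = rng.standard_normal(n_data)                     # fiducial model mean
        d = mu + np.sqrt(0.1) * rng.standard_normal(n_data)  # mock observation
        t = kl_compress(d, mu, dmu, C)
        print(t.shape)   # (7,): covariance estimation now needs far fewer simulations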

  11. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yihang Yin

    2015-08-01

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.

  12. An Efficient Data Compression Model Based on Spatial Clustering and Principal Component Analysis in Wireless Sensor Networks.

    Science.gov (United States)

    Yin, Yihang; Liu, Fengzheng; Zhou, Xiang; Li, Quanzhong

    2015-08-07

    Wireless sensor networks (WSNs) have been widely used to monitor the environment, and sensors in WSNs are usually power constrained. Because inner-node communication consumes most of the power, efficient data compression schemes are needed to reduce the data transmission to prolong the lifetime of WSNs. In this paper, we propose an efficient data compression model to aggregate data, which is based on spatial clustering and principal component analysis (PCA). First, sensors with a strong temporal-spatial correlation are grouped into one cluster for further processing with a novel similarity measure metric. Next, sensor data in one cluster are aggregated in the cluster head sensor node, and an efficient adaptive strategy is proposed for the selection of the cluster head to conserve energy. Finally, the proposed model applies principal component analysis with an error bound guarantee to compress the data and retain the definite variance at the same time. Computer simulations show that the proposed model can greatly reduce communication and obtain a lower mean square error than other PCA-based algorithms.
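
    The PCA stage of the model described in the two records above can be sketched as follows: the cluster head picks the smallest number of components whose retained variance meets a bound, then transmits only the mean, the basis and the per-sample scores. This is a hedged illustration using a variance-based bound as a stand-in for the paper's error-bound guarantee; the clustering and head-selection stages are omitted and all names are illustrative.

        import numpy as np

        def pca_with_error_bound(X, max_relative_error=0.05):
            """X: (n_samples, n_sensors) readings gathered at the cluster head."""
            mean = X.mean(axis=0)
            Xc = X - mean
            _, s, vt = np.linalg.svd(Xc, full_matrices=False)
            # Smallest k whose retained variance fraction meets the bound.
            energy = np.cumsum(s**2) / np.sum(s**2)
            k = int(np.searchsorted(energy, 1.0 - max_relative_error)) + 1
            basis = vt[:k]
            scores = Xc @ basis.T
            return mean, basis, scores    # transmit these instead of raw X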

  13. Feature constrained compressed sensing CT image reconstruction from incomplete data via robust principal component analysis of the database

    International Nuclear Information System (INIS)

    Wu, Dufan; Li, Liang; Zhang, Li

    2013-01-01

    In computed tomography (CT), incomplete data problems such as limited angle projections often cause artifacts in the reconstruction results. Additional prior knowledge of the image has shown the potential for better results, such as a prior image constrained compressed sensing algorithm. While a pre-full-scan of the same patient is not always available, massive well-reconstructed images of different patients can be easily obtained from clinical multi-slice helical CTs. In this paper, a feature constrained compressed sensing (FCCS) image reconstruction algorithm was proposed to improve the image quality by using the prior knowledge extracted from the clinical database. The database consists of instances which are similar to the target image but not necessarily the same. Robust principal component analysis is employed to retrieve features of the training images to sparsify the target image. The features form a low-dimensional linear space and a constraint on the distance between the image and the space is used. A bi-criterion convex program which combines the feature constraint and total variation constraint is proposed for the reconstruction procedure and a flexible method is adopted for a good solution. Numerical simulations on both the phantom and real clinical patient images were taken to validate our algorithm. Promising results are shown for limited angle problems. (paper)

  14. Maximizing Power Output in Homogeneous Charge Compression Ignition (HCCI) Engines and Enabling Effective Control of Combustion Timing

    Science.gov (United States)

    Saxena, Samveg

    Homogeneous Charge Compression Ignition (HCCI) engines are one of the most promising engine technologies for the future of energy conversion from clean, efficient combustion. HCCI engines allow high efficiency and lower CO2 emission through the use of high compression ratios and the removal of intake throttle valves (like Diesel), and allow very low levels of urban pollutants like nitric oxide and soot (like Otto). These engines, however, are not without their challenges, such as low power density compared with other engine technologies, and a difficulty in controlling combustion timing. This dissertation first addresses the power output limits. The particular strategies for enabling high power output investigated in this dissertation focus on avoiding five critical limits that either damage an engine, drastically reduce efficiency, or drastically increase emissions: (1) ringing limits, (2) peak in-cylinder pressure limits, (3) misfire limits, (4) low intake temperature limits, and (5) excessive emissions limits. The research shows that the key factors that enable high power output, sufficient for passenger vehicles, while simultaneously avoiding the five limits defined above are the use of: (1) high intake air pressures allowing improved power output, (2) highly delayed combustion timing to avoid ringing limits, and (3) using the highest possible equivalence ratio before encountering ringing limits. These results are revealed by conducting extensive experiments spanning a wide range of operating conditions on a multi-cylinder HCCI engine. Second, this dissertation discusses strategies for effectively sensing combustion characteristics on a HCCI engine. For effective feedback control of HCCI combustion timing, a sensor is required to quantify when combustion occurs. Many laboratory engines use in-cylinder pressure sensors but these sensors are currently prohibitively expensive for wide-scale commercialization. Instead, ion sensors made from inexpensive sparkplugs

  15. Real-time dynamic MR image reconstruction using compressed sensing and principal component analysis (CS-PCA): Demonstration in lung tumor tracking.

    Science.gov (United States)

    Dietz, Bryson; Yip, Eugene; Yun, Jihyun; Fallone, B Gino; Wachowicz, Keith

    2017-08-01

    This work presents a real-time dynamic image reconstruction technique, which combines compressed sensing and principal component analysis (CS-PCA), to achieve real-time adaptive radiotherapy with the use of a linac-magnetic resonance imaging system. Six retrospective fully sampled dynamic data sets of patients diagnosed with non-small-cell lung cancer were used to investigate the CS-PCA algorithm. Using a database of fully sampled k-space, principal components (PCs) were calculated to aid in the reconstruction of undersampled images. Missing k-space data were calculated by projecting the current undersampled k-space data onto the PCs to generate the corresponding PC weights. The weighted PCs were summed together, and the missing k-space was iteratively updated. To gain insight into how the reconstruction might proceed at lower fields, 6× noise was added to the 3T data to investigate how the algorithm handles noisy data. Acceleration factors ranging from 2 to 10× were investigated using CS-PCA and Split Bregman CS for comparison. Metrics to determine the reconstruction quality included the normalized mean square error (NMSE), as well as the Dice coefficients (DC) and centroid displacement of the tumor segmentations. Our results demonstrate that CS-PCA performed superior to CS alone. The CS-PCA patient-averaged DC for 3T and 6× noise added data remained above 0.9 for acceleration factors up to 10×. The patient-averaged NMSE gradually increased with increasing acceleration; however, it remained below 0.06 up to an acceleration factor of 10× for both 3T and 6× noise added data. The CS-PCA reconstruction speed ranged from 5 to 20 ms (Intel i7-4710HQ CPU @ 2.5 GHz), depending on the chosen parameters. A real-time reconstruction technique was developed for adaptive radiotherapy using a Linac-MRI system. Our CS-PCA algorithm can achieve tumor contours with DC greater than 0.9 and NMSE less than 0.06 at acceleration factors of up to, and including, 10×.
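
    The core filling step of CS-PCA can be sketched as follows, assuming the principal components of fully sampled training k-space are already available and sampling is a binary mask: fit PC weights on the acquired samples, synthesize the missing k-space, and re-impose data consistency. This is a simplified sketch of the iterative update described above, with illustrative names throughout.

        import numpy as np

        def cs_pca_reconstruct(kspace_under, mask, pcs, mean, n_iter=10):
            """kspace_under, mean: (ny, nx) complex; mask: (ny, nx) bool;
            pcs: (n_pc, ny*nx) complex principal components of training k-space."""
            k = kspace_under.copy()
            for _ in range(n_iter):
                resid = (k - mean).ravel()
                m = mask.ravel()
                # Least-squares PC weights using only the acquired samples.
                w, *_ = np.linalg.lstsq(pcs[:, m].T, resid[m], rcond=None)
                synth = (pcs.T @ w).reshape(k.shape) + mean
                # Data consistency: keep measured samples, fill the rest.
                k = np.where(mask, kspace_under, synth)
            return np.fft.ifft2(k)   # reconstructed image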

  16. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  17. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...
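
    In cleaner notation, the result these two records describe can be restated as a short LaTeX block (a restatement, not a proof):

        % Among all densities f (with respect to a measure \mu) satisfying the
        % moment constraints
        %   \int f\, h_i \, d\mu = \lambda_i, \qquad i = 1, \dots, k,
        % the entropy maximizer has exponential-family form
        \[
          f_0(x) \;\propto\; \exp\!\Big( \sum_{i=1}^{k} c_i\, h_i(x) \Big),
        \]
        % with the constants c_i fixed by the k constraints.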

  18. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  19. Multiscale principal component analysis

    International Nuclear Information System (INIS)

    Akinduko, A A; Gorban, A N

    2014-01-01

    Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
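
    A minimal sketch of the scale-restricted step described above: the principal directions maximize the sum of squared pairwise distances between projections, counting only pairs whose original distance lies in [l, u]. This is equivalent to an eigendecomposition of a scatter matrix built from the selected pairwise differences; the clustering of projectors over the (l, u) plane is omitted and all names are illustrative.

        import numpy as np

        def multiscale_pca(X, l, u, n_components=2):
            """X: (n, d) data; returns the top principal directions at scale [l, u]."""
            diffs = X[:, None, :] - X[None, :, :]           # (n, n, d) pairwise differences
            dist = np.linalg.norm(diffs, axis=2)
            sel = (dist >= l) & (dist <= u)
            # Scatter matrix restricted to the selected pairs.
            S = np.einsum('ijd,ije->de', diffs * sel[..., None], diffs)
            evals, evecs = np.linalg.eigh(S)
            return evecs[:, ::-1][:, :n_components]         # top eigenvectors

        # With l = 0 and u = inf every pair is kept, and the scatter matrix is
        # proportional to the usual covariance, so ordinary PCA is recovered.
        rng = np.random.default_rng(4)
        X = rng.standard_normal((100, 3))
        W = multiscale_pca(X, l=0.0, u=np.inf)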

  20. Principal Ports

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Principal Ports are defined by port limits or US Army Corps of Engineers (USACE) projects; these exclude non-USACE projects not authorized for publication. The...

  1. Principal components

    NARCIS (Netherlands)

    Hallin, M.; Hörmann, S.; Piegorsch, W.; El Shaarawi, A.

    2012-01-01

    Principal Components are probably the best known and most widely used of all multivariate analysis techniques. The essential idea consists in performing a linear transformation of the observed k-dimensional variables in such a way that the new variables are vectors of k mutually orthogonal

  2. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price...... competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution...... of profits among consumers fully into account and partial equilibrium analysis suffices...

  3. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  4. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  5. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for large genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the other compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
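
    Only the baseline idea, packing each base into 2 bits, is sketched below; DNABIT Compress itself goes further, assigning special bit codes to exact and reverse repeats to push below 2 bits/base, and that logic is not reproduced here.

        CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
        BASES = 'ACGT'

        def pack(seq):
            """Pack an A/C/G/T string into bytes at 2 bits per base."""
            buf, acc, nbits = bytearray(), 0, 0
            for ch in seq:
                acc = (acc << 2) | CODE[ch]
                nbits += 2
                if nbits == 8:
                    buf.append(acc)
                    acc, nbits = 0, 0
            if nbits:
                buf.append(acc << (8 - nbits))   # left-pad the final byte
            return bytes(buf), len(seq)

        def unpack(data, n):
            out = []
            for byte in data:
                for shift in (6, 4, 2, 0):
                    if len(out) < n:
                        out.append(BASES[(byte >> shift) & 0b11])
            return ''.join(out)

        packed, n = pack('ACGTACGTTG')
        assert unpack(packed, n) == 'ACGTACGTTG'   # 10 bases stored in 3 bytes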

  6. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert space-filling curve, a mechanism widely used in multi-dimensional indexing.
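
    Tuple difference coding can be sketched on sorted integer records: each record stores only the suffix starting at the first dimension that changed, prefixed by that position. This is a minimal illustration; the paper's block layout and variable-length packing are omitted.

        def diff_encode(records):
            out, prev = [], None
            for rec in sorted(records):
                if prev is None:
                    out.append((0, tuple(rec)))          # first record stored whole
                else:
                    i = next((k for k in range(len(rec)) if rec[k] != prev[k]), len(rec))
                    out.append((i, tuple(rec[i:])))      # changed suffix only
                prev = rec
            return out

        def diff_decode(encoded):
            out, prev = [], None
            for i, tail in encoded:
                rec = (prev[:i] + tail) if prev is not None else tail
                out.append(rec)
                prev = rec
            return out

        cells = [(1, 2, 3), (1, 2, 5), (1, 3, 0), (2, 0, 0)]
        assert diff_decode(diff_encode(cells)) == sorted(cells)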

  7. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  8. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  9. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  10. Joint Group Sparse PCA for Compressed Hyperspectral Imaging.

    Science.gov (United States)

    Khan, Zohaib; Shafait, Faisal; Mian, Ajmal

    2015-12-01

    A sparse principal component analysis (PCA) seeks a sparse linear combination of input features (variables), so that the derived features still explain most of the variations in the data. A group sparse PCA introduces structural constraints on the features in seeking such a linear combination. Collectively, the derived principal components may still require measuring all the input features. We present a joint group sparse PCA (JGSPCA) algorithm, which forces the basis coefficients corresponding to a group of features to be jointly sparse. Joint sparsity ensures that the complete basis involves only a sparse set of input features, whereas the group sparsity ensures that the structural integrity of the features is maximally preserved. We evaluate the JGSPCA algorithm on the problems of compressed hyperspectral imaging and face recognition. Compressed sensing results show that the proposed method consistently outperforms sparse PCA and group sparse PCA in reconstructing the hyperspectral scenes of natural and man-made objects. The efficacy of the proposed compressed sensing method is further demonstrated in band selection for face recognition.

  11. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.

  12. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  13. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  14. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  15. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

    Scott, W.G.

    2000-01-01

    It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects

  16. Redesigning Principal Internships: Practicing Principals' Perspectives

    Science.gov (United States)

    Anast-May, Linda; Buckner, Barbara; Geer, Gregory

    2011-01-01

    Internship programs too often do not provide the types of experiences that effectively bridge the gap between theory and practice and prepare school leaders who are capable of leading and transforming schools. To help address this problem, the current study is directed at providing insight into practicing principals' views of the types of…

  17. MAXIM: The Blackhole Imager

    Science.gov (United States)

    Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris

    2004-01-01

    The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.

  18. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  19. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  20. What Motivates Principals?

    Science.gov (United States)

    Iannone, Ron

    1973-01-01

    Achievement and recognition were mentioned as factors appearing with greater frequency in principals' job satisfactions; school district policy and interpersonal relations were mentioned as job dissatisfactions. (Editor)

  1. Principal Ports and Facilities

    Data.gov (United States)

    California Natural Resource Agency — The Principal Port file contains USACE port codes, geographic locations (longitude, latitude), names, and commodity tonnage summaries (total tons, domestic, foreign,...

  2. Principal Ports and Facilities

    Data.gov (United States)

    California Department of Resources — The Principal Port file contains USACE port codes, geographic locations (longitude, latitude), names, and commodity tonnage summaries (total tons, domestic, foreign,...

  3. Principals' Perceptions of Politics

    Science.gov (United States)

    Tooms, Autumn K.; Kretovics, Mark A.; Smialek, Charles A.

    2007-01-01

    This study is an effort to examine principals' perceptions of workplace politics and its influence on their productivity and efficacy. A survey was used to explore the perceptions of current school administrators with regard to workplace politics. The instrument was disseminated to principals serving public schools in one Midwestern state in the…

  4. Renewing the Principal Pipeline

    Science.gov (United States)

    Turnbull, Brenda J.

    2015-01-01

    The work principals do has always mattered, but as the demands of the job increase, it matters even more. Perhaps once they could maintain safety and order and call it a day, but no longer. Successful principals today must also lead instruction and nurture a productive learning community for students, teachers, and staff. They set the tone for the…

  5. Teaching Principal Components Using Correlations.

    Science.gov (United States)

    Westfall, Peter H; Arias, Andrea L; Fulton, Lawrence V

    2017-01-01

    Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
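
    The "variance explained" link to R-squared can be shown numerically: with correlation-based PCs, the proportion of each standardized variable's variance reproduced by the first k components is that variable's R-squared on those components, and its average across variables equals the usual cumulative eigenvalue share. A small sketch (illustrative, not from the paper):

        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.standard_normal((500, 4))
        X[:, 1] += 0.8 * X[:, 0]                    # induce some correlation
        R = np.corrcoef(X, rowvar=False)            # correlation matrix
        evals, evecs = np.linalg.eigh(R)
        order = np.argsort(evals)[::-1]             # sort PCs by variance
        evals, evecs = evals[order], evecs[:, order]

        k = 1
        # R-squared of each standardized variable on the first k PCs:
        r2 = (evecs[:, :k] ** 2 * evals[:k]).sum(axis=1)
        print(r2)                                   # "variance explained" per variable
        print(r2.mean(), evals[:k].sum() / evals.sum())   # these two agree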

  6. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  7. Principal bundles on the projective line

    Indian Academy of Sciences (India)

    Let X be a complete nonsingular curve over the algebraic closure k̄ of k and G a reductive group over k̄. Let E → X be a principal G-bundle on X. E is said to be semistable if, for every reduction of structure group E_P ⊂ E to a maximal parabolic subgroup P of G, we have degree E_P(𝔭) ≤ 0, where 𝔭 is the Lie algebra of P and E_P ...

  8. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

    Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case--comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s² = 0.9c² when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.

  9. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil.

  10. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
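
    The XOR-leading-zero idea can be illustrated directly: when two consecutive doubles are close, their bit patterns share a long common prefix, so their XOR has many leading zeros and only a short tail needs encoding. The sketch below omits the paper's adaptive offset shifting and segment partitioning.

        import struct

        def leading_zero_bits(a: float, b: float) -> int:
            """Leading zeros of the XOR of two IEEE-754 double bit patterns."""
            ai = struct.unpack('<Q', struct.pack('<d', a))[0]
            bi = struct.unpack('<Q', struct.pack('<d', b))[0]
            x = ai ^ bi
            return 64 if x == 0 else 64 - x.bit_length()

        print(leading_zero_bits(1.23456, 1.23457))  # large: cheap to encode the tail
        print(leading_zero_bits(1.23456, 97.5))     # small: nearly incompressible pair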

  11. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  12. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

    Increases in cardiorespiratory coordination (CRC) after training, with no differences in performance and physiological variables, have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 began 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of the eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC

  13. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  14. Emittance Growth during Bunch Compression in the CTF-II

    Energy Technology Data Exchange (ETDEWEB)

    Raubenheimer, Tor O

    1999-02-26

    Measurements of the beam emittance during bunch compression in the CLIC Test Facility (CTF-II) are described. The measurements were made with different beam charges and different energy correlations versus the bunch compressor settings, which were varied from no compression through the point of full compression and to over-compression. Significant increases in the beam emittance were observed, with the maximum emittance occurring near the point of full (maximal) compression. Finally, evaluation of possible emittance dilution mechanisms indicates that coherent synchrotron radiation was the most likely cause.

  15. Principal Component Analysis Based Measure of Structural Holes

    Science.gov (United States)

    Deng, Shiguo; Zhang, Wenqing; Yang, Huijie

    2013-02-01

    Based upon principal component analysis, a new measure called the compressibility coefficient is proposed to evaluate structural holes in networks. This measure incorporates a new effect from identical patterns in networks. It is found that the compressibility coefficient for Watts-Strogatz small-world networks increases monotonically with the rewiring probability and saturates to that for the corresponding shuffled networks, while the compressibility coefficient for extended Barabasi-Albert scale-free networks decreases monotonically with the preferential effect and is significantly large compared with that for the corresponding shuffled networks. This measure is helpful in diverse research fields for evaluating the global efficiency of networks.

  16. Compressive Load Resistance Characteristics of Rice Grain

    OpenAIRE

    Sumpun Chaitep; Chaiy R. Metha Pathawee; Pipatpong Watanawanyoo

    2008-01-01

    An investigation was made to observe the compressive load properties of rice grain, both rough rice and brown rice. Six rice varieties (indica and japonica) were examined at a moisture content of 10-12%. Compressive loads with reference to a principal axis normal to the thickness of the grain were applied at selected inclined angles of 0°, 15°, 30°, 45°, 60° and 70°. The result showed the compressive load resistance of rice grain based on its characteristic of yield s...

  17. Principal noncommutative torus bundles

    DEFF Research Database (Denmark)

    Echterhoff, Siegfried; Nest, Ryszard; Oyono-Oyono, Herve

    2008-01-01

    of bivariant K-theory (denoted RKK-theory) due to Kasparov. Using earlier results of Echterhoff and Williams, we shall give a complete classification of principal non-commutative torus bundles up to equivariant Morita equivalence. We then study these bundles as topological fibrations (forgetting the group...

  18. The Principal as CEO

    Science.gov (United States)

    Hollar, Charlie

    2004-01-01

    They may never grace the pages of The Wall Street Journal or Fortune magazine, but they might possibly be the most important CEOs in our country. They are elementary school principals. Each of them typically serves the learning needs of 350-400 clients (students) while overseeing a multimillion-dollar facility staffed by 20-25 teachers and 10-15…

  19. Euler principal component analysis

    NARCIS (Netherlands)

    Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,

  20. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  1. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)
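
    For reference, the Jarlskog invariant mentioned in the record has the standard definition below (the record's mass-matrix derivation is not reproduced here); in the standard CKM parametrization J is proportional to sin Φ, which makes the claim that |J| peaks at Φ = 90° immediate.

      \[
        J = \operatorname{Im}\bigl(V_{us}\,V_{cb}\,V_{ub}^{*}\,V_{cs}^{*}\bigr)
          = s_{12}\,s_{13}\,s_{23}\,c_{12}\,c_{13}^{2}\,c_{23}\,\sin\Phi ,
      \]
      % so, for fixed mixing angles, |J| is maximized exactly at Phi = 90 degrees.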

  2. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
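
    The record leaves the specific bit codes unspecified, so the Python sketch below shows only the baseline idea of packing each base into two bits (the mapping is hypothetical); DNABIT's repeat-aware unique bit codes are what push the rate below this 2 bits/base baseline.

      # Baseline idea only: pack each DNA base into 2 bits. The record's unique
      # bit codes for exact/reverse repeats are not specified, so they are omitted.
      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}   # hypothetical mapping
      BASE = {v: k for k, v in CODE.items()}

      def pack(seq: str) -> bytes:
          bits = 0
          for ch in seq:
              bits = (bits << 2) | CODE[ch]
          nbytes = (2 * len(seq) + 7) // 8
          return bits.to_bytes(nbytes, "big")

      def unpack(data: bytes, n: int) -> str:
          bits = int.from_bytes(data, "big")
          return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

      seq = "ACGTACGTGGTTAACC"
      packed = pack(seq)
      assert unpack(packed, len(seq)) == seq
      print(len(seq), "bases ->", len(packed), "bytes")     # 16 bases -> 4 bytes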

  3. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2sin²θ_ex and quantify the present experimental status for |ε|. The strongest constraint on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for Δm² in the range 10⁻⁸–10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 10⁻⁷ eV², the full interval of |ε| ... We also discuss maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay

  4. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  5. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits for smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm outperforms existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  6. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
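
    One calculus-free argument of the kind the record surveys is completing the square (whether it coincides with any of the five methods presented is an assumption):

      \[
        x(a-x) \;=\; \frac{a^{2}}{4}-\Bigl(x-\frac{a}{2}\Bigr)^{2}\;\le\;\frac{a^{2}}{4},
      \]
      % with equality precisely when x = a/2, since the subtracted square is nonnegative.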

  7. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
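
    For intuition only, the Python sketch below checks the covering property that defines quasiperiodicity by brute force; it is quadratic and bears no resemblance to the suffix-tree machinery of the O(n log n) algorithm in the record.

      # Naive illustration of the covering notion behind quasiperiodicity.
      def covers(q: str, s: str) -> bool:
          """True if occurrences of q cover every position of s."""
          covered_up_to, start = 0, s.find(q)
          while start != -1:
              if start > covered_up_to:        # a gap no occurrence covers
                  return False
              covered_up_to = max(covered_up_to, start + len(q))
              start = s.find(q, start + 1)
          return covered_up_to >= len(s)

      def quasiperiods(s: str):
          """All proper prefixes of s (candidate covers) that cover s."""
          return [s[:k] for k in range(1, len(s)) if covers(s[:k], s)]

      print(quasiperiods("abaababaabaababaaba"))   # includes 'aba' and 'abaaba'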

  8. Improved forecasting with leading indicators: the principal covariate index

    NARCIS (Netherlands)

    C. Heij (Christiaan)

    2007-01-01

    textabstractWe propose a new method of leading index construction that combines the need for data compression with the objective of forecasting. This so-called principal covariate index is constructed to forecast growth rates of the Composite Coincident Index. The forecast performance is compared

  9. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  10. Plans for longitudinal and transverse neutralized beam compression experiments, and initial results from solenoid transport experiments

    International Nuclear Information System (INIS)

    Seidl, P.A.; Armijo, J.; Baca, D.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Friedman, A.; Gilson, E.P.; Grote, D.; Haber, I.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Molvik, A.W.; Rose, D.V.; Roy, P.K.; Sefkow, A.B.; Sharp, W.M.; Vay, J.L.; Waldron, W.L.; Welch, D.R.; Yu, S.S.

    2007-01-01

    This paper presents plans for neutralized drift compression experiments, precursors to future target heating experiments. The target-physics objective is to study warm dense matter (WDM) using short-duration (∼1 ns) ion beams that enter the targets at energies just above that at which dE/dx is maximal. High intensity on target is to be achieved by a combination of longitudinal compression and transverse focusing. This work will build upon recent success in longitudinal compression, where the ion beam was compressed lengthwise by a factor of more than 50 by first applying a linear head-to-tail velocity tilt to the beam, and then allowing the beam to drift through a dense, neutralizing background plasma. Studies on a novel pulse line ion accelerator were also carried out. It is planned to demonstrate simultaneous transverse focusing and longitudinal compression in a series of future experiments, thereby achieving conditions suitable for future WDM target experiments. Future experiments may use solenoids for transverse focusing of un-neutralized ion beams during acceleration. Recent results are reported in the transport of a high-perveance heavy ion beam in a solenoid transport channel. The principal objectives of this solenoid transport experiment are to match and transport a space-charge-dominated ion beam, and to study associated electron-cloud and gas effects that may limit the beam quality in a solenoid transport system. Ideally, the beam will establish a Brillouin-flow condition (rotation at one-half the cyclotron frequency). Other mechanisms that potentially degrade beam quality are being studied, such as focusing-field aberrations, beam halo, and separation of lattice focusing elements

  11. Maximization

    Directory of Open Access Journals (Sweden)

    A. Garmroodi Asil

    2017-09-01

    To further reduce the sulfur dioxide emission of the entire refining process, two scenarios of acid gas or air preheats are investigated when either of them is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and highest combustion chamber temperature is slightly higher for acid gas preheats but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.

  12. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity computation reduces to finding a model of the specification with highest entropy. We present a characterization of the global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code.

  13. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity computation reduces to finding a model of the specification with highest entropy. We present a characterization of the global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
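
    As a simplified stand-in for the setting of these two records, the Python sketch below maximizes the entropy rate of a two-state Markov chain whose transition probabilities are confined to intervals; the interval bounds are invented for illustration, and the papers' reward-function algorithm is not reproduced.

      import numpy as np
      from scipy.optimize import minimize

      def entropy_rate(x):
          p, q = x                      # p = P(0->1), q = P(1->0)
          pi0 = q / (p + q)             # stationary distribution of the 2-state chain
          pi1 = p / (p + q)
          h = lambda r: -(r * np.log2(r) + (1 - r) * np.log2(1 - r))
          return pi0 * h(p) + pi1 * h(q)

      # Interval constraints on the two free transition probabilities.
      bounds = [(0.1, 0.4), (0.2, 0.9)]
      res = minimize(lambda x: -entropy_rate(x), x0=[0.2, 0.5], bounds=bounds)
      print("max entropy rate: %.4f bits/step at p=%.3f, q=%.3f"
            % (entropy_rate(res.x), res.x[0], res.x[1]))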

  14. Visible Leading: Principal Academy Connects and Empowers Principals

    Science.gov (United States)

    Hindman, Jennifer; Rozzelle, Jan; Ball, Rachel; Fahey, John

    2015-01-01

    The School-University Research Network (SURN) Principal Academy at the College of William & Mary in Williamsburg, Virginia, has a mission to build a leadership development program that increases principals' instructional knowledge and develops mentor principals to sustain the program. The academy is designed to connect and empower principals…

  15. Crystallographic cut that maximizes the birefringence in photorefractive crystals

    OpenAIRE

    Rueda-Parada, Jorge Enrique

    2017-01-01

    The electro-optical birefringence effect depends on the crystal type, the crystal cut, the applied electric field, and the direction of incidence of light on the principal crystal faces. A study is presented of maximizing the birefringence in photorefractive crystals of cubic crystallographic symmetry in terms of these parameters. General analytical expressions for the birefringence were obtained, from which the birefringence can be established for any type of cut. A new crystallographic cut was en...

  16. Principals' Salaries, 2007-2008

    Science.gov (United States)

    Cooke, Willa D.; Licciardi, Chris

    2008-01-01

    How do salaries of elementary and middle school principals compare with those of other administrators and classroom teachers? Are increases in salaries of principals keeping pace with increases in salaries of classroom teachers? And how have principals' salaries fared over the years when the cost of living is taken into account? There are reliable…

  17. Principals Who Think Like Teachers

    Science.gov (United States)

    Fahey, Kevin

    2013-01-01

    Being a principal is a complex job, requiring quick, on-the-job learning. But many principals already have deep experience in a role at the very essence of the principalship. They know how to teach. In interviews with principals, Fahey and his colleagues learned that thinking like a teacher was key to their work. Part of thinking the way a teacher…

  18. School Principals' Emotional Coping Process

    Science.gov (United States)

    Poirel, Emmanuel; Yvon, Frédéric

    2014-01-01

    The present study examines the emotional coping of school principals in Quebec. Emotional coping was measured by stimulated recall; six principals were filmed during a working day and presented a week later with their video showing stressful encounters. The results show that school principals experience anger because of reproaches from staff…

  19. Legal Problems of the Principal.

    Science.gov (United States)

    Stern, Ralph D.; And Others

    The three talks included here treat aspects of the law--tort liability, student records, and the age of majority--as they relate to the principal. Specifically, the talk on torts deals with the consequences of principal negligence in the event of injuries to students. Assurance is given that a reasonable and prudent principal will have a minimum…

  20. RE Rooted in Principal's Biography

    NARCIS (Netherlands)

    ter Avest, Ina; Bakker, C.

    2017-01-01

    Critical incidents in the biography of principals appear to be steering in their innovative way of constructing InterReligious Education in their schools. In this contribution, the authors present the biographical narratives of 4 principals: 1 principal introducing interreligious education in a

  1. The Future of Principal Evaluation

    Science.gov (United States)

    Clifford, Matthew; Ross, Steven

    2012-01-01

    The need to improve the quality of principal evaluation systems is long overdue. Although states and districts generally require principal evaluations, research and experience tell that many state and district evaluations do not reflect current standards and practices for principals, and that evaluation is not systematically administered. When…

  2. Chamaebatiaria millefolium (Torr.) Maxim.: fernbush

    Science.gov (United States)

    Nancy L. Shaw; Emerenciana G. Hurd

    2008-01-01

    Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...

  3. Automatic physical inference with information maximizing neural networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
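
    The Python sketch below is not an IMNN; it only illustrates the Fisher-information criterion such networks maximize, estimated by finite differences for a hand-picked summary (the sample variance) of Gaussian data, which is the record's simplest test case. The simulation sizes and step size are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(0)

      def summary(data):
          return data.var(axis=-1)             # candidate compression: one number

      def fisher_of_summary(theta, n_sims=20000, n_data=50, dtheta=0.05):
          """F = (dmu/dtheta)^2 / C for a scalar summary, via forward simulations."""
          def sims(th):                        # Gaussian data with variance th
              return summary(rng.normal(0.0, np.sqrt(th), (n_sims, n_data)))
          s0, sp, sm = sims(theta), sims(theta + dtheta), sims(theta - dtheta)
          dmu = (sp.mean() - sm.mean()) / (2 * dtheta)
          return dmu**2 / s0.var()

      # Exact Fisher information for the variance of n Gaussian samples: n / (2 theta^2).
      # The estimate falls slightly below it (O(1/n) bias of the sample variance).
      theta, n = 1.0, 50
      print("estimated:", fisher_of_summary(theta, n_data=n))
      print("exact    :", n / (2 * theta**2))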

  4. Principal stratification in causal inference.

    Science.gov (United States)

    Frangakis, Constantine E; Rubin, Donald B

    2002-03-01

    Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.

  5. Le principe (novel)

    CERN Document Server

    Ferrari, Jérôme

    2015-01-01

    Fascinated by the figure of the German physicist Werner Heisenberg (1901-1976), founder of quantum mechanics, inventor of the famous "uncertainty principle" and winner of the Nobel Prize in Physics in 1932, a disenchanted young would-be philosopher strives, at the dawn of the 21st century, to measure the incompleteness of his own existence against the work and destiny of this exceptional man of science, who embodies for him the meeting of scientific language and poetry; each, in its own way, by opening the door to the scandal of the unprecedented, opens our eyes to the world and reveals its mysterious beauty, which the materialism at work in human history never ceases to confiscate.

  6. Principal oscillation patterns

    International Nuclear Information System (INIS)

    Storch, H. von; Buerger, G.; Storch, J.S. von

    1993-01-01

    The Principal Oscillation Pattern (POP) analysis is a technique which is used to simultaneously infer the characteristic patterns and time scales of a vector time series. The POPs may be seen as the normal modes of a linearized system whose system matrix is estimated from data. The concept of POP analysis is reviewed. Examples are used to illustrate the potential of the POP technique. The best defined POPs of tropospheric day-to-day variability coincide with the most unstable modes derived from linearized theory. POPs can be derived even from a space-time subset of data. POPs are successful in identifying two independent modes with similar time scales in the same data set. The POP method can also produce forecasts which may potentially be used as a reference for other forecast models. The conventional POP analysis technique has been generalized in various ways. In the cyclostationary POP analysis, the estimated system matrix is allowed to vary deterministically with an externally forced cycle. In the complex POP analysis not only the state of the system but also its ''momentum'' is modeled. Associated correlation patterns are a useful tool to describe the appearance of a signal previously identified by a POP analysis in other parameters. (orig.)

  7. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  8. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  9. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  10. Is the β phase maximal?

    International Nuclear Information System (INIS)

    Ferrandis, Javier

    2005-01-01

    The current experimental determination of the absolute values of the CKM elements indicates that 2|V_ub/(V_cb V_us)| = (1 − z), with z given by z = 0.19 ± 0.14. This fact implies that irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β = (π/6 − z/3) for γ = (π/3 + z/3), which implies α = π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β) = 0.726 ± 0.037, the phase γ is predicted to be γ = (π/2 − β) = 66.3° ± 1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation

  11. Strategy to maximize maintenance operation

    OpenAIRE

    Espinoza, Michael

    2005-01-01

    This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...

  12. Scalable Nonlinear AUC Maximization Methods

    OpenAIRE

    Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza

    2017-01-01

    The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...

  13. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  14. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    This study aims to identify the types of maxims flouted in conversation in the famous comedy show Indonesia Lawak Klub. It also tries to reveal the speakers' intention in flouting the maxims during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from the dialogue of Indonesia Lawak Klub and then analyzed based on Grice's cooperative principles. The researchers read the dialogue transcripts, identify the maxims, and interpret the data to find the speakers' intention for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  15. Music analysis and point-set compression

    DEFF Research Database (Denmark)

    Meredith, David

    2015-01-01

    COSIATEC, SIATECCompress and Forth's algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression. The best-performing algorithm on the folk...
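
    A core primitive behind SIA/SIATEC-style point-set analysis can be sketched in a few lines (this is a minimal rendering for intuition, not the published implementations): for every translation vector between pairs of points, collect the maximal pattern of points translatable by that vector.

      from collections import defaultdict

      def maximal_translatable_patterns(points):
          """Map each vector v to the set of points p with p + v also in the set."""
          pts = set(points)
          mtp = defaultdict(set)
          for p in pts:
              for q in pts:
                  if p != q:
                      v = (q[0] - p[0], q[1] - p[1])
                      mtp[v].add(p)
          return mtp

      # Toy "melody": (onset time, pitch) pairs containing a repeated motif.
      notes = [(0, 60), (1, 62), (2, 64), (4, 67), (5, 69), (6, 71)]
      for v, pat in sorted(maximal_translatable_patterns(notes).items(),
                           key=lambda kv: -len(kv[1]))[:3]:
          print(v, sorted(pat))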

  16. Comparative assessment of intrinsic mechanical stimuli on knee cartilage and compressed agarose constructs.

    Science.gov (United States)

    Completo, A; Bandeiras, C; Fonseca, F

    2017-06-01

    A well-established cue for improving the properties of tissue-engineered cartilage is mechanical stimulation. However, the explicit ranges of mechanical stimuli that correspond to favorable metabolic outcomes are elusive. Usually, these outcomes have only been associated with the applied strain and frequency, an oversimplification that can hide the fundamental relationship between the intrinsic mechanical stimuli and the metabolic outcomes. This highlights two important key issues: the first is related to the evaluation of the intrinsic mechanical stimuli of native cartilage; the second, assuming that the intrinsic mechanical stimuli are important, deals with the ability to replicate them in tissue-engineered constructs. This study quantifies and compares the volume of cartilage and agarose subjected to a given magnitude range of each intrinsic mechanical stimulus, through a numerical simulation of a patient-specific knee model coupled with experimental contact data during the stance phase of gait, and of agarose constructs under direct dynamic compression. The results suggest that direct compression loading needs to be parameterized with time-dependence during the initial culture period in order to better reproduce each one of the intrinsic mechanical stimuli developed in the patient-specific cartilage. A loading regime which combines time periods of low compressive strain (5%) and frequency (0.5 Hz), in order to approach the maximal principal strain and fluid velocity stimulus of the patient-specific cartilage, with time periods of high compressive strain (20%) and frequency (3 Hz), in order to approach the pore pressure values, may be advantageous relative to a single loading regime throughout the full culture period. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. Portraits of Principal Practice: Time Allocation and School Principal Work

    Science.gov (United States)

    Sebastian, James; Camburn, Eric M.; Spillane, James P.

    2018-01-01

    Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…

  18. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  19. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of from 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
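
    The dissertation's NMSE figure of merit, together with a toy block-DCT compressor (a zonal coefficient mask standing in, as an assumption, for the full-frame bit-allocation method), can be illustrated in Python as follows:

      import numpy as np
      from scipy.fft import dctn, idctn

      def nmse(original, reconstructed):
          diff = original.astype(float) - reconstructed.astype(float)
          return (diff**2).sum() / (original.astype(float)**2).sum()

      def block_dct_compress(img, block=8, keep=10):
          """Keep only the lowest-frequency DCT coefficients of each block."""
          h, w = img.shape
          out = np.zeros_like(img, dtype=float)
          i, j = np.indices((block, block))
          mask = (i + j) < keep                  # zonal (triangular) mask
          for r in range(0, h, block):
              for c in range(0, w, block):
                  coeffs = dctn(img[r:r+block, c:c+block], norm="ortho")
                  out[r:r+block, c:c+block] = idctn(coeffs * mask, norm="ortho")
          return out

      img = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)
      rec = block_dct_compress(img)
      print("NMSE:", nmse(img, rec))   # random images compress poorly; real ones far better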

  20. School Principals' Sources of Knowledge

    Science.gov (United States)

    Perkins, Arland Early

    2014-01-01

    The purpose of this study was to determine what sources of professional knowledge are available to principals in 1 rural East Tennessee school district. Qualitative research methods were applied to gain an understanding of what sources of knowledge are used by school principals in 1 rural East Tennessee school district and the barriers they face…

  1. Innovation Management Perceptions of Principals

    Science.gov (United States)

    Bakir, Asli Agiroglu

    2016-01-01

    This study aims to determine the perceptions of principals about innovation management and to investigate whether there is a significant difference in this perception according to various parameters. In the study, a descriptive research model is used, and the universe consists of principals who participated in the "Acquiring Formation Course…

  2. What Do Effective Principals Do?

    Science.gov (United States)

    Protheroe, Nancy

    2011-01-01

    Much has been written during the past decade about the changing role of the principal and the shift in emphasis from manager to instructional leader. Anyone in education, and especially principals themselves, could develop a mental list of responsibilities that fit within each of these realms. But research makes it clear that both those aspects of…

  3. Time Management for New Principals

    Science.gov (United States)

    Ruder, Robert

    2008-01-01

    Becoming a principal is a milestone in an educator's professional life. The principalship is an opportunity to provide leadership that will afford students opportunities to thrive in a nurturing and supportive environment. Despite the continuously expanding demands of being a new principal, effective time management will enable an individual to be…

  4. Bureaucratic Control and Principal Role.

    Science.gov (United States)

    Bezdek, Robert; And Others

    The purposes of this study were to determine the manner in which the imposition of increased bureaucratic control over principals influenced their allocation of time to tasks and to investigate principals' perceptions of the changes in their roles brought about by this increased control. The specific bureaucratic control system whose effects were…

  5. Maximal Abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

    In this work the author lets Φ be an irreducible root system, with Coxeter group W. He considers subsets of Φ which are abelian, meaning that no two roots in the set have sum in Φ ∪ {0}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...

  6. Maximizing benefits from resource development

    International Nuclear Information System (INIS)

    Skjelbred, B.

    2002-01-01

    The main objectives of Norwegian petroleum policy are to maximize the value creation for the country, develop a national oil and gas industry, and to be at the environmental forefront of long term resource management and coexistence with other industries. The paper presents a graph depicting production and net export of crude oil for countries around the world for 2002. Norway produced 3.41 mill b/d and exported 3.22 mill b/d. Norwegian petroleum policy measures include effective regulation and government ownership, research and technology development, and internationalisation. Research and development has been in five priority areas, including enhanced recovery, environmental protection, deep water recovery, small fields, and the gas value chain. The benefits of internationalisation includes capitalizing on Norwegian competency, exploiting emerging markets and the assurance of long-term value creation and employment. 5 figs

  7. Maximizing synchronizability of duplex networks

    Science.gov (United States)

    Wei, Xiang; Emenheiser, Jeffrey; Wu, Xiaoqun; Lu, Jun-an; D'Souza, Raissa M.

    2018-01-01

    We study the synchronizability of duplex networks formed by two randomly generated network layers with different patterns of interlayer node connections. According to the master stability function, we use the smallest nonzero eigenvalue and the eigenratio between the largest and the second smallest eigenvalues of supra-Laplacian matrices to characterize synchronizability on various duplexes. We find that the interlayer linking weight and linking fraction have a profound impact on synchronizability of duplex networks. The increasingly large inter-layer coupling weight is found to cause either decreasing or constant synchronizability for different classes of network dynamics. In addition, negative node degree correlation across interlayer links outperforms positive degree correlation when most interlayer links are present. The reverse is true when a few interlayer links are present. The numerical results and understanding based on these representative duplex networks are illustrative and instructive for building insights into maximizing synchronizability of more realistic multiplex networks.
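
    The two diagnostics named in the record can be computed directly: the Python sketch below builds the supra-Laplacian of a duplex with one-to-one interlayer links of weight d and reports the smallest nonzero eigenvalue and the eigenratio. The random layers and the parameter values are arbitrary choices for illustration.

      import networkx as nx
      import numpy as np

      def supra_laplacian(L1, L2, d):
          """Supra-Laplacian of two layers joined by node-to-node links of weight d."""
          n = L1.shape[0]
          I = np.eye(n)
          return np.block([[L1 + d * I, -d * I],
                           [-d * I,     L2 + d * I]])

      n, d = 100, 1.0
      L1 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.1, seed=1)).toarray()
      L2 = nx.laplacian_matrix(nx.erdos_renyi_graph(n, 0.1, seed=2)).toarray()
      eig = np.sort(np.linalg.eigvalsh(supra_laplacian(L1, L2, d)))
      lam2, lam_max = eig[1], eig[-1]
      print("lambda_2 = %.4f   eigenratio lambda_max/lambda_2 = %.2f"
            % (lam2, lam_max / lam2))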

  8. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app...

  9. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversational maxims are divided into four types, namely maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violation are given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show an interesting fact: the violation of maxims in the conversations found in the advertisements actually makes the advertisements very attractive and gives them high value.

  10. Mechanical behavior of silicon carbide nanoparticles under uniaxial compression

    Energy Technology Data Exchange (ETDEWEB)

    He, Qiuxiang; Fei, Jing; Tang, Chao; Zhong, Jianxin; Meng, Lijun, E-mail: ljmeng@xtu.edu.cn [Xiangtan University, Hunan Key Laboratory for Micro-Nano Energy Materials and Devices, Faculty of School of Physics and Optoelectronics (China)

    2016-03-15

    The mechanical behavior of SiC nanoparticles under uniaxial compression was investigated using an atomic-level compression simulation technique. The results revealed that the mechanical deformation of SiC nanocrystals is highly dependent on compression orientation, particle size, and temperature. A structural transformation from the original zinc-blende to a rock-salt phase is identified for SiC nanoparticles compressed along the [001] direction at low temperature. However, the rock-salt phase is not observed for SiC nanoparticles compressed along the [110] and [111] directions irrespective of size and temperature. The high-pressure-generated rock-salt phase strongly affects the mechanical behavior of the nanoparticles, including their hardness and deformation process. The hardness of [001]-compressed nanoparticles decreases monotonically as their size increases, different from that of [110] and [111]-compressed nanoparticles, which reaches a maximal value at a critical size and then decreases. Additionally, a temperature-dependent mechanical response was observed for all simulated SiC nanoparticles regardless of compression orientation and size. Interestingly, the hardness of SiC nanocrystals with a diameter of 8 nm compressed in [001]-orientation undergoes a steep decrease at 0.1–200 K and then a gradual decline from 250 to 1500 K. This trend can be attributed to different deformation mechanisms related to phase transformation and dislocations. Our results will be useful for practical applications of SiC nanoparticles under high pressure.

  11. relationship between principals' management approaches

    African Journals Online (AJOL)

    Admin

    Data were collected using a self-administered questionnaire from a sample of 211 teachers, 28 principals and 22 chairpersons of parent-teachers associations. Data were ... their role expectation in discipline management. Data from the 20 ...

  12. Principals, agents and research programmes

    OpenAIRE

    Elizabeth Shove

    2003-01-01

    Research programmes appear to represent one of the more powerful instruments through which research funders (principals) steer and shape what researchers (agents) do. The fact that agents navigate between different sources and styles of programme funding and that they use programmes to their own ends is readily accommodated within principal-agent theory with the help of concepts such as shirking and defection. Taking a different route, I use three examples of research programming (by the UK, ...

  13. Optimal interface between principal deterrent systems and material accounting

    International Nuclear Information System (INIS)

    Deiermann, P.J.; Opelka, J.H.

    1983-01-01

    The purpose of this study is to find an optimal blend between three safeguards systems for special nuclear material (SNM), the material accounting system and the physical security and material control systems. The latter two are denoted as principal deterrent systems. The optimization methodology employed is a two-stage decision algorithm, first an explicit maximization of expected diverter benefits and subsequently a minimization of expected defender costs for changes in material accounting procedures and incremental improvements in the principal deterrent systems. The probability of diverter success function dependent upon the principal deterrents and material accounting system variables is developed. Within the range of certainty of the model, existing material accounting, material control and physical security practices are justified

  14. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  15. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  16. Numerical approach to solar ejector-compression refrigeration system

    Directory of Open Access Journals (Sweden)

    Zheng Hui-Fan

    2016-01-01

    A model was established for a solar ejector-compression refrigeration system. The influence of generator temperature, middle temperature, and evaporator temperature on the performance of the refrigerant system was analyzed. An optimal generator temperature is found for maximal energy efficiency ratio and minimal power consumption.

  17. Does team lifting increase the variability in peak lumbar compression in ironworkers?

    NARCIS (Netherlands)

    Faber, Gert; Visser, Steven; van der Molen, Henk F.; Kuijer, P. Paul F. M.; Hoozemans, Marco J. M.; van Dieën, Jaap H.; Frings-Dresen, Monique H. W.

    2012-01-01

    Ironworkers frequently perform heavy lifting tasks in teams of two or four workers. Team lifting could potentially lead to a higher variation in peak lumbar compression forces than lifts performed by one worker, resulting in higher maximal peak lumbar compression forces. This study compared

  18. Maximizing ROI (return on information)

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, B.

    2000-05-01

    The role and importance of managing information are discussed, underscoring the importance by quoting from the report of the International Data Corporation, according to which Fortune 500 companies lost $12 billion in 1999 due to inefficiencies resulting from intellectual re-work, substandard performance, and inability to find knowledge resources. The report predicts that this figure will rise to $31.5 billion by 2003. Key impediments to implementing knowledge management systems are identified as: the cost and human resources requirement of deployment; inflexibility of historical systems to adapt to change; and the difficulty of achieving corporate acceptance of inflexible software products that require changes in 'normal' ways of doing business. The author recommends the use of model-, document- and rule-independent systems with a document centered interface (DCI), employing rapid application development (RAD) and object technologies and visual model development, which eliminate these problems, making it possible for companies to maximize their return on information (ROI), and achieve substantial savings in implementation costs.

  19. Maximizing the optical network capacity.

    Science.gov (United States)

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-06

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. © 2016 The Authors.

  20. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criteria of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  1. The Principal and the Law. Elementary Principal Series No. 7.

    Science.gov (United States)

    Doverspike, David E.; Cone, W. Henry

    Developments over the past 25 years in school-related legal issues in elementary schools have significantly changed the principal's role. In 1975, a decision of the U.S. Supreme Court established three due-process guidelines for short-term suspension. The decision requires student notification of charges, explanation of evidence, and an informal…

  2. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney Rozand

    2014-09-01

    Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in a randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  3. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
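
    A minimal rendering of the metalearning idea in Python: choose convex weights for the base learners' cross-validated predictions by directly maximizing AUC. The toy data, base learners, and the derivative-free optimizer are assumptions; the actual Super Learner implementation is not reproduced here.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=600, weights=[0.9], random_state=0)
      learners = [LogisticRegression(max_iter=1000), GaussianNB(),
                  DecisionTreeClassifier(max_depth=3, random_state=0)]
      # Out-of-fold predicted probabilities: one column per base learner.
      Z = np.column_stack([cross_val_predict(m, X, y, cv=5,
                                             method="predict_proba")[:, 1]
                           for m in learners])

      def neg_auc(w):
          w = np.abs(w) / np.abs(w).sum()      # project weights onto the simplex
          return -roc_auc_score(y, Z @ w)

      res = minimize(neg_auc, x0=np.ones(Z.shape[1]) / Z.shape[1],
                     method="Nelder-Mead")     # AUC is non-smooth: derivative-free
      print("ensemble AUC:", -neg_auc(res.x), " best single:",
            max(roc_auc_score(y, Z[:, j]) for j in range(Z.shape[1])))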

  4. Surface analysis the principal techniques

    CERN Document Server

    Vickerman, John C

    2009-01-01

    This completely updated and revised second edition of Surface Analysis: The Principal Techniques, deals with the characterisation and understanding of the outer layers of substrates, how they react, look and function which are all of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c

  5. Principal bundles the classical case

    CERN Document Server

    Sontz, Stephen Bruce

    2015-01-01

    This introductory graduate level text provides a relatively quick path to a special topic in classical differential geometry: principal bundles.  While the topic of principal bundles in differential geometry has become classic, even standard, material in the modern graduate mathematics curriculum, the unique approach taken in this text presents the material in a way that is intuitive for both students of mathematics and of physics. The goal of this book is to present important, modern geometric ideas in a form readily accessible to students and researchers in both the physics and mathematics communities, providing each with an understanding and appreciation of the language and ideas of the other.

  6. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  7. Inclusive Fitness Maximization:An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  8. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  9. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 × S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 × S^7 and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  10. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  11. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  12. School Uniforms: Guidelines for Principals.

    Science.gov (United States)

    Essex, Nathan L.

    2001-01-01

    Principals desiring to develop a school-uniform policy should involve parents, teachers, community leaders, and student representatives; beware restrictions on religious and political expression; provide flexibility and assistance for low-income families; implement a pilot program; align the policy with school-safety issues; and consider legal…

  13. The Principal and Tort Liability.

    Science.gov (United States)

    Stern, Ralph D.

    The emphasis of this chapter is on the tort liability of principals, especially their commission of unintentional torts or torts resulting from negligent conduct. A tort is defined as a wrongful act, not including a breach of contract or trust, which results in injury to another's person, property, or reputation and for which the injured party is…

  14. Teachers' Perspectives on Principal Mistreatment

    Science.gov (United States)

    Blase, Joseph; Blase, Jo

    2006-01-01

    Although there is some important scholarly work on the problem of workplace mistreatment/abuse, theoretical or empirical work on abusive school principals is nonexistent. Symbolic interactionism was the theoretical structure for the present study. This perspective on social research is founded on three primary assumptions: (1) individuals act…

  15. Principal minors and rhombus tilings

    International Nuclear Information System (INIS)

    Kenyon, Richard; Pemantle, Robin

    2014-01-01

    The algebraic relations between the principal minors of a generic n × n matrix are somewhat mysterious, see e.g. Lin and Sturmfels (2009 J. Algebra 322 4121–31). We show, however, that by adding in certain almost principal minors, the ideal of relations is generated by translations of a single relation, the so-called hexahedron relation, which is a composition of six cluster mutations. We give in particular a Laurent-polynomial parameterization of the space of n × n matrices, whose parameters consist of certain principal and almost principal minors. The parameters naturally live on vertices and faces of the tiles in a rhombus tiling of a convex 2n-gon. A matrix is associated to an equivalence class of tilings, all related to each other by Yang–Baxter-like transformations. By specializing the initial data we can similarly parameterize the space of Hermitian symmetric matrices over R, C, or H (the quaternions). Moreover, by further specialization, we can parametrize the space of positive definite matrices over these rings. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to 'Cluster algebras in mathematical physics'. (paper)

  16. The Principal's Role in Leading Instructional Change: A Case Study in New Program Adoption

    Science.gov (United States)

    Breon, Amy

    2016-01-01

    The noise in generating an agreed upon definition of instructional leadership that extends beyond theory to the practice of principals has been almost deafening in the last few decades. Many emphasize the need for the role of the principal to adapt to meet the demands of leadership that maximizes student achievement, but lack the specificity to…

  17. The FRX-C/LSM compression experiment

    International Nuclear Information System (INIS)

    Rej, D.J.; Siemon, R.E.; Taggart, D.P.

    1989-01-01

    After two years of preparation, hardware for high-power FRC compression heating studies is now being installed onto FRX-C/LSM. FRCs will be formed and translated out of the θ-pinch source, and into a compressor where the external B-field will be increased from 0.4 to 2 T in 55 μs. The compressed FRC can then be translated into a third stage for further study. A principal experimental goal is to study FRC confinement at the high energy density, n(T_e + T_i) ≤ 1.0 × 10^22 keV/m^3, associated with the large external field. Experiments are scheduled to begin in April. 11 refs., 5 figs

  18. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of a composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used. (paper)

  19. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  20. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.

  1. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
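
    For orientation, the quantity being maximized can be written in its textbook additive form (a standard simplification, not the paper's axiomatic construction; the symbols below are ours): the actor's personal fitness plus relatedness-weighted effects on others.

        % Additive inclusive fitness of individual i (standard simplification):
        % w_i is personal fitness, b_{ij} the fitness effect of i on j, and
        % r_{ij} the genetic relatedness between i and j.
        \[
          w_i^{\mathrm{incl}} \;=\; w_i \;+\; \sum_{j \neq i} r_{ij}\, b_{ij}
        \]
        % A costly helping act (cost c, benefit b to a relative of relatedness r)
        % is then favoured when Hamilton's rule holds:
        \[
          r\,b \;>\; c
        \]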

  2. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  3. Acute Thoracolumbar Spinal Cord Injury: Relationship of Cord Compression to Neurological Outcome.

    Science.gov (United States)

    Skeers, Peta; Battistuzzo, Camila R; Clark, Jillian M; Bernard, Stephen; Freeman, Brian J C; Batchelor, Peter E

    2018-02-21

    Spinal cord injury in the cervical spine is commonly accompanied by cord compression, and urgent surgical decompression may improve neurological recovery. However, the extent of spinal cord compression and its relationship to neurological recovery following traumatic thoracolumbar spinal cord injury is unclear. The purpose of this study was to quantify maximum cord compression following thoracolumbar spinal cord injury and to assess the relationship among cord compression, cord swelling, and eventual clinical outcome. The medical records of patients who were 15 to 70 years of age, were admitted with a traumatic thoracolumbar spinal cord injury (T1 to L1), and underwent a spinal surgical procedure were examined. Patients with penetrating injuries and multitrauma were excluded. Maximal osseous canal compromise and maximal spinal cord compression were measured on preoperative mid-sagittal computed tomography (CT) scans and T2-weighted magnetic resonance imaging (MRI) by observers blinded to patient outcome. The American Spinal Injury Association (ASIA) Impairment Scale (AIS) grades from acute hospital admission (≤24 hours of injury) and rehabilitation discharge were used to measure clinical outcome. Relationships among spinal cord compression, canal compromise, and initial and final AIS grades were assessed via univariate and multivariate analyses. Fifty-three patients with thoracolumbar spinal cord injury were included in this study. The overall mean maximal spinal cord compression (and standard deviation) was 40% ± 21%. There was a significant relationship between median spinal cord compression and final AIS grade, with grade-A patients (complete injury) exhibiting greater compression than grade-C and grade-D patients (incomplete injury). Multivariate analysis identified spinal cord compression as independently influencing the likelihood of complete spinal cord injury. Greater cord compression is associated with an increased likelihood of severe neurological deficits (complete injury) following thoracolumbar spinal cord injury.

  4. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  5. Developing Principal Instructional Leadership through Collaborative Networking

    Science.gov (United States)

    Cone, Mariah Bahar

    2010-01-01

    This study examines what occurs when principals of urban schools meet together to learn and improve their instructional leadership in collaborative principal networks designed to support, sustain, and provide ongoing principal capacity building. Principal leadership is considered second only to teaching in its ability to improve schools, yet few…

  6. 31 CFR 19.995 - Principal.

    Science.gov (United States)

    2010-07-01

    ... SUSPENSION (NONPROCUREMENT) Definitions § 19.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Principal. 19.995 Section 19.995...

  7. 22 CFR 208.995 - Principal.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Principal. 208.995 Section 208.995 Foreign...) Definitions § 208.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  8. 29 CFR 1471.995 - Principal.

    Science.gov (United States)

    2010-07-01

    ... SUSPENSION (NONPROCUREMENT) Definitions § 1471.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or... 29 Labor 4 2010-07-01 2010-07-01 false Principal. 1471.995 Section 1471.995 Labor Regulations...

  9. 21 CFR 1404.995 - Principal.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Principal. 1404.995 Section 1404.995 Food and...) Definitions § 1404.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  10. 22 CFR 1006.995 - Principal.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Principal. 1006.995 Section 1006.995 Foreign... § 1006.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  11. 2 CFR 180.995 - Principal.

    Science.gov (United States)

    2010-01-01

    ... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Principal. 180.995 Section 180.995 Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS... § 180.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator...

  12. 34 CFR 85.995 - Principal.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Principal. 85.995 Section 85.995 Education Office of...) Definitions § 85.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  13. 22 CFR 1508.995 - Principal.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Principal. 1508.995 Section 1508.995 Foreign...) Definitions § 1508.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...

  14. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  15. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)]

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  16. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome in aspects concerning pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 out of 39 patients (82%) came to hospital by their own means and continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; 41 patients (64%) had partial alleviation of pain after treatment. In those who came by their own means and did not change their characteristics, functional improvement was observed. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  17. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    The Web Graph is a large-scale graph that does not fit in main memory, so that lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval for the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets in use achieve space savings of about 10% over existing methods.
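
    The gain such schemes exploit comes from node locality: a BFS relabelling places neighbours close together, so sorted adjacency lists have small gaps. A minimal sketch of that core step (not the paper's full coding scheme; it assumes a connected graph given as an adjacency dict):

        # Relabel nodes in BFS order, then delta-encode each adjacency list;
        # the small gaps produced by BFS locality are what a variable-length
        # integer coder would then compress. Illustrative, not the full scheme.
        from collections import deque

        def bfs_order(adj, root=0):                 # assumes a connected graph
            order, seen, q = [], {root}, deque([root])
            while q:
                u = q.popleft()
                order.append(u)
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        q.append(v)
            return order

        def delta_encode(adj):
            new_id = {u: i for i, u in enumerate(bfs_order(adj))}
            gaps = {}
            for u in adj:
                nbrs = sorted(new_id[v] for v in adj[u])
                gaps[new_id[u]] = [nbrs[0]] + [b - a for a, b in zip(nbrs, nbrs[1:])]
            return gaps

        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
        print(delta_encode(adj))                    # small gaps, cheap to code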

  18. Generalized principal resonance in oscillatory systems of second order; Resonancia principal generalizada en sistemas oscilatorios de segundo orden

    Energy Technology Data Exchange (ETDEWEB)

    Munoz Aguirre, E. [Universidad Autonoma de Puebla, Oaxaca (Mexico); Alexandrov, V. V. [Benemerita Universidad Autonoma de Puebla, Puebla (Mexico)

    2001-02-01

    This paper studies the generalized principal resonance of systems described by second-order ordinary differential equations and proves, with the help of the Pontryagin maximum principle, that it coincides with the prolonged solution of an extremal problem for the same system. The results are verified in the special cases of general resonance and parametric resonance for the Mathieu equation.

  19. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    Li Ming; Fei Shaoming; Li-Jost Xian-Qing

    2011-01-01

    We present new Bell inequalities for arbitrary-dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. The Bell inequality is capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  20. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maxima, has changed significantly. An adjustment of the amounts of the reimbursement maxima and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maxima: the revised reimbursement maxima will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: the fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...

  1. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1≤k≤n} S_k ...
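
    A classical instance of the kind of inequality described here is Kolmogorov's maximal inequality: for independent, mean-zero random variables with finite variances,

        \[
          \Pr\Bigl(\max_{1 \le k \le n} |S_k| \ge \lambda\Bigr)
          \;\le\; \frac{\operatorname{Var}(S_n)}{\lambda^{2}},
          \qquad S_k = X_1 + \cdots + X_k,\quad \lambda > 0.
        \]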

  2. Maximizing Function through Intelligent Robot Actuator Control

    Data.gov (United States)

    National Aeronautics and Space Administration — Successful missions to Mars and beyond will only be possible with the support of high-performance...

  3. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  4. A definition of maximal CP-violation

    International Nuclear Information System (INIS)

    Roos, M.

    1985-01-01

    The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)

  5. A cosmological problem for maximally symmetric supergravity

    International Nuclear Information System (INIS)

    German, G.; Ross, G.G.

    1986-01-01

    Under very general considerations it is shown that inflationary models of the universe based on maximally symmetric supergravity with flat potentials are unable to resolve the cosmological energy density (Polonyi) problem. (orig.)

  6. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

    BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION...

  7. Maximal supergravities and the E10 model

    International Nuclear Information System (INIS)

    Kleinschmidt, Axel; Nicolai, Hermann

    2006-01-01

    The maximal rank hyperbolic Kac-Moody algebra e 10 has been conjectured to play a prominent role in the unification of duality symmetries in string and M theory. We review some recent developments supporting this conjecture

  8. Principal chiral model on superspheres

    International Nuclear Information System (INIS)

    Mitev, V.; Schomerus, V.; Quella, T.

    2008-09-01

    We investigate the spectrum of the principal chiral model (PCM) on odd-dimensional superspheres as a function of the curvature radius R. For volume-filling branes on S^{3|2}, we compute the exact boundary spectrum as a function of R. The extension to higher-dimensional superspheres is discussed, but not carried out in detail. Our results provide very convincing evidence in favor of the strong-weak coupling duality between supersphere PCMs and OSP(2S+2|2S) Gross-Neveu models that was recently conjectured by Candu and Saleur. (orig.)

  9. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n ≤ 7.

  10. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  11. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases

  12. Why firms should not always maximize profits

    OpenAIRE

    Kolstad, Ivar

    2006-01-01

    Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...

  13. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  14. Compressible generalized Newtonian fluids

    Czech Academy of Sciences Publication Activity Database

    Málek, Josef; Rajagopal, K.R.

    2010-01-01

    Roč. 61, č. 6 (2010), s. 1097-1110 ISSN 0044-2275 Institutional research plan: CEZ:AV0Z20760514 Keywords : power law fluid * uniform temperature * compressible fluid Subject RIV: BJ - Thermodynamics Impact factor: 1.290, year: 2010

  15. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  16. Compression of Infrared images

    DEFF Research Database (Denmark)

    Mantel, Claire; Forchhammer, Søren

    2017-01-01

    …best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp.

  17. Gas compression infrared generator

    International Nuclear Information System (INIS)

    Hug, W.F.

    1980-01-01

    A molecular gas is compressed in a quasi-adiabatic manner to produce pulsed radiation during each compressor cycle when the pressure and temperature are sufficiently high, and part of the energy is recovered during the expansion phase, as defined in U.S. Pat. No. 3,751,666; characterized by use of a cylinder with a reciprocating piston as a compressor

  18. A Note on McDonald's Generalization of Principal Components Analysis

    Science.gov (United States)

    Shine, Lester C., II

    1972-01-01

    It is shown that McDonald's generalization of Classical Principal Components Analysis to groups of variables maximally channels the total variance of the original variables through the groups of variables acting as groups. An equation is obtained for determining the vectors of correlations of the L2 components with the original variables.…
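
    The variance-channelling property mentioned here is easiest to see in the classical, ungrouped case that McDonald's method generalizes. A brief numpy sketch on synthetic data (the loadings-as-correlations reading assumes standardized variables):

        # Classical PCA as variance maximization: the leading eigenvectors of
        # the covariance matrix carry the largest shares of total variance.
        import numpy as np

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated data
        Xc = X - X.mean(axis=0)

        eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = eigvals.argsort()[::-1]                # descending variance
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        scores = Xc @ eigvecs                          # component scores
        print("share of total variance:", (eigvals / eigvals.sum()).round(3))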

  19. Interpretable functional principal component analysis.

    Science.gov (United States)

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.

  20. Biomechanical characteristics of handballing maximally in Australian football.

    Science.gov (United States)

    Parrington, Lucy; Ball, Kevin; MacMahon, Clare

    2014-11-01

    The handball pass is influential in Australian football, and achieving higher ball speeds in flight is an advantage in increasing distance and reducing the chance of interceptions. The purpose of this study was to provide descriptive kinematic data and identify key technical aspects of maximal handball performance. Three-dimensional full-body kinematic data from 19 professional Australian football players performing the handball pass for maximal speed were collected, and the hand speed at ball contact was used to determine performance. Sixty-four kinematic parameters initially obtained were reduced to 15, and then grouped into like components through a two-stage supervised principal components analysis procedure. These components were then entered into a multiple regression analysis, which indicated that greater hand speed was associated with greater shoulder angular velocity and separation angle between the shoulders and pelvis at ball contact, as well as an earlier time of maximum upper-trunk rotation velocity. These data suggested that in order to increase the speed of the handball pass in Australian football, strategies like increased shoulder angular velocity, increased separation angle at ball contact, and earlier achievement of upper-trunk rotation speed might be beneficial.

  1. Synthesis of magnetic systems producing field with maximal scalar characteristics

    International Nuclear Information System (INIS)

    Klevets, Nickolay I.

    2005-01-01

    A method of synthesis of magnetic systems (MSs) consisting of uniformly magnetized blocks is proposed. This method allows one to synthesize MSs providing the maximum value of any scalar characteristic of the magnetic field. In particular, it is possible to synthesize MSs providing the maximum of a field projection on a given vector, a gradient of a field modulus and a gradient of a field energy on a given directing vector, a field magnitude, a magnetic flux through a given surface, a scalar product of a field or a force by a directing function given in some area of space, etc. The synthesized MSs provide maximal efficiency of permanent magnet utilization. The proposed synthesis method changes the design procedure in principle, namely, it proceeds according to the following scheme: (a) choose the sizes, form and number of blocks of a system on technological (economic) grounds; (b) using the proposed synthesis method, find the orientation of block magnetization providing the maximum possible effect of magnet utilization in the system obtained in (a). This approach considerably reduces the time needed to design MSs and guarantees the maximum possible efficiency of magnet utilization. Besides, it provides absolute assurance of the 'ideality' of an MS design and allows one to obtain an exact estimate of the limit parameters of the field in the working area of a projected MS. The method is applicable to systems containing components made from soft magnetic material with linear magnetic properties

  2. Female Traditional Principals and Co-Principals: Experiences of Role Conflict and Job Satisfaction

    Science.gov (United States)

    Eckman, Ellen Wexler; Kelber, Sheryl Talcott

    2010-01-01

    This paper presents a secondary analysis of survey data focusing on role conflict and job satisfaction of 102 female principals. Data were collected from 51 female traditional principals and 51 female co-principals. By examining the traditional and co-principal leadership models as experienced by female principals, this paper addresses the impact…

  3. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization, it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger than or equal to stakeholder-owner...

  4. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  5. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
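
    The mechanism is easy to state: LZ78 grows a dictionary of previously seen phrases and emits (phrase index, next character) pairs, so the repeated substrings common in string dictionaries collapse into short references. A minimal encoder sketch:

        # Minimal LZ78 encoder: emits (dictionary index, next char) pairs.
        # Repeated substrings become references to earlier phrases.
        def lz78_encode(text):
            dictionary, out, w = {"": 0}, [], ""
            for ch in text:
                if w + ch in dictionary:
                    w += ch                        # extend the current phrase
                else:
                    out.append((dictionary[w], ch))
                    dictionary[w + ch] = len(dictionary)
                    w = ""
            if w:                                  # flush a trailing match
                out.append((dictionary[w[:-1]], w[-1]))
            return out

        print(lz78_encode("abababababab"))
        # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, 'a'), (5, 'b')]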

  6. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  7. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  8. On Bayesian Principal Component Analysis

    Czech Academy of Sciences Publication Activity Database

    Šmídl, Václav; Quinn, A.

    2007-01-01

    Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a

  9. Principals: Learn P.R. Survival Skills.

    Science.gov (United States)

    Reep, Beverly B.

    1988-01-01

    School building level public relations depends on the principal or vice principal. Strategies designed to enhance school public relations programs include linking school and community, working with the press, and keeping morale high inside the school. (MLF)

  10. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities, which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)^2 gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them are de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)^2 and SO(4,4)^2, as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories

  11. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  12. Fingerprints in compressed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

    In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time...
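
    The fingerprint in question is the classical Karp–Rabin polynomial hash, whose key property is composability: the fingerprint of a concatenation is computable from the fingerprints of the parts. That composability is what lets a grammar-compressed index answer substring queries without decompression. A small sketch (the base and modulus here are illustrative; in practice the base is chosen at random):

        # Karp-Rabin fingerprints: phi(s) = sum s[i] * B^i (mod P).
        # phi(uv) is computable from phi(u), phi(v) and |u| alone.
        P = (1 << 61) - 1        # a Mersenne prime modulus
        B = 1_000_003            # illustrative base (random in practice)

        def fingerprint(s):
            h = 0
            for ch in reversed(s):
                h = (h * B + ord(ch)) % P
            return h

        def concat_fp(fp_u, fp_v, len_u):
            # phi(uv) = phi(u) + B^{|u|} * phi(v)  (mod P)
            return (fp_u + pow(B, len_u, P) * fp_v) % P

        u, v = "compress", "ed"
        assert fingerprint(u + v) == concat_fp(fingerprint(u), fingerprint(v), len(u))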

  13. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    For wireless sensor network microseismic monitoring, to address the problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressive sensing (CS) theory applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within a segment it improves the accuracy of signal reconstruction, while taking advantage of compressive sensing theory to achieve a high compression ratio of the signal. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, for signals with sparsity degree higher than 40, the signal can be compressed at a compression ratio of more than 0.4 with a mean square error of less than 0.01, prolonging the network life by 2 times.

  14. Compressed sensing electron tomography

    International Nuclear Information System (INIS)

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A.; Holland, Daniel J.

    2013-01-01

    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform
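
    As a toy illustration of the reconstruction principle (not the CS–ET pipeline itself), the sketch below recovers a sparse signal from undersampled random measurements with iterative soft thresholding (ISTA); all sizes and the regularization weight are arbitrary choices:

        # Toy compressed-sensing recovery via ISTA: minimize
        # 0.5*||A z - y||^2 + lam*||z||_1 by gradient steps + soft thresholding.
        import numpy as np

        rng = np.random.default_rng(2)
        n, m, k = 100, 40, 5                       # length, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # random sensing matrix
        y = A @ x                                  # undersampled measurements

        lam = 0.01
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
        z = np.zeros(n)
        for _ in range(500):
            z = z + A.T @ (y - A @ z) / L          # gradient step
            z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)   # shrink

        print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))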

  15. Principals as Assessment Leaders in Rural Schools

    Science.gov (United States)

    Renihan, Patrick; Noonan, Brian

    2012-01-01

    This article reports a study of rural school principals' assessment leadership roles and the impact of rural context on their work. The study involved three focus groups of principals serving small rural schools of varied size and grade configuration in three systems. Principals viewed assessment as a matter of teacher accountability and as a…

  16. Principal Stability and the Rural Divide

    Science.gov (United States)

    Pendola, Andrew; Fuller, Edward J.

    2018-01-01

    This article examines the unique features of the rural school context and how these features are associated with the stability of principals in these schools. Given the small but growing literature on the characteristics of rural principals, this study presents an exploratory analysis of principal stability across schools located in different…

  17. New Principal Coaching as a Safety Net

    Science.gov (United States)

    Celoria, Davide; Roberson, Ingrid

    2015-01-01

    This study examines new principal coaching as an induction process and explores the emotional dimensions of educational leadership. Twelve principal coaches and new principals--six of each--participated in this qualitative study that employed emergent coding (Creswell, 2008; Denzin, 2005; Glaser & Strauss, 1998; Spradley, 1979). The major…

  18. 12 CFR 561.39 - Principal office.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Principal office. 561.39 Section 561.39 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY DEFINITIONS FOR REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 561.39 Principal office. The term principal office means the home...

  19. The Principal as Academician: The Renewed Voice.

    Science.gov (United States)

    McAvoy, Brenda, Ed.

    This collection of essays was written by principals who participated in the 1986-87 Humanities Seminar sponsored by the Principals' Institute of Georgia State University. The focus was "The Evolution of Intellectual Leadership." The roles of the principal as philosopher, historian, ethician, writer, and team member are examined through…

  20. Principal-Counselor Collaboration and School Climate

    Science.gov (United States)

    Rock, Wendy D.; Remley, Theodore P.; Range, Lillian M.

    2017-01-01

    Examining whether principal-counselor collaboration and school climate were related, researchers sent 4,193 surveys to high school counselors in the United States and received 419 responses. As principal-counselor collaboration increased, there were increases in counselors viewing the principal as supportive, the teachers as regarding one another…

  1. Modelling Monthly Mental Sickness Cases Using Principal ...

    African Journals Online (AJOL)

    The methodology was principal component analysis (PCA) using data obtained from the hospital to estimate regression coefficients and parameters. It was found that the principal component regression model that was derived was a good predictive tool. The principal component regression model obtained was okay and this ...
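
    Principal component regression itself is compact enough to state in code. A hedged sketch on synthetic data (the hospital data are not available here): regress the response on the leading component scores, then map the coefficients back to the original predictors.

        # Principal component regression on synthetic, collinear predictors.
        import numpy as np

        rng = np.random.default_rng(3)
        n, p, k = 120, 6, 2                        # cases, predictors, PCs kept
        X = rng.normal(size=(n, p))
        X[:, 3:] = X[:, :3] + 0.05 * rng.normal(size=(n, 3))   # collinearity
        y = X @ rng.normal(size=p) + rng.normal(size=n)

        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        Z = Xc @ Vt[:k].T                          # scores on first k components
        gamma, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
        beta = Vt[:k].T @ gamma                    # coefficients on predictors
        print("PCR coefficients:", beta.round(3))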

  2. Principals' Collaborative Roles as Leaders for Learning

    Science.gov (United States)

    Kitchen, Margaret; Gray, Susan; Jeurissen, Maree

    2016-01-01

    This article draws on data from three multicultural New Zealand primary schools to reconceptualize principals' roles as leaders for learning. In doing so, the writers build on Sinnema and Robinson's (2012) article on goal setting in principal evaluation. Sinnema and Robinson found that even principals hand-picked for their experience fell short on…

  3. Perceptions of Beginning Public School Principals.

    Science.gov (United States)

    Lyons, James E.

    1993-01-01

    Summarizes a study to determine principals' perceptions of their competency in primary responsibility areas and their greatest challenges and frustrations. Beginning principals are challenged by delegating responsibilities and becoming familiar with the principal's role, the local school, and school operations. Their major frustrations are role…

  4. Teacher Supervision Practices and Principals' Characteristics

    Science.gov (United States)

    April, Daniel; Bouchamma, Yamina

    2015-01-01

    A questionnaire was used to determine the individual and collective teacher supervision practices of school principals and vice-principals in Québec (n = 39) who participated in a research-action study on pedagogical supervision. These practices were then analyzed in terms of the principals' sociodemographic and socioprofessional characteristics…

  5. Leadership Coaching for Principals: A National Study

    Science.gov (United States)

    Wise, Donald; Cavazos, Blanca

    2017-01-01

    Surveys were sent to a large representative sample of public school principals in the United States asking if they had received leadership coaching. Comparison of responses to actual numbers of principals indicates that the sample represents the first national study of principal leadership coaching. Results indicate that approximately 50% of all…

  6. 41 CFR 105-68.995 - Principal.

    Science.gov (United States)

    2010-07-01

    41 CFR 105-68.995, Public Contracts and Property Management, Federal Property Management Regulations System, Governmentwide Debarment and Suspension (Nonprocurement), Definitions. § 105-68.995 Principal. Principal means— (a...

  7. A principal-agent Model of corruption

    NARCIS (Netherlands)

    Groenendijk, Nico

    1997-01-01

    One of the new avenues in the study of political corruption is that of neo-institutional economics, of which the principal-agent theory is a part. In this article a principal-agent model of corruption is presented, in which there are two principals (one of which is corrupting), and one agent (who is

  8. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
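
    The greedy rule in this abstract lends itself to a compact illustration. The sketch below is a toy version, not the authors' model: total uncertainty reduction stands in for information gain, and the foveal resolution fall-off is assumed exponential with an arbitrary scale; all grid sizes and parameters are illustrative.

        import numpy as np

        H, W = 32, 32
        uncertainty = np.ones((H, W))      # per-location uncertainty about the stimulus
        ys, xs = np.mgrid[0:H, 0:W]

        def resolved(fy, fx, scale=8.0):
            """Fraction of uncertainty resolved at each location by fixating (fy, fx)."""
            return np.exp(-np.hypot(ys - fy, xs - fx) / scale)   # 1 at the fovea

        fixations = []
        for _ in range(6):
            # expected gain ~ uncertainty the candidate fixation would resolve
            gain, best = max((np.sum(uncertainty * resolved(fy, fx)), (fy, fx))
                             for fy in range(H) for fx in range(W))
            fixations.append(best)
            uncertainty *= 1.0 - resolved(*best)   # acquired information is removed
        print(fixations)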

  9. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  10. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

    Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.

  11. Singularity Structure of Maximally Supersymmetric Scattering Amplitudes

    DEFF Research Database (Denmark)

    Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy

    2014-01-01

    We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic ...... singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA)....

  12. Learning curves for mutual information maximization

    International Nuclear Information System (INIS)

    Urbanczik, R.

    2003-01-01

    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered

  13. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z) where z is the number of reported pairs. If the upper bound is removed the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.
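
    The definition is easy to state in code. The brute-force sketch below (quadratic, unlike the paper's O(n log n + z) suffix-tree method) enumerates pairs of occurrences, extends them to right-maximality, checks left-maximality, and filters by the gap bounds; the example string is arbitrary.

        def maximal_pairs(s, min_gap, max_gap):
            """All maximal pairs (i, j, length) with min_gap <= gap <= max_gap."""
            n, out = len(s), []
            for i in range(n):
                for j in range(i + 1, n):
                    if s[i] != s[j]:
                        continue
                    if i > 0 and s[i - 1] == s[j - 1]:
                        continue                     # not left-maximal
                    length = 0                       # extend right as far as possible
                    while j + length < n and s[i + length] == s[j + length]:
                        length += 1                  # right-maximal when the loop stops
                    gap = j - (i + length)           # characters between the occurrences
                    if min_gap <= gap <= max_gap:
                        out.append((i, j, length))
            return out

        print(maximal_pairs("abaabab", 0, 3))        # tandem repeats appear with gap 0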

  14. Principal component regression analysis with SPSS.

    Science.gov (United States)

    Liu, R X; Kuang, J; Gong, Q; Hou, X L

    2003-06-01

    The paper introduces the indices of multicollinearity diagnosis, the basic principle of principal component regression, and the determination of the 'best' equation method. The paper uses an example to describe how to do principal component regression analysis with SPSS 10.0, covering all calculation steps of the principal component regression and all operations of the linear regression, factor analysis, descriptives, compute variable and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance of multicollinearity. A simplified, faster, and accurate statistical analysis is achieved through principal component regression with SPSS.
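
    The same analysis is straightforward outside SPSS. A rough scikit-learn equivalent on synthetic collinear data (all variable names and sizes illustrative, not taken from the paper's example):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        n = 200
        x1 = rng.normal(size=n)
        x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
        x3 = rng.normal(size=n)
        X = np.column_stack([x1, x2, x3])
        y = 2 * x1 + 3 * x3 + rng.normal(scale=0.5, size=n)

        # keeping 2 components drops the near-degenerate direction causing collinearity
        pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
        pcr.fit(X, y)
        print("R^2 on training data:", pcr.score(X, y))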

  15. Energy Efficient Precoding C-RAN Downlink with Compression at Fronthaul

    OpenAIRE

    Nguyen, Kien-Giang; Vu, Quang-Doanh; Juntti, Markku; Tran, Le-Nam

    2017-01-01

    This paper considers a downlink transmission of cloud radio access network (C-RAN) in which precoded baseband signals at a common baseband unit are compressed before being forwarded to radio units (RUs) through limited fronthaul capacity links. We investigate the joint design of precoding, multivariate compression and RU-user selection which maximizes the energy efficiency of downlink C-RAN networks. The considered problem is inherently a rank-constrained mixed Boolean nonconvex program for w...

  16. Maximizing the Range of a Projectile.

    Science.gov (United States)

    Brown, Ronald A.

    1992-01-01

    Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
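
    For flat ground, the closed form that all such solutions recover is R(theta) = v^2 sin(2 theta) / g, maximized at 45 degrees; a short numerical scan (launch speed chosen arbitrarily) confirms it.

        import numpy as np

        v, g = 20.0, 9.81                        # launch speed (m/s), gravity (m/s^2)
        theta = np.radians(np.linspace(0.0, 90.0, 9001))
        R = v**2 * np.sin(2.0 * theta) / g       # flat-ground range formula
        best = np.degrees(theta[np.argmax(R)])
        print(f"max range {R.max():.2f} m at {best:.2f} degrees")   # 45 degrees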

  17. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust maximization problem from terminal wealth and consumption under a convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle

  18. Ehrenfest's Lottery--Time and Entropy Maximization

    Science.gov (United States)

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
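
    A toy simulation of the two-urn version of the lottery shows the entropy climb. Here the macrostate entropy is taken as S(k) = ln C(N, k), all marbles start in one urn, and the marble and step counts are illustrative.

        import math, random

        random.seed(0)
        N, k = 100, 100                   # N marbles, k currently in urn A (all start there)
        S = lambda k: math.log(math.comb(N, k))   # entropy of the macrostate

        for step in range(1, 501):
            if random.randrange(N) < k:   # the randomly drawn marble sat in urn A
                k -= 1
            else:
                k += 1
            if step % 100 == 0:
                print(f"step {step}: k = {k}, S = {S(k):.2f}")
        print(f"S_max = {S(N // 2):.2f}")  # entropy is maximal at the even split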

  19. Reserve design to maximize species persistence

    Science.gov (United States)

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  20. Maximal indecomposable past sets and event horizons

    International Nuclear Information System (INIS)

    Krolak, A.

    1984-01-01

    The existence of maximal indecomposable past sets MIPs is demonstrated using the Kuratowski-Zorn lemma. A criterion for the existence of an absolute event horizon in space-time is given in terms of MIPs and a relation to black hole event horizon is shown. (author)

  1. Maximization of eigenvalues using topology optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2000-01-01

    to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency, One example...

  2. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  3. A THEORY OF MAXIMIZING SENSORY INFORMATION

    NARCIS (Netherlands)

    Hateren, J.H. van

    1992-01-01

    A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power

  4. Maximizing scientific knowledge from randomized clinical trials

    DEFF Research Database (Denmark)

    Gustafsson, Finn; Atar, Dan; Pitt, Bertram

    2010-01-01

    Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly vari...

  5. A Model of College Tuition Maximization

    Science.gov (United States)

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of student's is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  6. Logit Analysis for Profit Maximizing Loan Classification

    OpenAIRE

    Watt, David L.; Mortensen, Timothy L.; Leistritz, F. Larry

    1988-01-01

    Lending criteria and loan classification methods are developed. Rating system breaking points are analyzed to present a method to maximize loan revenues. Financial characteristics of farmers are used as determinants of delinquency in a multivariate logistic model. Results indicate that debt-to-asset and operating ration are most indicative of default.

  7. Fast algorithm for exploring and compressing of large hyperspectral images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    A new method for calculation of latent variable space for exploratory analysis and dimension reduction of large hyperspectral images is proposed. The method is based on significant downsampling of image pixels with preservation of pixels' structure in feature (variable) space. To achieve this, in… The method can be used first of all for fast compression of large data arrays with principal component analysis or similar projection techniques.
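
    A minimal sketch of the idea: fit the latent-variable (PCA) space on a small subset of pixels, then project the full image. The paper preserves the pixels' structure in feature space when downsampling; plain random subsampling is used below only to keep the sketch short, and all sizes are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        pixels, bands = 200_000, 120
        cube = rng.normal(size=(pixels, bands))   # stand-in for an unfolded image

        subset = cube[rng.choice(pixels, size=5_000, replace=False)]
        mu = subset.mean(axis=0)
        _, _, Vt = np.linalg.svd(subset - mu, full_matrices=False)

        scores = (cube - mu) @ Vt[:10].T          # project every pixel on 10 components
        print(scores.shape)                       # (200000, 10), the compressed image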

  8. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  9. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

    Mako Okanda

    2015-07-01

    This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (the first maxim of quantity and the maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  10. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Current transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We proposed a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we are able to modulate all the frames of data using binary random patterns so as to reconstruct a super-resolution transient/3D image later. Because the low fill factor of a SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We proposed a new CS reconstruction algorithm which is able to denoise at the same time for measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time; it is not easy to reconstruct a high resolution image with only one sensor, while for an array it suffices to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  11. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
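
    The compressed-feature step can be sketched with a very sparse Achlioptas-style random matrix. This is an illustration of the general technique, not the tracker's exact matrix; the dimensions and sparsity are arbitrary, and the stand-in vectors replace the multiscale image features used by the tracker.

        import numpy as np

        rng = np.random.default_rng(0)
        d, k, s = 10_000, 50, 100        # feature dim, compressed dim, sparsity factor
        # very sparse matrix: +/- sqrt(s) with probability 1/(2s) each, zero otherwise
        R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(k, d),
                       p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])

        x_fg = rng.normal(size=d)        # stand-in foreground feature vector
        x_bg = rng.normal(size=d)        # stand-in background feature vector
        print((R @ x_fg).shape, (R @ x_bg).shape)   # both compressed to (50,)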

  12. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity and may exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms.
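
    As a point of reference for such tools: a pure A/C/G/T sequence already fits in 2 bits per base. The sketch below shows only this naive packing baseline, not SeqCompress's statistical model and arithmetic coder, which must beat it to be worthwhile.

        CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

        def pack(seq):
            """Pack an A/C/G/T string at 2 bits per base (naive baseline)."""
            out, buf, nbits = bytearray(), 0, 0
            for base in seq:
                buf = (buf << 2) | CODE[base]
                nbits += 2
                if nbits == 8:
                    out.append(buf)
                    buf, nbits = 0, 0
            if nbits:
                out.append(buf << (8 - nbits))  # shift remaining bits to the byte's top
            return bytes(out)

        print(len("ACGTACGTAA"), "bases ->", len(pack("ACGTACGTAA")), "bytes")  # 10 -> 3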

  13. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results of multiple and single compression, which help to identify the better compression output and to develop compression algorithms.
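
    The single-versus-multiple-compression comparison is easy to reproduce with Python's standard codecs (the input below is an arbitrary repetitive text). A second pass over already-compressed output typically gains little or nothing, because compressed data is close to random.

        import bz2, lzma, zlib

        data = b"the quick brown fox jumps over the lazy dog. " * 2000

        for name, comp in [("zlib", zlib.compress), ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
            once = comp(data)
            twice = comp(once)           # second pass over the compressed output
            print(f"{name}: {len(data)} -> {len(once)} -> {len(twice)} bytes")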

  14. Compression of magnetohydrodynamic simulation data using singular value decomposition

    International Nuclear Information System (INIS)

    Castillo Negrete, D. del; Hirshman, S.P.; Spong, D.A.; D'Azevedo, E.F.

    2007-01-01

    Numerical calculations of magnetic and flow fields in magnetohydrodynamic (MHD) simulations can result in extensive data sets. Particle-based calculations in these MHD fields, needed to provide closure relations for the MHD equations, will require communication of this data to multiple processors and rapid interpolation at numerous particle orbit positions. To facilitate this analysis it is advantageous to compress the data using singular value decomposition (SVD, or proper orthogonal decomposition, POD) methods. As an example of the compression technique, SVD is applied to magnetic field data arising from a dynamic nonlinear MHD code. The performance of the SVD compression algorithm is analyzed by calculating Poincaré plots for electron orbits in a three-dimensional magnetic field and comparing the results with uncompressed data.
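
    The compression step itself reduces to a truncated SVD. A minimal sketch on a smooth stand-in field snapshot (grid size, rank, and noise level all illustrative):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0.0, 2.0 * np.pi, 256)
        # smooth stand-in for one field component on a 256 x 256 grid, plus noise
        B = np.outer(np.sin(x), np.cos(2.0 * x)) + 0.01 * rng.normal(size=(256, 256))

        U, s, Vt = np.linalg.svd(B, full_matrices=False)
        r = 10                                   # retained singular triplets
        B_r = (U[:, :r] * s[:r]) @ Vt[:r]        # rank-r reconstruction

        stored = r * (2 * 256 + 1)               # values kept instead of 256*256
        err = np.linalg.norm(B - B_r) / np.linalg.norm(B)
        print(f"kept {stored} of {256 * 256} values, relative error {err:.2e}")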

  15. Analysis by compression

    DEFF Research Database (Denmark)

    Meredith, David

    MEL is a geometric music encoding language designed to allow for musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern...... discovery algorithm to allow for compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects....

  16. Compressive Fatigue in Wood

    DEFF Research Database (Denmark)

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben

    1999-01-01

    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square wave formed fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested identified with damage and a correlation…

  17. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in the low-illumination condition.

  18. Metal Hydride Compression

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to breakdown in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies of pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a

  19. Free compression tube. Applications

    Science.gov (United States)

    Rusu, Ioan

    2012-11-01

    During the flight of vehicles, their propulsion energy must overcome gravity, ensure the displacement of air masses along the vehicle trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy imparted to air masses deflected by the impact with the flying vehicle. Flight optimization by increasing speed and reducing fuel consumption has directed research in the aerodynamics field. The flying-vehicle shapes obtained through studies in the wind tunnel optimize the impact with the air masses and the airflow along the vehicle. Through energy balance studies for vehicles in flight, the author Ioan Rusu directed his research to reducing the energy lost at vehicle impact with air masses. In this respect, as compared with the classical solution of shaping flight-vehicle aerodynamic surfaces to reduce the impact and friction with air masses, Ioan Rusu invented a device he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of flight vehicles, it significantly eliminates the impact and friction of air masses with the vehicle body: the air masses come into contact with the air inside the free compression tube, and air-solid friction is eliminated and replaced by air-air friction.

  20. Photon compression in cylinders

    International Nuclear Information System (INIS)

    Ensley, D.L.

    1977-01-01

    It has been shown theoretically that intense microwave radiation is absorbed non-classically by a newly enunciated mechanism when interacting with hydrogen plasma. Fields > 1 MG, lambda > 1 mm are within this regime. The predicted absorption, approximately P_rf v_theta^e, has not yet been experimentally confirmed. The applications of such a coupling are many. If microwave bursts of approximately > 5 x 10^14 watts, 5 ns can be generated, the net generation of power from pellet fusion as well as various military applications becomes feasible. The purpose, then, for considering gas-gun photon compression is to obtain the above experimental capability by converting the gas kinetic energy directly into microwave form. Energies of > 10^5 J cm^-2 and powers of > 10^13 W cm^-2 are potentially available for photon interaction experiments using presently available technology. The following topics are discussed: microwave modes in a finite cylinder, injection, compression, switchout operation, and system performance parameter scaling

  1. Fingerprints in Compressed Strings

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that, due to its strong properties, has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(log N log ℓ) and O…
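
    The object being queried is easy to show on an uncompressed string: with prefix fingerprints, any substring fingerprint follows in O(1). The sketch below illustrates only the fingerprint arithmetic (modulus and base chosen arbitrarily); the paper's contribution is answering the same queries on the grammar-compressed string without decompression.

        P, B = (1 << 61) - 1, 256        # modulus and base (illustrative choices)

        def prefix_fingerprints(s):
            """pre[k] is the fingerprint of s[0:k]; pw[k] is B**k mod P."""
            pre, pw = [0], [1]
            for ch in s:
                pre.append((pre[-1] * B + ord(ch)) % P)
                pw.append(pw[-1] * B % P)
            return pre, pw

        def fingerprint(pre, pw, i, j):
            """Fingerprint of s[i..j] (inclusive) in O(1)."""
            return (pre[j + 1] - pre[i] * pw[j - i + 1]) % P

        s = "abracadabra"
        pre, pw = prefix_fingerprints(s)
        assert fingerprint(pre, pw, 0, 3) == fingerprint(pre, pw, 7, 10)  # both "abra"
        print("both occurrences of 'abra' share one fingerprint")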

  2. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  3. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
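
    For the Gaussian special case with a parameter-dependent mean mu(theta) and fixed covariance C, the n summaries reduce to t = M^T C^{-1} (d - mu) with M = d mu / d theta at a fiducial point. The sketch below works this out for an assumed toy straight-line model (all values illustrative, not from the paper).

        import numpy as np

        rng = np.random.default_rng(0)
        N, sigma = 500, 0.3              # data points, noise level
        x = np.linspace(0.0, 1.0, N)

        def mu(theta):                   # assumed toy model: a straight line
            return theta[0] + theta[1] * x

        theta_fid = np.array([1.0, 2.0])             # fiducial parameters
        M = np.column_stack([np.ones(N), x])         # d mu / d theta (constant here)
        Cinv = np.eye(N) / sigma**2                  # inverse covariance

        d = mu(theta_fid) + sigma * rng.normal(size=N)   # simulated data vector
        t = M.T @ Cinv @ (d - mu(theta_fid))             # one summary per parameter
        print(f"{N} data points compressed to {t.size} statistics:", t)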

  4. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  5. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p < 0.05) in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p > 0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  6. Reduction of symplectic principal R-bundles

    International Nuclear Information System (INIS)

    Lacirasella, Ignazio; Marrero, Juan Carlos; Padrón, Edith

    2012-01-01

    We describe a reduction process for symplectic principal R-bundles in the presence of a momentum map. These types of structures play an important role in the geometric formulation of non-autonomous Hamiltonian systems. We apply this procedure to the standard symplectic principal R-bundle associated with a fibration π:M→R. Moreover, we show a reduction process for non-autonomous Hamiltonian systems on symplectic principal R-bundles. We apply these reduction processes to several examples. (paper)

  7. Refined reservoir description to maximize oil recovery

    International Nuclear Information System (INIS)

    Flewitt, W.E.

    1975-01-01

    To assure maximized oil recovery from older pools, reservoir description has been advanced by fully integrating original open-hole logs and the recently introduced interpretive techniques made available through cased-hole wireline saturation logs. A refined reservoir description utilizing normalized original wireline porosity logs has been completed in the Judy Creek Beaverhill Lake ''A'' Pool, a reefal carbonate pool with current potential productivity of 100,000 BOPD and 188 active wells. Continuous porosity was documented within a reef rim and cap while discontinuous porous lenses characterized an interior lagoon. With the use of pulsed neutron logs and production data a separate water front and pressure response was recognized within discrete environmental units. The refined reservoir description aided in reservoir simulation model studies and quantifying pool performance. A pattern water flood has now replaced the original peripheral bottom water drive to maximize oil recovery

  8. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  9. Derivative pricing based on local utility maximization

    OpenAIRE

    Jan Kallsen

    2002-01-01

    This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...

  10. Control of Shareholders’ Wealth Maximization in Nigeria

    OpenAIRE

    A. O. Oladipupo; C. O. Okafor

    2014-01-01

    This research focuses on who controls shareholders' wealth maximization and how this affects firm performance in publicly quoted non-financial companies in Nigeria. The shareholder fund was the dependent variable, while the explanatory variables were firm size (proxied by log of turnover), retained earnings (representing management control) and dividend payment (representing a measure of shareholders' control). The data used for this study were obtained from the Nigerian Stock Exchange [NSE] fact book an...

  11. Definable maximal discrete sets in forcing extensions

    DEFF Research Database (Denmark)

    Törnquist, Asger Dag; Schrittesser, David

    2018-01-01

    Let R be a Σ^1_1 binary relation, and recall that a set A is R-discrete if no two elements of A are related by R. We show that in the Sacks and Miller forcing extensions of L there is a Δ^1_2 maximal R-discrete set. We use this to answer in the negative the main question posed in [5] by showing...

  12. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  13. Single maximal versus combination punch kinematics.

    Science.gov (United States)

    Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J

    2011-03-01

    The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2.18, 84.87) = 105.76, p = 0.001) with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2.64, 102.87) = 23.52, p = 0.001) with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than in a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion-time than previously reported. This research could be of use to performers and coaches when considering training preparations.

  14. Formation Control for the MAXIM Mission

    Science.gov (United States)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.

  15. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in entropy production maximization is pinpointed and the condition of incompressibility is incorporated into gradient dynamics.

  16. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  17. Waves and compressible flow

    CERN Document Server

    Ockendon, Hilary

    2016-01-01

    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  18. A Streaming PCA VLSI Chip for Neural Data Compression.

    Science.gov (United States)

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

    Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 at the expense of reconstruction errors as low as 1% and 144 nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05 μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high channel count recorder.
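
    A streaming principal component can be sketched with Oja's rule, a standard streaming-PCA update and not necessarily the chip's exact algorithm; the channel count, learning rate, and synthetic data below are all illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        channels, eta = 16, 0.01
        w = rng.normal(size=channels)
        w /= np.linalg.norm(w)

        direction = rng.normal(size=channels)    # latent direction in the data
        for _ in range(5000):
            sample = direction * rng.normal() + 0.1 * rng.normal(size=channels)
            y = w @ sample                       # 1-D compressed output per sample
            w += eta * y * (sample - y * w)      # Oja's streaming update
            w /= np.linalg.norm(w)

        align = abs(w @ direction) / np.linalg.norm(direction)
        print(f"alignment with the dominant direction: {align:.3f}")  # approaches 1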

  19. Principal component analysis for authorship attribution

    OpenAIRE

    Amir Jamak; Alen Savatic; Mehmet Can

    2012-01-01

    Background: To recognize the authors of texts by the use of statistical tools, one first needs to decide which features to use as author characteristics, and then extract these features from the texts. The features extracted from texts are mostly the counts of so-called function words. Objectives: The extracted data are processed further and compressed into data with fewer features, in such a way that the compressed data still have the power of effective discriminators. In this case...

  20. The principal Hugoniot of Mg2SiO4 to 950 GPa

    Science.gov (United States)

    Townsend, J. P.; Root, S.; Shulenburger, L.; Lemke, R. W.; Kraus, R. G.; Jacobsen, S. B.; Spaulding, D.; Davies, E.; Stewart, S. T.

    2017-12-01

    We present new measurements and ab-initio calculations of the principal Hugoniot states of forsterite Mg2SiO4 in the liquid regime between 200-950 GPa. Forsterite samples were shock compressed along the principal Hugoniot using plate-impact shock compression experiments on the Sandia National Laboratories Z machine facility. In order to gain insight into the physical state of the liquid, we performed quantum molecular dynamics calculations of the Hugoniot and compare the results to experiment. We show that the principal Hugoniot is consistent with that of a single molecular fluid phase of Mg2SiO4, and compare our results to previous dynamic compression experiments and QMD calculations. Finally, we discuss how the results inform planetary accretion and impact models. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  1. New pulser for principal PO power

    International Nuclear Information System (INIS)

    Coudert, G.

    1984-01-01

    The pulser of the principal power of the PS is the unit that makes it possible to generate the reference function of the voltage of the principal magnet. This function depends on time and on the magnetic field of the magnet. It also generates various synchronization and reference pulses

  2. Principals: Human Capital Managers at Every School

    Science.gov (United States)

    Kimball, Steven M.

    2011-01-01

    Being a principal is more than just being an instructional leader. Principals also must manage their schools' teaching talent in a strategic way so that it is linked to school instructional improvement strategies, to the competencies needed to enact the strategies, and to success in boosting student learning. Teacher acquisition and performance…

  3. Constructing principals' professional identities through life stories ...

    African Journals Online (AJOL)

    The Life History approach was used to collect data from six ... experience as the most significant leadership factors that influence principals' ... ranging from their entry into the teaching profession to their appointment as ..... teachers. I think I learnt from my principal to be strict but accommodating ..... Teachers College Press.

  4. Integrating Technology: The Principals' Role and Effect

    Science.gov (United States)

    Machado, Lucas J.; Chung, Chia-Jung

    2015-01-01

    There are many factors that influence technology integration in the classroom such as teacher willingness, availability of hardware, and professional development of staff. Taking into account these elements, this paper describes research on technology integration with a focus on principals' attitudes. The role of the principal in classroom…

  5. Building Leadership Capacity to Support Principal Succession

    Science.gov (United States)

    Escalante, Karen Elizabeth

    2016-01-01

    This study applies transformational leadership theory practices, specifically inspiring a shared vision, modeling the way and enabling others to act to examine the purposeful ways in which principals work to build the next generation of teacher leaders in response to the dearth of K-12 principals. The purpose of this study was to discover how one…

  6. Deformation quantization of principal fibre bundles

    International Nuclear Information System (INIS)

    Weiss, S.

    2007-01-01

    Deformation quantization is an algebraic but still geometrical way to define noncommutative spacetimes. In order to investigate corresponding gauge theories on such spaces, the geometrical formulation in terms of principal fibre bundles yields the appropriate framework. In this talk I will explain what should be understood by a deformation quantization of principal fibre bundles and how associated vector bundles arise in this context. (author)

  7. Primary School Principals' Self-Monitoring Skills

    Science.gov (United States)

    Konan, Necdet

    2015-01-01

    The aim of the present study is to identify primary school principals' self-monitoring skills. The study adopted the general survey model and its population comprised primary school principals serving in the city of Diyarbakir, Turkey, while 292 of these constituted the sample. Self-Monitoring Scale was used as the data collection instrument. In…

  8. Revising the Role of Principal Supervisor

    Science.gov (United States)

    Saltzman, Amy

    2016-01-01

    In Washington, D.C., and Tulsa, Okla., districts whose efforts are supported by the Wallace Foundation, principal supervisors concentrate on bolstering their principals' work to improve instruction, as opposed to focusing on the managerial or operational aspects of running a school. Supervisors oversee fewer schools, which enables them to provide…

  9. An Examination of Principal Job Satisfaction

    Science.gov (United States)

    Pengilly, Michelle M.

    2010-01-01

    As education continues to succumb to deficits in budgets and increasingly high levels of student performance to meet the federal and state mandates, the quest to sustain and retain successful principals is imperative. The National Association of School Boards (1999) portrays effective principals as "linchpins" of school improvement and…

  10. Do Principals Fire the Worst Teachers?

    Science.gov (United States)

    Jacob, Brian A.

    2011-01-01

    This article takes advantage of a unique policy change to examine how principals make decisions regarding teacher dismissal. In 2004, the Chicago Public Schools (CPS) and Chicago Teachers Union signed a new collective bargaining agreement that gave principals the flexibility to dismiss probationary teachers for any reason and without the…

  11. Artful Dodges Principals Use to Beat Bureaucracy.

    Science.gov (United States)

    Ficklen, Ellen

    1982-01-01

    A study of Chicago (Illinois) principals revealed many ways principals practiced "creative insubordination"--avoiding following instructions but still getting things done. Among the dodges are deliberately missing deadlines, following orders literally, ignoring channels to procure teachers or materials, and using community members to…

  12. Women principals' reflections of curriculum management challenges ...

    African Journals Online (AJOL)

    This study reports the reflections of grade 6 rural primary principals in Mpumalanga province. A qualitative method of inquiry was used in this article, where data were collected using individual interviews with three principals and focus group discussions with the school management teams (SMTs) of three primary schools.

  13. The Succession of a School Principal.

    Science.gov (United States)

    Fauske, Janice R.; Ogawa, Rodney T.

    Applying theory from organizational and cultural perspectives to succession of principals, this study observes and records the language and culture of a small suburban elementary school. The study's procedures included analyses of shared organizational understandings as well as identification of the principal's influence on the school. Analyses of…

  14. Principals' Perceptions of School Public Relations

    Science.gov (United States)

    Morris, Robert C.; Chan, Tak Cheung; Patterson, Judith

    2009-01-01

    This study was designed to investigate school principals' perceptions on school public relations in five areas: community demographics, parental involvement, internal and external communications, school council issues, and community resources. Findings indicated that principals' concerns were as follows: rapid population growth, change of…

  15. Should Principals Know More about Law?

    Science.gov (United States)

    Doctor, Tyrus L.

    2013-01-01

    Educational law is a critical piece of the education conundrum. Principals reference law books on a daily basis in order to address the wide range of complex problems in the school system. A principal's knowledge of law issues and legal decision-making are essential to provide effective feedback for a successful school.

  16. How Not to Prepare School Principals

    Science.gov (United States)

    Davis, Stephen H.; Leon, Ronald J.

    2011-01-01

    Instead of focusing on how principals should be trained, a contrarian view is offered, grounded in theoretical perspectives of experiential learning and, in particular, in the theory of andragogy. A brief parable of the DoNoHarm School of Medicine is used as a descriptive analog for many principal preparation programs in America. The…

  17. Social Media Strategies for School Principals

    Science.gov (United States)

    Cox, Dan; McLeod, Scott

    2014-01-01

    The purpose of this qualitative study was to describe, analyze, and interpret the experiences of school principals who use multiple social media tools with stakeholders as part of their comprehensive communications practices. Additionally, it examined why school principals have chosen to communicate with their stakeholders through social media.…

  18. New Principals' Perspectives of Their Multifaceted Roles

    Science.gov (United States)

    Gentilucci, James L.; Denti, Lou; Guaglianone, Curtis L.

    2013-01-01

    This study utilizes Symbolic Interactionism to explore perspectives of neophyte principals. Findings explain how these perspectives are modified through complex interactions throughout the school year, and they also suggest preparation programs can help new principals most effectively by teaching "soft" skills such as active listening…

  19. The Principal's Guide to Grant Success.

    Science.gov (United States)

    Bauer, David G.

    This book provides principals of public and private elementary and middle schools with a step-by-step approach for developing a system that empowers faculty, staff, and the school community in attracting grant funds. Following the introduction, chapter 1 discusses the principal's role in supporting grantseeking. Chapter 2 describes how to…

  20. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory, whereas conventional lossless techniques achieved levels of less than 3.
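
    As a concrete illustration of the zero-the-small-coefficients idea described above, here is a minimal sketch using PyWavelets (an assumed dependency; the wavelet choice, threshold, and test signal are all illustrative, not the report's actual pipeline):

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed available

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1024)
    # a smooth "target signature" plus low-amplitude, high-frequency noise
    signal = np.exp(-((t - 0.5) / 0.05) ** 2) + 0.02 * rng.standard_normal(t.size)

    coeffs = pywt.wavedec(signal, "db4", level=5)      # low-/high-pass filter bank
    flat, slices = pywt.coeffs_to_array(coeffs)

    flat[np.abs(flat) < 0.05] = 0.0                    # zero the noise-like coefficients
    print(f"zeroed {np.mean(flat == 0.0):.0%} of coefficients")

    # the sparse, low-entropy array would now go to a lossless coder;
    # reconstruction shows the target signature survives nearly intact
    recon = pywt.waverec(pywt.array_to_coeffs(flat, slices, output_format="wavedec"), "db4")
    print("max reconstruction error:", np.max(np.abs(recon[:t.size] - signal)))
    ```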

  1. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

    Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times the nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  2. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e., when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
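
    The memory behaviour described above can be sketched in a few lines; the stream format below (vertex records, hex records carrying finalization tags) is invented for illustration, and the actual entropy coding is elided:

    ```python
    def stream_process(stream):
        """Toy streaming pass: keep a vertex only while later hexes may reference it."""
        active = {}   # vertex id -> position, held only until its finalization tag
        peak = 0
        for record in stream:
            if record[0] == "v":                      # ("v", vid, (x, y, z))
                _, vid, pos = record
                active[vid] = pos
            else:                                     # ("h", corner_ids, finalized_ids)
                _, corners, finalized = record
                cell = [active[v] for v in corners]
                # ... predictively encode `cell` against prior cells here ...
                for v in finalized:                   # last reference seen: release it
                    del active[v]
            peak = max(peak, len(active))
        return peak   # stays a small fraction of the total vertex count
    ```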

  3. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
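
    Since the record above only names the ingredients, a hedged sketch of one such DCT-plus-thresholding step may help; it uses SciPy's DCT on a random stand-in block, and the threshold is arbitrary:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn  # SciPy's n-dimensional DCT

    rng = np.random.default_rng(1)
    block = rng.random((8, 8))                    # stand-in for an 8x8 image block

    coeffs = dctn(block, norm="ortho")            # 2-D discrete cosine transform
    coeffs[np.abs(coeffs) < 0.1] = 0.0            # thresholding (quantization would follow)

    recon = idctn(coeffs, norm="ortho")
    print(f"kept {np.count_nonzero(coeffs)}/{coeffs.size} coefficients, "
          f"max error {np.max(np.abs(recon - block)):.3f}")
    ```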

  4. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the precise LZW method and the approximate Cosine Transform method. The results show that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than those of the precise method. (C.G.C.)

  5. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  6. Postactivation potentiation biases maximal isometric strength assessment.

    Science.gov (United States)

    Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio

    2014-01-01

    Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine whether PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s⁻¹ versus 727 ± 158 N·m·s⁻¹), and RMS (59.1 ± 12.2% RMSmax versus 54.8 ± 9.4% RMSmax) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.

  7. Gain maximization in a probabilistic entanglement protocol

    Science.gov (United States)

    di Lorenzo, Antonio; Esteves de Queiroz, Johnny Hebert

    Entanglement is a resource. We can therefore define gain as a monotonic function of entanglement, G(E). If a pair with entanglement E is produced with probability P, the net gain is N = P·G(E) − (1 − P)·C, where C is the cost of a failed attempt. We study a protocol where a pair of quantum systems is produced in a maximally entangled state ρ_m with probability P_m, while it is produced in a partially entangled state ρ_p with the complementary probability 1 − P_m. We mix a fraction w of the partially entangled pairs with the maximally entangled ones, i.e. we take the state to be ρ = (ρ_m + w U_loc ρ_p U_loc†) / (1 + w), where U_loc is an appropriate unitary local operation designed to maximize the entanglement of ρ. This procedure on one hand reduces the entanglement E, and hence the gain, but on the other hand it increases the probability of success to P = P_m + w(1 − P_m); therefore the net gain N may increase. There may hence be, a priori, an optimal value for w, the fraction of failed attempts that we mix in. We show that, in the hypothesis of a linear gain G(E) = E, even assuming a vanishing cost C → 0, the net gain N is increasing with w, therefore the best strategy is to always mix the partially entangled states. Work supported by CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, proc. 311288/2014-6, and by FAPEMIG, Fundação de Amparo à Pesquisa de Minas Gerais, proc. IC-FAPEMIG2016-0269 and PPM-00607-16.

  8. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables

  9. What currency do bumble bees maximize?

    Directory of Open Access Journals (Sweden)

    Nicholas L Charlton

    2010-08-01

    Full Text Available In modelling bumble bee foraging, net rate of energetic intake has been suggested as the appropriate currency. The foraging behaviour of honey bees is better predicted by using efficiency, the ratio of energetic gain to expenditure, as the currency. We re-analyse several studies of bumble bee foraging and show that efficiency is as good a currency as net rate in terms of predicting behaviour. We suggest that future studies of the foraging of bumble bees should be designed to distinguish between net rate and efficiency maximizing behaviour in an attempt to discover which is the more appropriate currency.

  10. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
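
    The defining property is easy to verify by brute force (this checks the definition only, not the classification algorithm, which rests on the distance-geometry results the abstract mentions; the pentagon example is ours):

    ```python
    import itertools
    import numpy as np

    def is_two_distance_set(points):
        """True if pairwise distances between distinct points take exactly two non-zero values."""
        dists = {round(float(np.linalg.norm(np.subtract(p, q))), 9)  # round to absorb float noise
                 for p, q in itertools.combinations(points, 2)}
        return len(dists) == 2

    # The 5 vertices of a regular pentagon in E^2: only edge and diagonal lengths occur.
    pentagon = [(np.cos(2 * np.pi * k / 5), np.sin(2 * np.pi * k / 5)) for k in range(5)]
    print(is_two_distance_set(pentagon))   # True
    ```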

  11. Maximizing policy learning in international committees

    DEFF Research Database (Denmark)

    Nedergaard, Peter

    2007-01-01

    This article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...

  12. Pouliot type duality via a-maximization

    International Nuclear Information System (INIS)

    Kawano, Teruhiko; Ookouchi, Yutaka; Tachikawa, Yuji; Yagi, Futoshi

    2006-01-01

    We study four-dimensional N=1 Spin(10) gauge theory with a single spinor and N_Q vectors at the superconformal fixed point via the electric-magnetic duality and a-maximization. When gauge invariant chiral primary operators hit the unitarity bounds, we find that the theory with no superpotential is identical to the one with some superpotential at the infrared fixed point. The auxiliary field method in the electric theory offers a satisfying description of the infrared fixed point, which is consistent with the better picture in the magnetic theory. In particular, it gives a clear description of the emergence of new massless degrees of freedom in the electric theory

  13. Endogenous Market Structures and Contract Theory. Delegation, principal-agent contracts, screening, franchising and tying

    OpenAIRE

    Etro Federico

    2010-01-01

    I study the role of unilateral strategic contracts for firms active in markets with price competition and endogenous entry. Traditional results change substantially when the market structure is endogenous rather than exogenous. They concern 1) contracts of managerial delegation to non-profit maximizers, 2) incentive principal-agent contracts in the presence of moral hazard on cost reducing activities, 3) screening contracts in case of asymmetric information on the productivity of the managers...

  14. Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index

    Directory of Open Access Journals (Sweden)

    Zhiliang Wang

    2014-01-01

    Full Text Available The main purpose of this paper is to explore the principal components of Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in a smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in multivariate statistical analysis, searching for linear transformations of the random vector with the maximal variance. In this paper, we studied the monthly return volatility of Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of such related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples. And FPCA is a convenient approach to extract the main variance factors.

  15. Multistage principal component analysis based method for abdominal ECG decomposition

    International Nuclear Information System (INIS)

    Petrolis, Robertas; Krisciukaitis, Algimantas; Gintautas, Vladas

    2015-01-01

    Reflection of fetal heart electrical activity is present in registered abdominal ECG signals. However, this signal component has noticeably less energy than concurrent signals, especially the maternal ECG. Therefore the traditionally recommended independent component analysis fails to separate these two ECG signals. Multistage principal component analysis (PCA) is proposed for step-by-step extraction of abdominal ECG signal components. Truncated representation and subsequent subtraction of cardio cycles of maternal ECG are the first steps. The energy of the fetal ECG component then becomes comparable to or even exceeds the energy of other components in the remaining signal. Second-stage PCA concentrates the energy of the sought signal in one principal component, assuring its maximal amplitude regardless of the orientation of the fetus in multilead recordings. Third-stage PCA is performed on signal excerpts representing detected fetal heart beats, with the aim of performing their truncated representation and reconstructing their shape for further analysis. The algorithm was tested with PhysioNet Challenge 2013 signals and signals recorded in the Department of Obstetrics and Gynecology, Lithuanian University of Health Sciences. Results of our method in PhysioNet Challenge 2013 on the open data set were: average score: 341.503 bpm² and 32.81 ms. (paper)
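
    A schematic NumPy version of the first stage (truncated representation and subtraction of maternal cycles) may clarify the idea; it assumes beats have already been detected and time-aligned, and the function name and sizes are illustrative:

    ```python
    import numpy as np

    def subtract_maternal(beats, n_components=3):
        """beats: (n_beats, beat_len) array of aligned maternal cardio cycles."""
        mean = beats.mean(axis=0)
        centered = beats - mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
        basis = vt[:n_components]                                # truncated representation
        maternal = mean + centered @ basis.T @ basis             # per-beat maternal estimate
        return beats - maternal    # residual: fetal ECG energy is now comparatively larger

    residual = subtract_maternal(np.random.default_rng(2).random((50, 400)))
    print(residual.shape)          # (50, 400); later PCA stages would run on this
    ```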

  16. Sparse principal component analysis in medical shape modeling

    Science.gov (United States)

    Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus

    2006-03-01

    Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
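
    To see the sparse-loadings contrast concretely, here is a generic example with scikit-learn's SparsePCA (a stand-in, not the authors' Matlab implementation) on simulated data:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, SparsePCA

    rng = np.random.default_rng(3)
    X = rng.standard_normal((100, 20))
    X[:, :5] += 3.0 * rng.standard_normal((100, 1))   # one localized, isolated effect

    dense = PCA(n_components=3).fit(X)
    sparse = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

    # PCA loadings involve all 20 variables; SPCA zeroes most of them,
    # which is what makes effects isolated and easily identifiable.
    print("nonzero PCA loadings: ", np.count_nonzero(dense.components_))
    print("nonzero SPCA loadings:", np.count_nonzero(sparse.components_))
    ```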

  17. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  18. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression, as a primary applicable tool for medical applications, was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  19. Preparing Principals as Instructional Leaders: Perceptions of University Faculty, Expert Principals, and Expert Teacher Leaders

    Science.gov (United States)

    Taylor Backor, Karen; Gordon, Stephen P.

    2015-01-01

    Although research has established links between the principal's instructional leadership and student achievement, there is considerable concern in the literature concerning the capacity of principal preparation programs to prepare instructional leaders. This study interviewed educational leadership faculty as well as expert principals and teacher…

  20. Exploring the Impact of Applicants' Gender and Religion on Principals' Screening Decisions for Assistant Principal Applicants

    Science.gov (United States)

    Bon, Susan C.

    2009-01-01

    In this experimental study, a national random sample of high school principals (stratified by gender) were asked to evaluate hypothetical applicants whose resumes varied by religion (Jewish, Catholic, nondenominational) and gender (male, female) for employment as assistant principals. Results reveal that male principals rate all applicants higher…

  1. Principal Self-Efficacy and Work Engagement: Assessing a Norwegian Principal Self-Efficacy Scale

    Science.gov (United States)

    Federici, Roger A.; Skaalvik, Einar M.

    2011-01-01

    One purpose of the present study was to develop and test the factor structure of a multidimensional and hierarchical Norwegian Principal Self-Efficacy Scale (NPSES). Another purpose of the study was to investigate the relationship between principal self-efficacy and work engagement. Principal self-efficacy was measured by the 22-item NPSES. Work…

  2. Principal Time Management Skills: Explaining Patterns in Principals' Time Use, Job Stress, and Perceived Effectiveness

    Science.gov (United States)

    Grissom, Jason A.; Loeb, Susanna; Mitani, Hajime

    2015-01-01

    Purpose: Time demands faced by school principals make principals' work increasingly difficult. Research outside education suggests that effective time management skills may help principals meet job demands, reduce job stress, and improve their performance. The purpose of this paper is to investigate these hypotheses. Design/methodology/approach:…

  3. Compression etiology in tendinopathy.

    Science.gov (United States)

    Almekinders, Louis C; Weinhold, Paul S; Maffulli, Nicola

    2003-10-01

    Recent studies have emphasized that the etiology of tendinopathy is not as simple as was once thought. The etiology is likely to be multifactorial. Etiologic factors may include some of the traditional factors such as overuse, inflexibility, and equipment problems; however, other factors need to be considered as well, such as age-related tendon degeneration and biomechanical considerations as outlined in this article. More research is needed to determine the significance of stress-shielding and compression in tendinopathy. If they are confirmed to play a role, this finding may significantly alter our approach in both prevention and treatment through exercise therapy. The current biomechanical studies indicate that certain joint positions are more likely to place tensile stress on the area of the tendon commonly affected by tendinopathy. These joint positions seem to be different from the traditional positions for stretching exercises used for prevention and rehabilitation of tendinopathic conditions. Incorporation of different joint positions during stretching exercises may exert more uniform, controlled tensile stress on these affected areas of the tendon and avoid stress-shielding. These exercises may be able to better maintain the mechanical strength of that region of the tendon and thereby avoid injury. Alternatively, they could more uniformly stress a healing area of the tendon in a controlled manner, and thereby stimulate healing once an injury has occurred. Additional work will have to prove whether a change in rehabilitation exercises is more efficacious than current techniques.

  4. Compressible Vortex Ring

    Science.gov (United States)

    Elavarasan, Ramasamy; Arakeri, Jayawant; Krothapalli, Anjaneyulu

    1999-11-01

    The interaction of a high-speed vortex ring with a shock wave is one of the fundamental issues in supersonic jets, as it is a source of sound. The complex flow field induced by the vortex greatly alters the propagation of the shock wave. In order to understand the process, a compressible vortex ring is studied in detail using Particle Image Velocimetry (PIV) and shadowgraphic techniques. The high-speed vortex ring is generated from a shock tube, and the shock wave, which precedes the vortex, is reflected back by a plate and made to interact with the vortex. The shadowgraph images indicate that the reflected shock front is influenced by the non-uniform flow induced by the vortex and is decelerated while passing through the vortex. It appears that after the interaction the shock is "split" into two. The PIV measurements provided a clear picture of the evolution of the vortex at different time intervals. The centerline velocity traces show the maximum velocity to be around 350 m/s. The velocity field, unlike in incompressible rings, contains contributions from both the shock and the vortex ring. The velocity distribution across the vortex core, core diameter, and circulation are also calculated from the PIV data.

  5. Maximization techniques for oilfield development profits

    International Nuclear Information System (INIS)

    Lerche, I.

    1999-01-01

    In 1981 Nind provided a quantitative procedure for estimating the optimum number of development wells to emplace on an oilfield to maximize profit. Nind's treatment assumed that there was a steady selling price, that all wells were placed in production simultaneously, and that each well's production profile was identical and a simple exponential decline with time. This paper lifts these restrictions to allow for price fluctuations, time-varying emplacement of wells, and production rates that are more in line with actual production records than is a simple exponential decline curve. As a consequence, it is possible to design production rate strategies, correlated with price fluctuations, so as to maximize the present-day worth of a field. For price fluctuations that occur on a time-scale rapid compared to inflation rates it is appropriate to have production rates correlate directly with such price fluctuations. The same strategy does not apply for price fluctuations occurring on a time-scale long compared to inflation rates where, for small amplitudes in the price fluctuations, it is best to sell as much product as early as possible to overcome inflation factors, while for large amplitude fluctuations the best strategy is to sell product as early as possible but to do so mainly on price upswings. Examples are provided to show how these generalizations of Nind's (1981) formula change the complexion of oilfield development optimization. (author)
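
    A toy version of the underlying optimization may make the trade-off concrete: more wells recover the reserve sooner (good under discounting) but cost more. All numbers below are invented, and the paper's price-fluctuation generalizations are omitted:

    ```python
    import numpy as np

    def npv(n_wells, reserve=1e7, q0=1e5, price=60.0, well_cost=5e6,
            discount=0.10, years=30):
        """Discounted profit for n identical wells on a shared exponential decline."""
        t = np.arange(years)
        rate = n_wells * q0 * np.exp(-(n_wells * q0 / reserve) * t)  # field rate, bbl/yr
        return price * np.sum(rate / (1.0 + discount) ** t) - n_wells * well_cost

    best = max(range(1, 61), key=npv)
    print(f"optimal well count: {best}, NPV = {npv(best):,.0f}")
    ```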

  6. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  7. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In Medical imaging application such Picture Archiving and Communication System (PACs), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories namely lossy and lossless data compression. Wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDLs) numerical and visualization software is used to perform all of the calculations, to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  8. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  9. Lagrangian investigations of vorticity dynamics in compressible turbulence

    Science.gov (United States)

    Parashar, Nishant; Sinha, Sawan Suman; Danish, Mohammad; Srinivasan, Balaji

    2017-10-01

    In this work, we investigate the influence of compressibility on vorticity-strain rate dynamics. Well-resolved direct numerical simulations of compressible homogeneous isotropic turbulence performed over a cubical domain of 1024³ are employed for this study. To clearly identify the influence of compressibility on the time-dependent dynamics (rather than on the one-time flow field), we employ a well-validated Lagrangian particle tracker. The tracker is used to obtain time correlations between the instantaneous vorticity vector and the strain-rate eigenvector system of an appropriately chosen reference time. In this work, compressibility is parameterized in terms of both global (turbulent Mach number) and local parameters (normalized dilatation-rate and flow field topology). Our investigations reveal that the local dilatation rate significantly influences these statistics. In turn, this observed influence of the dilatation rate is predominantly associated with rotation dominated topologies (unstable-focus-compressing, stable-focus-stretching). We find that an enhanced dilatation rate (in both contracting and expanding fluid elements) significantly enhances the tendency of the vorticity vector to align with the largest eigenvector of the strain-rate. Further, in fluid particles where the vorticity vector is maximally misaligned (perpendicular) at the reference time, vorticity does show a substantial tendency to align with the intermediate eigenvector as well. The authors make an attempt to provide physical explanations of these observations (in terms of moment of inertia and angular momentum) by performing detailed calculations following the tetrad approach of Chertkov et al. ["Lagrangian tetrad dynamics and the phenomenology of turbulence," Phys. Fluids 11(8), 2394-2410 (1999)] and Xu et al. ["The pirouette effect in turbulent flows," Nat. Phys. 7(9), 709-712 (2011)] in a compressible flow field.

  10. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  11. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  12. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet or those wavelets with similar filtering characteristics can produce the highest compression efficiency with the smallest mean-square-error for many image patterns including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet (whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features including the peak signal-to-noise ratio, the contrast and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can find the compression outcomes and feature preservation characteristics as a function of wavelets. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  13. Quality and loudness judgments for music subjected to compression limiting.

    Science.gov (United States)

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2012-08-01

    Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial recordings has increased over time due to a motivating perspective that louder music is always preferred. In contrast to this viewpoint, artists and consumers have argued that using large amounts of DRC negatively affects the quality of music. However, little research evidence has supported the claims of either position. The present study investigated how DRC affects the perceived loudness and sound quality of recorded music. Rock and classical music samples were peak-normalized and then processed using different amounts of DRC. Normal-hearing listeners rated the processed and unprocessed samples on overall loudness, dynamic range, pleasantness, and preference, using a scaled paired-comparison procedure in two conditions: un-equalized, in which the loudness of the music samples varied, and loudness-equalized, in which loudness differences were minimized. Results indicated that a small amount of compression was preferred in the un-equalized condition, but the highest levels of compression were generally detrimental to quality, whether loudness was equalized or varied. These findings are contrary to the "louder is better" mentality in the music industry and suggest that more conservative use of DRC may be preferred for commercial music.
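
    The mechanism at issue can be written down in a few lines. Below is a generic static compressor curve (threshold and ratio invented, not any commercial limiter's settings): levels above the threshold are scaled down, after which the track can be re-normalized to the same peak, raising average loudness:

    ```python
    import numpy as np

    def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
        """Static DRC curve: input level (dBFS) -> output level (dBFS)."""
        over = np.maximum(level_db - threshold_db, 0.0)   # amount above threshold
        return level_db - over * (1.0 - 1.0 / ratio)      # slope 1/ratio above threshold

    levels = np.array([-40.0, -20.0, -10.0, 0.0])
    print(compress_db(levels))   # [-40.  -20.  -17.5 -15. ]: peaks pulled toward the threshold
    ```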

  14. Comparative Analysis of Principals' Management Strategies in ...

    African Journals Online (AJOL)

    It was recommended among others that principals of secondary schools should adopt all the management strategies in this study as this will improve school administration and consequently students' academic performance. Keywords: Management Strategies; Secondary Schools; Administrative Effectiveness ...

  15. Spatial control of groundwater contamination, using principal ...

    Indian Academy of Sciences (India)

    probe into the spatial controlling processes of groundwater contamination, using principal component analysis (PCA). ... topography, soil type, depth of water levels, and water usage. Thus, the ... of effective sites for infiltration of recharge water.

  16. The Relationship between Principals' Managerial Approaches and ...

    African Journals Online (AJOL)

    Nekky Umera

    ... teacher and parental input while it was negatively correlated with the level of .... principal's attitude, gender qualifications, and leadership experience (Green, 1999 ...

  17. First-Year Principal Encounters Homophobia

    Science.gov (United States)

    Retelle, Ellen

    2011-01-01

    A 1st-year principal encounters homonegativity and an ethical dilemma when she attempts to terminate a teacher because of the teacher's inadequate and ineffective teaching. The teacher responds by threatening to "out" Ms. L. to the parents.

  18. Integrating Data Transformation in Principal Components Analysis

    KAUST Repository

    Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua

    2015-01-01

    Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior

  19. Spatial control of groundwater contamination, using principal

    Indian Academy of Sciences (India)

    Spatial control of groundwater contamination, using principal component analysis ... anthropogenic (agricultural activities and domestic wastewaters), and marine ... The PC scores reflect the change of groundwater quality of geogenic origin ...

  20. Principal Hawaiian Islands Geoid Heights (GEOID96)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This 2' geoid height grid for the Principal Hawaiian Islands is distributed as a GEOID96 model. The computation used 61,000 terrestrial and marine gravity data held...

  1. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

    Full Text Available Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages’ complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, a so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high speed face recognition systems, that is, speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.

  2. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    Science.gov (United States)

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis or PCA has been traditionally used as one of the feature extraction techniques in face recognition systems yielding high accuracy when requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation resulting in the reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to utilize the benefits of parallelization of matrix computation during feature extraction and classification stages including parallel preprocessing, and their combinations, a so-called Parallel Expectation-Maximization PCA architecture. Compared to a traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high speed face recognition systems, that is, speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
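
    The EM-for-PCA core that both records above build on can be sketched compactly (a Roweis-style serial version; the papers' contribution is parallelizing these matrix products, which this sketch does not attempt):

    ```python
    import numpy as np

    def em_pca(X, k, n_iter=100, seed=0):
        """X: (n_samples, n_features). Returns an orthonormal (n_features, k) basis."""
        X = X - X.mean(axis=0)
        W = np.random.default_rng(seed).standard_normal((X.shape[1], k))
        for _ in range(n_iter):
            Z = X @ W @ np.linalg.inv(W.T @ W)    # E-step: latent coordinates
            W = X.T @ Z @ np.linalg.inv(Z.T @ Z)  # M-step: re-estimate the basis
        Q, _ = np.linalg.qr(W)                    # orthonormalize the learned span
        return Q

    basis = em_pca(np.random.default_rng(1).standard_normal((500, 50)), k=5)
    print(basis.shape)   # (50, 5) -- no covariance matrix or eigendecomposition formed
    ```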

  3. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining a low computational complexity.
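
    A bare-bones simulation of that three-part pipeline (random measurements, universal uniform quantization, sparse reconstruction) is sketched below; orthogonal matching pursuit stands in for the paper's actual decoder, and all sizes are arbitrary:

    ```python
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    n, m, k = 256, 96, 8                              # signal length, measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    phi = rng.standard_normal((m, n)) / np.sqrt(m)    # acquisition matrix
    y = phi @ x                                       # simultaneous sensing + compression

    step = 0.05
    y_q = step * np.round(y / step)                   # universal quantizer: no image prior

    x_hat = orthogonal_mp(phi, y_q, n_nonzero_coefs=k)
    snr = 10 * np.log10(np.sum(x**2) / np.sum((x - x_hat)**2))
    print(f"reconstruction SNR: {snr:.1f} dB")
    ```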

  4. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71.
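
    Assuming the truncated passage refers to the maximal translatable patterns (MTPs) computed by SIA-family algorithms, the core step is a simple grouping of note pairs by their difference vector:

    ```python
    from collections import defaultdict
    from itertools import combinations

    def maximal_translatable_patterns(points):
        """points: set of (onset, pitch) pairs; returns {vector: MTP for that vector}."""
        mtps = defaultdict(set)
        for p, q in combinations(sorted(points), 2):
            vector = (q[0] - p[0], q[1] - p[1])
            mtps[vector].add(p)          # p can be translated by `vector` within the set
        return dict(mtps)

    notes = {(0, 60), (1, 62), (2, 64), (4, 60), (5, 62), (6, 64)}  # motif + repeat 4 beats later
    print(maximal_translatable_patterns(notes)[(4, 0)])  # {(0, 60), (1, 62), (2, 64)}
    ```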

  5. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating... including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits... not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization instead in practical applications becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined...

  6. Femoral Neck Strain during Maximal Contraction of Isolated Hip-Spanning Muscle Groups

    Directory of Open Access Journals (Sweden)

    Saulo Martelli

    2017-01-01

    Full Text Available The aim of the study was to investigate femoral neck strain during maximal isometric contraction of the hip-spanning muscles. The musculoskeletal and the femur finite-element models from an elderly white woman were taken from earlier studies. The hip-spanning muscles were grouped by function in six hip-spanning muscle groups. The peak hip and knee moments in the model were matched to corresponding published measurements of the hip and knee moments during maximal isometric exercises about the hip and the knee in elderly participants. The femoral neck strain was calculated using full activation of the agonist muscles at fourteen physiological joint angles. On average, 5% ± 0.8% of the femoral neck volume exceeded the 90th percentile of the strain distribution across the 84 studied scenarios. Hip extensors, flexors, and abductors generated the highest tension in the proximal neck (2727 με), tension (986 με) and compression (−2818 με) in the anterior and posterior neck, and compression (−2069 με) in the distal neck, respectively. Hip extensors and flexors generated the highest neck strain per unit of joint moment (63–67 με·m·N⁻¹) at extreme hip angles. Therefore, femoral neck strain is heterogeneous and dependent on muscle contraction and posture.

  7. Maximizing Lumen Gain With Directional Atherectomy.

    Science.gov (United States)

    Stanley, Gregory A; Winscott, John G

    2016-08-01

    To describe the use of a low-pressure balloon inflation (LPBI) technique to delineate intraluminal plaque and guide directional atherectomy in order to maximize lumen gain and achieve procedure success. The technique is illustrated in a 77-year-old man with claudication who underwent superficial femoral artery revascularization using a HawkOne directional atherectomy catheter. A standard angioplasty balloon was inflated to 1 to 2 atm during live fluoroscopy to create a 3-dimensional "lumenogram" of the target lesion. Directional atherectomy was performed only where plaque impinged on the balloon at a specific fluoroscopic orientation. The results of the LPBI technique were corroborated with multimodality diagnostic imaging, including digital subtraction angiography, intravascular ultrasound, and intra-arterial pressure measurements. With the LPBI technique, directional atherectomy can routinely achieve <10% residual stenosis, as illustrated in this case, thereby broadly supporting a no-stent approach to lower extremity endovascular revascularization. © The Author(s) 2016.

  8. Primordial two-component maximally symmetric inflation

    Science.gov (United States)

    Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.

    1985-12-01

    We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ≅φ_c. Inflation ends at the global minimum and leads to a reheating temperature T_R≅(10^15-10^16) GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on proton lifetime. The mass of the gravitinos is m_3/2≅10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φ_c...

  9. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
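    For orientation, a minimal single-process sketch of the per-round logic of Luby's randomized algorithm (the "Luby(A)" variant named above) is given below; it is not the authors' distributed-memory implementation, and the graph and vertex names are illustrative.

```python
# Luby(A), single-process sketch: each round, every surviving vertex draws
# a random priority, and vertices that beat all surviving neighbours join
# the MIS; winners and their neighbourhoods are then removed.
import random

def luby_mis(adj):
    """adj: dict mapping vertex -> set of neighbour vertices."""
    alive = set(adj)
    mis = set()
    while alive:
        # Independent random priority per surviving vertex.
        prio = {v: random.random() for v in alive}
        # A vertex enters the MIS if it is a strict local minimum.
        winners = {v for v in alive
                   if all(prio[v] < prio[u] for u in adj[v] if u in alive)}
        mis |= winners
        # Remove winners and their neighbourhoods from the residual graph.
        alive -= winners | {u for v in winners for u in adj[v]}
    return mis

# Toy usage: every maximal independent set of a 5-cycle has size 2.
cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(luby_mis(cycle))
```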

  10. Quench dynamics of topological maximally entangled states.

    Science.gov (United States)

    Chung, Ming-Chiang; Jhu, Yi-Hao; Chen, Pochung; Mou, Chung-Yu

    2013-07-17

    We investigate the quench dynamics of the one-particle entanglement spectra (OPES) for systems with topologically nontrivial phases. By using dimerized chains as an example, it is demonstrated that the evolution of OPES for the quenched bipartite systems is governed by an effective Hamiltonian which is characterized by a pseudospin in a time-dependent pseudomagnetic field S(k,t). The existence and evolution of the topological maximally entangled states (tMESs) are determined by the winding number of S(k,t) in the k-space. In particular, the tMESs survive only if nontrivial Berry phases are induced by the winding of S(k,t). In the infinite-time limit the equilibrium OPES can be determined by an effective time-independent pseudomagnetic field Seff(k). Furthermore, when tMESs are unstable, they are destroyed by quasiparticles within a characteristic timescale in proportion to the system size.
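    As a small illustration of the winding-number diagnostic, the sketch below computes the winding of a static SSH-type pseudospin field h(k) = v + w·e^{ik}; the hopping parameters v and w are illustrative stand-ins, and the paper's time-dependent field S(k,t) is not modeled here.

```python
# Winding number of a dimerized-chain pseudospin field around the origin:
# nonzero winding signals the topologically nontrivial phase (w > v).
import numpy as np

def winding_number(v, w, nk=4096):
    k = np.linspace(-np.pi, np.pi, nk)
    h = v + w * np.exp(1j * k)        # pseudospin field in the xy-plane
    phase = np.unwrap(np.angle(h))    # continuous branch of arg h(k)
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

print(winding_number(1.0, 2.0), winding_number(2.0, 1.0))  # 1 (topological), 0 (trivial)
```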

  11. Maximizing policy learning in international committees

    DEFF Research Database (Denmark)

    Nedergaard, Peter

    2007-01-01

    In the voluminous literature on the European Union's open method of coordination (OMC), no one has hitherto analysed on the basis of scholarly examination the question of what contributes to the learning processes in the OMC committees. On the basis of a questionnaire sent to all participants......, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things......, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...

  12. Lovelock black holes with maximally symmetric horizons

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Hideki; Willison, Steven; Ray, Sourya, E-mail: hideki@cecs.cl, E-mail: willison@cecs.cl, E-mail: ray@cecs.cl [Centro de Estudios CientIficos (CECs), Casilla 1469, Valdivia (Chile)

    2011-08-21

    We investigate some properties of n (≥ 4)-dimensional spacetimes having symmetries corresponding to the isometries of an (n - 2)-dimensional maximally symmetric space in Lovelock gravity under the null or dominant energy condition. The well-posedness of the generalized Misner-Sharp quasi-local mass proposed in the past study is shown. Using this quasi-local mass, we clarify the basic properties of the dynamical black holes defined by a future outer trapping horizon under certain assumptions on the Lovelock coupling constants. The C^2 vacuum solutions are classified into four types: (i) Schwarzschild-Tangherlini-type solution; (ii) Nariai-type solution; (iii) special degenerate vacuum solution; and (iv) exceptional vacuum solution. The conditions for the realization of the last two solutions are clarified. The Schwarzschild-Tangherlini-type solution is studied in detail. We prove the first law of black-hole thermodynamics and present the expressions for the heat capacity and the free energy.

  13. MAXIMIZING THE BENEFITS OF ERP SYSTEMS

    Directory of Open Access Journals (Sweden)

    Paulo André da Conceição Menezes

    2010-04-01

    Full Text Available The ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied across different phases: the strategic priorities and strategic planning defined as ERP Strategy; business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; and ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to the development and testing of a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives that optimize their performance.

  14. Maximal energy extraction under discrete diffusive exchange

    Energy Technology Data Exchange (ETDEWEB)

    Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2015-10-15

    Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
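    To make the linear-programming step concrete, the sketch below minimizes the (linear) plasma energy over rearrangements x = Dp with D doubly stochastic, a standard relaxation that contains the diffusively reachable set; the energies and densities are made-up toy values, so this illustrates the LP structure rather than the paper's exact reachable set.

```python
# Minimise E.(D p) over doubly stochastic D: an LP in the entries of D.
import numpy as np
from scipy.optimize import linprog

E = np.array([0.0, 1.0, 2.0, 3.0])   # state energies (illustrative)
p = np.array([0.1, 0.4, 0.2, 0.3])   # initial densities (illustrative)
n = len(E)

# Objective: sum_ij E_i * D_ij * p_j, with D flattened row-major.
c = np.outer(E, p).ravel()

# Equality constraints: every row and every column of D sums to one.
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i*n:(i+1)*n] = 1.0    # row i of D
    A_eq[n + i, i::n] = 1.0       # column i of D
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
x = res.x.reshape(n, n) @ p
print("final densities:", x, " energy extracted:", E @ (p - x))
```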

  15. Maximizing profitability in a hospital outpatient pharmacy.

    Science.gov (United States)

    Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A

    1989-07-01

    This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales are described. Programs to maximize existing revenue sources such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic and increasing HMO prescription volumes are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.

  16. Maximizing the benefits of a dewatering system

    International Nuclear Information System (INIS)

    Matthews, P.; Iverson, T.S.

    1999-01-01

    The use of dewatering systems in the mining, industrial sludge and sewage waste treatment industries is discussed, also describing some of the problems that have been encountered while using drilling fluid dewatering technology. The technology is an acceptable drilling waste handling alternative but it has had problems associated with recycled fluid incompatibility, high chemical costs and system inefficiencies. This paper discusses the following five action areas that can maximize the benefits and help reduce costs of a dewatering project: (1) co-ordinate all services, (2) choose equipment that fits the drilling program, (3) match the chemical treatment with the drilling fluid types, (4) determine recycled fluid compatibility requirements, and (5) determine the disposal requirements before project start-up. 2 refs., 5 figs

  17. Mixtures of maximally entangled pure states

    Energy Technology Data Exchange (ETDEWEB)

    Flores, M.M., E-mail: mflores@nip.up.edu.ph; Galapon, E.A., E-mail: eric.galapon@gmail.com

    2016-09-15

    We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
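    The two-qubit case is easy to check numerically with the Peres-Horodecki (partial transpose) criterion, which is exact for 2×2 systems; the sketch below mixes two Bell states and confirms the "not equally weighted" condition stated above.

```python
# Mix |Phi+> and |Phi->: entangled for unequal weights, separable at w=1/2.
import numpy as np

phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)   # |Phi+>
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)  # |Phi->

def partial_transpose(rho):
    # Transpose the second qubit: indices (i,j,k,l) -> (i,l,k,j).
    return rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

def is_entangled(rho):
    # A negative partial-transpose eigenvalue <=> entangled (2x2 case).
    return np.linalg.eigvalsh(partial_transpose(rho)).min() < -1e-12

for w in (0.5, 0.6, 0.9):
    rho = w * np.outer(phi_p, phi_p) + (1 - w) * np.outer(phi_m, phi_m)
    print(w, is_entangled(rho))   # 0.5 -> False, otherwise True
```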

  18. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  19. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  20. Nonlinear compression of optical solitons

    Indian Academy of Sciences (India)

    The basic equation governing nonlinear pulse propagation is the nonlinear Schrödinger (NLS) equation [1]. There are ... Optical pulse compression finds important applications in optical fibres. The pulse com...
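    For reference, one standard fibre-optics form of the NLS equation and its fundamental soliton (a common textbook convention, which may differ from the article's normalization):

```latex
% Standard fibre-optics form of the NLS equation. A is the slowly varying
% envelope, beta_2 the group-velocity dispersion, gamma the Kerr coefficient.
\[
  i\,\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2}
  + \gamma\,|A|^2 A = 0 .
\]
% In the anomalous-dispersion regime (beta_2 < 0) it supports the
% fundamental soliton, the starting point for soliton-effect compression:
\[
  A(z,T) = \sqrt{P_0}\,\operatorname{sech}\!\left(\frac{T}{T_0}\right)
           e^{\,i \gamma P_0 z / 2},
  \qquad T_0^2 = \frac{|\beta_2|}{\gamma P_0}.
\]
```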

  1. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
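    The headline result is easy to verify numerically: for an irreversible linear chain whose n steps each take an exponential time with equal mean (total mean fixed), the coefficient of variation of the completion time falls as 1/√n. A small Monte Carlo check, with arbitrary parameters:

```python
# Completion time of an n-step irreversible chain = sum of n i.i.d.
# exponentials, so its coefficient of variation is 1/sqrt(n):
# longer chains generate more reliable (less variable) signals.
import numpy as np

rng = np.random.default_rng(0)
total_mean = 1.0                  # fixed mean signal-generation time
for n in (1, 4, 16, 64):
    # Each of the n irreversible steps has mean total_mean / n.
    t = rng.exponential(total_mean / n, size=(100_000, n)).sum(axis=1)
    cv = t.std() / t.mean()
    print(f"n={n:3d}  CV={cv:.3f}  1/sqrt(n)={n**-0.5:.3f}")
```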

  2. Mechanics of the Compression Wood Response: II. On the Location, Action, and Distribution of Compression Wood Formation.

    Science.gov (United States)

    Archer, R R; Wilson, B F

    1973-04-01

    A new method for simulation of cross-sectional growth provided detailed information on the location of normal wood and compression wood increments in two tilted white pine (Pinus strobus L.) leaders. These data were combined with data on stiffness, slope, and curvature changes over a 16-week period to make the mechanical analysis. The location of compression wood changed from the under side to a flank side and then to the upper side of the leader as the geotropic stimulus decreased, owing to compression wood action. Its location shifted back to a flank side when the direction of movement of the leader reversed. A model for this action, based on elongation strains, was developed and predicted the observed curvature changes with elongation strains of 0.3 to 0.5%, or a maximal compressive stress of 60 to 300 kilograms per square centimeter. After tilting, new wood formation was distributed so as to maintain consistent strain levels along the leaders in bending under gravitational loads. The computed effective elastic moduli were about the same for the two leaders throughout the season.

  3. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)

  4. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m...... The method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....

  5. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

    Compression of waveform data is significant in many engineering and research areas since it can be used to alleviate data storage and transmission bandwidth. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection and discrimination of underground nuclear explosions, etc. This paper describes a technique for lossless wave form data compression. The technique consists of two stages. The first stage is a modified form of linear prediction with discrete coefficients and the second stage is bi-level sequence coding. The linear predictor generates an error or residue sequence in a way such that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian with seismic or other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless, that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Applications of the two-stage technique to typical seismic data indicates that an average number of compressed bits per sample close to the lower bound is achievable in practical situations
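    A toy version of the first stage may help: an integer-coefficient linear predictor produces a residue sequence from which the original samples are recovered bit-for-bit; the whitened residues would then go to an entropy coder such as the bi-level sequence coder described above. The second-order predictor below is illustrative, not the paper's coefficients.

```python
# Stage 1 of a lossless scheme: integer linear prediction with exact inversion.
import numpy as np

def residues(x):
    # Second-order integer predictor: xhat[n] = 2*x[n-1] - x[n-2].
    x = np.asarray(x, dtype=np.int64)
    r = x.copy()                    # first two samples stored verbatim
    r[2:] = x[2:] - (2 * x[1:-1] - x[:-2])
    return r

def reconstruct(r):
    x = r.copy()
    for n in range(2, len(x)):      # invert the prediction exactly
        x[n] = r[n] + 2 * x[n - 1] - x[n - 2]
    return x

x = np.round(1000 * np.sin(np.linspace(0, 20, 500))).astype(np.int64)
r = residues(x)
assert np.array_equal(reconstruct(x := reconstruct(r)) is None or x, x)  # placeholder
```

    Correction to the last line of the sketch: the check is simply `assert np.array_equal(reconstruct(r), x)`, demonstrating bit-for-bit recovery, and `r.std()` comes out far smaller than `x.std()` for smooth waveforms, which is what makes the residues cheap to encode.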

  6. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

    Full Text Available The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.
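    The incoherence argument can be illustrated numerically. The sketch below computes the mutual coherence μ(Φ,Ψ) = √n·max|⟨φi,ψj⟩| of the DFT against spikes (maximally incoherent, μ = 1) and against a Haar wavelet basis (μ = √n, because the DC Fourier row coincides with the coarsest Haar function), the weakness that noiselets are designed to avoid. This is a generic demonstration, not the paper's MRI pipeline.

```python
# Mutual coherence between orthonormal bases: lower is better for CS.
import numpy as np

def haar_matrix(n):
    # Orthonormal Haar wavelet basis; n must be a power of two.
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.vstack([np.kron(h, [1, 1]),
                       np.kron(np.eye(h.shape[0]), [1, -1])])
    return h / np.linalg.norm(h, axis=1, keepdims=True)

def coherence(phi, psi):
    n = phi.shape[0]
    return np.sqrt(n) * np.abs(phi.conj() @ psi.T).max()

n = 64
dft = np.fft.fft(np.eye(n)) / np.sqrt(n)
print("DFT vs spikes:", coherence(dft, np.eye(n)))     # 1.0, maximally incoherent
print("DFT vs Haar  :", coherence(dft, haar_matrix(n)))  # sqrt(n) = 8.0
```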

  7. Thermal reservoir sizing for adiabatic compressed air energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Kere, Amelie; Goetz, Vincent; Py, Xavier; Olives, Regis; Sadiki, Najim [Perpignan Univ. (France). PROMES CNRS UPR 8521; Mercier-Allart, Eric [EDF R et D, Chatou (France)

    2012-07-01

    Despite the operation of the two existing industrial facilities at McIntosh (Alabama) and, for more than thirty years, Huntorf (Germany), electricity storage in the form of compressed air in underground caverns (CAES) has not seen the development that was expected in the 1980s. The efficiency of this form of storage was, with the first-generation CAES, less than 50%. The evolving technical context can significantly alter this situation. The new generation, so-called Adiabatic CAES (A-CAES), retrieves the heat produced by the compression via thermal storage, thus eliminating the necessity of burning gas and allowing consideration of an overall energy efficiency of the order of 70%. To date, there is no existing installation of A-CAES. Many studies describe the principle and the general working mode of storage systems based on adiabatic compression of air, and the efficiencies of different configurations of the adiabatic compression process have been analyzed. The aim of this paper is to simulate and analyze the performance of a thermal storage reservoir integrated in the system and adapted to the working conditions of a CAES.

  8. Pressurizer safety valve serviceability enhancement by spring compression stability

    Energy Technology Data Exchange (ETDEWEB)

    Ratiu, M.D.; Moisidis, N.T. [California Consulting Engineering and Technology (CALCET), San Leandro, California (United States)

    2007-07-01

    The proactive maintenance of the spring-loaded self-actuated Pressurizer Safety Valve (PSV) has raised frequent concerns pertaining to the reliability of the spring self-actuation due to set-point drift, spurious openings, and seat leakage. The exhaustive testing performed on a Crosby PSV model 6M6 has revealed that the principal cause of these malfunctions is elastic instability of the spring compression during service. The spring lateral deformation measurements validated the analytical shapes for spring compression: symmetrical bending for coaxial supported ends, restraining any support displacement, and asymmetrical bending induced by potential misalignment of the supported top end. On the tested Crosby PSV, the source of the spring compression instability appears to be induced by lateral displacement of the top end during long-term operation. Testing with restrained displacement at the spring top has shown consistent set-point reproducibility, within +/- 1 per cent. To eliminate the asymmetrical spring buckling, a design review of the PSV is proposed, including a guided fixture at the top and a decrease of the spring coil slenderness ratio H/D, corresponding to the general analytical elastic stability criterion for asymmetrical compression. (authors)

  9. Pressurizer safety valve serviceability enhancement by spring compression stability

    International Nuclear Information System (INIS)

    Ratiu, M.D.; Moisidis, N.T.

    2007-01-01

    The proactive maintenance of the spring-loaded self-actuated Pressurizer Safety Valve (PSV) has raised frequent concerns pertaining to the reliability of the spring self-actuation due to set-point drift, spurious openings, and seat leakage. The exhaustive testing performed on a Crosby PSV model 6M6 has revealed that the principal cause of these malfunctions is elastic instability of the spring compression during service. The spring lateral deformation measurements validated the analytical shapes for spring compression: symmetrical bending for coaxial supported ends, restraining any support displacement, and asymmetrical bending induced by potential misalignment of the supported top end. On the tested Crosby PSV, the source of the spring compression instability appears to be induced by lateral displacement of the top end during long-term operation. Testing with restrained displacement at the spring top has shown consistent set-point reproducibility, within +/- 1 per cent. To eliminate the asymmetrical spring buckling, a design review of the PSV is proposed, including a guided fixture at the top and a decrease of the spring coil slenderness ratio H/D, corresponding to the general analytical elastic stability criterion for asymmetrical compression. (authors)

  10. Thermophysical properties of multi-shock compressed dense argon.

    Science.gov (United States)

    Chen, Q F; Zheng, J; Gu, Y J; Chen, Y L; Cai, L C; Shen, Z J

    2014-02-21

    In contrast to the single-shock compression state, which can be obtained directly via experimental measurements, multi-shock compression states have to be calculated with the aid of theoretical models. In order to determine the multiple shock states experimentally, a diagnostic approach combining a Doppler pins system (DPS) and a pyrometer was used to probe multiple shocks in dense argon plasmas. Plasma was generated by a shock reverberation technique. The shock was produced using a flyer plate accelerated up to ∼6.1 km/s by a two-stage light gas gun and introduced into the plenum argon gas sample, which was pre-compressed from the environmental pressure to about 20 MPa. The time-resolved optical radiation histories were determined using a multi-wavelength-channel optical transient radiance pyrometer. Simultaneously, the particle velocity profiles at the LiF window were measured with multiple DPS channels. The states of the multi-shock compressed argon plasma were determined from the measured shock velocities combined with the particle velocity profiles. We performed experiments on dense argon plasmas to determine the principal Hugoniot up to 21 GPa, the re-shock pressure up to 73 GPa, and the maximum measured pressure of the fourth shock up to 158 GPa. The results are used to validate the existing self-consistent variational theory model in the partial ionization region and to create new theoretical models.

  11. Faster tissue interface analysis from Raman microscopy images using compressed factorisation

    Science.gov (United States)

    Palmer, Andrew D.; Bannerman, Alistair; Grover, Liam; Styles, Iain B.

    2013-06-01

    The structure of an artificial ligament was examined using Raman microscopy in combination with novel data analysis. Basis approximation and compressed principal component analysis are shown to provide efficient compression of confocal Raman microscopy images, alongside powerful methods for unsupervised analysis. This scheme allows the acceleration of data mining, such as principal component analysis, as they can be performed on the compressed data representation, providing a decrease in the factorisation time of a single image from five minutes to under a second. Using this workflow the interface region between a chemically engineered ligament construct and a bone-mimic anchor was examined. Natural ligament contains a striated interface between the bone and tissue that provides improved mechanical load tolerance, a similar interface was found in the ligament construct.
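    The compress-then-factorise idea can be sketched generically (hedged: the paper uses a basis approximation tailored to the spectra, whereas the stand-in below uses a Gaussian random projection, and the data are synthetic):

```python
# Fit PCA on a compressed representation instead of the raw spectra:
# much faster when the spectral dimension is large, while the scores can
# still drive unsupervised analysis such as interface mapping.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 1024))   # pixels x wavenumbers (synthetic)

t0 = time.perf_counter()
PCA(n_components=5).fit(X)
t_full = time.perf_counter() - t0

t0 = time.perf_counter()
Xc = GaussianRandomProjection(n_components=128, random_state=0).fit_transform(X)
PCA(n_components=5).fit(Xc)
t_comp = time.perf_counter() - t0

print(f"full PCA: {t_full:.2f}s   compressed PCA: {t_comp:.2f}s")
```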

  12. Use of compression garments by women with lymphoedema secondary to breast cancer treatment.

    Science.gov (United States)

    Longhurst, E; Dylke, E S; Kilbreath, S L

    2018-02-19

    The aim of this study was to determine the use of compression garments by women with lymphoedema secondary to breast cancer treatment and the factors which underpin their use. An online survey was distributed to the Survey and Review group of the Breast Cancer Network Australia. The survey included questions related to the participants' demographics, breast cancer and lymphoedema medical history, prescription and use of compression garments, and their beliefs about compression and lymphoedema. Data were analysed using principal component analysis and multivariable logistic regression. Compression garments had been prescribed to 83% of 201 women with lymphoedema within the last 5 years, although 37 women had discontinued their use. Even when accounting for severity of swelling, the type of garment(s) and the advice given for use varied across participants. Use of compression garments was driven by women's beliefs that they were vulnerable to progression of their disease and that compression would prevent its worsening. Common reasons given for discontinuing use included discomfort and the perception that their lymphoedema was stable. Participant characteristics associated with discontinuance of compression garments included the belief that the garments were not effective in managing their condition, experiencing mild-moderate swelling, and/or having experienced swelling for greater than 5 years. The prescription of compression garments for lymphoedema is highly varied, which may be due to a lack of underpinning evidence to inform treatment.

  13. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    Science.gov (United States)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilizes Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of A-GCAS during flight as well as maximizing its contribution to fighter safety.
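    The kind of error analysis described can be sketched as follows; since the binary-tree tip-tilt codec is not reproduced here, simple decimation with spline reconstruction stands in for the lossy step, and the terrain tile is synthetic.

```python
# Compare a lossily compressed terrain grid against the original:
# report the magnitude and distribution of the elevation errors.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
# Synthetic 256x256 "terrain": smooth hills plus high-frequency relief.
x, y = np.meshgrid(np.linspace(0, 6, 256), np.linspace(0, 6, 256))
terrain = 300 * np.sin(x) * np.cos(y) + 20 * rng.standard_normal((256, 256))

lossy = zoom(zoom(terrain, 0.25), 4.0)   # 16:1 decimation, then upsample
err = lossy - terrain

print("max |error| (m):", np.abs(err).max())
print("RMS error   (m):", np.sqrt((err ** 2).mean()))
print("95th pct |err|.:", np.percentile(np.abs(err), 95))
```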

  14. Shock compression experiments on Lithium Deuteride single crystals.

    Energy Technology Data Exchange (ETDEWEB)

    Knudson, Marcus D.; Desjarlais, Michael Paul; Lemke, Raymond W.

    2014-10-01

    Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride (LiD) single crystals. This study utilized the high-velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ~200-600 GPa along the Principal Hugoniot - the locus of end states achievable through compression by large amplitude shock waves - as well as pressure and density of re-shock states up to ~900 GPa. The experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Labs.

  15. Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels

    KAUST Repository

    Shaqfeh, Mohammad

    2016-05-25

    In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration.
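    The per-block selection logic can be sketched with the textbook rate expressions for a static full-duplex Gaussian relay channel (an assumption: the paper's buffer-aided long-term optimization is more elaborate, and the compress-and-forward quantization-noise formula below is one common form, not necessarily the one used by the authors):

```python
# Pick the best of decode-forward, compress-forward, direct transmission
# for each block-fading channel state (SNR triple source-dest, source-relay,
# relay-dest). All rate formulas are textbook full-duplex expressions.
import numpy as np

def C(snr):                       # Gaussian capacity, bits per channel use
    return np.log2(1.0 + snr)

def best_strategy(s_sd, s_sr, s_rd):
    r_dt = C(s_sd)                              # direct transmission
    r_df = min(C(s_sr), C(s_sd + s_rd))         # decode-and-forward bound
    q = (1.0 + s_sd + s_sr) / s_rd              # Wyner-Ziv quantization noise
    r_cf = C(s_sd + s_sr / (1.0 + q))           # compress-and-forward
    rates = {"DT": r_dt, "DF": r_df, "CF": r_cf}
    return max(rates, key=rates.get), rates

rng = np.random.default_rng(3)
for _ in range(4):
    snrs = rng.exponential(5.0, size=3)         # block-fading SNR draws
    print(best_strategy(*snrs))
```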

  16. Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels

    KAUST Repository

    Shaqfeh, Mohammad; Zafar, Ammar; Alnuweiri, Hussein; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration.

  17. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices maximal outside a spatially compact set and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincaré group. (orig.)

  18. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    29 CFR § 1917.154 (2010), Labor — MARINE TERMINALS, Related Terminal Operations and Equipment: Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  19. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of the image compression factors of JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random noise image is presented.

  20. Promoting principals' managerial involvement in instructional improvement.

    Science.gov (United States)

    Gillat, A

    1994-01-01

    Studies of school leadership suggest that visiting classrooms, emphasizing achievement and training, and supporting teachers are important indicators of the effectiveness of school principals. The utility of a behavior-analytic program to support the enhancement of these behaviors in 2 school principals and the impact of their involvement upon teachers' and students' performances in three classes were examined in two experiments, one at an elementary school and another at a secondary school. Treatment conditions consisted of helping the principal or teacher to schedule his or her time and to use goal setting, feedback, and praise. A withdrawal design (Experiment 1) and a multiple baseline across classrooms (Experiment 2) showed that the principal's and teacher's rates of praise, feedback, and goal setting increased during the intervention, and were associated with improvements in the academic performance of the students. In the future, school psychologists might analyze the impact of involving themselves in supporting the principal's involvement in improving students' and teachers' performances or in playing a similar leadership role themselves.

  1. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
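    Watson's viewing-conditions formula is not reproduced here; as a simple stand-in, the sketch below shows the widely used IJG recipe by which a single quality knob rescales the baseline JPEG luminance quantization matrix, the object that the perceptual method computes from viewing distance, resolution, and brightness instead.

```python
# IJG quality scaling of the baseline JPEG luminance quantization table.
import numpy as np

# Baseline luminance quantization table (Annex K of the JPEG standard).
Q50 = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def quant_table(quality):
    # IJG convention: quality in (0, 100]; 50 reproduces the baseline table.
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    return np.floor((Q50 * scale + 50) / 100).clip(1, 255).astype(int)

print(quant_table(75))   # finer quantization -> higher fidelity, larger files
```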

  2. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  3. Design and manufacturing rules for maximizing the performance of polycrystalline piezoelectric bending actuators

    International Nuclear Information System (INIS)

    Jafferis, Noah T; Smith, Michael J; Wood, Robert J

    2015-01-01

    Increasing the energy and power density of piezoelectric actuators is very important for any weight-sensitive application, and is especially crucial for enabling autonomy in micro/milli-scale robots and devices utilizing this technology. This is achieved by maximizing the mechanical flexural strength and electrical dielectric strength through the use of laser-induced melting or polishing, insulating edge coating, and crack-arresting features, combined with features for rigid ground attachments to maximize force output. Manufacturing techniques have also been developed to enable mass customization, in which sheets of material are pre-stacked to form a laminate from which nearly arbitrary planar actuator designs can be fabricated using only laser cutting. These techniques have led to a 70% increase in energy density and an increase in mean lifetime of at least 15× compared to prior manufacturing methods. In addition, measurements have revealed a doubling of the piezoelectric coefficient when operating at the high fields necessary to achieve maximal energy densities, along with an increase in the Young’s modulus at the high compressive strains encountered—these two effects help to explain the higher performance of our actuators as compared to that predicted by linear models. (paper)

  4. Video on the Internet: An introduction to the digital encoding, compression, and transmission of moving image data.

    Science.gov (United States)

    Boudier, T; Shotton, D M

    1999-01-01

    In this paper, we seek to provide an introduction to the fast-moving field of digital video on the Internet, from the viewpoint of the biological microscopist who might wish to store or access videos, for instance in image databases such as the BioImage Database (http://www.bioimage.org). We describe and evaluate the principal methods used for encoding and compressing moving image data for digital storage and transmission over the Internet, which involve compromises between compression efficiency and retention of image fidelity, and describe the existing alternate software technologies for downloading or streaming compressed digitized videos using a Web browser. We report the results of experiments on video microscopy recordings and three-dimensional confocal animations of biological specimens to evaluate the compression efficiencies of the principal video compression-decompression algorithms (codecs) and to document the artefacts associated with each of them. Because MPEG-1 gives very high compression while yet retaining reasonable image quality, these studies lead us to recommend that video databases should store both a high-resolution original version of each video, ideally either uncompressed or losslessly compressed, and a separate edited and highly compressed MPEG-1 preview version that can be rapidly downloaded for interactive viewing by the database user. Copyright 1999 Academic Press.

  5. Generation of large-scale vortices in compressible helical turbulence

    International Nuclear Information System (INIS)

    Chkhetiani, O.G.; Gvaramadze, V.V.

    1989-01-01

    We consider the generation of large-scale vortices in a compressible self-gravitating turbulent medium. The closed equation describing the evolution of large-scale vortices in helical turbulence with finite correlation time is obtained. This equation has a form similar to the hydromagnetic dynamo equation, which allows us to call the vortex generation effect the vortex dynamo. It is possible that principally the same mechanism is responsible both for the amplification and maintenance of density waves and magnetic fields in the gaseous disks of spiral galaxies. (author). 29 refs

  6. A Note of Caution on Maximizing Entropy

    Directory of Open Access Journals (Sweden)

    Richard E. Neapolitan

    2014-07-01

    Full Text Available The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
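    A compact worked example of constrained entropy maximization (Jaynes' dice problem, a classic instance of the principle discussed above, not an example taken from the paper itself):

```python
# Maximize H(p) over six-sided die probabilities subject to a given mean.
# The maximizer is the exponential family p_i ~ exp(lam * i), with lam
# chosen so the constraint is met.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_die(mean):
    def gap(lam):
        w = np.exp(lam * faces)
        return (faces * w).sum() / w.sum() - mean
    lam = brentq(gap, -10, 10)      # solve for the Lagrange multiplier
    p = np.exp(lam * faces)
    return p / p.sum()

print(np.round(maxent_die(4.5), 4))   # Jaynes' distribution for mean 4.5
```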

  7. Optimal topologies for maximizing network transmission capacity

    Science.gov (United States)

    Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.

    2018-04-01

    It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply them on the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
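    A generic hill-climbing version of optimization-by-rewiring is sketched below; it uses maximum betweenness centrality as the congestion proxy (a common choice, since transmission capacity under shortest-path routing is limited by the most congested node) and is not the authors' exact procedure.

```python
# Greedy link rewiring on a Barabasi-Albert graph: accept a random rewiring
# only if it keeps the graph connected and lowers the maximum betweenness.
import random
import networkx as nx

def max_betweenness(g):
    return max(nx.betweenness_centrality(g).values())

rng = random.Random(4)
g = nx.barabasi_albert_graph(100, 2, seed=4)
best = max_betweenness(g)
for _ in range(200):
    u, v = rng.choice(list(g.edges()))
    w = rng.choice([n for n in g if n not in (u, v) and not g.has_edge(u, n)])
    g.remove_edge(u, v); g.add_edge(u, w)          # trial rewiring
    if nx.is_connected(g) and (m := max_betweenness(g)) < best:
        best = m                                   # keep improving moves
    else:
        g.remove_edge(u, w); g.add_edge(u, v)      # revert
print("max betweenness after rewiring:", best)
```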

  8. New features of the maximal abelian projection

    International Nuclear Information System (INIS)

    Bornyakov, V.G.; Polikarpov, M.I.; Syritsyn, S.N.; Schierholz, G.; Suzuki, T.

    2005-12-01

    After fixing the Maximal Abelian gauge in SU(2) lattice gauge theory we decompose the nonabelian gauge field into the so-called monopole field and the modified nonabelian field with monopoles removed. We then calculate the respective static potentials and find that the potential due to the modified nonabelian field is nonconfining while, as is well known, the monopole field potential is linear. Furthermore, we show that the sum of these potentials approximates the nonabelian static potential with 5% or higher precision at all distances considered. We conclude that at large distances the monopole field potential describes the classical energy of the hadronic string while the modified nonabelian field potential describes the string fluctuations. A similar decomposition was observed to work for the adjoint static potential. A check was also made of the center projection in the direct center gauge. Two static potentials, determined by the projected Z(2) field and by the modified nonabelian field without the Z(2) component, were calculated. It was found that their sum is a substantially worse approximation of the SU(2) static potential than that found in the monopole case. It is further demonstrated that a similar decomposition can be made for the flux tube action/energy density. (orig.)

  9. Evaluating the Effectiveness of Traditional and Alternative Principal Preparation Programs

    Science.gov (United States)

    Pannell, Summer; Peltier-Glaze, Bernnell M.; Haynes, Ingrid; Davis, Delilah; Skelton, Carrie

    2015-01-01

    This study sought to determine the effectiveness on increasing student achievement of principals trained in a traditional principal preparation program and those trained in an alternate route principal preparation program within the same Mississippi university. Sixty-six Mississippi principals and assistant principals participated in the study. Of…

  10. Riccati transformations and principal solutions of discrete linear systems

    International Nuclear Information System (INIS)

    Ahlbrandt, C.D.; Hooker, J.W.

    1984-01-01

    Consider a second-order linear matrix difference equation. Definitions of principal and anti-principal, or recessive and dominant, solutions of the equation are given, and the existence of principal and anti-principal solutions and the essential uniqueness of principal solutions are proven

  11. Principals, Trust, and Cultivating Vibrant Schools

    Directory of Open Access Journals (Sweden)

    Megan Tschannen-Moran

    2015-03-01

    Full Text Available Although principals are ultimately held accountable to student learning in their buildings, the most consistent research results have suggested that their impact on student achievement is largely indirect. Leithwood, Patten, and Jantzi proposed four paths through which this indirect influence would flow, and the purpose of this special issue is to examine in greater depth these mediating variables. Among mediating variables, we assert that trust is key. In this paper, we explore the evidence that points to the role that faculty trust in the principal plays in student learning and how principals can cultivate trust by attending to the five facets of trust, as well as the correlates of trust that mediate student learning, including academic press, collective teacher efficacy, and teacher professionalism. We argue that trust plays a role in each of the four paths identified by Leithwood, Patten, and Jantzi. Finally, we explore possible new directions for future research.

  12. Principal component regression for crop yield estimation

    CERN Document Server

    Suryanarayana, T M V

    2016-01-01

    This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as a dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables into a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to students and researchers starting their work on climate and agriculture, mainly focussing on estimation models. The flow of chapters takes readers along a smooth path, from understanding climate and weather and the impact of climate change, through downscaling techniques, and finally to the development of ...
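    A minimal PCR pipeline of the kind the book develops (synthetic placeholder data; climatological predictors and yields would replace them):

```python
# Principal component regression: reduce correlated predictors to a few
# uncorrelated components, then fit a linear regression on the scores.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
X = rng.standard_normal((120, 8))        # e.g. rainfall, temperature, ...
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + 0.1 * rng.standard_normal(120)

pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
pcr.fit(X, y)
print("R^2 on training data:", pcr.score(X, y))
```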

  13. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black......, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...

  14. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receives increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMS with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push down mechanism, that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that best compression rates are obtained with a combination of PXML-specific technique with a rather simple generic DAG-compression technique.

  15. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device, which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a minor radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, temperature increase of a factor 3, and current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed are presented and discussed. (U.S.)
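    For orientation, the standard adiabatic major-radius compression scalings, with the arithmetic checked against the factors quoted above (a reviewer's note, not part of the original lectures):

```latex
% Adiabatic major-radius compression scalings for a tokamak, with
% compression ratio C = R_before / R_after:
\[
  n \propto C^{2}, \qquad T \propto C^{4/3}, \qquad I \propto C .
\]
% Taking C \approx 2.4 (the quoted current factor) reproduces the quoted
% density factor, C^{2} \approx 5.8 \approx 6, and temperature factor,
% C^{4/3} \approx 3.2 \approx 3.
```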

  16. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, to data of a smaller sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, to another format, called ciphertext, using an encryption key or keys. Combining the processes of compression and encryption must therefore be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies which are inherently a part of regular text or speech. The aim of this research is to combine the two processes, using an existing compression scheme together with a new encryption scheme compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is novel and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
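
    The ordering argument above — compression exploits redundancy, and ciphertext has none — can be demonstrated with Python's standard library alone. The toy XOR keystream below stands in for a real cipher and has nothing to do with the TR-One scheme itself:

```python
# Why compression must precede encryption: ciphertext looks random, so a
# compressor finds no redundancy in it. The toy XOR keystream is only a
# stand-in for a real cipher; this illustrates the ordering argument,
# not the TR-One algorithm.
import hashlib
import secrets
import zlib

def keystream(key: bytes, n: bytes_len := None) -> bytes:  # noqa: illustrative
    raise NotImplementedError

def _keystream(key: bytes, n: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

message = b"to be or not to be, that is the question. " * 100
key = secrets.token_bytes(32)

compress_then_encrypt = xor_encrypt(zlib.compress(message), key)
encrypt_then_compress = zlib.compress(xor_encrypt(message, key))

# Compress-then-encrypt is dramatically smaller; encrypting first destroys
# the redundancy, so the second pipeline stays roughly full size.
print(len(message), len(compress_then_encrypt), len(encrypt_then_compress))
```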

  17. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of radiologic images and to achieve a low bit rate in their digital representation without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  18. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  19. Geometry of Quantum Principal Bundles. Pt. 1

    International Nuclear Information System (INIS)

    Durdevic, M.

    1996-01-01

    A theory of principal bundles possessing quantum structure groups and classical base manifolds is presented. Structural analysis of such quantum principal bundles is performed. A differential calculus is constructed, combining differential forms on the base manifold with an appropriate differential calculus on the structure quantum group. Relations between the calculus on the group and the calculus on the bundle are investigated. A concept of (pseudo)tensoriality is formulated. The formalism of connections is developed. In particular, operators of horizontal projection, covariant derivative and curvature are constructed and analyzed. Generalizations of the first Structure Equation and of the Bianchi identity are found. Illustrative examples are presented. (orig.)

  20. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  1. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference. They designed recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.), and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  2. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, R.; Kaden, Ingmar [Klinik fuer Bildgebende Diagnostik und Interventionsradiologie, BG-Kliniken Bergmannstrost Halle (Germany); Schwarzer, J.; Sprengel, C. [Dept. of Management Information System and Operations Research, Martin-Luther-Univ. Halle Wittenberg (Germany); Klose, K. [Medizinisches Zentrum fuer Radiologie, Philips-Univ. Marburg (Germany)

    2009-07-15

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference. They designed recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.), and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  3. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. A colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  4. Compact torus compression of microwaves

    International Nuclear Information System (INIS)

    Hewett, D.W.; Langdon, A.B.

    1985-01-01

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code

  5. Premixed autoignition in compressible turbulence

    Science.gov (United States)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  6. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable...... in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

  7. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, repeated use of the technique permits the suppression of multiple different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme
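
    One plausible reading of constant suppression with logarithmic access: store the suppressed constant once plus a sorted list of exceptions, and answer point queries with a binary search. The sketch below is an illustration under that assumption, not the paper's actual encoding:

```python
# Toy constant-suppression sketch: a mostly-constant column is stored as the
# suppressed constant plus a sorted list of (position, value) exceptions.
# Access costs one binary search, i.e. O(log n) -- an illustration of the
# idea, not the paper's scheme.
import bisect

class SuppressedColumn:
    def __init__(self, values, constant):
        self.constant = constant
        self.positions = []   # sorted positions of non-constant entries
        self.values = []
        for i, v in enumerate(values):
            if v != constant:
                self.positions.append(i)
                self.values.append(v)

    def __getitem__(self, i):
        j = bisect.bisect_left(self.positions, i)
        if j < len(self.positions) and self.positions[j] == i:
            return self.values[j]
        return self.constant

col = SuppressedColumn([0, 0, 7, 0, 0, 0, 9, 0], constant=0)
print(col[2], col[3], col[6])  # 7 0 9
```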

  8. Compressive behavior of fine sand.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain-rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  9. Effect of lower limb compression on blood flow and performance in elite wheelchair rugby athletes.

    Science.gov (United States)

    Vaile, Joanna; Stefanovic, Brad; Askew, Christopher D

    2016-01-01

    To investigate the effects of compression socks worn during exercise on performance and physiological responses in elite wheelchair rugby athletes. In a non-blinded randomized crossover design, participants completed two exercise trials (4 × 8 min bouts of submaximal exercise, each finishing with a timed maximal sprint) separated by 24 hr, with or without compression socks. National Sports Training Centre, Queensland, Australia. Ten national representative male wheelchair rugby athletes with cervical spinal cord injuries volunteered to participate. Participants wore medical grade compression socks on both legs during the exercise task (COMP), and during the control trial no compression was worn (CON). The efficacy of the compression socks was determined by assessments of limb blood flow, core body temperature, heart rate, and ratings of perceived exertion, perceived thermal strain, and physical performance. While no significant differences between conditions were observed for maximal sprint time, average lap time was better maintained in COMP compared to CON. This benefit may be associated with an augmentation of upper limb blood flow.

  10. Value maximizing maintenance policies under general repair

    International Nuclear Information System (INIS)

    Marais, Karen B.

    2013-01-01

    One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights:
    • Provides a quantitative basis for maintenance strategy decisions that contribute to system value.
    • Shows how the optimal action for a given condition changes as replacement and repair costs change.
    • Shows how the optimal policy is affected by the system's failure characteristics.
    • Shows when it is

  11. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationship between quality indexes of chest compression and to identify the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hands placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for females or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  12. The Deputy Principal Instructional Leadership Role and Professional Learning: Perceptions of Secondary Principals, Deputies and Teachers

    Science.gov (United States)

    Leaf, Ann; Odhiambo, George

    2017-01-01

    Purpose: The purpose of this paper is to report on a study examining the perceptions of secondary principals, deputies and teachers, of deputy principal (DP) instructional leadership (IL), as well as deputies' professional learning (PL) needs. Framed within an interpretivist approach, the specific objectives of this study were: to explore the…

  13. Statewide Data on Supply and Demand of Principals after Policy Changes to Principal Preparation in Illinois

    Science.gov (United States)

    Haller, Alicia; Hunt, Erika

    2016-01-01

    Research has demonstrated that principals have a powerful impact on school improvement and student learning. Principals play a vital role in recruiting, developing, and retaining effective teachers; creating a school-wide culture of learning; and implementing a continuous improvement plan aimed at increasing student achievement. Leithwood, Louis,…

  14. Principal Self-Efficacy, Teacher Perceptions of Principal Performance, and Teacher Job Satisfaction

    Science.gov (United States)

    Evans, Molly Lynn

    2016-01-01

    In public schools, the principal's role is of paramount importance in influencing teachers to excel and to keep their job satisfaction high. The self-efficacy of leaders is an important characteristic of leadership, but this issue has not been extensively explored in school principals. Using internet-based questionnaires, this study obtained…

  15. Andragogical Practices of School Principals in Developing the Leadership Capacities of Assistant Principals

    Science.gov (United States)

    McDaniel, Luther

    2017-01-01

    The purpose of this mixed methods study was to assess school principals' perspectives of the extent to which they apply the principles of andragogy to the professional development of assistant principals in their schools. This study was conducted in school districts that constitute a RESA area in a southeastern state. The schools in these…

  16. What Principals Should Know About Food Allergies.

    Science.gov (United States)

    Munoz-Furlong, Anne

    2002-01-01

    Describes what principals should know about recent research findings on food allergies (peanuts, tree nuts, milk, eggs, soy, wheat) that can produce severe or life-threatening reactions in children. Asserts that every school should have trained staff and written procedures for reacting quickly to allergic reactions. (PKP)

  17. A Principal's Guide to Children's Allergies.

    Science.gov (United States)

    Munoz-Furlong, Anne

    1999-01-01

    Discusses several common children's allergies, including allergic rhinitis, asthma, atopic dermatitis, food allergies, and anaphylactic shock. Principals should become familiar with various medications and should work with children's parents and physicians to determine how to manage their allergies at school. Allergen avoidance is the best…

  18. Assessment of School Principals' Reassignment Process

    Science.gov (United States)

    Sezgin-Nartgün, Senay; Ekinci, Serkan

    2016-01-01

    This study aimed to identify administrators' views related to the assessment of school principals' reassignment in educational organizations. The study utilized qualitative research design and the study group composed of 8 school administrators selected via simple sampling who were employed in the Bolu central district in 2014-2015 academic year.…

  19. An Exploration of Principal Instructional Technology Leadership

    Science.gov (United States)

    Townsend, LaTricia Walker

    2013-01-01

    Nationwide the demand for schools to incorporate technology into their educational programs is great. In response, North Carolina developed the IMPACT model in 2003 to provide a comprehensive model for technology integration in the state. The model is aligned to national educational technology standards for teachers, students, and principals.…

  20. Principals' Leadership Styles and Student Achievement

    Science.gov (United States)

    Harnish, David Alan

    2012-01-01

    Many schools struggle to meet No Child Left Behind's stringent adequate yearly progress standards, although the benchmark has stimulated national creativity and reform. The purpose of this study was to explore teacher perceptions of principals' leadership styles, curriculum reform, and student achievement to ascertain possible factors to improve…

  1. How To Select a Good Assistant Principal.

    Science.gov (United States)

    Holman, Linda J.

    1997-01-01

    Notes that a well-structured job profile and interview can provide insight into the key qualities of an effective assistant principal. These include organizational skills, basic accounting knowledge, interpersonal skills, dependability, strong work ethic, effective problem-solving skills, leadership skills, written communication skills,…

  2. Principals' Transformational Leadership in School Improvement

    Science.gov (United States)

    Yang, Yingxiu

    2013-01-01

    Purpose: This paper aims to contribute experience and ideas of the transformational leadership, not only for the principal want to improve leadership himself (herself), but also for the school at critical period of improvement, through summarizing forming process and the problem during the course and key factors that affect the course.…

  3. Imprecise Beliefs in a Principal Agent Model

    NARCIS (Netherlands)

    Rigotti, L.

    1998-01-01

    This paper presents a principal-agent model where the agent has multiple, or imprecise, beliefs. We model this situation formally by assuming the agent's preferences are incomplete. One can interpret this multiplicity as an agent's limited knowledge of the surrounding environment. In this setting,

  4. Bootstrap confidence intervals for principal response curves

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ter Braak, Cajo J. F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

  5. Bootstrap Confidence Intervals for Principal Response Curves

    NARCIS (Netherlands)

    Timmerman, M.E.; Braak, ter C.J.F.

    2008-01-01

    The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the

  6. Islamitisch financieren tussen principes en realiteit

    NARCIS (Netherlands)

    Wolters, W.G.

    2009-01-01

    'The financial crisis would not have happened if the world had adopted the principles of Islamic banking and finance.' That was one of the characteristic reactions from Islamic bankers in the final months of 2008. Then began the worldwide financial

  7. Dealing with Crises: One Principal's Experience.

    Science.gov (United States)

    Foley, Charles F.

    1986-01-01

    The principal of Concord High School (New Hampshire) recounts the 1985-86 school year's four crises--the visits of teacher-astronaut Christa McAuliffe and Secretary of Education William Bennett, the shooting of a former student, and the Challenger space shuttle explosion. The greatest challenge was resuming the normal schedule and fielding media…

  8. Principal Pressure in the Middle of Accountability

    Science.gov (United States)

    Derrington, Mary Lynne; Larsen, Donald E.

    2012-01-01

    When a new superintendent is hired, Tom Thompson, middle school principal, is squeezed between complying with the demands of the district and cultivating a positive culture in his school. He wrestles with the stress of facing tough leadership choices that take a toll on his physical and mental health. Tom realizes that a career-ending move might…

  9. The Relationship between Principals' Managerial Approaches and ...

    African Journals Online (AJOL)

    Students' discipline is critical to the attainment of positive school outcomes. This paper presents and discusses findings of a study on the relationship between principals' management approaches and the level of student discipline in selected public secondary schools in Kenya. The premise of the study was that the level of ...

  10. Primary School Principals' Experiences with Smartphone Apps

    Science.gov (United States)

    Çakir, Rahman; Aktay, Sayim

    2016-01-01

    Smartphones are not just pieces of hardware; at the same time they also offer software features such as communication systems. The aim of this study is to examine primary school principals' experiences with smartphone applications. Given its subject, this research is qualitative. Criterion sampling has been intentionally…

  11. Principal normal indicatrices of closed space curves

    DEFF Research Database (Denmark)

    Røgen, Peter

    1999-01-01

    A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...

  12. Summer Principals'/Directors' Orientation Training Module.

    Science.gov (United States)

    Mata, Robert L.; Garcia, Richard L.

    Intended to provide current or potential project principals/directors with the basic knowledge, skills, abilities, and sensitivities needed to manage a summer migrant school project in the local educational setting, this module provides instruction in the project management areas of planning, preparation, control, and termination. The module…

  13. Probabilistic Principal Component Analysis for Metabolomic Data.

    LENUS (Irish Health Repository)

    Nyamundanda, Gift

    2010-11-23

    Background: Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results: Here, probabilistic principal component analysis (PPCA), which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced, which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions: The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight into the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
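
    PPCA itself (though not the PPCCA extension or the MetabolAnalyze package introduced in the paper) has a closed-form maximum-likelihood solution due to Tipping and Bishop; a minimal NumPy sketch:

```python
# Closed-form maximum-likelihood PPCA fit (Tipping & Bishop, 1999), sketched
# with NumPy; illustrative of PPCA generally, not of the paper's PPCCA
# extension or its R package.
import numpy as np

def ppca_ml(X, q):
    """X: (n, d) data matrix; q: number of principal components."""
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the sample covariance, sorted descending.
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    # ML noise variance: average of the discarded eigenvalues.
    sigma2 = evals[q:].mean()
    # ML loadings (up to rotation): W = U_q (L_q - sigma2 I)^(1/2).
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))  # low-rank-ish data
W, sigma2 = ppca_ml(X + 0.1 * rng.normal(size=(200, 10)), q=3)
print(W.shape, round(float(sigma2), 4))
```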

  14. Principals in Partnership with Math Coaches

    Science.gov (United States)

    Grant, Catherine Miles; Davenport, Linda Ruiz

    2009-01-01

    One of the most promising developments in math education is the fact that many districts are hiring math coaches--also called math resource teachers, math facilitators, math lead teachers, or math specialists--to assist elementary-level teachers with math instruction. What must not be lost, however, is that principals play an essential role in…

  15. Experimental and principal component analysis of waste ...

    African Journals Online (AJOL)

    The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...

  16. Principal Component Analysis as an Efficient Performance ...

    African Journals Online (AJOL)

    This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...

  17. Principal component analysis of psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...

  18. The Principal as Professional Development Leader

    Science.gov (United States)

    Lindstrom, Phyllis H.; Speck, Marsha

    2004-01-01

    Individual teachers have the greatest effect on student performance. Principals, as professional development leaders, are in the best position to provide teachers with the professional development strategies they need to improve skills and raise student achievement. This book guides readers through a step-by-step process to formulate, implement,…

  19. Burnout And Lifestyle Of Principals And Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Jasna Lavrenčič

    2014-12-01

    Research Question (RQ): What kind of lifestyle do principals and entrepreneurs lead? Does the lifestyle of principals and entrepreneurs influence burnout? Purpose: To find out, based on the results of a questionnaire, what kind of lifestyle both researched groups lead, and whether lifestyle has an influence on the occurrence of burnout. Method: Data were collected by questionnaire and analyzed using SPSS with descriptive and inferential statistics. Results: Both groups lead a similar lifestyle, and lifestyle influences burnout in principals as well as entrepreneurs. Organization: School principals and entrepreneurs are the heads of individual organizations or companies, the goal of which is success. To be successful in their work, they must adapt their lifestyle, which can be healthy or unhealthy. An unhealthy lifestyle can lead to burnout. Society: With the results of the questionnaire we would like to answer the question of what lifestyle both groups lead and how it influences the occurrence of burnout. Originality: This is the first study of lifestyle and the occurrence of burnout in these two groups. Limitations/Future Research: In future work, the groups could be studied from the perspective of exercise physiology, tracking certain haematological parameters such as cholesterol, blood sugar and the stress hormones adrenaline, noradrenaline and cortisol. This would allow an even more in-depth investigation of the connection between lifestyle and burnout.

  20. Principal Connection / Amazon and the Whole Teacher

    Science.gov (United States)

    Hoerr, Thomas R.

    2015-01-01

    A recent controversy over Amazon's culture has strong implications for the whole child approach, and it offers powerful lessons for principals. A significant difference between the culture of so many businesses today and the culture at good schools is that in good schools, the welfare of the employees is very important. Student success is the…

  1. The Gender of Secondary School Principals.

    Science.gov (United States)

    Bonuso, Carl; Shakeshaft, Charol

    1983-01-01

    A study was conducted to understand why so few of the secondary school principals in New York State are women. Results suggest two possible causes: either sufficient women candidates do not apply for the positions, or sex discrimination still exists. (KH)

  2. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  3. Maximal venous outflow velocity: an index for iliac vein obstruction.

    Science.gov (United States)

    Jones, T Matthew; Cassada, David C; Heidel, R Eric; Grandas, Oscar G; Stevens, Scott L; Freeman, Michael B; Edmondson, James D; Goldman, Mitchell H

    2012-11-01

    Leg swelling is a common cause for vascular surgical evaluation, and iliocaval obstruction due to May-Thurner syndrome (MTS) can be difficult to diagnose. Physical examination and planar radiographic imaging give anatomic information but may miss the fundamental pathophysiology of MTS. Similarly, duplex ultrasonographic examination of the legs gives little information about central impedance of venous return above the inguinal ligament. We have modified the technique of duplex ultrasonography to evaluate the flow characteristics of the leg after tourniquet-induced venous engorgement, with the objective of revealing iliocaval obstruction characteristic of MTS. Twelve patients with signs and symptoms of MTS were compared with healthy control subjects for duplex-derived maximal venous outflow velocity (MVOV) after tourniquet-induced venous engorgement of the leg. The data for healthy control subjects were obtained from a previous study of asymptomatic volunteers using the same MVOV maneuvers. The tourniquet-induced venous engorgement mimics that caused during vigorous exercise. A right-to-left ratio of MVOV was generated for patient comparisons. Patients with clinical evidence of MTS had a mean right-to-left MVOV ratio of 2.0, asymptomatic control subjects had a mean ratio of 1.3, and MTS patients who had undergone endovascular treatment had a poststent mean ratio of 1.2 (P = 0.011). Interestingly, computed tomography and magnetic resonance imaging results, when available, were interpreted as positive in only 53% of the patients with MTS according to both our MVOV criteria and confirmatory venography. After intervention, the right-to-left MVOV ratio in the MTS patients was found to be reduced similar to asymptomatic control subjects, indicating a relief of central venous obstruction by stenting the compressive MTS anatomy. Duplex-derived MVOV measurements are helpful for detection of iliocaval venous obstruction, such as MTS. Right-to-left MVOV ratios and

  4. Microfluidic pressure sensing using trapped air compression.

    Science.gov (United States)

    Srivastava, Nimisha; Burns, Mark A

    2007-05-01

    We have developed a microfluidic method for measuring the fluid pressure head experienced at any location inside a microchannel. The principal component is a microfabricated sealed chamber with a single inlet and no exit; the entrance to the single inlet is positioned at the location where pressure is to be measured. The pressure measurement is then based on monitoring the movement of a liquid-air interface as it compresses air trapped inside the microfabricated sealed chamber and calculating the pressure using the ideal gas law. The method has been used to measure the pressure of the air stream and continuous liquid flow inside microfluidic channels (d ≈ 50 μm). Further, a pressure drop has also been measured using multiple microfabricated sealed chambers. For air pressure, a resolution of 700 Pa within a full-scale range of 700 Pa to 100 kPa was obtained. For liquids, pressure drops as low as 70 Pa were obtained in an operating range from 70 Pa to 10 kPa. Since the method primarily uses a microfluidic sealed chamber, it does not require additional fabrication steps and may easily be incorporated in several lab-on-a-chip fluidic applications for laminar as well as turbulent flow conditions.
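
    At constant temperature, the ideal-gas-law step reduces to Boyle's law for the trapped air column: for a dead-end channel of uniform cross-section, P = P0 · L0 / L. A minimal sketch with hypothetical geometry (capillary and temperature effects ignored; the numbers are not from the paper):

```python
# Isothermal ideal-gas (Boyle's law) step behind the sensor: a trapped air
# column of initial length L0 at ambient pressure P0 is compressed to length
# L by the advancing liquid-air interface. Uniform cross-section assumed, so
# volume is proportional to length; all numbers are hypothetical.
def trapped_air_pressure(p0_pa: float, l0_um: float, l_um: float) -> float:
    """Absolute pressure of the trapped air after compression: P = P0*L0/L."""
    return p0_pa * l0_um / l_um

P0 = 101_325.0            # ambient pressure, Pa
L0, L = 500.0, 450.0      # initial and compressed air-column length, um
P = trapped_air_pressure(P0, L0, L)
print(f"gauge pressure at the chamber inlet: {P - P0:.0f} Pa")  # ~11258 Pa
```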

  5. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

    The Maxim of Politeness is an interesting subject to discuss, since politeness is instilled in us from childhood. We are obliged to be polite to everyone, in both speech and action. Somehow we manage to show politeness in our spoken expression even though our intention might not be so polite. For example, we must appreciate others' opinions even when we object to them. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversations. The most commonly used are the approbation maxim and the agreement maxim

  6. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  7. Maximizers versus satisficers: Decision-making styles, competence, and outcomes

    OpenAIRE

    Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...

  8. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

    The naturalness of maximal mixing between muon and tau neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal ν_μ-ν_τ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  9. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0,1], in contrast to the inverse-power-law stationary distribution obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
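
    For reference, the standard discrete definitions of the two entropy families involved (the paper's generalized averaging and information-gain choices are layered on top of these; both families reduce to the Shannon entropy as q → 1):

```latex
% Standard discrete definitions of the Renyi and Tsallis entropies; the
% generalized averaging procedures discussed in the paper build on these.
S^{\mathrm{R}}_{q} = \frac{1}{1-q}\,\ln\!\Big(\sum_{i} p_{i}^{\,q}\Big),
\qquad
S^{\mathrm{T}}_{q} = \frac{1}{q-1}\Big(1-\sum_{i} p_{i}^{\,q}\Big),
\qquad
\lim_{q\to 1} S^{\mathrm{R}}_{q} = \lim_{q\to 1} S^{\mathrm{T}}_{q}
  = -\sum_{i} p_{i}\ln p_{i}.
```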

  10. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state, but are somewhat close to these eigenvectors, is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information

  11. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  12. Entropy, Coding and Data Compression

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 6; Issue 9. Entropy, Coding and Data Compression. S Natarajan. General Article Volume 6 Issue 9 September 2001 pp 35-45. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/09/0035-0045 ...

  13. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    Structural change of synthetic opal by shock-wave compression up to 38.1 GPa has been investigated by using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperature are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence. The origin of this opalescence may be a layered structure produced by shock compression. Finally, the sample fuses at 38.1 GPa owing to the very high residual temperature and the structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  14. Range Compressed Holographic Aperture Ladar

    Science.gov (United States)

    2017-06-01

    entropy saturation behavior of the estimator is analytically described. Simultaneous range-compression and aperture synthesis is experimentally... (Report sections include: Circular and Inverse-Circular HAL; Single Aperture, Multi-λ Imaging; Simultaneous Range...)

  15. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

  16. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

    Structural change of synthetic opal by shock-wave compression up to 38.1 GPa has been investigated by using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperature are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence. The origin of this opalescence may be a layered structure produced by shock compression. Finally, the sample fuses at 38.1 GPa owing to the very high residual temperature and the structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  17. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

    Structural change of synthetic opal by shock-wave compression up to 38.1 GPa has been investigated by using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperature are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-prepared synthetic opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence. The origin of this opalescence may be a layered structure produced by shock compression. Finally, the sample fuses at 38.1 GPa owing to the very high residual temperature and the structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  18. Force balancing in mammographic compression

    International Nuclear Information System (INIS)

    Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-01

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast
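
    The reported sensitivity (about 9.4 N/cm at 140 N paddle force) suggests a simple way to turn a measured force imbalance into a receptor-height correction. A back-of-the-envelope sketch using only numbers quoted in the record; the linear model, sign convention, and function names are our own illustration, not the authors' algorithm:

```python
# Back-of-the-envelope use of the record's reported sensitivity: near the
# balanced height, the force imbalance changes by ~9.4 N per cm of receptor
# height error (at 140 N paddle force). Linear model, sign convention, and
# names are illustrative assumptions, not the authors' method.
SENSITIVITY_N_PER_CM = 9.4  # at 140 N paddle force, from the record

def height_correction_cm(paddle_force_n: float, receptor_force_n: float) -> float:
    """Estimated receptor-height adjustment that would zero the imbalance."""
    imbalance = receptor_force_n - paddle_force_n
    return imbalance / SENSITIVITY_N_PER_CM

# Example: receptor pushes 18.8 N harder than the paddle -> adjust ~2 cm.
print(f"{height_correction_cm(140.0, 158.8):+.1f} cm")
```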

  19. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑB_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10^3: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed

  20. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  1. Compression map, functional groups and fossilization: A chemometric approach (Pennsylvanian neuropteroid foliage, Canada)

    Science.gov (United States)

    D'Angelo, J. A.; Zodrow, E.L.; Mastalerz, Maria

    2012-01-01

    Nearly all of the spectrochemical studies involving Carboniferous foliage of seed-ferns are based on a limited number of pinnules, mainly compressions. In contrast, in this paper we illustrate working with a larger pinnate segment, i.e., a 22-cm long neuropteroid specimen, compression-preserved with cuticle: the compression map. The objective is to study preservation variability on a larger scale, where observation of the transparency/opacity of constituent pinnules is used as a first approximation for assessing the degree of pinnule coalification/fossilization. Spectrochemical methods by Fourier transform infrared (FTIR) spectrometry furnish semi-quantitative data for principal component analysis. The compression map shows a high degree of preservation variability, which ranges from comparatively more coalified pinnules to less coalified pinnules that resemble fossilized-cuticles, noting that the pinnule midveins are preserved more like fossilized-cuticles. A general overall trend from coalified pinnules towards fossilized-cuticles, i.e., variable chemistry, is inferred from the semi-quantitative FTIR data, as higher contents of aromatic compounds occur in the visually more opaque upper location of the compression map. The latter also shows a higher condensation of the aromatic nuclei along with some variation in both ring size and degree of aromatic substitution. From principal component analysis we infer a correspondence between transparency/opacity observations and chemical information, which correlates to a varying degree with fossilization/coalification among pinnules. © 2011 Elsevier B.V.

  2. Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root extract.

    Science.gov (United States)

    Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya

    2015-05-01

    This study aimed to evaluate the anti-hyperglycemic effect of an ethanol extract from Actinidia kolomikta (Maxim. et Rupr.) Maxim. root (AKE). An in vitro evaluation was performed using rat intestinal α-glucosidases (maltase and sucrase), the key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose or glucose to normal rats. AKE showed concentration-dependent inhibition of rat intestinal maltase and rat intestinal sucrase, with IC(50) values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose or glucose, administration of AKE significantly reduced postprandial hyperglycemia, similar to acarbose, which is used as an anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possesses anti-hyperglycemic effects, and the possible mechanisms are associated with its inhibition of α-glucosidase and with improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds.

  3. Alternative approaches to maximally supersymmetric field theories

    International Nuclear Information System (INIS)

    Broedel, Johannes

    2010-01-01

    The central objective of this work is the exploration and application of alternative possibilities to describe maximally supersymmetric field theories in four dimensions: N=4 super Yang-Mills theory and N=8 supergravity. While twistor string theory has proven very useful in the context of N=4 SYM, no analogous formulation for N=8 supergravity is available. In addition to the part describing N=4 SYM theory, twistor string theory contains vertex operators corresponding to the states of N=4 conformal supergravity. Those vertex operators have to be altered in order to describe (non-conformal) Einstein supergravity. A modified version of the known open twistor string theory, including a term which breaks the conformal symmetry for the gravitational vertex operators, has been proposed recently. In the first part of the thesis, structural aspects and consistency of the modified theory are discussed. Unfortunately, the majority of amplitudes cannot be constructed, which can be traced back to the fact that the dimension of the moduli space of algebraic curves in twistor space is reduced in an inconsistent manner. The issue of a possible finiteness of N=8 supergravity is closely related to the question of the existence of valid counterterms in the perturbation expansion of the theory. In particular, the coefficient in front of the so-called R^4 counterterm candidate has been shown to vanish by explicit calculation. This behavior points in the direction of a symmetry not taken into account, for which the hidden on-shell E_7(7) symmetry is the prime candidate. The validity of the so-called double-soft scalar limit relation is a necessary condition for a theory exhibiting E_7(7) symmetry. By calculating the double-soft scalar limit for amplitudes derived from an N=8 supergravity action modified by an additional R^4 counterterm, one can test for possible constraints originating in the E_7(7) symmetry. In the second part of the thesis, the appropriate amplitudes are calculated.

  4. Principal components analysis in clinical studies.

    Science.gov (United States)

    Zhang, Zhongheng; Castelló, Adela

    2017-09-01

    In multivariate analysis, independent variables are usually correlated to each other, which can introduce multicollinearity into regression models. One approach to solve this problem is to apply principal components analysis (PCA) over these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
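
    For readers outside R, the same decomposition can be sketched in a few lines of Python (the simulated data, sizes, and variable names below are illustrative assumptions, not taken from the tutorial):

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulate correlated predictors, echoing the tutorial's setup:
        # two latent factors drive six observed variables.
        latent = rng.normal(size=(200, 2))
        X = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(200, 6))

        # Center, then obtain the orthogonal transformation via SVD.
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

        explained = s**2 / (len(X) - 1)        # variance carried by each PC
        ratio = explained / explained.sum()    # fraction of total variance
        scores = Xc @ Vt.T                     # samples projected onto the PCs

        print(ratio.round(3))  # the first two PCs dominate, as in the tutorial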

  5. A Genealogical Interpretation of Principal Components Analysis

    Science.gov (United States)

    McVean, Gil

    2009-01-01

    Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly by considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's FST and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
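
    A hedged sketch of the core observation: projections equivalent to PCA can be recovered from a matrix of average pairwise coalescent times by double centering (classical multidimensional scaling). The toy two-population matrix below is an assumption for illustration only:

        import numpy as np

        n = 6
        T = np.full((n, n), 2.0)        # between-population coalescent times
        T[:3, :3] = T[3:, 3:] = 1.0     # shorter within-population times
        np.fill_diagonal(T, 0.0)

        J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
        B = -0.5 * J @ T @ J                  # double-centered (Gower) matrix
        vals, vecs = np.linalg.eigh(B)

        # The leading axis separates the two populations, mirroring PCA on SNPs.
        pc1 = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
        print(pc1.round(3))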

  6. PCA: Principal Component Analysis for spectra modeling

    Science.gov (United States)

    Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas

    2012-07-01

    The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixture modelling. The five PCs and the average spectra of the four classifications, used to classify objects, are made available with the code.
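
    The pipeline the description implies can be sketched in Python (the original code is IDL; the stand-in spectra and the scikit-learn calls below are assumptions for illustration):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture

        # Stand-in for a matrix of IRS spectra (rows: objects, columns: wavelengths).
        spectra = np.random.default_rng(1).normal(size=(300, 180))

        scores = PCA(n_components=5).fit_transform(spectra)   # the five PCs
        gmm = GaussianMixture(n_components=4, random_state=0).fit(scores)
        classes = gmm.predict(scores)   # one of four classifications per object
        print(np.bincount(classes))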

  7. COPD phenotype description using principal components analysis

    DEFF Research Database (Denmark)

    Roy, Kay; Smith, Jacky; Kolsum, Umme

    2009-01-01

    BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS...... AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance...... associations between the variables within components 1 and 2. CONCLUSION: COPD is a multidimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation which was associated with systemic inflammation, and sputum eosinophils which were related to increased Fe...

  8. Executive Compensation and Principal-Agent Theory.

    OpenAIRE

    Garen, John E

    1994-01-01

    The empirical literature on executive compensation generally fails to specify a model of executive pay on which to base hypotheses regarding its determinants. In contrast, this paper analyzes a simple principal-agent model to determine how well it explains variations in CEO incentive pay and salaries. Many findings are consistent with the basic intuition of principal-agent models that compensation is structured to trade off incentives with insurance. However, statistical significance for some...

  9. Resonant Homoclinic Flips Bifurcation in Principal Eigendirections

    Directory of Open Access Journals (Sweden)

    Tiansi Zhang

    2013-01-01

    A codimension-4 homoclinic bifurcation with one orbit flip and one inclination flip at principal eigenvalue direction resonance is considered. By introducing a local active coordinate system in some small neighborhood of the homoclinic orbit, we get the Poincaré return map and the bifurcation equation. A detailed investigation produces the number and the existence of 1-homoclinic orbits, 1-periodic orbits, and double 1-periodic orbits. We also locate their bifurcation surfaces in certain regions.

  10. Interplay between tilted and principal axis rotation

    International Nuclear Information System (INIS)

    Datta, Pradip; Roy, Santosh; Chattopadhyay, S.

    2014-01-01

    At IUAC-INGA, our group has studied four neutron rich nuclei of mass-110 region, namely 109,110 Ag and 108,110 Cd. These nuclei provide the unique platform to study the interplay between Tilted and Principal axis rotation since these are moderately deformed and at the same time, shears structures are present at higher spins. The salient features of the high spin behaviors of these nuclei will be discussed which are the signatures of this interplay

  11. Interplay between tilted and principal axis rotation

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Pradip [Ananda Mohan College, 102/1 Raja Rammohan Sarani, Kolkata 700 009 (India); Roy, Santosh; Chattopadhyay, S. [Saha Institute of Nuclear Physics, 1/AF Bidhan Nagar, Kolkata 700 064 (India)

    2014-08-14

    At IUAC-INGA, our group has studied four neutron rich nuclei of mass-110 region, namely 109,110 Ag and 108,110 Cd. These nuclei provide the unique platform to study the interplay between Tilted and Principal axis rotation since these are moderately deformed and at the same time, shears structures are present at higher spins. The salient features of the high spin behaviors of these nuclei will be discussed which are the signatures of this interplay.

  12. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

  13. Multilevel sparse functional principal component analysis.

    Science.gov (United States)

    Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S

    2014-01-29

    We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both between and within subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations the proposed method is able to discover dominating modes of variations and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.

  14. A principal components model of soundscape perception.

    Science.gov (United States)

    Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta

    2010-11-01

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.

  15. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, a practice that reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run down to 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than at slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment, with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  16. The task of controlling digital image compression

    OpenAIRE

    TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and the losses introduced by image compression. The main idea of the approach is to extract the structural lines of a simplified image and then compress the selected data.

  17. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.

  18. Compressibility Analysis of the Tongue During Speech

    National Research Council Canada - National Science Library

    Unay, Devrim

    2001-01-01

    .... In this paper, 3D compression and expansion analysis of the tongue will be presented. Patterns of expansion and compression have been compared for different syllables and various repetitions of each syllable...

  19. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more powerful high-dimensional features remains an open question. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is then obtained by compressing the normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, in particular the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
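
    The compression step itself is a random projection; a minimal sketch follows (the dimensions, sparsity, and stand-in feature are assumptions, not the paper's settings):

        import numpy as np

        rng = np.random.default_rng(0)
        D, d = 10_000, 50   # high-dimensional feature -> compressed dimension

        # Sparse random Gaussian measurement matrix: mostly zeros.
        mask = rng.random((d, D)) < 0.01
        R = np.where(mask, rng.normal(size=(d, D)), 0.0)

        feature = rng.random(D)    # stand-in normalized block difference feature
        compressed = R @ feature   # d-dimensional CNBD-style feature
        print(compressed.shape)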

  20. Harnessing Disorder in Compression Based Nanofabrication

    Science.gov (United States)

    Engel, Clifford John

    The future of nanotechnologies depends on the successful development of versatile, low-cost techniques for patterning micro- and nanoarchitectures. While most approaches to nanofabrication have focused primarily on making periodic structures at ever smaller length scales, with an ultimate goal of massively scaling their production, I have focused on introducing control into relatively disordered nanofabrication systems. Well-ordered patterns are increasingly unnecessary for a growing range of applications, from anti-biofouling coatings to light trapping to omniphobic surfaces. The ability to manipulate disorder, at will and over multiple length scales, starting with the nanoscale, can open new prospects for textured substrates and unconventional applications. Taking advantage of what were previously considered defects, I have been able to develop nanofabrication techniques with potential for massive scalability and incorporation into a wide range of potential applications. This thesis first describes the manipulation of the non-Newtonian properties of liquid Ga and Ga alloys to confine the metal and metal alloys in gratings with sub-wavelength periodicities. Through a solid-to-liquid phase change, I was able to access the superior plasmonic properties of liquid Ga for the generation of surface plasmon polaritons (SPP). The switching contrast between solid and liquid Ga confined in the nanogratings allowed reversible manipulation of SPP properties through heating and cooling around the relatively low melting temperature of Ga (29.8 °C). The remaining chapters focus on the development and characterization of an all-polymer wrinkle material system. Wrinkles, spontaneous disordered features that are produced in response to compressive force, are ideal for a growing number of applications where fine feature control is no longer the main motivation. However, the mechanical limitations of many wrinkle systems have restricted the potential applications of wrinkled surfaces.

  1. Kinetic theory in maximal-acceleration invariant phase space

    International Nuclear Information System (INIS)

    Brandt, H.E.

    1989-01-01

    A vanishing directional derivative of a scalar field along particle trajectories in maximal-acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)

  2. IIB solutions with N>28 Killing spinors are maximally supersymmetric

    International Nuclear Information System (INIS)

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2007-01-01

    We show that all IIB supergravity backgrounds which admit more than 28 Killing spinors are maximally supersymmetric. In particular, we find that for all N>28 backgrounds the supercovariant curvature vanishes, and that the quotients of maximally supersymmetric backgrounds either preserve all 32 or N<29 supersymmetries

  3. Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L

    2011-01-01

    Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in-vivo maximal oxygen consumption (VO(2) max) is matched to state 3 muscle mitochondrial respira...

  4. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    Science.gov (United States)

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  5. The Interdependence of Principal School Leadership and Student Achievement

    Science.gov (United States)

    Soehner, David; Ryan, Thomas

    2011-01-01

    This review illuminated principal school leadership as a variable that impacted achievement. The principal as school leader and manager was explored because these roles were thought to impact student achievement both directly and indirectly. Specific principal leadership behaviors and principal effectiveness were explored as variables potentially…

  6. Negligence--When Is the Principal Liable? A Legal Memorandum.

    Science.gov (United States)

    Stern, Ralph D., Ed.

    Negligence, a tort liability, is defined, discussed, and reviewed in relation to several court decisions involving school principals. The history of liability suits against school principals suggests that a reasonable, prudent principal can avoid legal problems. Ten guidelines are presented to assist principals in avoiding charges of negligence.…

  7. Management Of Indiscipline Among Teachers By Principals Of ...

    African Journals Online (AJOL)

    This study compared the management of indiscipline among teachers by public and private school principals in Akwa Ibom State. The sample comprised four hundred and fifty (450) principals/vice principals randomly selected from a population of one thousand, four hundred and twenty eight (1,428) principals. The null ...

  8. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
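
    For reference, the standard NCD formula the paper analyzes is NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch follows, with zlib standing in for the compressor C; the paper's point is precisely that real compressors only approximate the ideal properties this formula assumes:

        import zlib

        def C(data: bytes) -> int:
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy, cxy = C(x), C(y), C(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        print(ncd(b"abcd" * 100, b"abcd" * 100))  # near 0: very similar inputs
        print(ncd(b"abcd" * 100, b"wxyz" * 100))  # nearer 1: dissimilar inputs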

  9. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying the parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
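
    Step one of the scenario can be sketched as follows (Pillow and a synthetic grayscale image are stand-ins for the paper's codecs and data; only JPEG and PSNR are shown):

        import io
        import numpy as np
        from PIL import Image

        rng = np.random.default_rng(0)
        pixels = rng.integers(0, 256, (256, 256), dtype=np.uint8)
        img = Image.fromarray(pixels)   # stand-in 8-bit grayscale image

        for quality in (10, 30, 50, 70, 90):
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            size = buf.tell()
            buf.seek(0)
            decoded = np.asarray(Image.open(buf), dtype=np.float64)
            mse = np.mean((pixels.astype(np.float64) - decoded) ** 2)
            psnr = 10 * np.log10(255.0**2 / mse)                # IQ metric, in dB
            print(quality, pixels.size / size, round(psnr, 2))  # CR and PSNR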

  10. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms are mostly used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
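
    A minimal vector-quantization sketch in the spirit of LBG-style codebook training (the frame length, codebook size, and synthetic signal are assumptions; SciPy's k-means stands in for the LBG/KPE/FCG algorithms compared in the paper):

        import numpy as np
        from scipy.cluster.vq import kmeans, vq

        rng = np.random.default_rng(0)
        signal = np.sin(np.linspace(0, 200 * np.pi, 8000)) + 0.05 * rng.normal(size=8000)

        frames = signal.reshape(-1, 8)     # split into 8-sample vectors
        codebook, _ = kmeans(frames, 64)   # train a 64-entry codebook
        indices, _ = vq(frames, codebook)  # transmit only the indices

        reconstructed = codebook[indices].reshape(-1)
        # Each 8-sample frame becomes one 6-bit index, so the payload shrinks
        # by roughly 8*16/6 ~ 21x for 16-bit samples, before codebook overhead.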

  11. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel...... compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques....
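
    One classic idea in this area can be sketched briefly: because a set is unordered, its elements can be sorted and only the gaps between consecutive values stored (here varint-encoded). This is a generic illustration, not necessarily the algorithm proposed in the paper:

        def encode_set(values: set[int]) -> bytes:
            out = bytearray()
            prev = 0
            for v in sorted(values):
                gap, prev = v - prev, v
                while True:                # unsigned LEB128 varint
                    byte = gap & 0x7F
                    gap >>= 7
                    out.append(byte | (0x80 if gap else 0))
                    if not gap:
                        break
            return bytes(out)

        data = {0b00000111, 0b00001000, 0b10110010, 0b11110000}
        print(len(encode_set(data)), "bytes for", len(data), "8-bit strings")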

  12. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, the expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences, where conventional knowledge discovery tools often fail.

  13. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, have gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data, called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
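
    The core idea of referential compression can be sketched in a few lines: encode the input as (reference offset, match length, literal) entries against a known reference. FRESCO's actual matching and format are far more sophisticated; the naive search below is an illustrative assumption:

        def ref_compress(target: str, reference: str, min_match: int = 4):
            entries, i = [], 0
            while i < len(target):
                best_off, best_len = -1, 0
                for off in range(len(reference)):   # naive longest-match search
                    l = 0
                    while (off + l < len(reference) and i + l < len(target)
                           and reference[off + l] == target[i + l]):
                        l += 1
                    if l > best_len:
                        best_off, best_len = off, l
                if best_len >= min_match:
                    entries.append((best_off, best_len, ""))
                    i += best_len
                else:
                    entries.append((-1, 0, target[i]))  # literal mismatch
                    i += 1
            return entries

        ref = "ACGTACGTGGTTACGT"
        tgt = "ACGTACGTCGTTACGT"   # one substitution relative to the reference
        print(ref_compress(tgt, ref))  # [(0, 8, ''), (-1, 0, 'C'), (9, 7, '')]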

  14. Anglo-American views of Gavrilo Princip

    Directory of Open Access Journals (Sweden)

    Markovich Slobodan G.

    2015-01-01

    The paper deals with Western (Anglo-American) views on the Sarajevo assassination/attentat and Gavrilo Princip. Articles on the assassination and Princip in two leading quality dailies (The Times and The New York Times) have been analysed in particular, as well as the views of leading historians and journalists who covered the subject, including R. G. D. Laffan, R. W. Seton-Watson, Winston Churchill, Sidney Fay, Bernadotte Schmitt, Rebecca West, A. J. P. Taylor, Vladimir Dedijer, Christopher Clark and Tim Butcher. In the West, the original general condemnation of the assassination and its main culprits was challenged when Rebecca West published her famous travelogue on Yugoslavia in 1941. Another Brit, the remarkable historian A. J. P. Taylor, had a much more positive view of the Sarajevo conspirators and blamed Germany and Austria-Hungary for the outbreak of the Great War. A turning point in Anglo-American perceptions was the publication of Vladimir Dedijer's monumental book The Road to Sarajevo (1966), which humanised the main conspirators, a process initiated by R. West. Dedijer's book was translated from English into all major Western languages and had an immediate impact on the understanding of the Sarajevo assassination. The rise of national antagonisms in Bosnia gradually alienated Princip from Bosnian Muslims and Croats, a process that began in the 1980s and was completed during the wars of the Yugoslav succession. Although all available sources clearly show that Princip, an ethnic Serb, gradually developed a broader Serbo-Croat and Yugoslav identity, he was ethnified and seen exclusively as a Serb by Bosnian Croats and Bosniaks and Western journalists in the 1990s. In the past century imagining Princip in Serbia and the West involved a whole spectrum of views. In interwar Anglo-American perceptions he was a fanatic and lunatic. He became humanised by Rebecca West (1941), A. J. P. Taylor showed understanding for his act (1956), he was fully

  15. Principal Investigator-in-a-Box

    Science.gov (United States)

    Young, Laurence R.

    1999-01-01

    Human performance in orbit is currently limited by several factors beyond the intrinsic awkwardness of motor control in weightlessness. Cognitive functioning can be affected by such factors as cumulative sleep loss, stress and the psychological effects of long-duration small-group isolation. When an astronaut operates a scientific experiment, the performance decrement associated with such factors can lead to lost or poor quality data and even the total loss of a scientific objective, at great cost to the sponsors and to the dismay of the Principal Investigator. In long-duration flights, as anticipated on the International Space Station and on any planetary exploration, the experimental model is further complicated by long delays between training and experiment, and the large number of experiments each crew member must perform. Although no documented studies have been published on the subject, astronauts report that an unusually large number of simple errors are made in space. Whether a result of the effects of microgravity, accumulated fatigue, stress or other factors, this pattern of increased error supports the need for a computerized decision-making aid for astronauts performing experiments. Artificial intelligence and expert systems might serve as powerful tools for assisting experiments in space. Those conducting space experiments typically need assistance exactly when the planned checklist does not apply. Expert systems, which use bits of human knowledge and human methods to respond appropriately to unusual situations, have a flexibility that is highly desirable in circumstances where an invariably predictable course of action/response does not exist. Frequently the human expert on the ground is unavailable, lacking the latest information, or not consulted by the astronaut conducting the experiment. In response to these issues, we have developed "Principal Investigator-in-a-Box," or [PI], to capture the reasoning process of the real expert, the Principal

  16. Reevaluation of JPEG image compression to digitalized gastrointestinal endoscopic color images: a pilot study

    Science.gov (United States)

    Kim, Christopher Y.

    1999-05-01

    Endoscopic images play an important role in describing many gastrointestinal (GI) disorders. The field of radiology has been on the leading edge of creating, archiving and transmitting digital images. With the advent of digital videoendoscopy, endoscopists now have the ability to generate images for storage and transmission. X-rays can be compressed 30-40X without appreciable decline in quality. We reported results of a pilot study using JPEG compression of 24-bit color endoscopic images. For that study, the results indicated that adequate compression ratios vary according to the lesion and that images could be compressed to between 31- and 99-fold smaller than the original size without an appreciable decline in quality. The purpose of this study was to expand upon the methodology of the previous study with an eye towards application on the WWW, a medium which would expand both the clinical and educational uses of color medical images. The results indicate that endoscopists are able to tolerate very significant compression of endoscopic images without loss of clinical image quality. This finding suggests that even 1 MB color images can be compressed to well under 30 KB, which is considered the maximum tolerable image size for downloading on the WWW.

  17. Operability test procedure for 241-U compressed air system and heat pump

    International Nuclear Information System (INIS)

    Freeman, R.D.

    1994-01-01

    The 241-U-701 compressed air system supplies instrument quality compressed air to Tank Farm 241-U. The supply piping to the 241-U Tank Farm is not included in the modification. Modifications to the 241-U-701 compressed air system include installation of a 15 HP Reciprocating Air Compressor, Ingersoll-Rand Model 10T3NLM-E15; an air dryer, Hankinson, Model DH-45; and miscellaneous system equipment and piping (valves, filters, etc.) to meet the design. A newly installed heat pump allows the compressor to operate within an enclosed relatively dust free atmosphere and keeps the compressor room within a standard acceptable temperature range, which makes possible efficient compressor operation, reduces maintenance, and maximizes compressor operating life. This document is an Operability Test Procedure (OTP) which will further verify (in addition to the Acceptance Test Procedure) that the 241-U-701 compressed air system and heat pump operate within their intended design parameters. The activities defined in this OTP will be performed to ensure the performance of the new compressed air system will be adequate, reliable and efficient. Completion of this OTP and sign off of the OTP Acceptance of Test Results is necessary for turnover of the compressed air system from Engineering to Operations

  18. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

    Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. Technological evolution and the digital age have led to an unparalleled usage of digital files in the current decade. This usage has resulted in an increase in the amount of data being transmitted via various channels of data communication, which has prompted the need to examine current lossless data compression algorithms and check their level of effectiveness, so as to maximally reduce the bandwidth requirement in communication and transfer of data. Four lossless data compression algorithms were selected for implementation: the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the adaptive Huffman algorithm and run-length encoding. The choice of these algorithms was based on their similarities, particularly in application areas. Their level of efficiency and effectiveness was evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language. Through the statistical analysis performed using Boxplot and ANOVA and comparison made on the four algo
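
    One of the four algorithms (run-length encoding) and the paper's size metrics can be sketched as follows (the sample input and the byte-oriented (count, symbol) format are assumptions):

        import time

        def rle_encode(data: bytes) -> bytes:
            out, i = bytearray(), 0
            while i < len(data):
                run = 1
                while i + run < len(data) and data[i + run] == data[i] and run < 255:
                    run += 1
                out += bytes((run, data[i]))   # (count, symbol) pairs
                i += run
            return bytes(out)

        data = b"AAAABBBCCDDDDDDDD" * 100
        t0 = time.perf_counter()
        encoded = rle_encode(data)
        elapsed = time.perf_counter() - t0

        ratio = len(encoded) / len(data)    # compression ratio (compressed/original)
        factor = len(data) / len(encoded)   # compression factor (its reciprocal)
        saving = (1 - ratio) * 100          # saving percentage
        print(f"ratio={ratio:.3f} factor={factor:.2f} saving={saving:.1f}% time={elapsed:.4f}s")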

  19. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone, and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  20. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  1. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  2. A comparative meta-analysis of maximal aerobic metabolism of vertebrates: implications for respiratory and cardiovascular limits to gas exchange.

    Science.gov (United States)

    Hillman, Stanley S; Hancock, Thomas V; Hedrick, Michael S

    2013-02-01

    Maximal aerobic metabolic rates (MMR) in vertebrates are supported by increased conductive and diffusive fluxes of O(2) from the environment to the mitochondria necessitating concomitant increases in CO(2) efflux. A question that has received much attention has been which step, respiratory or cardiovascular, provides the principal rate limitation to gas flux at MMR? Limitation analyses have principally focused on O(2) fluxes, though the excess capacity of the lung for O(2) ventilation and diffusion remains unexplained except as a safety factor. Analyses of MMR normally rely upon allometry and temperature to define these factors, but cannot account for much of the variation and often have narrow phylogenetic breadth. The unique aspect of our comparative approach was to use an interclass meta-analysis to examine cardio-respiratory variables during the increase from resting metabolic rate to MMR among vertebrates from fish to mammals, independent of allometry and phylogeny. Common patterns at MMR indicate universal principles governing O(2) and CO(2) transport in vertebrate cardiovascular and respiratory systems, despite the varied modes of activities (swimming, running, flying), different cardio-respiratory architecture, and vastly different rates of metabolism (endothermy vs. ectothermy). Our meta-analysis supports previous studies indicating a cardiovascular limit to maximal O(2) transport and also implicates a respiratory system limit to maximal CO(2) efflux, especially in ectotherms. Thus, natural selection would operate on the respiratory system to enhance maximal CO(2) excretion and the cardiovascular system to enhance maximal O(2) uptake. This provides a possible evolutionary explanation for the conundrum of why the respiratory system appears functionally over-designed from an O(2) perspective, a unique insight from previous work focused solely on O(2) fluxes. The results suggest a common gas transport blueprint, or Bauplan, in the vertebrate clade.

  3. An Investigation of Teacher, Principal, and Superintendent Perceptions on the Ability of the National Framework for Principal Evaluations to Measure Principals' Leadership Competencies

    Science.gov (United States)

    Lamb, Lori D.

    2014-01-01

    The purpose of this qualitative study was to investigate the perceptions of effective principals' leadership competencies; determine if the perceptions of teachers, principals, and superintendents aligned with the proposed National Framework for Principal Evaluations initiative. This study examined the six domains of leadership outlined by the…

  4. Do Qualification, Experience and Age Matter for Principals Leadership Styles?

    OpenAIRE

    Muhammad Javed Sawati; Saeed Anwar; Muhammad Iqbal Majoka

    2013-01-01

    The main focus of the present study was to find out the prevalent leadership styles of principals in government schools of Khyber Pakhtunkhwa and to find the relationship of leadership styles with the qualifications, age and experience of the principals. On the basis of the analyzed data, four major leadership styles of the principals were identified: Eclectic, Democratic, Autocratic, and Free-rein. However, a small proportion of the principals had no dominant leadership style. This study shows that princip...

  5. Using autoencoders for mammogram compression.

    Science.gov (United States)

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

    This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually large, training of autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and the structural similarity index. It is found from the experimental results that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
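
    The patch-based training strategy can be sketched in PyTorch (the patch size, layer widths, and random stand-in data are assumptions; the paper's networks and data differ):

        import torch
        from torch import nn

        patches = torch.rand(1024, 64)        # stand-in 8x8 patches, flattened

        model = nn.Sequential(
            nn.Linear(64, 16), nn.ReLU(),     # encoder: 64 -> 16 (the code)
            nn.Linear(16, 64), nn.Sigmoid(),  # decoder: 16 -> 64
        )
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        for _ in range(200):
            recon = model(patches)
            loss = nn.functional.mse_loss(recon, patches)  # mean square error
            opt.zero_grad()
            loss.backward()
            opt.step()

        # The 16-value bottleneck activations serve as the compressed patches.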

  6. Culture: copying, compression, and conventionality.

    Science.gov (United States)

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, ; Smith, Tamariz, & Kirby, ). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning (storing patterns in memory) and reproducing (producing the patterns again). This paper manipulates the presence of learning in a simple iterated drawing design experiment. We find that learning seems to be the causal factor behind the increase in compressibility observed in the transmitted information, while reproducing is a source of random heritable innovations. Only a theory invoking these two aspects of cultural learning will be able to explain human culture's fundamental balance between stability and innovation. Copyright © 2014 Cognitive Science Society, Inc.

  7. Instability of ties in compression

    DEFF Research Database (Denmark)

    Buch-Hansen, Thomas Cornelius

    2013-01-01

    Masonry cavity walls are loaded by wind pressure and by vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall in a cavity wall. Large cavity walls are furthermore loaded by differential movements from...... the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of them are iterative, since...... exact instability solutions are complex to derive, not to mention the extra complexity of introducing dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie

  8. Diagnostic imaging of compression neuropathy

    International Nuclear Information System (INIS)

    Weishaupt, D.; Andreisek, G.

    2007-01-01

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remains the cornerstone of the diagnostic work-up, in certain cases imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.)

  9. [Medical image compression: a review].

    Science.gov (United States)

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex activity, based on evidence; it consists of information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support for medical procedures in diagnosis and follow-up. However, the amount of information generated by image-capturing gadgets quickly exceeds the storage availability of radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  10. Compressed optimization of device architectures

    Energy Technology Data Exchange (ETDEWEB)

    Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2014-09-01

    Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  11. Compressed air energy storage system

    Science.gov (United States)

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.
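
    As a back-of-the-envelope illustration (not part of the patent), the energy capacity of such an air storage reservoir can be bounded by the ideal isothermal compression work W = P2 * V * ln(P2/P1); real plants recover less from the air alone and add energy through the combustible fuel. The cavern size and pressures below are placeholders, not figures from the patent.

        import math

        def isothermal_storage_energy(volume_m3, p_store_pa, p_ambient_pa):
            """Ideal isothermal work to charge a reservoir of volume V to p_store:
            W = p_store * V * ln(p_store / p_ambient), an upper bound on what an
            ideal isothermal expander could later return to the grid."""
            return p_store_pa * volume_m3 * math.log(p_store_pa / p_ambient_pa)

        # Hypothetical cavern: 300,000 m^3 charged to 7 MPa from 0.1 MPa ambient.
        E = isothermal_storage_energy(3.0e5, 7.0e6, 1.0e5)
        print(f"{E:.3g} J = {E / 3.6e9:.0f} MWh")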

  12. Compressing spatio-temporal trajectories

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian

    2009-01-01

    such that the most common spatio-temporal queries can still be answered approximately after the compression has taken place. In the process, we develop an implementation of the Douglas–Peucker path-simplification algorithm which works efficiently even in the case where the polygonal path given as input is allowed to self-intersect. For a polygonal path of size n, the processing time is O(n log^k n) for k = 2 or k = 3, depending on the type of simplification.
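
    For reference, the classical Douglas–Peucker procedure that the paper's implementation builds on can be sketched in a few lines of Python; this naive recursion is O(n^2) in the worst case, whereas the variant described in the abstract achieves O(n log^k n) and handles self-intersecting paths.

        import math

        def _perp_dist(p, a, b):
            """Distance from point p to the line through segment endpoints a and b."""
            (px, py), (ax, ay), (bx, by) = p, a, b
            dx, dy = bx - ax, by - ay
            seg = math.hypot(dx, dy)
            if seg == 0.0:
                return math.hypot(px - ax, py - ay)
            return abs(dx * (ay - py) - dy * (ax - px)) / seg

        def douglas_peucker(points, eps):
            """Keep the endpoints; recurse on the farthest interior point
            if it deviates from the chord by more than eps."""
            if len(points) < 3:
                return list(points)
            dists = [_perp_dist(q, points[0], points[-1]) for q in points[1:-1]]
            i_max = max(range(len(dists)), key=dists.__getitem__) + 1
            if dists[i_max - 1] <= eps:
                return [points[0], points[-1]]
            left = douglas_peucker(points[:i_max + 1], eps)
            right = douglas_peucker(points[i_max:], eps)
            return left[:-1] + right   # drop the duplicated split point

        path = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
        print(douglas_peucker(path, eps=1.0))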

  13. Trust Me, Principal, or Burn Out! The Relationship between Principals' Burnout and Trust in Students and Parents

    Science.gov (United States)

    Ozer, Niyazi

    2013-01-01

    The purpose of this study was to determine primary school principals' views on trust in students and parents, and to explore the relationships between principals' levels of professional burnout and their trust in students and parents. To this end, the Principal Trust Survey and Friedman Principal Burnout scales were administered to 119…

  14. [Compression treatment for burned skin].

    Science.gov (United States)

    Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched

    2012-02-01

    The regularity of a compressive knit is defined as its ability to perform its function on burnt skin. This property is essential to avoid rejection of the material or toxicity problems. Objective: to make knits biocompatible with severely burnt human skin. We fabricated knits from elastic material. To ensure good adhesion to the skin, the elastic material was knitted in a tight loop structure. The length of yarn absorbed per stitch and the raw material were varied for each sample. The physical properties of each sample were measured and compared. Surface modifications were made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons. In addition, they recover their compressive properties after washing. The jojoba-oil microcapsules hydrate burnt human skin; this moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are biocompatible with burnt skin. The blend of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

  15. Compressibility effects on turbulent mixing

    Science.gov (United States)

    Panickacheril John, John; Donzis, Diego

    2016-11-01

    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence, with a focus on the fundamental mechanisms responsible for such effects, using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6, and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We find that, as in shear layers, mixing is reduced as the Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions, we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases the mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for the better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  16. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image.
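
    The custom full-frame cosine-transform hardware is not described in the abstract; as a hedged software sketch of the same idea, one can transform the entire image, keep only the largest tenth of the coefficients, and reconstruct. The threshold-based coefficient selection below is an assumption; a real encoder would also quantize and entropy-code the retained coefficients to realize the 10:1 ratio.

        import numpy as np
        from scipy.fft import dctn, idctn

        def fullframe_dct_compress(img, keep=0.1):
            """Zero all but the largest `keep` fraction of full-frame DCT
            coefficients, then reconstruct the image."""
            c = dctn(img.astype(float), norm="ortho")
            thresh = np.quantile(np.abs(c), 1.0 - keep)
            c[np.abs(c) < thresh] = 0.0
            return idctn(c, norm="ortho")

        rng = np.random.default_rng(1)
        img = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # smooth test image
        rec = fullframe_dct_compress(img, keep=0.1)
        print("RMSE:", float(np.sqrt(np.mean((img - rec) ** 2))))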

  17. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression, an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression; the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
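
    coil's edit-tree coding is not reproduced here, but its core intuition (storing a sequence as a short list of edits against a similar, already-stored sequence) can be sketched with Python's standard difflib; the reference-selection and entropy-coding stages of the real tool are omitted.

        import difflib

        def encode_against(reference, seq):
            """Represent seq as copy/insert operations relative to reference."""
            sm = difflib.SequenceMatcher(a=reference, b=seq, autojunk=False)
            ops = []
            for tag, i1, i2, j1, j2 in sm.get_opcodes():
                if tag == "equal":
                    ops.append(("copy", i1, i2))        # reuse a span of the reference
                else:
                    ops.append(("insert", seq[j1:j2]))  # emit literal new bases
            return ops

        def decode_against(reference, ops):
            parts = [reference[op[1]:op[2]] if op[0] == "copy" else op[1] for op in ops]
            return "".join(parts)

        ref = "ACGTACGTACGGTTACGTACGT"
        seq = "ACGTACGTACGATTACGTACGTAA"
        ops = encode_against(ref, seq)
        assert decode_against(ref, ops) == seq
        print(ops)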

  18. Compression of realistic laser pulses in hollow-core photonic bandgap fibers

    DEFF Research Database (Denmark)

    Lægsgaard, Jesper; Roberts, John

    2009-01-01

    Dispersive compression of chirped few-picosecond pulses at the microjoule level in a hollow-core photonic bandgap fiber is studied numerically. The performance of ideal parabolic input pulses is compared to that of pulses from a narrowband picosecond oscillator broadened by self-phase modulation during amplification. It is shown that the parabolic pulses are superior for compression of high-quality femtosecond pulses up to the few-megawatt level. With peak powers of 5-10 MW or higher, there is no significant difference in power scaling and pulse quality between the two pulse types for comparable values of power, duration, and bandwidth. The same conclusion holds for the peak power and energy of solitons formed beyond the point of maximal compression. Long-pass filtering of these solitons is shown to be a promising route to clean solitonlike output pulses with peak powers of several MW.
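
    The paper's simulations solve a full nonlinear propagation model; the purely linear ingredient (a chirped Gaussian pulse recompressed by anomalous group-velocity dispersion) can be sketched with an FFT. All parameter values are illustrative and not taken from the paper.

        import numpy as np

        T0, C, beta2 = 1.0e-12, 10.0, -20e-27   # 1 ps pulse, chirp 10, GVD in s^2/m
        t = np.linspace(-40e-12, 40e-12, 4096)
        E0 = np.exp(-(1 + 1j * C) * t**2 / (2 * T0**2))   # up-chirped Gaussian
        w = 2 * np.pi * np.fft.fftfreq(t.size, t[1] - t[0])

        def rms_width(E):
            I = np.abs(E) ** 2
            tm = (t * I).sum() / I.sum()
            return np.sqrt((((t - tm) ** 2) * I).sum() / I.sum())

        def propagate(E, z):
            """Linear dispersion only: multiply the spectrum by exp(i*beta2/2*w^2*z)."""
            return np.fft.ifft(np.fft.fft(E) * np.exp(0.5j * beta2 * w**2 * z))

        zs = np.linspace(0.0, 20.0, 400)        # metres of fibre
        widths = [rms_width(propagate(E0, z)) for z in zs]
        k = int(np.argmin(widths))
        print(f"maximal compression near z = {zs[k]:.2f} m "
              f"(theory: {C / (1 + C**2) * T0**2 / abs(beta2):.2f} m), "
              f"width ratio {widths[k] / rms_width(E0):.3f}")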

  19. A Numerical and Experimental Study of Ejector Internal Flow Structure and Geometry Modification for Maximized Performance

    Science.gov (United States)

    Falsafioon, Mehdi; Aidoun, Zine; Poirier, Michel

    2017-12-01

    A wide range of industrial refrigeration systems are good candidates to benefit from the cooling and refrigeration potential of supersonic ejectors. These are thermally activated and can use waste heat recovered from industrial processes, where it is abundantly generated and rejected to the environment. In other circumstances, low-cost heat from biomass or solar energy may also be used in order to produce a cooling effect. Ejector performance is, however, typically modest and needs to be maximized in order to take full advantage of the simplicity and low cost of the technology. In the present work, the behavior of ejectors with different nozzle exit positions has been investigated using a prototype as well as a CFD model. The prototype was used to measure the performance of the ejector with refrigerant (R-134a) flowing inside it. For the CFD model, the ejectors are assumed to be axisymmetric about the x-axis, so the generated model is two-dimensional. The preliminary CFD results are validated with experimental data over a wide range of conditions and are in good agreement in terms of entrainment and compression ratios. Next, the flow patterns of four different topologies are studied in order to identify the optimum geometry in terms of ejector entrainment improvement. Finally, the numerical simulations were used to find an optimum value corresponding to a maximized entrainment ratio for fixed operating conditions.
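
    For context, the two validation metrics quoted above are conventionally defined as the ratios below; the definitions are standard in the ejector literature, and the numbers are placeholders rather than measurements from this study.

        def entrainment_ratio(m_secondary_kg_s, m_primary_kg_s):
            """Entrained (suction) mass flow per unit of motive mass flow."""
            return m_secondary_kg_s / m_primary_kg_s

        def compression_ratio(p_outlet_pa, p_secondary_pa):
            """Pressure lift imparted to the entrained stream."""
            return p_outlet_pa / p_secondary_pa

        print(entrainment_ratio(0.012, 0.030))   # 0.40
        print(compression_ratio(4.0e5, 2.0e5))   # 2.0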

  20. Radar fall detection using principal component analysis

    Science.gov (United States)

    Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem

    2016-05-01

    Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA-based technique provides a performance improvement over conventional feature extraction methods.
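
    The radar preprocessing that produces the time-frequency images is not shown here; the sketch below illustrates only the eigen-image classification step under stated assumptions: vectorized training images, principal components obtained by SVD, and a nearest-class-mean classifier. All data are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        def pca_fit(X, k):
            """Rows of X are vectorized images; return the mean and top-k components."""
            mu = X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
            return mu, Vt[:k]

        def project(X, mu, comps):
            return (X - mu) @ comps.T

        # Synthetic stand-ins for 32x32 time-frequency images (flattened to 1024).
        falls = rng.normal(loc=1.0, size=(50, 1024))
        other = rng.normal(loc=0.0, size=(50, 1024))
        X = np.vstack([falls, other])
        y = np.array([1] * 50 + [0] * 50)

        mu, comps = pca_fit(X, k=10)
        Z = project(X, mu, comps)
        means = {c: Z[y == c].mean(axis=0) for c in (0, 1)}

        test = project(rng.normal(loc=1.0, size=(1, 1024)), mu, comps)
        pred = min(means, key=lambda c: np.linalg.norm(test - means[c]))
        print("predicted class:", pred)   # expect 1 ('fall') for this sample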