WorldWideScience

Sample records for preprocessing confusion matrices

  1. Confusing confusability

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Lindegaard, Martin; Bundesen, Claus

    2015-01-01

    in neuropsychological research. We conclude that it is premature to control for this variable when selecting stimuli in studies of reading and alexia. Although letter confusability may play a role in (impaired) reading, it remains to be determined how this measure should be calculated, and what effect it may have...

  2. Cosmic confusion

    CERN Document Server

    Magueijo, J

    1994-01-01

    We propose to minimise the cosmic confusion between Gaussian and non-Gaussian theories by investigating the structure in the m's for each multipole of the cosmic radiation temperature anisotropies. We prove that Gaussian theories are (nearly) the only theories which treat all the m's equally. Hence we introduce a set of invariant measures of "m-preference" to be seen as non-Gaussianity indicators. We then derive the distribution function for the quadrupole "m-preference" measure in Gaussian theories. A class of physically motivated toy non-Gaussian theories is introduced as an example. We show how the quadrupole m-structure is crucial in reducing the confusion between these theories and Gaussian theories.

  3. Simplifying the Visualization of Confusion Matrix (Poster)

    OpenAIRE

    Beauxis-Aussalet, Emmanuelle; Hardman, Hazel Lynda

    2014-01-01

    Supervised Machine Learning techniques can automatically extract information from a variety of multimedia sources, e.g., image, text, sound, video. But they produce imperfect results since the multimedia content can be misinterpreted. Errors are commonly measured using confusion matrices, encoding type I and II errors for each class. Non-expert users encounter difficulties in understanding and using confusion matrices. They need to be read both column- and row-wise, which is tedious and error ...
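
    The column- and row-wise reading the poster refers to can be made concrete with a small sketch (illustrative labels, not data from the poster): with rows as actual classes and columns as predictions, a class's misses sit in its row and its false alarms in its column.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = actual class, columns = predicted class (one common convention)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels for a 3-class problem (illustrative data only).
y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 2, 2, 2, 0]
cm = confusion_matrix(y_true, y_pred, 3)

# Row-wise reading: misses (type II errors) for class k are the off-diagonal
# entries in row k. Column-wise reading: false alarms (type I errors) for
# class k are the off-diagonal entries in column k.
misses = cm.sum(axis=1) - np.diag(cm)        # per-class false negatives
false_alarms = cm.sum(axis=0) - np.diag(cm)  # per-class false positives
```

    Having to combine both readings for every class is exactly the tedium the poster's visualization tries to remove.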

  4. LANDSAT data preprocessing

    Science.gov (United States)

    Austin, W. W.

    1983-01-01

    The effect on LANDSAT data of a Sun angle correction, an intersatellite LANDSAT-2 and LANDSAT-3 data range adjustment, and the atmospheric correction algorithm was evaluated. Fourteen 1978 crop year LACIE sites were used as the site data set. The preprocessing techniques were applied to multispectral scanner channel data, and transformed data were plotted and used to analyze the effectiveness of the preprocessing techniques. Ratio transformations effectively reduce the need for preprocessing techniques to be applied directly to the data. Subtractive transformations are more sensitive to Sun angle and atmospheric corrections than ratios. Preprocessing techniques, other than those applied at the Goddard Space Flight Center, should only be applied as an option of the user. While performed on LANDSAT data, the study results are also applicable to meteorological satellite data.
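
    The point about ratio versus subtractive transformations can be illustrated with a toy calculation (hypothetical band values, not from the study): a multiplicative gain such as a Sun-angle factor cancels in a band ratio but not in a band difference.

```python
# Hypothetical digital numbers for two multispectral scanner bands at one pixel.
band_a, band_b = 40.0, 20.0

# A Sun-angle change scales both bands by (roughly) the same gain g.
g = 0.7
ratio_before = band_a / band_b
ratio_after = (g * band_a) / (g * band_b)   # the gain cancels in the ratio
diff_before = band_a - band_b
diff_after = g * band_a - g * band_b        # the gain does not cancel here
```

    This is why ratio-transformed data need less explicit Sun-angle correction, while subtractive transforms remain sensitive to it.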

  5. Simplifying the Visualization of Confusion Matrix (Poster)

    NARCIS (Netherlands)

    Beauxis-Aussalet, E.M.A.L.; Hardman, L.

    2014-01-01

    Supervised Machine Learning techniques can automatically extract information from a variety of multimedia sources, e.g., image, text, sound, video. But they produce imperfect results since the multimedia content can be misinterpreted. Errors are commonly measured using confusion matrices, encoding ty

  6. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    Normalization is a preprocessing stage for any type of problem statement. It plays an especially important role in fields such as soft computing and cloud computing, where data are scaled down or up to a suitable range before being used in further stages. There are many normalization techniques, namely Min-Max normalization, Z-score normalization, and decimal scaling normalization. By referring to these normalization techniques we are ...
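
    The three techniques named in the abstract can be sketched as follows (a minimal illustration; libraries such as scikit-learn provide hardened versions, and the decimal-scaling power-of-ten rule here is a simplification for positive integer data):

```python
import math

def min_max(xs, new_min=0.0, new_max=1.0):
    """Linearly rescale values into [new_min, new_max]."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) * (new_max - new_min) + new_min for x in xs]

def z_score(xs):
    """Center on the mean and scale by the (population) standard deviation."""
    mu = sum(xs) / len(xs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / len(xs))
    return [(x - mu) / sigma for x in xs]

def decimal_scaling(xs):
    """Divide by 10^j where j is the smallest power putting |x| below 1."""
    j = len(str(int(max(abs(x) for x in xs))))
    return [x / 10 ** j for x in xs]

data = [10, 20, 30, 40, 50]
scaled = min_max(data)
standardized = z_score(data)
decimal = decimal_scaling(data)
```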

  7. Predicting consonant recognition and confusions in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2017-01-01

    confusion groups. The large predictive power of the proposed model suggests that adaptive processes in the auditory preprocessing in combination with a cross-correlation based template-matching back end can account for some of the processes underlying consonant perception in normal-hearing listeners...

  8. High speed preprocessing system

    Indian Academy of Sciences (India)

    M Sankar Kishore

    2000-10-01

    In systems employing tracking, the area of interest is recognized using a high resolution camera and is handed over to the low resolution receiver. The images seen by the low resolution receiver and by the operator through the high resolution camera are different in spatial resolution. In order to establish the correlation between these two images, the high-resolution camera image needs to be preprocessed and made similar to the low-resolution receiver image. This paper discusses the implementation of a suitable preprocessing technique, emphasis being given to develop a system both in hardware and software to reduce processing time. By applying different software/hardware techniques, the execution time has been brought down from a few seconds to a few milliseconds for a typical set of conditions. The hardware is designed around i486 processors and software is developed in PL/M. The system is tested to match the images obtained by two different sensors of the same scene. The hardware and software have been evaluated with different sets of images.
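
    The core preprocessing idea, making a high-resolution image comparable to a low-resolution sensor's view, can be approximated by block averaging. This is only a schematic stand-in for the paper's hardware pipeline, with made-up image data:

```python
import numpy as np

def block_average(img, factor):
    """Downsample by averaging factor x factor blocks, a simple way to mimic
    what a lower-resolution sensor would see of the same scene."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    trimmed = img[: h2 * factor, : w2 * factor]
    return trimmed.reshape(h2, factor, w2, factor).mean(axis=(1, 3))

high_res = np.arange(16.0).reshape(4, 4)   # toy 4x4 "camera" image
low_res_like = block_average(high_res, 2)  # 2x2 approximation of the receiver view
```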

  9. Clarifying nipple confusion.

    Science.gov (United States)

    Zimmerman, E; Thompson, K

    2015-11-01

    Nipple confusion, an infant's difficulty with or preference for one feeding mechanism over another after exposure to artificial nipple(s), has been widely debated. This is in part due to conflicting statements, one by the American Academy of Pediatrics in 2005 suggesting that infants should be given a pacifier to protect against Sudden Infant Death Syndrome, and the other by the World Health Organization in 2009 stating that breastfeeding infants should never be given artificial nipples. Despite the limited and inconsistent evidence, nipple confusion is widely believed by practitioners. Therefore, there is a unique opportunity to examine the evidence surrounding nipple confusion by assessing the research that supports/refutes that bottle feeding/pacifier use impedes breastfeeding efficacy/success/duration. This review examined 14 articles supporting and refuting nipple confusion. These articles were reviewed using the Johns Hopkins Nursing Evidence-Based Practice Rating Scale. Based on our review, we have found emerging evidence to suggest the presence of nipple confusion only as it relates to bottle usage and found very little evidence to support nipple confusion with regards to pacifier use. The primary difficulty in conclusively studying nipple confusion is establishing causality, namely determining whether bottles'/pacifiers' nipples are causing infants to refuse the breast or whether they are simply markers of other maternal/infant characteristics. Future research should focus on prospectively examining the causality of nipple confusion.

  10. Nietzsche: A Confused Philosopher?

    Directory of Open Access Journals (Sweden)

    Paul B. Badey

    2012-06-01

    Full Text Available Nietzsche was one of the 19th-century philosophers who cast his supreme presence on the world. His nihilism, existentialism, euphoria, and eventually unstable nature have presented him as a confused philosopher. His attacks on morality, religion, and freedom, and his notion that God is dead, make him an interesting character. This paper attempts an understanding of Nietzsche's philosophy based on his concept of the superman, the doctrine of the will to power, and his concept of a morality that transcends all moralities, and concludes that rather than Nietzsche being confused, that confusion is really an art of philosophizing.

  11. Preprocessing of NMR metabolomics data.

    Science.gov (United States)

    Euceda, Leslie R; Giskeødegård, Guro F; Bathen, Tone F

    2015-05-01

    Metabolomics involves the large scale analysis of metabolites and thus, provides information regarding cellular processes in a biological sample. Independently of the analytical technique used, a vast amount of data is always acquired when carrying out metabolomics studies; this results in complex datasets with large amounts of variables. This type of data requires multivariate statistical analysis for its proper biological interpretation. Prior to multivariate analysis, preprocessing of the data must be carried out to remove unwanted variation such as instrumental or experimental artifacts. This review aims to outline the steps in the preprocessing of NMR metabolomics data and describe some of the methods to perform these. Since using different preprocessing methods may produce different results, it is important that an appropriate pipeline exists for the selection of the optimal combination of methods in the preprocessing workflow.

  12. Preprocessing of raw metabonomic data.

    Science.gov (United States)

    Vettukattil, Riyas

    2015-01-01

    Recent advances in metabolic profiling techniques allow global profiling of metabolites in cells, tissues, or organisms, using a wide range of analytical techniques such as nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS). The raw data acquired from these instruments are abundant with technical and structural complexity, which makes it statistically difficult to extract meaningful information. Preprocessing involves various computational procedures where data from the instruments (gas chromatography (GC)/liquid chromatography (LC)-MS, NMR spectra) are converted into a usable form for further analysis and biological interpretation. This chapter covers the common data preprocessing techniques used in metabonomics and is primarily focused on baseline correction, normalization, scaling, peak alignment, detection, and quantification. Recent years have witnessed development of several software tools for data preprocessing, and an overview of the frequently used tools in data preprocessing pipeline is covered.
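
    Two of the steps listed above, baseline correction and normalization, can be sketched minimally as follows. The rolling-minimum baseline is a crude stand-in for the polynomial or asymmetric-least-squares fits used in practice, and the synthetic spectrum is invented for illustration:

```python
import numpy as np

def rolling_min_baseline(spectrum, window):
    """Crude baseline estimate: the local minimum in a sliding window."""
    n = len(spectrum)
    base = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        base[i] = spectrum[lo:hi].min()
    return base

def total_area_normalize(spectrum):
    """Scale so intensities sum to 1, making samples comparable."""
    return spectrum / spectrum.sum()

# Synthetic spectrum: two Gaussian peaks on a constant baseline offset of 2.
x = np.arange(100)
spectrum = (2.0
            + np.exp(-0.5 * ((x - 30) / 3) ** 2)
            + np.exp(-0.5 * ((x - 70) / 3) ** 2))
corrected = spectrum - rolling_min_baseline(spectrum, window=15)
normalized = total_area_normalize(corrected)
```

    After correction the flat offset is gone and only the peaks remain, which is the point of the preprocessing stage before any multivariate analysis.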

  13. Data preprocessing in data mining

    CERN Document Server

    García, Salvador; Herrera, Francisco

    2015-01-01

    Data Preprocessing for Data Mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data directly taken from the source will likely have inconsistencies and errors or, most importantly, will not be ready to be considered for a data mining process. Furthermore, the increasing amount of data in recent science, industry and business applications calls for more complex tools to analyze it. Thanks to data preprocessing, it is possible to convert the impossible into possible, adapting the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes the data reduction techniques, which aim at reducing the complexity of the data, detecting or removing irrelevant and noisy elements from the data. This book is intended to review the tasks that fill the gap between the data acquisition from the source and the data mining process. A comprehensive look from a practical point of view, including basic concepts and surveying t...

  14. Visualization of Confusion Matrix for Non-Expert Users (Poster)

    NARCIS (Netherlands)

    Beauxis-Aussalet, E.M.A.L.; Hardman, L.

    2014-01-01

    Machine Learning techniques can automatically extract information from a variety of multimedia sources, e.g., image, text, sound, video. But they produce imperfect results since the multimedia content can be misinterpreted. Machine Learning errors are commonly measured using confusion matrices. They

  15. CONFUSION WITH TELEPHONE NUMBERS

    CERN Document Server

    Telecom Service

    2002-01-01

    The area code is now required for all telephone calls within Switzerland. Unfortunately this is causing some confusion. CERN has received complaints that incoming calls intended for CERN mobile phones are being directed to private subscribers. This is caused by mistakenly dialing the WRONG code (e.g. 022) in front of the mobile number. In order to avoid these problems, please inform your correspondents that the correct numbers are: 079 201 XXXX from Switzerland; 0041 79 201 XXXX from other countries. Telecom Service

  17. Optimal Preprocessing Of GPS Data

    Science.gov (United States)

    Wu, Sien-Chong; Melbourne, William G.

    1994-01-01

    Improved technique for preprocessing data from Global Positioning System receiver reduces processing time and number of data to be stored. Optimal in the sense that it maintains strength of data. Also increases ability to resolve ambiguities in numbers of cycles of received GPS carrier signals.

  18. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, and therefore are essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting, there is so far no systematic research to study and compare their performance. How to select effective techniques of feature preprocessing in a forecasting model remains a problem. In this paper, the authors conduct a comprehensive study of existing feature preprocessing techniques to evaluate their empirical performance in time series forecasting. It is demonstrated in our experiment that effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guidance for researchers on effectively selecting feature preprocessing techniques and integrating them with time series forecasting models.

  19. Preprocessing of compressed digital video

    Science.gov (United States)

    Segall, C. Andrew; Karunaratne, Passant V.; Katsaggelos, Aggelos K.

    2000-12-01

    Pre-processing algorithms improve on the performance of a video compression system by removing spurious noise and insignificant features from the original images. This increases compression efficiency and attenuates coding artifacts. Unfortunately, determining the appropriate amount of pre-filtering is a difficult problem, as it depends on both the content of an image as well as the target bit-rate of the compression algorithm. In this paper, we explore a pre-processing technique that is loosely coupled to the quantization decisions of a rate control mechanism. This technique results in a pre-processing system that operates directly on the Displaced Frame Difference (DFD) and is applicable to any standard-compatible compression system. Results explore the effect of several standard filters on the DFD. An adaptive technique is then considered.

  20. Sparse and Unique Nonnegative Matrix Factorization Through Data Preprocessing

    CERN Document Server

    Gillis, Nicolas

    2012-01-01

    Nonnegative matrix factorization (NMF) has become a very popular technique in machine learning because it automatically extracts meaningful features through a sparse and part-based representation. However, NMF has the drawback of being highly ill-posed, that is, there typically exist many different but equivalent factorizations. In this paper, we introduce a completely new way of obtaining more well-posed NMF problems whose solutions are sparser. Our technique is based on the preprocessing of the nonnegative input data matrix, and relies on the theory of M-matrices and the geometric interpretation of NMF. This approach provably leads to optimal and sparse solutions under the separability assumption of Donoho and Stodden (NIPS, 2003), and, for rank-three matrices, makes the number of exact factorizations finite. We illustrate the effectiveness of our technique on several image datasets.
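
    For context on what an NMF solver computes, here is a generic Lee-Seung multiplicative-update sketch. This is a standard baseline algorithm, not the preprocessing proposed in the paper, and the toy matrix is invented for illustration:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# A nonnegative matrix of rank 2 should be reconstructed almost exactly.
V = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) @ \
    np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 2.0]])
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H)
```

    The ill-posedness the abstract mentions is visible here: different random seeds yield different but similarly accurate factor pairs (W, H).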

  1. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    Science.gov (United States)

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary, since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Much of the prior research in detection of seizures has been developed based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved accuracy for classification is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation of performance consistency. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates, and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time, while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0, and the corresponding precision values are 1. Numerical results suggest that these
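
    The metrics reported above are all derived from a single 2x2 confusion matrix; a sketch with toy counts (not the paper's data) shows how:

```python
def binary_metrics(tp, fp, fn, tn):
    """Derive standard metrics from one binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    tpr = tp / (tp + fn)          # true positive rate (sensitivity)
    tnr = tn / (tn + fp)          # true negative rate (specificity)
    fpr = fp / (fp + tn)          # false positive rate
    fnr = fn / (fn + tp)          # false negative rate
    precision = tp / (tp + fp)
    return accuracy, tpr, tnr, fpr, fnr, precision

# The paper's perfect case: no false positives or false negatives,
# so TPR = TNR = precision = 1 and FPR = FNR = 0.
perfect = binary_metrics(tp=50, fp=0, fn=0, tn=50)
```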

  2. Random matrices

    CERN Document Server

    Mehta, Madan Lal

    1990-01-01

    Since the publication of Random Matrices (Academic Press, 1967) so many new results have emerged both in theory and in applications, that this edition is almost completely revised to reflect the developments. For example, the theory of matrices with quaternion elements was developed to compute certain multiple integrals, and the inverse scattering theory was used to derive asymptotic results. The discovery of Selberg's 1944 paper on a multiple integral also gave rise to hundreds of recent publications. This book presents a coherent and detailed analytical treatment of random matrices, leading

  3. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, therefore are essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting......, there is so far no systematic research to study and compare their performance. How to select effective techniques of feature preprocessing in a forecasting model remains a problem. In this paper, the authors conduct a comprehensive study of existing feature preprocessing techniques to evaluate their empirical...... performance in time series forecasting. It is demonstrated in our experiment that, effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guidance for researchers on effectively selecting feature preprocessing techniques and integrating them with time...

  4. A Gender Recognition Approach with an Embedded Preprocessing

    Directory of Open Access Journals (Sweden)

    Md. Mostafijur Rahman

    2015-05-01

    Full Text Available Gender recognition from facial images has become an empirical aspect in the present world. It is one of the main problems of computer vision, and research has been conducted on it. Though several techniques have been proposed, most of them focus on facial images in controlled situations. But the problem arises when the classification is performed in uncontrolled conditions like a high rate of noise, lack of illumination, etc. To overcome these problems, we propose a new gender recognition framework which first preprocesses and enhances the input images using Adaptive Gamma Correction with Weighting Distribution. We used the Labeled Faces in the Wild (LFW) database for our experimental purpose, which contains real-life images taken in uncontrolled conditions. For measuring the performance of our proposed method, we have used the confusion matrix, precision, recall, F-measure, True Positive Rate (TPR), and False Positive Rate (FPR). In every case, our proposed framework performs better than other existing state-of-the-art techniques.
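
    An Adaptive Gamma Correction with Weighting Distribution style transform can be sketched roughly as follows. This is a simplified reading of the AGCWD idea (per-intensity gamma driven by a weighted histogram CDF); the alpha value and other details are assumptions, and the test image is synthetic:

```python
import numpy as np

def agcwd_like(image, alpha=0.5):
    """Sketch of an AGCWD-style enhancement: each gray level l is remapped
    with gamma = 1 - weighted_cdf(l), so well-populated dark regions are
    brightened more. Simplified; not the paper's exact implementation."""
    img = image.astype(np.float64)
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    pdf = hist / hist.sum()
    # Weighting distribution compresses the histogram's dynamic range.
    lo, hi = pdf.min(), pdf.max()
    pdf_w = hi * ((pdf - lo) / (hi - lo)) ** alpha
    cdf_w = np.cumsum(pdf_w) / pdf_w.sum()
    levels = np.arange(256)
    mapped = 255.0 * (levels / 255.0) ** (1.0 - cdf_w)
    return mapped[image].astype(np.uint8)

# A dark synthetic image: enhancement should raise its mean brightness.
dark = np.arange(64, dtype=np.uint8).reshape(8, 8)
enhanced = agcwd_like(dark)
```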

  5. Preprocessing and Morphological Analysis in Text Mining

    Directory of Open Access Journals (Sweden)

    Krishna Kumar Mohbey; Sachin Tiwari

    2011-12-01

    Full Text Available This paper is based on the preprocessing activities that are performed by software or language translators before applying mining algorithms to huge data. Text mining is an important area of data mining, and it plays a vital role in extracting useful information from a huge database or data warehouse. But before applying text mining or an information extraction process, preprocessing is a must, because the given data or dataset may contain noisy, incomplete, inconsistent, dirty, and unformatted data. In this paper we try to collect the necessary requirements for preprocessing. Once the preprocessing task is complete, we can easily extract the knowledgeable information using a mining strategy. This paper also provides information about the analysis of data, like tokenization and stemming, and semantic analysis, like phrase recognition and parsing. It also collects the procedures for preprocessing data, i.e., it describes how stemming, tokenization, and parsing are applied.
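
    Tokenization and stemming, two of the preprocessing steps described, can be sketched with a lowercase tokenizer and a naive suffix stripper. The suffix list is an illustrative assumption; real systems would use something like the Porter stemmer:

```python
import re

SUFFIXES = ("ing", "edly", "ed", "es", "s")  # longest-first, toy list

def tokenize(text):
    """Lowercase word tokenizer (a stand-in for a real tokenizer)."""
    return re.findall(r"[a-z]+", text.lower())

def stem(token):
    """Naive suffix stripping; keeps at least a 3-letter stem."""
    for suf in SUFFIXES:
        if token.endswith(suf) and len(token) - len(suf) >= 3:
            return token[: -len(suf)]
    return token

tokens = tokenize("Preprocessing removes noisy, unformatted tokens")
stems = [stem(t) for t in tokens]
```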

  6. Random Matrices

    CERN Document Server

    Stephanov, M A; Wettig, T

    2005-01-01

    We review elementary properties of random matrices and discuss widely used mathematical methods for both Hermitian and non-Hermitian random matrix ensembles. Applications to a wide range of physics problems are summarized. This paper originally appeared as an article in the Wiley Encyclopedia of Electrical and Electronics Engineering.

  7. Formal matrices

    CERN Document Server

    Krylov, Piotr

    2017-01-01

    This monograph is a comprehensive account of formal matrices, examining homological properties of modules over formal matrix rings and summarising the interplay between Morita contexts and K theory. While various special types of formal matrix rings have been studied for a long time from several points of view and appear in various textbooks, for instance to examine equivalences of module categories and to illustrate rings with one-sided non-symmetric properties, this particular class of rings has, so far, not been treated systematically. Exploring formal matrix rings of order 2 and introducing the notion of the determinant of a formal matrix over a commutative ring, this monograph further covers the Grothendieck and Whitehead groups of rings. Graduate students and researchers interested in ring theory, module theory and operator algebras will find this book particularly valuable. Containing numerous examples, Formal Matrices is a largely self-contained and accessible introduction to the topic, assuming a sol...

  8. Constant-overhead secure computation of Boolean circuits using preprocessing

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Zakarias, S.

    2013-01-01

    We present a protocol for securely computing a Boolean circuit C in presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming a preprocessing functionality that is not given the inputs. For a large number of players the work for each player is the same as computing the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods only achieved constant error.

  9. Constant-Overhead Secure Computation of Boolean Circuits using Preprocessing

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Zakarias, Sarah Nouhad Haddad

    We present a protocol for securely computing a Boolean circuit C in presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming access to a preprocessing functionality that is not given the inputs to compute on. For a large number of players the work done by each player is the same as the work needed to compute the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods would only give a constant error.

  10. Facilitating Watermark Insertion by Preprocessing Media

    Directory of Open Access Journals (Sweden)

    Matt L. Miller

    2004-10-01

    Full Text Available There are several watermarking applications that require the deployment of a very large number of watermark embedders. These applications often have severe budgetary constraints that limit the computation resources that are available. Under these circumstances, only simple embedding algorithms can be deployed, which have limited performance. In order to improve performance, we propose preprocessing the original media. It is envisaged that this preprocessing occurs during content creation and has no budgetary or computational constraints. Preprocessing combined with simple embedding creates a watermarked Work, the performance of which exceeds that of simple embedding alone. However, this performance improvement is obtained without any increase in the computational complexity of the embedder. Rather, the additional computational burden is shifted to the preprocessing stage. A simple example of this procedure is described and experimental results confirm our assertions.

  11. Dazed and Confused in Academia

    Institute of Scientific and Technical Information of China (English)

    VALERIE SARTOR

    2010-01-01

    Recently, Joe, a foreign English teacher in China, said to me, "I feel dazed and disoriented; my students here are unusually quiet. I can't tell if they understand my lessons." His plight is not surprising. Many foreign ESL (English as a second language) instructors often feel confused when they first step into a Chinese classroom. Chinese ESL students are different because their mindset is not Western.

  13. Perceptual analysis from confusions between vowels

    NARCIS (Netherlands)

    van der Kamp, L.J.T.; Pols, L.C.W.

    1971-01-01

    In an experiment on vowel identification, confusions were obtained between 11 Dutch vowel sounds. To recover the perceptual configurations of the stimuli, multidimensional scaling techniques were applied directly to the asymmetric confusion matrix, and to the symmetrized confusion matrix. In order to
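
    The symmetrization step mentioned above is commonly done by averaging the confusion-probability matrix with its transpose, then converting similarities to dissimilarities for a distance-based scaling method. A sketch with toy counts for 3 vowels (invented, not the 11-vowel data):

```python
import numpy as np

# Toy asymmetric confusion matrix (rows: stimulus, columns: response).
C = np.array([[80, 15, 5],
              [10, 70, 20],
              [5, 25, 70]], dtype=float)

# Row-normalize to confusion probabilities, then symmetrize.
P = C / C.sum(axis=1, keepdims=True)
S = (P + P.T) / 2.0

# Convert similarities to dissimilarities suitable for MDS input.
D = 1.0 - S
np.fill_diagonal(D, 0.0)
```

    Working directly on the asymmetric matrix, as the study also does, requires scaling methods that do not assume symmetric distances.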

  14. Preprocessing of ionospheric echo Doppler spectra

    Institute of Scientific and Technical Information of China (English)

    FANG Liang; ZHAO Zhengyu; WANG Feng; SU Fanfan

    2007-01-01

    The real-time information of the distant ionosphere can be acquired by using the Wuhan ionospheric oblique backscattering sounding system (WIOBSS), which adopts a discontinuous wave mechanism. After the characteristics of the ionospheric echo Doppler spectra were analyzed, the signal preprocessing was developed in this paper, which aimed at improving the Doppler spectra. The results indicate that the preprocessing not only gives the system a higher ability of target detection but also suppresses the radio frequency interference by 6-7 dB.

  15. Preprocessing Moist Lignocellulosic Biomass for Biorefinery Feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Neal Yancey; Christopher T. Wright; Craig Conner; J. Richard Hess

    2009-06-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system of a lignocellulosic biorefinery. Preprocessing is generally accomplished using industrial grinders to format biomass materials into a suitable biorefinery feedstock for conversion to ethanol and other bioproducts. Many factors affect machine efficiency and the physical characteristics of preprocessed biomass. For example, moisture content of the biomass as received from the point of production has a significant impact on overall system efficiency and can significantly affect the characteristics (particle size distribution, flowability, storability, etc.) of the size-reduced biomass. Many different grinder configurations are available on the market, each with advantages under specific conditions. Ultimately, the capacity and/or efficiency of the grinding process can be enhanced by selecting the grinder configuration that optimizes grinder performance based on moisture content and screen size. This paper discusses the relationships of biomass moisture with respect to preprocessing system performance and product physical characteristics and compares data obtained on corn stover, switchgrass, and wheat straw as model feedstocks during Vermeer HG 200 grinder testing. During the tests, grinder screen configuration and biomass moisture content were varied and tested to provide a better understanding of their relative impact on machine performance and the resulting feedstock physical characteristics and uniformity relative to each crop tested.

  17. Efficient Preprocessing technique using Web log mining

    Science.gov (United States)

    Raiyani, Sheetal A.; jain, Shailendra

    2012-11-01

    Web usage mining can be described as the discovery and analysis of user access patterns through mining of log files and associated data from a particular website. Large numbers of visitors interact daily with web sites around the world; enormous amounts of data are generated, and this information can be very valuable to a company seeking to understand customer behavior. This paper presents a complete preprocessing scheme comprising data cleaning, user identification, and session identification activities to improve data quality. User identification, a key issue in the preprocessing phase, aims to identify the unique web users. Traditional user identification is based on the site structure, supported by heuristic rules, which reduces the efficiency of user identification. To resolve this difficulty, we introduce a proposed technique, DUI (Distinct User Identification), based on IP address, agent, session time, and pages referred during the desired session time. It can be used in counter-terrorism, fraud detection, and detection of unusual access to secure data; furthermore, detection of regular user access behavior can improve the overall design and performance of subsequent preprocessing.
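
    The DUI idea described above can be illustrated with a minimal sketch: users are distinguished by the (IP address, agent) pair, and a new session starts when the gap between a user's requests exceeds a threshold. The log fields, the 30-minute gap, and all entries below are illustrative assumptions, not the authors' implementation.

```python
from datetime import datetime, timedelta

# Hypothetical log entries: (ip, user_agent, timestamp, page). Field
# names, values, and the 30-minute session gap are illustrative.
log = [
    ("10.0.0.1", "Mozilla", datetime(2012, 11, 1, 9, 0), "/home"),
    ("10.0.0.1", "Mozilla", datetime(2012, 11, 1, 9, 10), "/search"),
    ("10.0.0.1", "Chrome", datetime(2012, 11, 1, 9, 12), "/home"),
    ("10.0.0.1", "Mozilla", datetime(2012, 11, 1, 10, 0), "/news"),
]

def distinct_users(entries, session_gap=timedelta(minutes=30)):
    """Assign a distinct-user id per (IP, agent) pair and start a new
    session whenever the gap between that user's requests exceeds
    session_gap."""
    users = {}       # (ip, agent) -> user id
    last_seen = {}   # user id -> (last timestamp, session index)
    visits = []      # (user id, session index, page)
    for ip, agent, ts, page in sorted(entries, key=lambda e: e[2]):
        uid = users.setdefault((ip, agent), len(users))
        last, sess = last_seen.get(uid, (None, 0))
        if last is not None and ts - last > session_gap:
            sess += 1
        last_seen[uid] = (ts, sess)
        visits.append((uid, sess, page))
    return users, visits

users, visits = distinct_users(log)
print(len(users), visits[-1])  # -> 2 (0, 1, '/news')
```

    Two requests from the same IP with different agents are counted as two distinct users, and the 10:00 request opens a second session for the first user because it falls outside the gap.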

  18. Inverse M-matrices and ultrametric matrices

    CERN Document Server

    Dellacherie, Claude; San Martin, Jaime

    2014-01-01

    The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.

  19. Random unistochastic matrices

    Energy Technology Data Exchange (ETDEWEB)

    Zyczkowski, Karol [Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotnikow 32/44, 02-668 Warsaw (Poland); Kus, Marek [Centrum Fizyki Teoretycznej, Polska Akademia Nauk, Al. Lotnikow 32/44, 02-668 Warsaw (Poland); Slomczynski, Wojciech [Instytut Matematyki, Uniwersytet Jagiellonski, ul. Reymonta 4, 30-059 Cracow (Poland); Sommers, Hans-Juergen [Fachbereich 7 Physik, Universitaet Essen, 45117 Essen (Germany)

    2003-03-28

    An ensemble of random unistochastic (orthostochastic) matrices is defined by taking squared moduli of elements of random unitary (orthogonal) matrices distributed according to the Haar measure on U(N) (or O(N)). An ensemble of symmetric unistochastic matrices is obtained with use of unitary symmetric matrices pertaining to the circular orthogonal ensemble. We study the distribution of complex eigenvalues of bistochastic, unistochastic and orthostochastic matrices in the complex plane. We compute averages (entropy, traces) over the ensembles of unistochastic matrices and present inequalities concerning the entropies of products of bistochastic matrices.
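
    The construction described above is straightforward to reproduce numerically. The sketch below (an illustration, not the authors' code) draws a Haar-random unitary via the QR decomposition of a complex Ginibre matrix and takes squared moduli; the result is unistochastic, and hence bistochastic.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unistochastic(n, rng):
    """Squared moduli of a Haar-random unitary: since every row and
    column of a unitary has unit 2-norm, the result is bistochastic
    by construction."""
    z = (rng.standard_normal((n, n))
         + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q *= np.diag(r) / np.abs(np.diag(r))   # phase fix -> Haar measure
    return np.abs(q) ** 2

b = random_unistochastic(4, rng)
print(np.allclose(b.sum(axis=0), 1), np.allclose(b.sum(axis=1), 1))  # -> True True
```

    The column-phase correction after the QR step makes the underlying unitary Haar-distributed rather than merely unitary, which matters when sampling the ensemble.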

  20. Random unistochastic matrices

    OpenAIRE

    Zyczkowski, K.; Slomczynski, W.; Kus, M.; Sommers, H. -J.

    2001-01-01

    An ensemble of random unistochastic (orthostochastic) matrices is defined by taking squared moduli of elements of random unitary (orthogonal) matrices distributed according to the Haar measure on U(N) (or O(N), respectively). An ensemble of symmetric unistochastic matrices is obtained with use of unitary symmetric matrices pertaining to the circular orthogonal ensemble. We study the distribution of complex eigenvalues of bistochastic, unistochastic and orthostochastic matrices in the complex p...

  1. A PREPROCESSING LS-CMA IN HIGHLY CORRUPTIVE ENVIRONMENT

    Institute of Scientific and Technical Information of China (English)

    Guo Yan; Fang Dagang; Thomas N.C.Wang; Liang Changhong

    2002-01-01

    A fast preprocessing Least Square-Constant Modulus Algorithm (LS-CMA) is proposed for blind adaptive beamforming. This new preprocessing method precludes noise capture caused by the original LS-CMA with the preprocessing procedure controlled by the static Constant Modulus Algorithm (CMA). The simulation results have shown that the proposed fast preprocessing LS-CMA can effectively reject the co-channel interference, and quickly lock onto the constant modulus desired signal with only one snapshot in a highly corruptive environment.
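
    The static CMA mentioned above is, at its core, a stochastic-gradient descent on the constant-modulus cost J(w) = (|w^H x|^2 - 1)^2. The sketch below illustrates that baseline update on synthetic array data; the scenario (array size, steering vector, noise level, step size) is assumed for illustration and is not the proposed preprocessing LS-CMA.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: a 4-element array receives a unit-modulus
# (QPSK-like) signal through an arbitrary steering vector, plus noise.
n_ant, n_snap = 4, 2000
s = np.exp(1j * 0.5 * np.pi * rng.integers(0, 4, n_snap))  # constant-modulus symbols
a = np.exp(1j * np.pi * 0.3 * np.arange(n_ant))            # assumed steering vector
x = np.outer(a, s) + 0.05 * (rng.standard_normal((n_ant, n_snap))
                             + 1j * rng.standard_normal((n_ant, n_snap)))

# Static CMA: stochastic-gradient descent on J = (|y|^2 - 1)^2
w = np.zeros(n_ant, dtype=complex)
w[0] = 1.0
mu = 0.01
for k in range(n_snap):
    y = np.vdot(w, x[:, k])                            # y = w^H x
    w -= mu * (np.abs(y) ** 2 - 1.0) * np.conj(y) * x[:, k]

y_all = w.conj() @ x
mod_err = np.mean(np.abs(np.abs(y_all) - 1.0))  # deviation from constant modulus
print(round(float(mod_err), 3))
```

    After adaptation the beamformer output stays close to the unit modulus; the LS-CMA of the paper replaces this sample-by-sample update with a least-squares block update.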

  2. The preprocessing of multispectral data. II. [of Landsat satellite

    Science.gov (United States)

    Quiel, F.

    1976-01-01

    It is pointed out that a correction of atmospheric effects is an important requirement for a full utilization of the possibilities provided by preprocessing techniques. The most significant characteristics of original and preprocessed data are considered, taking into account the solution of classification problems by means of the preprocessing procedure. Improvements obtainable with different preprocessing techniques are illustrated with the aid of examples involving Landsat data regarding an area in Colorado.

  3. Case management: unraveling the confusion.

    Science.gov (United States)

    Bower, K

    1998-01-01

    I'm going to close with some of my ideas about the characteristics that case managers exhibit. I have a great deal of professional respect for case managers. I think that you are a tenacious lot. One of the major things that case managers do is help create new alternatives to problems. You open doors; no ... you first build the door and then you open it. You're creative, persistent, and resourceful. You are sometimes asked to solve all of an organization's problems. I think that is a tremendous burden, and that you can get confused because of that role conflict and confusion. What model is best for my organization? Within that is my patient population. What is it that they need? What are the current issues that you are seeing? How is my case management role different from other roles? How large a scope of practice can I handle and be reasonably successful with the patients with whom I'm dealing? How many different kinds of approaches and models are needed within my organization? Look toward the future; think about the future in terms of your crystal balls. What trends do you see building in either the demographics or the health and social environments that are going to influence health care in the future? What effect will the aging of our population have on you and your case management practice? What issues are going to be related to those trends? How many more people do we have living in fragmented families? What's going to happen in terms of resources available for patients? How can case management influence those changes? I don't think we're going to see the pace of change in the health care industry slow down. We will continue to have health care organizations address social issues in addition to pathophysiologic ones. No matter what the role and how it evolves, case management will always be at the junction of change in health care. This will be difficult at times to deal with. It will also be a source of satisfaction for those in the role because of the

  4. Energy and the Confused Student II: Systems

    Science.gov (United States)

    Jewett, John W., Jr.

    2008-01-01

    Energy is a critical concept in physics problem-solving but is often a major source of confusion for students if the presentation is not carefully crafted by the instructor or the textbook. The first article in this series discussed student confusion generated by traditional treatments of work. In any discussion of work, it is important to state…

  5. Core Knowledge Confusions among University Students

    Science.gov (United States)

    Lindeman, Marjaana; Svedholm, Annika M.; Takada, Mikito; Lonnqvist, Jan-Erik; Verkasalo, Markku

    2011-01-01

    Previous studies have demonstrated that university students hold several paranormal beliefs and that paranormal beliefs can be best explained with core knowledge confusions. The aim of this study was to explore to what extent university students confuse the core ontological attributes of lifeless material objects (e.g. a house, a stone), living…

  6. Pre-processing Tasks in Indonesian Twitter Messages

    Science.gov (United States)

    Hidayatullah, A. F.; Ma’arif, M. R.

    2017-01-01

    Twitter text messages are very noisy; tweet data are unstructured and quite complicated. The focus of this work is to investigate pre-processing techniques for Twitter messages in Bahasa Indonesia. The main goal of this experiment is to clean the tweet data for further analysis; thus, the objective of this pre-processing task is simply to remove all meaningless characters and keep the valuable words. In this research, we divide our proposed pre-processing experiments into two parts. The first part consists of common pre-processing tasks; the second part is a set of pre-processing tasks specific to tweet data. The experimental results show that employing pre-processing tasks tailored to the characteristics of tweet data yields a more valuable result: considerably fewer meaningless words remain than when only the common pre-processing tasks are run.
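
    A minimal sketch of such a two-stage cleaning pipeline (the rules below are illustrative assumptions, not the authors' exact task list):

```python
import re

def preprocess_tweet(text):
    """Two-stage cleaning sketch: common tasks (lowercasing, stripping
    non-letters) plus tweet-specific tasks (RT markers, URLs, mentions,
    hashtag symbols). Rules are illustrative."""
    text = text.lower()
    text = re.sub(r"\brt\b", " ", text)        # retweet marker
    text = re.sub(r"https?://\S+", " ", text)  # URLs
    text = re.sub(r"@\w+", " ", text)          # mentions
    text = text.replace("#", " ")              # keep hashtag word, drop symbol
    text = re.sub(r"[^a-z\s]", " ", text)      # meaningless characters
    return " ".join(text.split())

print(preprocess_tweet("RT @andi: Macet parah di Jl. Sudirman!! http://t.co/xyz #jakarta"))
# -> macet parah di jl sudirman jakarta
```

    Running only the last two generic steps would leave fragments of URLs and mentions behind, which is the gap the tweet-specific rules close.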

  7. GENERALIZED NEKRASOV MATRICES AND APPLICATIONS

    Institute of Scientific and Technical Information of China (English)

    Mingxian Pang; Zhuxiang Li

    2003-01-01

    In this paper, the concept of generalized Nekrasov matrices is introduced, some properties of these matrices are discussed, and an equivalent representation of generalized diagonally dominant matrices is obtained.

  8. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
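
    The storage reduction rests on the fact that blocks coupling well-separated clusters are numerically low rank. A small illustration (the kernel, cluster geometry, and tolerance below are assumed for demonstration):

```python
import numpy as np

# Toy kernel block: interactions between two well-separated point
# clusters. Such off-diagonal blocks are numerically low rank, which
# is the property hierarchical matrices exploit.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
block = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

# Truncated SVD: store the block as a rank-k factorization instead of
# the full 200 x 200 array of entries.
u, s, vt = np.linalg.svd(block)
k = int(np.searchsorted(-s, -1e-8 * s[0]))   # rank for ~1e-8 accuracy
low_rank = (u[:, :k] * s[:k]) @ vt[:k]

rel_err = np.linalg.norm(block - low_rank) / np.linalg.norm(block)
print(k, rel_err < 1e-6)
```

    A hierarchical matrix applies this compression recursively to an admissible block partition, which is what brings storage and cost from cubic down to almost linear.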

  9. Acquisition and preprocessing of LANDSAT data

    Science.gov (United States)

    Horn, T. N.; Brown, L. E.; Anonsen, W. H. (Principal Investigator)

    1979-01-01

    The original configuration of the GSFC data acquisition, preprocessing, and transmission subsystem, designed to provide LANDSAT data inputs to the LACIE system at JSC, is described. Enhancements made to support LANDSAT -2, and modifications for LANDSAT -3 are discussed. Registration performance throughout the 3 year period of LACIE operations satisfied the 1 pixel root-mean-square requirements established in 1974, with more than two of every three attempts at data registration proving successful, notwithstanding cosmetic faults or content inadequacies to which the process is inherently susceptible. The cloud/snow rejection rate experienced throughout the last 3 years has approached 50%, as expected in most LANDSAT data use situations.

  10. BLOCK H-MATRICES AND SPECTRUM OF BLOCK MATRICES

    Institute of Scientific and Technical Information of China (English)

    黄廷祝; 黎稳

    2002-01-01

    Block H-matrices are studied via the concept of G-functions, and several concepts of block matrices are introduced. Equivalent characterizations of block H-matrices are obtained, and spectrum localizations for block matrices, characterized by G-functions, are derived.

  11. Circulant conference matrices for new complex Hadamard matrices

    OpenAIRE

    Dita, Petre

    2011-01-01

    The circulant real and complex matrices are used to find new real and complex conference matrices. With them we construct Sylvester inverse orthogonal matrices by doubling the size of inverse complex conference matrices. When the free parameters take values on the unit circle the inverse orthogonal matrices transform into complex Hadamard matrices. The method is used for $n=6$ conference matrices and in this way we find new parametrisations of Hadamard matrices for dimension $ n=12$.

  12. Dichromatic confusion lines and color vision models.

    Science.gov (United States)

    Fry, G A

    1986-12-01

    An attempt has been made to explain how dichromatic confusion lines can be used in building a model for color vision. In the König color vision model the fundamental colors are located on the mixture diagram at the copunctal points for protanopes, deuteranopes, and tritanopes. In Fry's model the copunctal points fall on the alychne and cannot represent the fundamental colors. On a constant luminance diagram the confusion lines for the different dichromats are sets of parallel lines. This arrangement of the confusion lines can be explained in terms of a zone theory of color vision.

  13. Approximate Distance Oracles with Improved Preprocessing Time

    CERN Document Server

    Wulff-Nilsen, Christian

    2011-01-01

    Given an undirected graph $G$ with $m$ edges, $n$ vertices, and non-negative edge weights, and given an integer $k\\geq 1$, we show that for some universal constant $c$, a $(2k-1)$-approximate distance oracle for $G$ of size $O(kn^{1 + 1/k})$ can be constructed in $O(\\sqrt km + kn^{1 + c/\\sqrt k})$ time and can answer queries in $O(k)$ time. We also give an oracle which is faster for smaller $k$. Our results break the quadratic preprocessing time bound of Baswana and Kavitha for all $k\\geq 6$ and improve the $O(kmn^{1/k})$ time bound of Thorup and Zwick except for very sparse graphs and small $k$. When $m = \\Omega(n^{1 + c/\\sqrt k})$ and $k = O(1)$, our oracle is optimal w.r.t.\\ both stretch, size, preprocessing time, and query time, assuming a widely believed girth conjecture by Erd\\H{o}s.

  14. Random bistochastic matrices

    Energy Technology Data Exchange (ETDEWEB)

    Cappellini, Valerio [' Mark Kac' Complex Systems Research Centre, Uniwersytet Jagiellonski, ul. Reymonta 4, 30-059 Krakow (Poland); Sommers, Hans-Juergen [Fachbereich Physik, Universitaet Duisburg-Essen, Campus Duisburg, 47048 Duisburg (Germany); Bruzda, Wojciech; Zyczkowski, Karol [Instytut Fizyki im. Smoluchowskiego, Uniwersytet Jagiellonski, ul. Reymonta 4, 30-059 Krakow (Poland)], E-mail: valerio@ictp.it, E-mail: h.j.sommers@uni-due.de, E-mail: w.bruzda@uj.edu.pl, E-mail: karol@cft.edu.pl

    2009-09-11

    Ensembles of random stochastic and bistochastic matrices are investigated. While all columns of a random stochastic matrix can be chosen independently, the rows and columns of a bistochastic matrix have to be correlated. We evaluate the probability measure induced into the Birkhoff polytope of bistochastic matrices by applying the Sinkhorn algorithm to a given ensemble of random stochastic matrices. For matrices of order N = 2 we derive explicit formulae for the probability distributions induced by random stochastic matrices with columns distributed according to the Dirichlet distribution. For arbitrary N we construct an initial ensemble of stochastic matrices which allows one to generate random bistochastic matrices according to a distribution locally flat at the center of the Birkhoff polytope. The value of the probability density at this point enables us to obtain an estimation of the volume of the Birkhoff polytope, consistent with recent asymptotic results.
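
    The Sinkhorn step used above is simple to reproduce: alternately normalize rows and columns until both are (numerically) stochastic. A sketch, with a column-Dirichlet initial matrix as in the abstract (the flat Dirichlet parameter and the iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)

def sinkhorn(m, iters=1000):
    """Alternately normalize rows and columns; for a strictly positive
    matrix this converges to a bistochastic matrix (Sinkhorn's theorem)."""
    b = m.astype(float).copy()
    for _ in range(iters):
        b /= b.sum(axis=1, keepdims=True)   # rows sum to 1
        b /= b.sum(axis=0, keepdims=True)   # columns sum to 1
    return b

# Initial ensemble member: a stochastic matrix whose columns are
# Dirichlet-distributed (flat Dirichlet assumed for illustration).
n = 4
s = rng.dirichlet(np.ones(n), size=n).T     # each column sums to 1
b = sinkhorn(s)
print(np.allclose(b.sum(axis=0), 1), np.allclose(b.sum(axis=1), 1))  # -> True True
```

    The paper's question is what probability measure this map induces on the Birkhoff polytope for a given input ensemble, not the fixed point of a single matrix.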

  15. Random Bistochastic Matrices

    CERN Document Server

    Cappellini, V; Bruzda, W; Zyczkowski, K

    2009-01-01

    Ensembles of random stochastic and bistochastic matrices are investigated. While all columns of a random stochastic matrix can be chosen independently, the rows and columns of a bistochastic matrix have to be correlated. We evaluate the probability measure induced into the Birkhoff polytope of bistochastic matrices by applying the Sinkhorn algorithm to a given ensemble of random stochastic matrices. For matrices of order N=2 we derive explicit formulae for the probability distributions induced by random stochastic matrices with columns distributed according to the Dirichlet distribution. For arbitrary $N$ we construct an initial ensemble of stochastic matrices which allows one to generate random bistochastic matrices according to a distribution locally flat at the center of the Birkhoff polytope. The value of the probability density at this point enables us to obtain an estimation of the volume of the Birkhoff polytope, consistent with recent asymptotic results.

  16. The Registration of Knee Joint Images with Preprocessing

    Directory of Open Access Journals (Sweden)

    Zhenyan Ji

    2011-06-01

    Full Text Available The registration of CT and MR images is important for analyzing the effect of PCL and ACL deficiency on the knee joint. Because CT and MR images have different limitations, we need to register the CT and MR images of the knee joint and then build a model to analyze the stress distribution on the knee joint. In our project, we adopt image registration based on mutual information. In knee joint images, information about adipose, muscle, and other soft tissue affects the registration accuracy. To eliminate this interference, we propose a combined preprocessing solution, BEBDO, which consists of five steps: image blurring, image enhancement, image blurring, image edge detection, and image outline preprocessing. We also designed the algorithm for image outline preprocessing. At the end of the paper, an experiment compares the image registration results with and without the preprocessing. The results prove that the preprocessing can improve the image registration accuracy.
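
    The mutual-information criterion that drives such registration can be sketched from the joint intensity histogram of the two images; the random test images and bin count below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in MI estimate from the joint intensity histogram;
    registration searches for the transform that maximizes this value."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a CT slice
shifted = np.roll(img, 5, axis=0)   # stand-in for a misaligned MR slice

aligned = mutual_information(img, img)
misaligned = mutual_information(img, shifted)
print(aligned > misaligned)  # -> True: alignment maximizes MI
```

    Preprocessing such as the BEBDO pipeline aims to remove soft-tissue intensity clutter so that this criterion is dominated by the structures of interest.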

  17. Big Data: Big Confusion? Big Challenges?

    Science.gov (United States)

    2015-05-01

    Presentation at the 12th Annual Acquisition Research Symposium: "Big Data: Big Confusion? Big Challenges?" (Mary Maureen…). A surviving slide fragment notes that 90% of the data in the world today was created in the last two years.

  18. An effective preprocessing method for finger vein recognition

    Science.gov (United States)

    Peng, JiaLiang; Li, Qiong; Wang, Ning; Abd El-Latif, Ahmed A.; Niu, Xiamu

    2013-07-01

    Image preprocessing plays an important role in a finger vein recognition system. However, previous preprocessing schemes retain weaknesses that must be resolved to achieve high finger vein recognition performance. In this paper, we propose a new finger vein preprocessing pipeline that includes finger region localization, alignment, finger vein ROI segmentation, and enhancement. The experimental results show that the proposed scheme is capable of enhancing the quality of finger vein images effectively and reliably.

  19. User microprogrammable processors for high data rate telemetry preprocessing

    Science.gov (United States)

    Pugsley, J. H.; Ogrady, E. P.

    1973-01-01

    The use of microprogrammable processors for the preprocessing of high data rate satellite telemetry is investigated. The following topics are discussed along with supporting studies: (1) evaluation of commercial microprogrammable minicomputers for telemetry preprocessing tasks; (2) microinstruction sets for telemetry preprocessing; and (3) the use of multiple minicomputers to achieve high data processing. The simulation of small microprogrammed processors is discussed along with examples of microprogrammed processors.

  20. Preprocessing and Analysis of Digitized ECGs

    Science.gov (United States)

    Villalpando, L. E. Piña; Kurmyshev, E.; Ramírez, S. Luna; Leal, L. Delgado

    2008-08-01

    In this work we propose a methodology and MATLAB programs that perform the preprocessing and analysis of the D1 lead of ECGs. The program corrects the isoelectric line for each beat, calculates the average cardiac frequency and its standard deviation, and generates a file with the amplitudes of the P, Q and T waves, as well as the important segments and intervals of each beat. The software normalizes beats to a standard rate of 80 beats per minute; the superposition of beats is done by centering the R waves, before and after normalizing the amplitude of each beat. The data and graphics provide relevant information to the doctor for diagnosis. In addition, some results are displayed similar to those presented by a Holter recording.
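
    The heart-rate statistics and the 80-bpm normalization described above reduce to simple operations on R-peak times. A sketch with hypothetical peak times (not data from the paper):

```python
import numpy as np

# Hypothetical R-peak times (seconds) detected on the D1 lead.
r_peaks = np.array([0.00, 0.76, 1.50, 2.27, 3.01, 3.78, 4.52])

rr = np.diff(r_peaks)        # R-R intervals (s)
hr = 60.0 / rr               # instantaneous heart rate (beats/min)
print(round(float(hr.mean()), 1), round(float(hr.std()), 1))  # -> 79.7 1.4

# Normalizing beats to a standard 80 bpm amounts to rescaling each
# beat's time axis so its R-R interval becomes 60/80 = 0.75 s.
target_rr = 60.0 / 80.0
scale = target_rr / rr       # per-beat time-axis scaling factors
```

    Each beat would then be resampled by its own scaling factor before superposition, so that R waves of all beats line up on a common 0.75-second grid.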

  1. Complex Hadamard matrices from Sylvester inverse orthogonal matrices

    OpenAIRE

    Dita, Petre

    2009-01-01

    A novel method to obtain parametrizations of complex inverse orthogonal matrices is provided. These matrices are natural generalizations of complex Hadamard matrices which depend on non zero complex parameters. The method we use is via doubling the size of inverse complex conference matrices. When the free parameters take values on the unit circle the inverse orthogonal matrices transform into complex Hadamard matrices, and in this way we find new parametrizations of Hadamard matrices for dim...

  2. Matrices and linear transformations

    CERN Document Server

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  3. A Simple Cocyclic Jacket Matrices

    Directory of Open Access Journals (Sweden)

    Moon Ho Lee

    2008-01-01

    Full Text Available We present a new class of cocyclic Jacket matrices over the complex number field, of any size. We also construct cocyclic Jacket matrices over finite fields. Such matrices are closely related to unitary matrices, which are a first-hand tool for solving many problems in mathematical and theoretical physics. Based on the analysis of the relation between cocyclic Jacket matrices and unitary matrices, a common method for factorizing these two kinds of matrices is presented.

  4. Base resolution methylome profiling: considerations in platform selection, data preprocessing and analysis.

    Science.gov (United States)

    Sun, Zhifu; Cunningham, Julie; Slager, Susan; Kocher, Jean-Pierre

    2015-08-01

    Bisulfite treatment-based methylation microarrays (mainly the Illumina 450K Infinium array) and next-generation sequencing (reduced representation bisulfite sequencing, Agilent SureSelect Human Methyl-Seq, NimbleGen SeqCap Epi CpGiant, or whole-genome bisulfite sequencing) are commonly used for base-resolution DNA methylome research. Although multiple tools and methods have been developed and used for data preprocessing and analysis, confusion remains about these platforms, including whether and how the 450K array should be normalized, which platform best fits researchers' needs, and which statistical models are more appropriate for differential methylation analysis. This review presents the commonly used platforms and compares the pros and cons of each in methylome profiling. We then discuss approaches to study design, data normalization, bias correction and model selection for differentially methylated individual CpGs and regions.

  5. The Effect of Preprocessing on Arabic Document Categorization

    Directory of Open Access Journals (Sweden)

    Abdullah Ayedh

    2016-04-01

    Full Text Available Preprocessing is one of the main components in a conventional document categorization (DC) framework. This paper aims to highlight the effect of preprocessing tasks on the efficiency of the Arabic DC system. In this study, three classification techniques are used, namely, naive Bayes (NB), k-nearest neighbor (KNN), and support vector machine (SVM). Experimental analysis on Arabic datasets reveals that preprocessing techniques have a significant impact on the classification accuracy, especially with the complicated morphological structure of the Arabic language. Choosing appropriate combinations of preprocessing tasks provides significant improvement on the accuracy of document categorization depending on the feature size and classification techniques. Findings of this study show that the SVM technique has outperformed the KNN and NB techniques. The SVM technique achieved a 96.74% micro-F1 value by using the combination of normalization and stemming as preprocessing tasks.
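
    Normalization and light stemming, the preprocessing combination highlighted above, can be sketched as follows; the specific rules below are common illustrative choices, not necessarily those used in the paper.

```python
import re

DIACRITICS = re.compile("[\u064b-\u0652]")   # tanween ... sukun

def normalize(token):
    """Arabic orthographic normalization, a common DC preprocessing
    step: drop diacritics, unify alef/teh-marbuta/yeh variants."""
    token = DIACRITICS.sub("", token)
    token = re.sub("[\u0622\u0623\u0625]", "\u0627", token)  # آ أ إ -> ا
    token = token.replace("\u0629", "\u0647")                # ة -> ه
    token = token.replace("\u0649", "\u064a")                # ى -> ي
    return token

def light_stem(token):
    """Very light stemming sketch: strip the definite article 'ال' and
    one common suffix; real Arabic stemmers are far more elaborate."""
    if token.startswith("\u0627\u0644") and len(token) > 3:  # ال
        token = token[2:]
    for suffix in ("\u0627\u062a", "\u064a\u0646"):          # ات, ين
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            token = token[: -len(suffix)]
            break
    return token

word = "\u0627\u0644\u0645\u064f\u0639\u064e\u0644\u0651\u0650\u0645\u064e\u0629"  # المُعَلِّمَة
print(light_stem(normalize(word)))  # -> معلمه
```

    Collapsing such orthographic variants before feature extraction shrinks the vocabulary, which is why this combination helps the classifiers compared above.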

  6. Forensic considerations for preprocessing effects on clinical MDCT scans.

    Science.gov (United States)

    Wade, Andrew D; Conlogue, Gerald J

    2013-05-01

    Manipulation of digital photographs destined for medico-legal inquiry must be thoroughly documented and presented with explanation of any manipulations. Unlike digital photography, computed tomography (CT) data must pass through an additional step before viewing. Reconstruction of raw data involves reconstruction algorithms to preprocess the raw information into display data. Preprocessing of raw data, although it occurs at the source, alters the images and must be accounted for in the same way as postprocessing. Repeated CT scans of a gunshot wound phantom were made using the Toshiba Aquilion 64-slice multidetector CT scanner. The appearance of fragments, high-density inclusion artifacts, and soft tissue were assessed. Preprocessing with different algorithms results in substantial differences in image output. It is important to appreciate that preprocessing affects the image, that it does so differently in the presence of high-density inclusions, and that preprocessing algorithms and scanning parameters may be used to overcome the resulting artifacts.

  7. When is Stacking Confusing?: The Impact of Confusion on Stacking in Deep HI Galaxy Surveys

    CERN Document Server

    Jones, Michael G; Giovanelli, Riccardo; Papastergis, Emmanouil

    2015-01-01

    We present an analytic model to predict the HI mass contributed by confused sources to a stacked spectrum in a generic HI survey. Based on the ALFALFA correlation function, this model is in agreement with the estimates of confusion present in stacked Parkes telescope data, and was used to predict how confusion will limit stacking in the deepest SKA-precursor HI surveys. Stacking with LADUMA and DINGO UDEEP data will only be mildly impacted by confusion if their target synthesised beam size of 10 arcsec can be achieved. Any beam size significantly above this will result in stacks that contain a mass in confused sources that is comparable to (or greater than) that which is detectable via stacking, at all redshifts. CHILES' 5 arcsec resolution is more than adequate to prevent confusion influencing stacking of its data, throughout its bandpass range. FAST will be the most impeded by confusion, with HI surveys likely becoming heavily confused much beyond z = 0.1. The largest uncertainties in our model are the reds...

  8. On greedy and submodular matrices

    NARCIS (Netherlands)

    Faigle, U.; Kern, Walter; Peis, Britta; Marchetti-Spaccamela, Alberto; Segal, Michael

    2011-01-01

    We characterize non-negative greedy matrices, i.e., 0-1 matrices $A$ such that max $\\{c^Tx|Ax \\le b,\\,x \\ge 0\\}$ can be solved greedily. We identify submodular matrices as a special subclass of greedy matrices. Finally, we extend the notion of greediness to $\\{-1,0,+1\\}$-matrices. We present

  9. Gaussian Fibonacci Circulant Type Matrices

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    Full Text Available Circulant matrices have become important tools in solving integrable systems, Hamiltonian structures, and integral equations. In this paper, we prove that Gaussian Fibonacci circulant-type matrices are invertible for n>2 and give their explicit determinants and inverse matrices. Furthermore, upper bounds for the spread of Gaussian Fibonacci circulant and left circulant matrices are presented, respectively.
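
    The determinant and invertibility claims are easy to check numerically, since the eigenvalues of a circulant matrix are the DFT of its first row. A sketch for an n = 5 Gaussian Fibonacci circulant (the Gaussian Fibonacci convention below is assumed and may differ in detail from the paper's):

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix: row k is the first row cyclically shifted
    right by k positions."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

# Gaussian Fibonacci numbers, taken here as G(1)=1, G(2)=1+i with the
# usual Fibonacci recurrence (an assumed convention).
g = [1 + 0j, 1 + 1j]
for _ in range(3):
    g.append(g[-1] + g[-2])

c = circulant(np.array(g))      # 5x5 Gaussian Fibonacci circulant
# The eigenvalues of a circulant are the DFT values of its first row,
# so the determinant is their product.
eig = np.fft.fft(np.array(g))
det = np.prod(eig)
print(np.allclose(det, np.linalg.det(c)), abs(det) > 0)  # -> True True
```

    A nonzero product of DFT values confirms invertibility for this instance, mirroring the paper's general n > 2 result.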

  10. Feature detection techniques for preprocessing proteomic data.

    Science.gov (United States)

    Sellers, Kimberly F; Miecznikowski, Jeffrey C

    2010-01-01

    Numerous gel-based and nongel-based technologies are used to detect protein changes potentially associated with disease. The raw data, however, are abundant with technical and structural complexities, making statistical analysis a difficult task. Low-level analysis issues (including normalization, background correction, gel and/or spectral alignment, feature detection, and image registration) are substantial problems that need to be addressed, because any large-level data analyses are contingent on appropriate and statistically sound low-level procedures. Feature detection approaches are particularly interesting due to the increased computational speed associated with subsequent calculations. Such summary data corresponding to image features provide a significant reduction in overall data size and structure while retaining key information. In this paper, we focus on recent advances in feature detection as a tool for preprocessing proteomic data. This work highlights existing and newly developed feature detection algorithms for proteomic datasets, particularly relating to time-of-flight mass spectrometry, and two-dimensional gel electrophoresis. Note, however, that the associated data structures (i.e., spectral data, and images containing spots) used as input for these methods are obtained via all gel-based and nongel-based methods discussed in this manuscript, and thus the discussed methods are likewise applicable.

  11. Invertible flexible matrices

    Science.gov (United States)

    Justino, Júlia

    2017-06-01

    Matrices with coefficients having uncertainties of type o(.) or O(.), called flexible matrices, are studied from the point of view of nonstandard analysis. The uncertainties of the aforementioned kind will be given in the form of so-called neutrices, for instance the set of all infinitesimals. Since flexible matrices have uncertainties in their coefficients, it is not possible to define the identity matrix in a unique way, and so the notion of spectral identity matrix arises. Not all nonsingular flexible matrices can be turned into a spectral identity matrix using the Gauss-Jordan elimination method, implying that not all nonsingular flexible matrices have an inverse matrix. Under certain conditions upon the size of the uncertainties appearing in a nonsingular flexible matrix, a general theorem concerning the boundaries of its minors is presented, which guarantees the existence of the inverse matrix of a nonsingular flexible matrix.

  12. Is It Kingdom or Domains? Confusion & Solutions

    Science.gov (United States)

    Blackwell, Will H.

    2004-01-01

    Confusion regarding the number of kingdoms that should be recognized, and the inclusion of domains in the traditional kingdom-based classification found at the higher levels of the classification of organisms, is presented. Hence, it is important to keep in mind future modifications that may occur in the classification systems and to recognize…

  13. Diminishing Chat Confusion by Multiple Visualizations

    NARCIS (Netherlands)

    Holmer, T.; Lukosch, S.G.; Kunz, V.

    2009-01-01

    In this article, we address the problem of confusion and co-text loss in chat communication, identify requirements for a solution, discuss related work and present a new approach for addressing co-text loss in text-based chats. We report on first experiences with our solution and give an outlook.

  14. On the tensor Permutation Matrices

    CERN Document Server

    Rakotonirina, Christian

    2011-01-01

    It is shown that tensor permutation matrices permute tensor products of rectangular matrices. Some examples, in the particular case of tensor commutation matrices, are given for studying some linear matrix equations.

  15. Enhanced bone structural analysis through pQCT image preprocessing.

    Science.gov (United States)

    Cervinka, T; Hyttinen, J; Sievanen, H

    2010-05-01

    Several factors, including preprocessing of the image, can affect the reliability of pQCT-measured bone traits, such as cortical area and trabecular density. Using repeated scans of four different liquid phantoms and repeated in vivo scans of distal tibiae from 25 subjects, the performance of two novel preprocessing methods, based on the down-sampling of the grayscale intensity histogram and the statistical approximation of image data, was compared to 3×3 and 5×5 median filtering. According to phantom measurements, the signal-to-noise ratio in the raw pQCT images (XCT 3000) was low (approximately 20 dB), which posed a challenge for preprocessing. Concerning the cortical analysis, the reliability coefficient (R) was 67% for the raw image and increased to 94-97% after preprocessing, without apparent preference for any method. Concerning the trabecular density, the R-values were already high (approximately 99%) in the raw images, leaving virtually no room for improvement. However, some coarse structural patterns could be seen in the preprocessed images, in contrast to a disperse distribution of density levels in the raw image. In conclusion, preprocessing cannot suppress the high noise level to the extent that the analysis of mean trabecular density is essentially improved, whereas preprocessing can enhance cortical bone analysis and also facilitate coarse structural analyses of the trabecular region.
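The 3×3 median filtering used as the baseline preprocessing step can be sketched in a few lines. This is a generic pure-Python version (edge pixels handled by clamping indices), not the authors' implementation; real pQCT scans are of course much larger than the toy image here.

```python
# Generic 3x3 median filter for a 2-D list of pixel values.
# Edge handling: out-of-range indices are clamped to the image border.

from statistics import median

def median_filter_3x3(image):
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            window = [
                image[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
            ]
            out[r][c] = median(window)
    return out

noisy = [
    [10, 10, 10],
    [10, 99, 10],   # isolated noise spike
    [10, 10, 10],
]
print(median_filter_3x3(noisy))   # the spike is removed: all values 10
```

The isolated spike disappears because the median of its 3×3 neighbourhood is dominated by the eight background pixels — the property that makes median filtering attractive for the low-SNR images described above.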

  16. When is stacking confusing? The impact of confusion on stacking in deep H I galaxy surveys

    Science.gov (United States)

    Jones, Michael G.; Haynes, Martha P.; Giovanelli, Riccardo; Papastergis, Emmanouil

    2016-01-01

    We present an analytic model to predict the H I mass contributed by confused sources to a stacked spectrum in a generic H I survey. Based on the ALFALFA (Arecibo Legacy Fast ALFA) correlation function, this model is in agreement with the estimates of confusion present in stacked Parkes telescope data, and was used to predict how confusion will limit stacking in the deepest Square Kilometre Array precursor H I surveys. Stacking with LADUMA (Looking At the Distant Universe with MeerKAT) and DINGO UDEEP (Deep Investigation of Neutral Gas Origins - Ultra Deep) data will only be mildly impacted by confusion if their target synthesized beam size of 10 arcsec can be achieved. Any beam size significantly above this will result in stacks that contain a mass in confused sources that is comparable to (or greater than) that which is detectable via stacking, at all redshifts. The 5 arcsec resolution of CHILES (COSMOS H I Large Extragalactic Survey) is more than adequate to prevent confusion from influencing the stacking of its data, throughout its bandpass range. FAST (Five hundred metre Aperture Spherical Telescope) will be the most impeded by confusion, with H I surveys likely becoming heavily confused much beyond z = 0.1. The largest uncertainties in our model are the redshift evolution of the H I density of the Universe and the H I correlation function. However, we argue that the two idealized cases we adopt should bracket the true evolution, and the qualitative conclusions are unchanged regardless of the model choice. The profile shape of the signal due to confusion (in the absence of any detection) was also modelled, revealing that it can take the form of a double Gaussian with a narrow and a wide component.

  17. On free matrices

    DEFF Research Database (Denmark)

    Britz, Thomas

    Bipartite graphs and digraphs are used to describe algebraic operations on a free matrix, including Moore-Penrose inversion, finding Schur complements, and normalized LU factorization. A description of the structural properties of a free matrix and its Moore-Penrose inverse is proved, and necessary and sufficient conditions are given for the Moore-Penrose inverse of a free matrix to be free. Several of these results are generalized with respect to a family of matrices that contains both the free matrices and the nearly reducible matrices.

  19. Hermitian quark matrices

    Indian Academy of Sciences (India)

    Narendra Singh

    2003-01-01

    Assuming a relation between the quark mass matrices of the two sectors, a unique solution can be obtained for the CKM flavor mixing matrix. A numerical example is worked out which is in excellent agreement with experimental data.

  20. An adaptive preprocessing algorithm for low bitrate video coding

    Institute of Scientific and Technical Information of China (English)

    LI Mao-quan; XU Zheng-quan

    2006-01-01

    At low bitrates, all block discrete cosine transform (BDCT) based video coding algorithms suffer from visible blocking and ringing artifacts in the reconstructed images, because the quantization is too coarse and high-frequency DCT coefficients are inclined to be quantized to zeros. Preprocessing algorithms can enhance coding efficiency, and thus reduce the likelihood of blocking and ringing artifacts generated in the video coding process, by applying a low-pass filter before video encoding to remove some relatively insignificant high-frequency components. In this paper, we introduce a new adaptive preprocessing algorithm, which employs an improved bilateral filter to provide adaptive edge-preserving low-pass filtering that is adjusted according to the quantization parameters. Whether at a low or a high bitrate, the preprocessing can provide proper filtering to make the video encoder more efficient and yield better reconstructed image quality. Experimental results demonstrate that our proposed preprocessing algorithm can significantly improve both subjective and objective quality.
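The edge-preserving idea behind bilateral filtering can be sketched in one dimension: each sample is replaced by a weighted average whose weights fall off with both spatial distance and intensity difference, so sharp edges survive while small fluctuations are smoothed. The sigma values below are illustrative; the paper's improved filter additionally adapts them to the quantization parameters, which this generic sketch does not attempt.

```python
# Generic 1-D bilateral filter: weights combine a spatial Gaussian and a
# range (intensity-difference) Gaussian, so averaging does not cross edges.

import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# A step edge with small noise: the edge between indices 3 and 4 stays sharp,
# because samples across the edge get near-zero range weights.
signal = [10, 11, 10, 11, 100, 101, 100, 101]
smoothed = bilateral_1d(signal)
```

A plain Gaussian low-pass with the same radius would blur the step; here the two sides of the edge are smoothed independently.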

  1. Solid Earth ARISTOTELES mission data preprocessing simulation of gravity gradiometer

    Science.gov (United States)

    Avanzi, G.; Stolfa, R.; Versini, B.

    Data preprocessing of the ARISTOTELES mission, which measures the Earth gravity gradient in a near polar orbit, was studied. The mission measures the gravity field at sea level through indirect measurements performed on the orbit, so that the evaluation steps consist in processing data from GRADIO accelerometer measurements. Due to the physical phenomena involved in the data collection experiment, it is possible to isolate at an initial stage a preprocessing of the gradiometer data based only on GRADIO measurements and not needing a detailed knowledge of the attitude and attitude rate sensors output. This preprocessing produces intermediate quantities used in future stages of the reduction. Software was designed and run to evaluate for this level of data reduction the achievable accuracy as a function of knowledge on instrument and satellite status parameters. The architecture of this element of preprocessing is described.

  2. Preprocessing Algorithm for Deciphering Historical Inscriptions Using String Metric

    Directory of Open Access Journals (Sweden)

    Lorand Lehel Toth

    2016-07-01

    Full Text Available The article presents improvements in the preprocessing part of the deciphering method (shortly, the preprocessing algorithm) for historical inscriptions of unknown origin. Glyphs used in historical inscriptions changed through time; therefore, various versions of the same script may contain different glyphs for each grapheme. The purpose of the preprocessing algorithm is to reduce the running time of the deciphering process by filtering out the less probable interpretations of the examined inscription. However, the first version of the preprocessing algorithm produced an incorrect outcome, or no result at all, in certain cases. Therefore, an improved version was developed to find the most similar words in the dictionary by specifying the search conditions more accurately, but still computationally effectively. Moreover, a sophisticated similarity metric used to determine the possible meaning of the unknown inscription is introduced. The results of the evaluations are also detailed.
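A common string metric for this kind of dictionary filtering is the Levenshtein (edit) distance; the sketch below ranks dictionary words by their distance to a candidate reading and keeps only the closest ones. The dictionary, candidate word, and cut-off are invented examples — the paper's actual metric is more sophisticated.

```python
# Rank dictionary words by edit distance to a candidate transliteration
# and keep the k closest, filtering out less probable interpretations.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def closest_words(candidate, dictionary, k=2):
    return sorted(dictionary, key=lambda w: levenshtein(candidate, w))[:k]

words = ["stone", "store", "strong", "throne", "tone"]
print(closest_words("stane", words))   # ['stone', 'store']
```

Only the k best matches are passed on, which is exactly the running-time reduction the preprocessing step aims for.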

  3. A review of statistical methods for preprocessing oligonucleotide microarrays.

    Science.gov (United States)

    Wu, Zhijin

    2009-12-01

    Microarrays have become an indispensable tool in biomedical research. This powerful technology not only makes it possible to quantify a large number of nucleic acid molecules simultaneously, but also produces data with many sources of noise. A number of preprocessing steps are therefore necessary to convert the raw data, usually in the form of hybridisation images, to measures of biological meaning that can be used in further statistical analysis. Preprocessing of oligonucleotide arrays includes image processing, background adjustment, data normalisation/transformation and sometimes summarisation when multiple probes are used to target one genomic unit. In this article, we review the issues encountered in each preprocessing step and introduce the statistical models and methods in preprocessing.
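One widely used normalisation step of the kind this review covers is quantile normalisation: every array is forced to share the same empirical intensity distribution (here, the mean of the sorted values across arrays). The sketch below is a minimal version that ignores ties and missing values; it is an illustration of the general technique, not code from the article.

```python
# Minimal quantile normalisation for equal-length arrays:
# replace each value by the cross-array mean of the values at its rank.

def quantile_normalize(arrays):
    n = len(arrays[0])
    sorted_cols = [sorted(a) for a in arrays]
    # Reference distribution: mean across arrays at each rank.
    reference = [sum(col[i] for col in sorted_cols) / len(arrays)
                 for i in range(n)]
    out = []
    for a in arrays:
        ranks = sorted(range(n), key=lambda i: a[i])
        b = [0.0] * n
        for rank, i in enumerate(ranks):
            b[i] = reference[rank]
        out.append(b)
    return out

arrays = [[5, 2, 3], [4, 1, 9]]
print(quantile_normalize(arrays))   # [[7.0, 1.5, 3.5], [3.5, 1.5, 7.0]]
```

After normalisation both arrays contain exactly the values {1.5, 3.5, 7.0}, just in their original rank order — the shared-distribution property that makes downstream comparisons meaningful.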

  4. Preprocessing for classification of thermograms in breast cancer detection

    Science.gov (United States)

    Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał; Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz

    2016-09-01

    Performance of binary classification of breast cancer suffers from a high imbalance between classes. In this article we present a preprocessing module designed to negate the discrepancy in training examples. The preprocessing module is based on standardization, the Synthetic Minority Oversampling Technique and undersampling. We show how each algorithm influences classification accuracy. Results indicate that the described module improves the overall Area Under Curve by up to 10% on the tested dataset. Furthermore, we propose other methods of dealing with imbalanced datasets in breast cancer classification.
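The class-balancing idea can be sketched as random undersampling of the majority class combined with SMOTE-style synthetic minority samples, where each synthetic point interpolates between two minority samples. The seeding, neighbour choice, and target class size below are simplifications for illustration — the module described in the article is more elaborate (and also standardizes features).

```python
# Balance a binary dataset: undersample the majority class and add
# SMOTE-like interpolated points to the minority class. Assumes the
# majority class is strictly larger than the target size.

import random

def smote_like(minority, n_new, rng):
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)      # pick two minority samples
        t = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append([ai + t * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

def balance(majority, minority, rng):
    target = (len(majority) + len(minority)) // 2
    under = rng.sample(majority, target)
    over = minority + smote_like(minority, target - len(minority), rng)
    return under, over

rng = random.Random(0)
majority = [[float(i), 0.0] for i in range(9)]
minority = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
under, over = balance(majority, minority, rng)
print(len(under), len(over))   # both classes now have 6 samples
```

Synthetic points lie on segments between real minority samples, so they stay inside the minority region rather than being arbitrary noise.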

  5. Persistent Confusion and Controversy Surrounding Gene Patents

    Science.gov (United States)

    Guerrini, Christi J.; Majumder, Mary A.; McGuire, Amy L.

    2016-01-01

    There is persistent confusion and controversy surrounding basic issues of patent law relevant to the genomics industry. Uncertainty and conflict can lead to the adoption of inefficient practices and exposure to liability. The development of patent-specific educational resources for industry members, as well as the prompt resolution of patentability rules unsettled by recent U.S. Supreme Court decisions, are therefore urgently needed. PMID:26849516

  6. Pentamidine Dosage: A Base/Salt Confusion

    OpenAIRE

    Dorlo, Thomas P. C.; Kager, Piet A.

    2008-01-01

    Pentamidine has a long history in the treatment of human African trypanosomiasis (HAT) and leishmaniasis. Early guidelines on the dosage of pentamidine were based on the base moiety of the two different formulations available. Confusion on the dosage of pentamidine arose from the different labelling of the two available products, based on either the salt or the base moiety available in the preparation. We provide an overview of the various guidelines concerning HAT and leishmaniasis over the past decades...

  7. Analyzing Mode Confusion via Model Checking

    Science.gov (United States)

    Luettgen, Gerald; Carreno, Victor

    1999-01-01

    Mode confusion is one of the most serious problems in aviation safety. Today's complex digital flight decks make it difficult for pilots to maintain awareness of the actual states, or modes, of the flight deck automation. NASA Langley leads an initiative to explore how formal techniques can be used to discover possible sources of mode confusion. As part of this initiative, a flight guidance system was previously specified as a finite Mealy automaton, and the theorem prover PVS was used to reason about it. The objective of the present paper is to investigate whether state-exploration techniques, especially model checking, are better able to achieve this task than theorem proving and also to compare several verification tools for the specific application. The flight guidance system is modeled and analyzed in Murphi, SMV, and Spin. The tools are compared regarding their system description language, their practicality for analyzing mode confusion, and their capabilities for error tracing and for animating diagnostic information. It turns out that their strengths are complementary.

  8. 100 words almost everyone confuses and misuses

    CERN Document Server

    2004-01-01

    The 100 Words series continues to set the standard for measuring and improving vocabulary, with a new title focusing on words that are best known for getting people into linguistic trouble. 100 Words Almost Everyone Confuses and Misuses is the perfect book for anyone seeking clear and sensible guidance on avoiding the recognized pitfalls of the English language. Each word on the list is accompanied by a concise and authoritative usage note based on the renowned usage program of the American Heritage® Dictionaries. These notes discuss why a particular usage has been criticized and explain the r

  9. Matrices in Engineering Problems

    CERN Document Server

    Tobias, Marvin

    2011-01-01

    This book is intended as an undergraduate text introducing matrix methods as they relate to engineering problems. It begins with the fundamentals of mathematics of matrices and determinants. Matrix inversion is discussed, with an introduction of the well known reduction methods. Equation sets are viewed as vector transformations, and the conditions of their solvability are explored. Orthogonal matrices are introduced with examples showing application to many problems requiring three dimensional thinking. The angular velocity matrix is shown to emerge from the differentiation of the 3-D orthogo

  10. Evaluating the impact of image preprocessing on iris segmentation

    Directory of Open Access Journals (Sweden)

    José F. Valencia-Murillo

    2014-08-01

    Full Text Available Segmentation is one of the most important stages in iris recognition systems. In this paper, image preprocessing algorithms are applied in order to evaluate their impact on successful iris segmentation. The preprocessing algorithms are based on histogram adjustment, Gaussian filters and suppression of specular reflections in human eye images. The segmentation method introduced by Masek is applied on 199 images acquired under unconstrained conditions, belonging to the CASIA-irisV3 database, before and after applying the preprocessing algorithms. Then, the impact of the image preprocessing algorithms on the percentage of successful iris segmentation is evaluated by means of a visual inspection of the images, in order to determine whether the circumferences of the iris and pupil were detected correctly. An increase from 59% to 73% in the percentage of successful iris segmentation is obtained with an algorithm that combines the elimination of specular reflections, followed by a Gaussian filter with a 5×5 kernel. The results highlight the importance of a preprocessing stage as a previous step in order to improve the performance during the edge detection and iris segmentation processes.
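Building the 5×5 Gaussian smoothing kernel used in such a preprocessing stage is straightforward; the sketch below generates a normalized kernel. The sigma value is an illustrative choice — the paper does not state which was used.

```python
# Build a normalized size x size Gaussian kernel; convolving an image with
# it gives the Gaussian low-pass filtering used before iris segmentation.

import math

def gaussian_kernel(size=5, sigma=1.0):
    half = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-half, half + 1)]
         for y in range(-half, half + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]

kernel = gaussian_kernel()
# The weights sum to 1 (brightness-preserving) and peak at the centre.
```

Normalizing the weights to sum to 1 keeps the overall image brightness unchanged, which matters when the next step is threshold-based edge detection.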

  11. Infinite matrices and sequence spaces

    CERN Document Server

    Cooke, Richard G

    2014-01-01

    This clear and correct summation of basic results from a specialized field focuses on the behavior of infinite matrices in general, rather than on properties of special matrices. Three introductory chapters guide students to the manipulation of infinite matrices, covering definitions and preliminary ideas, reciprocals of infinite matrices, and linear equations involving infinite matrices.From the fourth chapter onward, the author treats the application of infinite matrices to the summability of divergent sequences and series from various points of view. Topics include consistency, mutual consi

  12. Confusion and controversy in the stress field.

    Science.gov (United States)

    Selye, H

    1975-06-01

    An attempt is made to further clarify present areas of controversy in the stress field, in response to a two-part article by Dr. John W. Mason which concludes in this issue of the Journal of Human Stress. The author tries to elucidate each source of confusion enumerated by Dr. Mason. The continued use of the word "stress" for the nonspecific response to any demand is deemed most desirable. The once vague term can now be applied in a well-defined sense and is accepted in all foreign languages as well, including those in which no such word existed previously in any sense. Subdivision of the stress concept has become necessary as more recent work has led to such notions as "eustress," "distress," "systemic stress" and "local stress." Confusion between stress as both an agent and a result can be avoided only by the distinction between "stress" and "stressor". It is explained that the stress syndrome is--by definition--nonspecific in its causation. However, depending upon conditioning factors, which can selectively influence the reactivity of certain organs, the same stressor can elicit different manifestations in different individuals.

  13. Pentamidine dosage: a base/salt confusion.

    Science.gov (United States)

    Dorlo, Thomas P C; Kager, Piet A

    2008-05-28

    Pentamidine has a long history in the treatment of human African trypanosomiasis (HAT) and leishmaniasis. Early guidelines on the dosage of pentamidine were based on the base-moiety of the two different formulations available. Confusion on the dosage of pentamidine arose from a different labelling of the two available products, either based on the salt or base moiety available in the preparation. We provide an overview of the various guidelines concerning HAT and leishmaniasis over the past decades and show the confusion in the calculation of the dosage of pentamidine in these guidelines and the subsequent published reports on clinical trials and reviews. At present, only pentamidine isethionate is available, but the advised dosage for HAT and leishmaniasis is (historically) based on the amount of pentamidine base. In the treatment of leishmaniasis this is probably resulting in a subtherapeutic treatment. There is thus a need for a new, more transparent and concise guideline concerning the dosage of pentamidine, at least in the treatment of HAT and leishmaniasis.

  14. Pentamidine dosage: a base/salt confusion.

    Directory of Open Access Journals (Sweden)

    Thomas P C Dorlo

    Full Text Available Pentamidine has a long history in the treatment of human African trypanosomiasis (HAT) and leishmaniasis. Early guidelines on the dosage of pentamidine were based on the base-moiety of the two different formulations available. Confusion on the dosage of pentamidine arose from a different labelling of the two available products, either based on the salt or base moiety available in the preparation. We provide an overview of the various guidelines concerning HAT and leishmaniasis over the past decades and show the confusion in the calculation of the dosage of pentamidine in these guidelines and the subsequent published reports on clinical trials and reviews. At present, only pentamidine isethionate is available, but the advised dosage for HAT and leishmaniasis is (historically) based on the amount of pentamidine base. In the treatment of leishmaniasis this is probably resulting in a subtherapeutic treatment. There is thus a need for a new, more transparent and concise guideline concerning the dosage of pentamidine, at least in the treatment of HAT and leishmaniasis.

  15. The confusion technique untangled: its theoretical rationale and preliminary classification.

    Science.gov (United States)

    Otani, A

    1989-01-01

    This article examines the historical development of Milton H. Erickson's theoretical approach to hypnosis using confusion. Review of the literature suggests that the Confusion Technique, in principle, consists of a two-stage "confusion-restructuring" process. The article also attempts to categorize several examples of confusion suggestions by seven linguistic characteristics: (1) antonyms, (2) homonyms, (3) synonyms, (4) elaboration, (5) interruption, (6) echoing, and (7) uncommon words. The Confusion Technique is an important yet little studied strategy developed by Erickson. More work is urged to investigate its nature and properties.

  16. Effect of microaerobic fermentation in preprocessing fibrous lignocellulosic materials.

    Science.gov (United States)

    Alattar, Manar Arica; Green, Terrence R; Henry, Jordan; Gulca, Vitalie; Tizazu, Mikias; Bergstrom, Robby; Popa, Radu

    2012-06-01

    Amending soil with organic matter is common in agricultural and logging practices. Such amendments have benefits to soil fertility and crop yields. These benefits may be increased if the material is preprocessed before introduction into soil. We analyzed the efficiency of microaerobic fermentation (MF), also referred to as Bokashi, in preprocessing fibrous lignocellulosic (FLC) organic materials using varying produce amendments and leachate treatments. Adding produce amendments increased leachate production and fermentation rates and decreased the biological oxygen demand of the leachate. Continuously draining leachate without returning it to the fermentors led to acidification and decreased concentrations of polysaccharides (PS) in leachates. PS fragmentation and the production of soluble metabolites and gases stabilized in the fermentors in about 2-4 weeks. About 2% of the carbon content was lost as CO2. PS degradation rates, upon introduction of the processed materials into soil, were similar to those of unfermented FLC. Our results indicate that MF is insufficient for adequate preprocessing of FLC material.

  17. Exploration, visualization, and preprocessing of high-dimensional data.

    Science.gov (United States)

    Wu, Zhijin; Wu, Zhiqiang

    2010-01-01

    The rapid advances in biotechnology have given rise to a variety of high-dimensional data. Many of these data, including DNA microarray data, mass spectrometry protein data, and high-throughput screening (HTS) assay data, are generated by complex experimental procedures that involve multiple steps such as sample extraction, purification and/or amplification, labeling, fragmentation, and detection. Therefore, the quantity of interest is not directly obtained and a number of preprocessing procedures are necessary to convert the raw data into the format with biological relevance. This also makes exploratory data analysis and visualization essential steps to detect possible defects, anomalies or distortion of the data, to test underlying assumptions and thus ensure data quality. The characteristics of the data structure revealed in exploratory analysis often motivate decisions in preprocessing procedures to produce data suitable for downstream analysis. In this chapter we review the common techniques in exploring and visualizing high-dimensional data and introduce the basic preprocessing procedures.

  18. Data Preprocessing in Cluster Analysis of Gene Expression

    Institute of Scientific and Technical Information of China (English)

    杨春梅; 万柏坤; 高晓峰

    2003-01-01

    Considering that DNA microarray technology has generated explosive amounts of gene expression data and that it is urgent to analyse and visualize such massive datasets with efficient methods, we investigate the data preprocessing methods used in cluster analysis, normalization and the logarithm of the matrix, using hierarchical clustering, principal component analysis (PCA) and self-organizing maps (SOMs). The results illustrate that when the Euclidean distance is used as the measuring metric, the logarithm of the relative expression level is the best preprocessing method, while data preprocessed by normalization cannot attain the expected results because the data structure is destroyed. If there are only a few principal components, PCA is an effective method to extract the frame structure, while SOMs are more suitable for a specific structure.
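A two-line example shows why the log transform suits Euclidean-distance clustering of expression ratios: a 2-fold up-regulation and a 2-fold down-regulation become symmetric (+1 and -1 in log2), whereas on the raw ratio scale the down-regulation is compressed towards zero. The toy ratios below are illustrative.

```python
# Log2-transform relative expression ratios so that fold changes in
# opposite directions are equidistant from 0 under Euclidean distance.

import math

def log2_ratios(ratios):
    return [math.log2(r) for r in ratios]

raw = [2.0, 0.5]           # 2-fold up, 2-fold down
logged = log2_ratios(raw)
print(logged)              # [1.0, -1.0] -- symmetric around 0
```

On the raw scale the two ratios sit at distances 1.0 and 0.5 from the unchanged value 1.0, so Euclidean clustering would weight up- and down-regulation unequally; after the log transform they are treated symmetrically.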

  19. Co-occurrence Matrices and their Applications in Information Science: Extending ACA to the Web Environment

    CERN Document Server

    Leydesdorff, Loet

    2009-01-01

    Co-occurrence matrices, such as co-citation, co-word, and co-link matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of this data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This paper discusses the difference between a symmetrical co-citation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (like the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical co-citation matrix, but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment where the nature of the available data and thus data collection methods are different from those of traditional databa...
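The recommended workflow — applying a similarity measure such as the cosine to the asymmetrical citation matrix to derive a proximity matrix, rather than correlating the symmetrical co-citation matrix directly — can be sketched as follows. The tiny citation matrix is invented for illustration.

```python
# Derive a proximity matrix for cited authors by applying the cosine to
# columns of an asymmetrical citation matrix (citing documents x cited authors).

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Rows: citing documents; columns: cited authors A, B, C.
citation = [
    [1, 1, 0],
    [1, 1, 0],
    [0, 0, 1],
]
cols = list(zip(*citation))   # author citation profiles
proximity = [[cosine(a, b) for b in cols] for a in cols]
print(proximity[0][1], proximity[0][2])
```

Authors A and B are always cited together (cosine ≈ 1), while A and C never co-occur (cosine 0) — the proximity structure one would otherwise try, improperly, to read off the symmetrical co-citation matrix.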

  20. Introduction to matrices and vectors

    CERN Document Server

    Schwartz, Jacob T

    2001-01-01

    In this concise undergraduate text, the first three chapters present the basics of matrices - in later chapters the author shows how to use vectors and matrices to solve systems of linear equations. 1961 edition.

  1. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    Science.gov (United States)

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of the cell mechanism using different technologies, in order to explain the relationship among genes, molecular processes and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kinds of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase, which makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge to be faced by emerging data analysis platforms is the ability to treat in a combined way those different microarray formats coupled with clinical data. In fact, resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression and survival rate), regarding clinical data. Raw data preprocessing is a crucial step in analysis but is often performed in a manual and error-prone way using different software tools. Thus novel, platform-independent, and possibly open-source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays that were not allowed in μ-CS. The Micro-Analyzer is provided as a Java standalone tool and enables users to read, preprocess and analyse binary microarray data (gene expression and SNPs) by invoking the TM4 platform. It avoids: (i) the manual invocation of external tools (e.g. the Affymetrix Power

  2. EBM, HTA, and CER: clearing the confusion.

    Science.gov (United States)

    Luce, Bryan R; Drummond, Michael; Jönsson, Bengt; Neumann, Peter J; Schwartz, J Sanford; Siebert, Uwe; Sullivan, Sean D

    2010-06-01

    The terms evidence-based medicine (EBM), health technology assessment (HTA), comparative effectiveness research (CER), and other related terms lack clarity and so could lead to miscommunication, confusion, and poor decision making. The objective of this article is to clarify their definitions and the relationships among key terms and concepts. This article used the relevant methods and policy literature as well as the websites of organizations engaged in evidence-based activities to develop a framework to explain the relationships among the terms EBM, HTA, and CER. This article proposes an organizing framework and presents a graphic demonstrating the differences and relationships among these terms and concepts. More specific terminology and concepts are necessary for an informed and clear public policy debate. They are even more important to inform decision making at all levels and to engender more accountability by the organizations and individuals responsible for these decisions.

  3. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly for winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  4. Paraunitary matrices and group rings

    Directory of Open Access Journals (Sweden)

    Barry Hurley

    2014-03-01

    Design methods for paraunitary matrices from complete orthogonal sets of idempotents and related matrix structures are presented. These include techniques for designing non-separable multidimensional paraunitary matrices. Properties of the structures are obtained and proofs given. Paraunitary matrices play a central role in signal processing, in particular in the areas of filter banks and wavelets.
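As a quick numerical illustration of the idempotent-based constructions this abstract alludes to, the sketch below (in Python, not from the paper) builds a degree-one paraunitary matrix H(z) = (I − P) + P z⁻¹ from an orthogonal projection P (a symmetric idempotent) and checks the defining property H~(z)H(z) = I on the unit circle. The function names and the rank-1 projection are illustrative assumptions.

```python
import numpy as np

def paraunitary_from_projection(P):
    """H(z) = (I - P) + P / z, a classical degree-one paraunitary matrix
    built from an orthogonal projection P (P @ P == P == P.T)."""
    I = np.eye(P.shape[0])
    return lambda z: (I - P) + P / z

def is_paraunitary(H, n, samples=16):
    """Numerically check H~(z) H(z) = I at sample points z on |z| = 1.
    On the unit circle, H~(z) is the conjugate transpose of H(z)."""
    I = np.eye(n)
    for theta in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        z = np.exp(1j * theta)
        if not np.allclose(H(z).conj().T @ H(z), I):
            return False
    return True
```

The check succeeds because the cross terms (I − P)P vanish exactly when P is idempotent, which is the algebraic heart of such constructions.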

  5. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition is an important biometric identification method; being natural, convenient, and non-intrusive, it has attracted increasing attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition, focusing on the key preprocessing technologies in the detection stage and on how different preprocessing methods affect KPCA-based recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with morphological opening and closing (erosion and dilation) and with illumination compensation, and are then analysed with a recognition method based on kernel principal component analysis (KPCA); experiments were carried out on a typical face database, with all algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of PCA extracts nonlinear features that represent the original image information better and thus yields a higher recognition rate. In the preprocessing stage, different operations on the images lead to different results, and hence to different recognition rates in the recognition stage. Moreover, in kernel principal component analysis, the degree of the polynomial kernel affects the recognition result.
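As context for the KPCA step described above, here is a minimal, self-contained kernel-PCA sketch with a polynomial kernel (the kernel whose degree the abstract says affects recognition). The function name, kernel form and defaults are illustrative assumptions, not the authors' code.

```python
import numpy as np

def kernel_pca(X, degree=2, n_components=2):
    """Kernel PCA with polynomial kernel k(x, y) = (x.y + 1)**degree.

    X: (n_samples, n_features) array, e.g. flattened, preprocessed face images.
    Returns the projection of the training samples onto the top components
    in feature space.
    """
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    vals, vecs = np.linalg.eigh(Kc)               # eigenvalues, ascending
    idx = np.argsort(vals)[::-1][:n_components]   # top components
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # projected features
```

In a recognition pipeline, these projected features would feed a simple classifier (e.g. nearest neighbour) in place of linear PCA scores.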

  6. Pre-Processing Rules for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den

    2003-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size.
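One classical safe reduction rule used in this line of pre-processing is the simplicial-vertex rule: a vertex whose neighbours already form a clique can be eliminated without affecting the optimal maximum clique size of a triangulation. A hedged Python sketch (interface and names are our own, not the paper's):

```python
from itertools import combinations

def eliminate_simplicial(adj):
    """Repeatedly remove simplicial vertices (whose neighbours already form
    a clique). adj: dict vertex -> set of neighbours (undirected graph).
    Returns (reduced adjacency, a lower bound on the maximum clique size of
    any triangulation, the elimination order of removed vertices)."""
    adj = {v: set(nb) for v, nb in adj.items()}   # work on a copy
    order, low = [], 0
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            nbs = adj[v]
            # v is simplicial iff every pair of its neighbours is adjacent
            if all(b in adj[a] for a, b in combinations(nbs, 2)):
                low = max(low, len(nbs) + 1)      # v plus its neighbour clique
                for u in nbs:
                    adj[u].discard(v)
                del adj[v]
                order.append(v)
                changed = True
    return adj, low, order
```

On chordal graphs (e.g. trees) the rule empties the graph entirely; on a 4-cycle it removes nothing, so the remaining kernel is handed to a more expensive triangulation routine.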

  7. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

    The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size.

  8. The minimal preprocessing pipelines for the Human Connectome Project.

    Science.gov (United States)

    Glasser, Matthew F; Sotiropoulos, Stamatios N; Wilson, J Anthony; Coalson, Timothy S; Fischl, Bruce; Andersson, Jesper L; Xu, Junqian; Jbabdi, Saad; Webster, Matthew; Polimeni, Jonathan R; Van Essen, David C; Jenkinson, Mark

    2013-10-15

    The Human Connectome Project (HCP) faces the challenging task of bringing multiple magnetic resonance imaging (MRI) modalities together in a common automated preprocessing framework across a large cohort of subjects. The MRI data acquired by the HCP differ in many ways from data acquired on conventional 3 Tesla scanners and often require newly developed preprocessing methods. We describe the minimal preprocessing pipelines for structural, functional, and diffusion MRI that were developed by the HCP to accomplish many low level tasks, including spatial artifact/distortion removal, surface generation, cross-modal registration, and alignment to standard space. These pipelines are specially designed to capitalize on the high quality data offered by the HCP. The final standard space makes use of a recently introduced CIFTI file format and the associated grayordinate spatial coordinate system. This allows for combined cortical surface and subcortical volume analyses while reducing the storage and processing requirements for high spatial and temporal resolution data. Here, we provide the minimum image acquisition requirements for the HCP minimal preprocessing pipelines and additional advice for investigators interested in replicating the HCP's acquisition protocols or using these pipelines. Finally, we discuss some potential future improvements to the pipelines.

  9. OPSN: The IMS COMSYS 1 and 2 Data Preprocessing System.

    Science.gov (United States)

    Yu, John

    The Instructional Management System (IMS) developed by the Southwest Regional Laboratory (SWRL) processes student and teacher-generated data through the use of an optical scanner that produces a magnetic tape (Scan Tape) for input to IMS. A series of computer routines, OPSN, preprocesses the Scan Tape and prepares the data for transmission to the…

  10. An effective measured data preprocessing method in electrical impedance tomography.

    Science.gov (United States)

    Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang

    2014-01-01

    As an advanced process detection technology, electrical impedance tomography (EIT) has received wide attention and study in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion for evaluating different preprocessing procedures. In this paper, an EIT data preprocessing method based on rooting the measured data (raising each measurement to a fractional power) is proposed, and it is evaluated by two indexes constructed on the rooted EIT measurements. By finding the optima of the two indexes, the proposed method can be applied to improve EIT imaging spatial resolution. In terms of a theoretical model, the optimal rooting exponents for the two indexes range over [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analysed. Since preprocessing of the measured data is necessary and helpful for any imaging process, the proposed method can be widely applied. Experimental results validate the two proposed indexes.
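The rooting transform itself is simple: each measured value is raised to a fractional power, which compresses the dynamic range so that weak boundary measurements contribute more to the reconstruction. The sketch below implements that transform plus a hypothetical scan for a good exponent; the evaluation index used here (coefficient of variation) is a stand-in of our own, since the paper's two indexes are not given in the abstract.

```python
import numpy as np

def root_transform(v, r):
    """'Rooting' of EIT boundary measurements: raise each measured value to
    the power r (0 < r < 1), preserving sign."""
    v = np.asarray(v, dtype=float)
    return np.sign(v) * np.abs(v) ** r

def best_r(v, grid=np.linspace(0.05, 0.95, 19)):
    """Hypothetical exponent search: pick the r minimizing the coefficient
    of variation of the rooted data (a stand-in evaluation index)."""
    def cv(r):
        t = root_transform(v, r)
        return np.std(t) / (np.mean(np.abs(t)) + 1e-12)
    return grid[int(np.argmin([cv(r) for r in grid]))]
```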

  12. Stable lepton mass matrices

    CERN Document Server

    Domcke, Valerie

    2016-01-01

    We study natural lepton mass matrices, obtained assuming the stability of physical flavour observables with respect to the variations of individual matrix elements. We identify all four possible stable neutrino textures from algebraic conditions on their entries. Two of them turn out to be uniquely associated to specific neutrino mass patterns. We then concentrate on the semi-degenerate pattern, corresponding to an overall neutrino mass scale within the reach of future experiments. In this context we show that i) the neutrino and charged lepton mixings and mass matrices are largely constrained by the requirement of stability, ii) naturalness considerations give a mild preference for the Majorana phase most relevant for neutrinoless double-beta decay, $\alpha \sim \pi/2$, and iii) SU(5) unification allows one to extend the implications of stability to the down quark sector. The above considerations would benefit from an experimental determination of the PMNS ratio $|U_{32}/U_{31}|$, i.e. of the Dirac phase $\delta...

  13. Graphs and matrices

    CERN Document Server

    Bapat, Ravindra B

    2014-01-01

    This new edition illustrates the power of linear algebra in the study of graphs. The emphasis on matrix techniques is greater than in other texts on algebraic graph theory. Important matrices associated with graphs (for example, incidence, adjacency and Laplacian matrices) are treated in detail. Presenting a useful overview of selected topics in algebraic graph theory, early chapters of the text focus on regular graphs, algebraic connectivity, the distance matrix of a tree, and its generalized version for arbitrary graphs, known as the resistance matrix. Later topics covered include Laplacian eigenvalues of threshold graphs, the positive definite completion problem and matrix games based on a graph. Such extensive coverage of the subject area provides a welcome prompt for further exploration. The inclusion of exercises enables practical learning throughout the book. In the new edition, a new chapter is added on the line graph of a tree, while some results in Chapter 6 on Perron-Frobenius theory are reo...

  14. Singular Mueller matrices

    CERN Document Server

    Gil, José J; José, Ignacio San

    2015-01-01

    Singular Mueller matrices play an important role in polarization algebra and have peculiar properties that stem from the fact that either the medium exhibits maximum diattenuation and/or polarizance, or its associated canonical depolarizer fully randomizes (at least) the circular component of the states of polarization of light incident on it. The formal reasons for which the Mueller matrix M of a given medium is singular are systematically investigated, analyzed and interpreted in the framework of the serial decompositions and the characteristic ellipsoids of M. The analysis allows for a general classification and geometric representation of singular Mueller matrices, of potential usefulness to experimentalists dealing with such media.

  15. Nanoceramic Matrices: Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Willi Paul

    2006-01-01

    Natural bone consists of calcium phosphate with nanometer-sized, needle-like crystals of approximately 5-20 nm width by 60 nm length. Synthetic calcium phosphates and Bioglass are biocompatible and bioactive, as they bond to bone and enhance bone tissue formation. This property is attributed to their similarity with the mineral phase of natural bone, except in constituent particle size. Calcium phosphate ceramics have been used in dentistry and orthopedics for over 30 years because of these properties. Several studies indicate that incorporation of growth hormones into these ceramic matrices facilitates increased tissue regeneration. Nanophase calcium phosphates can mimic the dimensions of the constituent components of natural tissues and can modulate enhanced osteoblast adhesion and resorption, with long-term functionality of tissue-engineered implants. This mini review discusses some of the recent developments in nanophase ceramic matrices utilized for bone tissue engineering.

  16. On Random Correlation Matrices

    Science.gov (United States)

    1988-10-28

    the spectral features of the resulting matrices are unknown. Method 2: Perturbation about a Mean. This method is discussed by Marsaglia and Olkin ... complete regressor set. Finally, Marsaglia and Olkin (1984, Reference 10) give a rigorous mathematical description of Methods 2 through 4 described in the ... short paper by Marsaglia has a review of these early contributions, along with an improved method. More recent references are the pragmatic paper

  17. Concentration for noncommutative polynomials in random matrices

    OpenAIRE

    2011-01-01

    We present a concentration inequality for linear functionals of noncommutative polynomials in random matrices. Our hypotheses cover most standard ensembles, including Gaussian matrices, matrices with independent uniformly bounded entries and unitary or orthogonal matrices.

  18. Safety-relevant mode confusions-modelling and reducing them

    Energy Technology Data Exchange (ETDEWEB)

    Bredereke, Jan [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)]. E-mail: brederek@tzi.de; Lankenau, Axel [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)

    2005-06-01

    Mode confusions are a significant safety concern in safety-critical systems, for example in aircraft. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of its behaviour. But the notion is described only informally in the literature. We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of 'mode' and 'mode confusion' for safety-critical systems. We then validate these definitions against the informal notions in the literature. A new classification of mode confusions by cause leads to a number of design recommendations for shared-control systems. These help in avoiding mode confusion problems. Our approach supports the automated detection of remaining mode confusion problems. We apply our approach practically to a wheelchair robot.

  19. [Acute confusion in the geriatric patient].

    Science.gov (United States)

    Zanocchi, M; Vallero, F; Norelli, L; Zaccagna, B; Spada, S; Fabris, F

    1998-05-01

    During 1996, 585 patients aged 55 to 96 were admitted to the Geriatric Department of Ospedale Maggiore (Turin). Acute confusion was seen in 22.2% of these patients, who tended to have more serious clinical conditions, were more likely to have chronic cognitive impairment, were treated with a greater number of drugs, and suffered more from immobility with pressure ulcers. A confusional state manifest at admission to the Geriatric Department was mostly related to the patient's clinical severity, while one that developed during the hospital stay was linked to conditions of physical frailty, such as pressure ulcers and low albumin values. The most frequent causes of acute confusional state were acute infectious diseases, heart failure, gastro-intestinal bleeding with secondary anaemia, stroke and dehydration. In many cases the cause of the acute confusional state could not be identified. Falls, hospital stays longer than 31 days, and death were more frequent in patients suffering from a confusional state. Chronic cognitive impairment, functional dependence, clinical severity and treatment involving a great number of drugs are the main contributing factors in this syndrome. Thus, a multi-dimensional evaluation taking into account both clinical-functional and socio-economic aspects is useful for a correct preventive and diagnostic approach to the acute confusional state.

  20. Intraosseous haemangioma: semantic and medical confusion.

    Science.gov (United States)

    Kadlub, N; Dainese, L; Coulomb-L'Hermine, A; Galmiche, L; Soupre, V; Lepointe, H Ducou; Vazquez, M-P; Picard, A

    2015-06-01

    The literature is rich in case reports of intraosseous haemangioma, although most of these are actually cases of venous or capillary malformations. To illustrate this confusion in terminology, we present three cases of slow-flow vascular malformations misnamed as intraosseous haemangioma. A retrospective study of children diagnosed with intraosseous haemangioma was conducted. Clinical and radiological data were evaluated. Histopathological examinations and immunohistochemical studies were redone by three independent pathologists to classify the lesions according to the International Society for the Study of Vascular Anomalies (ISSVA) and World Health Organization (WHO) classifications. Three children who had presented with jaw haemangiomas were identified. Computed tomography scan patterns were not specific. All tumours were GLUT-1-negative and D2-40-negative. The lesions were classified as central haemangiomas according to the WHO, and as slow-flow malformations according to the ISSVA. The classification of vascular anomalies is based on clinical, radiological, and histological differences between vascular tumours and malformations. Based on this classification, the evolution of the lesion can be predicted and adequate treatment applied. The binary ISSVA classification is widely accepted and should be applied for all vascular lesions.

  1. Uncertainty and confusion in temporal masking

    Science.gov (United States)

    Formby, C.; Zhang, T.

    2001-05-01

    In a landmark study, Wright et al. [Nature 387, 176-178 (1997)] reported an apparent backward-masking deficit in language-impaired children. Subsequently, these controversial results have been influential in guiding treatments for childhood language problems. In this study we revisited Wright et al.'s temporal-masking paradigm to evaluate listener uncertainty effects. Masked detection was measured for 20-ms sinusoids (480, 1000, or 1680 Hz) presented at temporal positions before, during, or after a gated narrowband (W=600-1400 Hz) masker. Listener uncertainty was investigated by cueing various stimulus temporal properties with a 6000-Hz sinusoid presented either ipsi- or contra-lateral to the test ear or bilaterally. The primary cueing effect was measured in the backward-masking condition for a contralateral cue gated simultaneously with the on-frequency 1000-Hz signal. The resulting cued masked-detection threshold was reduced to quiet threshold. No significant cueing effects were obtained for other signal temporal positions in the masker or for any off-frequency signal conditions. These results indicate that (1) uncertainty can be reduced or eliminated for on-frequency backward masking by cueing the signal and (2) the deficit reported by Wright et al. for language-impaired children may reflect uncertainty and confusion rather than a temporal-processing deficit per se. [Research supported by NIDCD.]

  2. Matrices and linear algebra

    CERN Document Server

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it. This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related topics.

  3. Universality of Covariance Matrices

    CERN Document Server

    Pillai, Natesh S

    2011-01-01

    We prove the universality of covariance matrices of the form $H_{N \times N} = \frac{1}{N} X^T X$, where $[X]_{M \times N}$ is a rectangular matrix with independent real-valued entries $[x_{ij}]$ satisfying $\mathbb{E}\, x_{ij} = 0$ and $\mathbb{E}\, x^2_{ij} = \frac{1}{M}$, with $N, M \to \infty$. Furthermore it is assumed that these entries have sub-exponential tails. We will study the asymptotics in the regime $N/M = d_N \in (0,\infty)$, $\lim_{N\to \infty} d_N$ ...
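Under the standard normalization (unit-variance entries and H = X^T X / M), the eigenvalues of such sample covariance matrices concentrate on the Marchenko-Pastur support [(1 − √d)², (1 + √d)²] with aspect ratio d = N/M ≤ 1; universality results of this kind extend the picture well beyond the Gaussian case. A small numerical check, illustrative only and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 2000, 500                         # aspect ratio d = N/M = 0.25
X = rng.normal(size=(M, N))              # i.i.d. entries, mean 0, variance 1
H = X.T @ X / M                          # N x N sample covariance matrix
eigs = np.linalg.eigvalsh(H)

d = N / M
lo, hi = (1 - np.sqrt(d)) ** 2, (1 + np.sqrt(d)) ** 2   # Marchenko-Pastur edges
# For large M, N the empirical spectrum stays (essentially) inside [lo, hi].
print("spectrum:", eigs.min(), eigs.max(), "MP edges:", (lo, hi))
```

Repeating the experiment with, say, uniformly bounded non-Gaussian entries gives visually the same spectrum, which is the content of universality.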

  4. Lectures on matrices

    CERN Document Server

    M Wedderburn, J H

    1934-01-01

    It is the organization and presentation of the material, however, which make the peculiar appeal of the book. This is no mere compendium of results-the subject has been completely reworked and the proofs recast with the skill and elegance which come only from years of devotion. -Bulletin of the American Mathematical Society The very clear and simple presentation gives the reader easy access to the more difficult parts of the theory. -Jahrbuch über die Fortschritte der Mathematik In 1937, the theory of matrices was seventy-five years old. However, many results had only recently evolved from sp

  5. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information and has notable advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printed size, and efficient representation of Chinese characters. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text binarization method. Additionally, we introduce QR code extraction that adapts to different image sizes and a flexible image correction approach, improving the efficiency and accuracy of QR code image processing.
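Sauvola's adaptive binarization, the starting point of the method above, thresholds each pixel at T = m·(1 + k·(s/R − 1)), where m and s are the local mean and standard deviation in a window around the pixel. A self-contained sketch using integral images (the parameter defaults are common choices, not necessarily the paper's):

```python
import numpy as np

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Sauvola adaptive binarization: T = m * (1 + k * (s / R - 1)).
    Local mean m and std s come from integral images, so the cost is
    independent of the window size. Returns 1 for background, 0 for ink."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = window // 2
    p = np.pad(img, pad + 1, mode='edge')
    s1 = p.cumsum(0).cumsum(1)           # integral image
    s2 = (p * p).cumsum(0).cumsum(1)     # integral image of squares
    n = window * window

    def box(s):                          # windowed sums for every pixel
        return (s[window:window + h, window:window + w]
                - s[:h, window:window + w]
                - s[window:window + h, :w]
                + s[:h, :w])

    m = box(s1) / n
    var = np.maximum(box(s2) / n - m * m, 0.0)
    T = m * (1.0 + k * (np.sqrt(var) / R - 1.0))
    return (img > T).astype(np.uint8)
```

Because the threshold tracks the local mean, a QR code on an unevenly lit background binarizes cleanly where a single global threshold would fail.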

  6. An Efficient and Configurable Preprocessing Algorithm to Improve Stability Analysis.

    Science.gov (United States)

    Sesia, Ilaria; Cantoni, Elena; Cernigliaro, Alice; Signorile, Giovanna; Fantino, Gianluca; Tavella, Patrizia

    2016-04-01

    The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method is properly detecting and removing anomalous behaviors, hence making the subsequent stability analysis more reliable.
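A minimal sketch of the kind of preprocessing the abstract describes: robust outlier flagging and gap filling before computing the Allan variance. The specific detector used here (median/MAD test with linear interpolation) is a simplified stand-in for the paper's algorithm, not a reimplementation of it.

```python
import numpy as np

def clean_series(y, n_sigma=5.0):
    """Flag outliers with a robust median/MAD test and fill them (and any
    missing points) by linear interpolation, so a subsequent Allan-variance
    analysis is not corrupted by anomalous clock data."""
    y = np.asarray(y, dtype=float)
    med = np.nanmedian(y)
    mad = np.nanmedian(np.abs(y - med)) + 1e-12
    bad = np.isnan(y) | (np.abs(y - med) > n_sigma * 1.4826 * mad)
    idx = np.arange(y.size)
    y = y.copy()
    y[bad] = np.interp(idx[bad], idx[~bad], y[~bad])
    return y

def avar(y, m, tau0=1.0):
    """Non-overlapping Allan variance of fractional-frequency data y at
    averaging time m * tau0: 0.5 * E[(ybar_{i+1} - ybar_i)^2]."""
    k = y.size // m
    yb = y[:k * m].reshape(k, m).mean(axis=1)   # averages over tau = m * tau0
    return 0.5 * np.mean(np.diff(yb) ** 2)
```

A single uncorrected spike or jump inflates every AVAR point, which is why the cleaning pass comes first.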

  7. Adaptive fingerprint image enhancement with emphasis on preprocessing of data.

    Science.gov (United States)

    Bartůnek, Josef Ström; Nilsson, Mikael; Sällberg, Benny; Claesson, Ingvar

    2013-02-01

    This article proposes several improvements to an adaptive fingerprint enhancement method that is based on contextual filtering. The term adaptive implies that parameters of the method are automatically adjusted based on the input fingerprint image. Five processing blocks comprise the adaptive fingerprint enhancement method, where four of these blocks are updated in our proposed system. Hence, the proposed overall system is novel. The four updated processing blocks are: 1) preprocessing; 2) global analysis; 3) local analysis; and 4) matched filtering. In the preprocessing and local analysis blocks, a nonlinear dynamic range adjustment method is used. In the global analysis and matched filtering blocks, different forms of order statistical filters are applied. These processing blocks yield an improved and new adaptive fingerprint image processing method. The performance of the updated processing blocks is presented in the evaluation part of this paper. The algorithm is evaluated against the NIST-developed NBIS software for fingerprint recognition on the FVC databases.

  8. Linguistic Preprocessing and Tagging for Problem Report Trend Analysis

    Science.gov (United States)

    Beil, Robert J.; Malin, Jane T.

    2012-01-01

    Mr. Robert Beil, Systems Engineer at Kennedy Space Center (KSC), requested the NASA Engineering and Safety Center (NESC) develop a prototype tool suite that combines complementary software technology used at Johnson Space Center (JSC) and KSC for problem report preprocessing and semantic tag extraction, to improve input to data mining and trend analysis. This document contains the outcome of the assessment and the Findings, Observations and NESC Recommendations.

  9. Research on Digital Watermark Using Pre-Processing Technology

    Institute of Scientific and Technical Information of China (English)

    Ru Guo-bao; Niu Hui-fang; Yang Rui; Sun Hong; Shi Hong-ling; Huang Tian-xi

    2003-01-01

    We have realized a watermark embedding system based on audio perceptual masking and brought forward a watermark detection system using pre-processing technology. With this method, the watermark can be detected from the watermarked audio without the original audio. The results indicate that this embedding and detection method is robust: without affecting hearing quality, it can resist attacks such as MPEG compression, filtering and added white noise.

  10. Biosignal data preprocessing: a voice pathology detection application

    Directory of Open Access Journals (Sweden)

    Genaro Daza Santacoloma

    2010-05-01

    A methodology for biosignal data preprocessing is presented. Experiments were mainly carried out with voice signals for automatically detecting pathologies. The proposed methodology is structured around 3 elements: outlier detection, normality verification and distribution transformation. It improved classification performance when basic assumptions about data structure were met. This entailed more accurate detection of voice pathologies and reduced the computational complexity of the classification algorithms. Classification performance improved by 15%.
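The three-element pipeline can be sketched as follows. The concrete choices here (Tukey's IQR fences for outliers, sample skewness as a normality proxy, a log transform for skewed positive data) are plausible stand-ins on our part, not necessarily those used in the paper.

```python
import numpy as np

def preprocess(x):
    """Three-stage biosignal preprocessing sketch:
    1) outlier removal, 2) normality check, 3) distribution transformation."""
    x = np.asarray(x, dtype=float)

    # 1) Outlier detection: drop points outside Tukey's 1.5 * IQR fences.
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    x = x[(x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr)]

    # 2) Normality verification: use sample skewness as a cheap proxy.
    m, s = x.mean(), x.std() + 1e-12
    skew = np.mean(((x - m) / s) ** 3)

    # 3) Distribution transformation: log-transform strongly skewed data.
    if abs(skew) > 1.0 and x.min() > -1.0:
        x = np.log1p(x - x.min())
    return x
```

Feeding the cleaned, roughly Gaussian features to a classifier is what lets distance-based methods meet their distributional assumptions.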

  11. Integration of geometric modeling and advanced finite element preprocessing

    Science.gov (United States)

    Shephard, Mark S.; Finnigan, Peter M.

    1987-01-01

    The structure to a geometry based finite element preprocessing system is presented. The key features of the system are the use of geometric operators to support all geometric calculations required for analysis model generation, and the use of a hierarchic boundary based data structure for the major data sets within the system. The approach presented can support the finite element modeling procedures used today as well as the fully automated procedures under development.

  12. "Apparent Weight": A Concept that Is Confusing and Unnecessary

    Science.gov (United States)

    Bartlett, Albert A.

    2010-01-01

    Two recent articles make prominent use of the concept of "apparent weight." The concept of "apparent weight" leads to two confusing inconsistencies. We need to know that with very little change in our representations, we can give our students an improved understanding of "weight" without ever having to invent the appealing but confusing concept of…

  13. Advertising and Product Confusion: A Case Study of Grapefruit Juice

    OpenAIRE

    Mark G. Brown; Lee, Jonq-Ying; Behr, Robert M.

    1990-01-01

    Demand relationships for two closely related products -- grapefruit juice and grapefruit-juice cocktail -- were estimated from grocery-store scanner data to analyze the contention that consumer confusion exists between the two products. Results suggest confusion may exist, with grapefruit-juice advertising not only increasing the demand for grapefruit juice but also for grapefruit-juice cocktail.

  14. RTI: Court and Case Law--Confusion by Design

    Science.gov (United States)

    Daves, David P.; Walker, David W.

    2012-01-01

    Professional confusion, as well as case law confusion, exists concerning the fidelity and integrity of response to intervention (RTI) as a defensible procedure for identifying children as having a specific learning disability (SLD) under the Individuals with Disabilities Education Act (IDEA). Division is generated because of conflicting mandates…

  15. Meta-levels in design research: Resolving some confusions

    NARCIS (Netherlands)

    Stappers, P.J.; Sleeswijk Visser, F.

    2014-01-01

    Doing design and doing research are related activities. When doing design in a (PhD) research project, a number of confusions pop up. These confusions stem from the fact that most of the basic terms, such as ‘designer’, ‘research’, and ‘product’, have many connotations but not a shared definition. …

  16. Clustering, Seriation, and Subset Extraction of Confusion Data

    Science.gov (United States)

    Brusco, Michael J.; Steinley, Douglas

    2006-01-01

    The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although…

  18. False positives to confusable objects predict medial temporal lobe atrophy.

    Science.gov (United States)

    Kivisaari, Sasa L; Monsch, Andreas U; Taylor, Kirsten I

    2013-09-01

    Animal models agree that the perirhinal cortex plays a critical role in object recognition memory, but qualitative aspects of this mnemonic function are still debated. A recent model claims that the perirhinal cortex is required to recognize the novelty of confusable distractor stimuli, and that damage here results in an increased propensity to judge confusable novel objects as familiar (i.e., false positives). We tested this model in healthy participants and patients with varying degrees of perirhinal cortex damage, i.e., amnestic mild cognitive impairment and very early Alzheimer's disease (AD), using a recognition memory task with confusable and less confusable realistic object pictures, and acquired high-resolution anatomic MRI scans from all participants. Logistic mixed-model behavioral analyses revealed that both patient groups committed more false positives with confusable than less confusable distractors, whereas healthy participants performed comparably in both conditions. A voxel-based morphometry analysis demonstrated that this effect was associated with atrophy of the anteromedial temporal lobe, including the perirhinal cortex. These findings suggest that the human perirhinal cortex, too, recognizes the novelty of confusable objects, consistent with its border position between the hierarchical visual object processing and medial temporal lobe memory systems. This would explain why AD patients exhibit a heightened propensity to commit false positive responses with inherently confusable stimuli.

  19. [Acute confusion syndrome in the hospitalized elderly].

    Science.gov (United States)

    Regazzoni, C J; Aduriz, M; Recondo, M

    2000-01-01

    Our purpose was to determine the in-hospital incidence of delirium among elderly patients, its relation to previous cognitive impairment, and the time between admission and its development. We performed an observational follow-up study in the internal medicine area of a university hospital. We consecutively and prospectively included every patient aged 70 years or older upon admission. Patients with delirium on admission were excluded, as were those taking antipsychotic drugs, those with severe language or hearing impairment, and those transferred from other inpatient facilities. We subsequently eliminated patients whose follow-up had not ended by the time the study concluded, and patients in whom psychosis was diagnosed. Clinical and laboratory data were collected, and patients were prospectively followed until discharge from the hospital, using the Confusion Assessment Method (CAM) for the diagnosis of delirium. We analyzed 61 patients, of whom 13 developed delirium while hospitalized (in-hospital incidence: 21.31%; 95% CI: 11.03-31.59%). Patients with delirium had had lower Mini Mental State scores upon admission (median 17 vs 22; p = 0.001). 58.3% of delirium cases occurred during the first 4 days of hospitalization, without modifying the duration of hospitalization (mean: 10.22 vs 14.38 days; p = NS). We conclude that the incidence of delirium is high among hospitalized elderly patients, especially during the first days and in those with previous cognitive impairment. We suggest that delirium could be an associated disorder in severe diseases among patients with previous cognitive damage.

  20. Review of feed forward neural network classification preprocessing techniques

    Science.gov (United States)

    Asadi, Roya; Kareem, Sameem Abdul

    2014-06-01

    The best feature of artificial intelligent Feed Forward Neural Network (FFNN) classification models is learning from input data through their weights. Data preprocessing and pre-training are the contributing factors in developing efficient techniques for low training time and high classification accuracy. In this study, we investigate and review the powerful preprocessing functions of FFNN models. Currently, initialization of the weights is random, which is the main source of problems. Multilayer auto-encoder networks, the latest technique, are, like other related techniques, unable to solve these problems. Weight Linear Analysis (WLA) combines data pre-processing and pre-training to generate real weights through the use of normalized input values. By using WLA, the FFNN model increases classification accuracy and improves training time, requiring only a single epoch without any training cycles, gradient computation of the mean square error function, or weight updating. The results of comparison and evaluation show that WLA is a powerful technique in the FFNN classification area.

  1. A Survey on Preprocessing Methods for Web Usage Data

    Directory of Open Access Journals (Sweden)

    V.Chitraa

    2010-03-01

    Full Text Available The World Wide Web is a huge repository of web pages and links. It provides an abundance of information for Internet users. The growth of the web is tremendous, as approximately one million pages are added daily. Users' accesses are recorded in web logs. Because of the tremendous usage of the web, log files are growing at a fast rate and their size is becoming huge. Web data mining is the application of data mining techniques to web data. Web usage mining applies mining techniques to log data to extract the behavior of users, which is used in various applications like personalized services, adaptive web sites, customer profiling, prefetching, and creating attractive web sites. Web usage mining consists of three phases: preprocessing, pattern discovery, and pattern analysis. Web log data is usually noisy and ambiguous, and preprocessing is an important process before mining. For discovering patterns, sessions must be constructed efficiently. This paper reviews existing work done in the preprocessing stage. A brief overview of various data mining techniques for discovering patterns, and of pattern analysis, is given. Finally, a glimpse of various applications of web usage mining is also presented.
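    The session construction step mentioned above is commonly implemented as a timeout heuristic: requests from the same user (here approximated by IP) separated by more than ~30 minutes start a new session. A minimal sketch, where the log records, field layout, and the 30-minute threshold are illustrative assumptions rather than anything prescribed by the paper:

```python
from datetime import datetime, timedelta
from collections import defaultdict

SESSION_TIMEOUT = timedelta(minutes=30)  # common heuristic threshold

def sessionize(records):
    """Group (ip, timestamp, url) records into per-user sessions,
    starting a new session after 30 minutes of inactivity."""
    by_ip = defaultdict(list)
    for ip, ts, url in sorted(records, key=lambda r: r[1]):
        by_ip[ip].append((ts, url))
    sessions = []
    for ip, hits in by_ip.items():
        current = [hits[0]]
        for prev, cur in zip(hits, hits[1:]):
            if cur[0] - prev[0] > SESSION_TIMEOUT:
                sessions.append((ip, current))
                current = []
            current.append(cur)
        sessions.append((ip, current))
    return sessions

log = [
    ("10.0.0.1", datetime(2010, 3, 1, 9, 0), "/index.html"),
    ("10.0.0.1", datetime(2010, 3, 1, 9, 5), "/products.html"),
    ("10.0.0.1", datetime(2010, 3, 1, 11, 0), "/index.html"),  # gap > 30 min: new session
    ("10.0.0.2", datetime(2010, 3, 1, 9, 1), "/index.html"),
]
print(len(sessionize(log)))  # 3
```

    Real preprocessing pipelines refine this with referrer information and robot filtering, but the timeout rule is the usual baseline.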

  2. Optimization of miRNA-seq data preprocessing.

    Science.gov (United States)

    Tam, Shirley; Tsao, Ming-Sound; McPherson, John D

    2015-11-01

    The past two decades of microRNA (miRNA) research have solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development of high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign) and normalization methods (counts-per-million, total count scaling, upper quartile scaling, Trimmed Mean of M-values, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments.
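    Two of the simpler normalization methods compared above can be sketched in a few lines of numpy. The toy count matrix and the convention of taking the 75th percentile over nonzero counts are assumptions for illustration, not the study's data or exact definitions:

```python
import numpy as np

def cpm(counts):
    """Counts-per-million: scale each sample (column) by its library size."""
    lib_sizes = counts.sum(axis=0)
    return counts / lib_sizes * 1e6

def upper_quartile(counts):
    """Upper-quartile scaling: divide each sample by its 75th percentile,
    computed here over features with nonzero counts in that sample."""
    scaled = np.empty_like(counts, dtype=float)
    for j in range(counts.shape[1]):
        col = counts[:, j]
        uq = np.percentile(col[col > 0], 75)
        scaled[:, j] = col / uq
    return scaled

# Toy miRNA count matrix: rows = miRNAs, columns = samples.
counts = np.array([[100, 200],
                   [ 50, 100],
                   [  0,  10],
                   [350, 690]], dtype=float)
norm = cpm(counts)
print(norm.sum(axis=0))  # each column now sums to 1e6
```

    Methods such as TMM or quantile normalization are more involved; in practice they come from packages like edgeR or limma rather than being hand-rolled.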

  3. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.

  4. Truncations of random unitary matrices

    CERN Document Server

    Zyczkowski, K; Zyczkowski, Karol; Sommers, Hans-Juergen

    1999-01-01

    We analyze properties of non-Hermitian matrices of size M constructed as square submatrices of unitary (orthogonal) random matrices of size N > M, distributed according to the Haar measure. In this way we define ensembles of random matrices and study the statistical properties of the spectrum located inside the unit circle. In the limit of large matrices, this ensemble is characterized by the ratio M/N. For the truncated CUE we derive analytically the joint density of eigenvalues, from which all correlation functions are easily obtained. For N - M fixed and N → ∞, the universal resonance-width distribution with N - M open channels is recovered.
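    The construction studied here is easy to reproduce numerically: sample a Haar-distributed unitary (via QR of a complex Ginibre matrix with the standard phase correction), truncate it, and check that the truncation's eigenvalues fall inside the unit circle. The sizes N = 64, M = 16 are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n, rng):
    """Sample an n x n unitary from the Haar measure via QR of a
    complex Ginibre matrix, with the standard phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # scale columns so the result is Haar

N, M = 64, 16
U = haar_unitary(N, rng)
T = U[:M, :M]                # M x M truncation of an N x N unitary
eigs = np.linalg.eigvals(T)
print(np.abs(eigs).max())    # strictly below 1: spectrum lies inside the unit circle
```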

  5. Criteria of the Nonsingular H-Matrices

    Institute of Scientific and Technical Information of China (English)

    GAO jian; LIU Futi; HUANG Tingzhu

    2004-01-01

    Nonsingular H-matrices play an important role in the study of matrix theory and in iterative methods for systems of linear equations. How to verify nonsingular H-matrices has long been an open question. In this paper, nonsingular H-matrices are studied by applying diagonally dominant matrices, irreducible diagonally dominant matrices, and comparison matrices, and several practical criteria for identifying nonsingular H-matrices are obtained.
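    As a concrete illustration of the ingredients named above, here is a numpy sketch of the comparison matrix M(A) and of the simplest sufficient criterion (strict diagonal dominance implies a nonsingular H-matrix). The example matrix is made up; the paper's sharper criteria go well beyond this check:

```python
import numpy as np

def comparison_matrix(A):
    """Comparison matrix M(A): |a_ii| on the diagonal, -|a_ij| off it."""
    M = -np.abs(A)
    np.fill_diagonal(M, np.abs(np.diag(A)))
    return M

def is_strictly_diagonally_dominant(A):
    """Sufficient criterion: a strictly diagonally dominant matrix
    is a nonsingular H-matrix."""
    absA = np.abs(A)
    diag = np.diag(absA)
    off = absA.sum(axis=1) - diag
    return bool(np.all(diag > off))

A = np.array([[ 4.0, -1.0,  1.0],
              [ 1.0,  5.0, -2.0],
              [-1.0,  1.0,  3.0]])
print(is_strictly_diagonally_dominant(A))  # True -> nonsingular H-matrix
print(comparison_matrix(A))
```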

  6. Localization of spatially distributed brain sources after a tensor-based preprocessing of interictal epileptic EEG data.

    Science.gov (United States)

    Albera, L; Becker, H; Karfoul, A; Gribonval, R; Kachenoura, A; Bensaid, S; Senhadji, L; Hernandez, A; Merlet, I

    2015-01-01

    This paper addresses the localization of spatially distributed sources from interictal epileptic electroencephalographic data after a tensor-based preprocessing. Justifying the Canonical Polyadic (CP) model of the space-time-frequency and space-time-wave-vector tensors is not an easy task when two or more extended sources have to be localized. On the other hand, the occurrence of several amplitude-modulated spikes originating from the same epileptic region can be used to build a space-time-spike tensor from the EEG data. While the CP model of this tensor appears more justified, the exact computation of its loading matrices can be limited by the presence of highly correlated sources and/or a strong background noise. An efficient extended source localization scheme after the tensor-based preprocessing then has to be set up. Different strategies are thus investigated and compared on realistic simulated data: the "disk algorithm" using a precomputed dictionary of circular patches, a standardized Tikhonov regularization, and a fused LASSO scheme.

  7. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    Energy Technology Data Exchange (ETDEWEB)

    Palit, Mousumi [Department of Electronics and Telecommunication Engineering, Central Calcutta Polytechnic, Kolkata 700014 (India); Tudu, Bipan, E-mail: bt@iee.jusl.ac.in [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Bhattacharyya, Nabarun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Dutta, Ankur; Dutta, Pallab Kumar [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Jana, Arun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Bandyopadhyay, Rajib [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Chatterjee, Anutosh [Department of Electronics and Communication Engineering, Heritage Institute of Technology, Kolkata 700107 (India)

    2010-08-18

    In an electronic tongue, preprocessing of raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.
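    A common multivariate preprocessing step of the kind compared here is autoscaling (mean-centering each sensor channel and scaling to unit variance) before principal component analysis. A minimal numpy sketch with synthetic data standing in for the electronic-tongue measurements:

```python
import numpy as np

def autoscale(X):
    """Center each feature and scale it to unit variance (per column)."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pca_scores(X, n_components=2):
    """Principal component scores via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

rng = np.random.default_rng(1)
# Synthetic sensor matrix: 20 samples x 5 channels with wildly different scales.
X = rng.standard_normal((20, 5)) * [1, 10, 100, 1, 5]
scores = pca_scores(autoscale(X))
print(scores.shape)  # (20, 2)
```

    Without autoscaling, the large-variance channel would dominate the first principal component regardless of its discriminative value, which is exactly why the choice of preprocessing matters for the downstream classifier.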

  9. Generalisations of Fisher Matrices

    Directory of Open Access Journals (Sweden)

    Alan Heavens

    2016-06-01

    Full Text Available Fisher matrices play an important role in experimental design and in data analysis. Their primary role is to make predictions for the inference of model parameters - both their errors and covariances. In this short review, I outline a number of extensions to the simple Fisher matrix formalism, covering a number of recent developments in the field. These are: (a) situations where the data (in the form of (x, y) pairs) have errors in both x and y; (b) modifications to parameter inference in the presence of systematic errors, or through fixing the values of some model parameters; (c) Derivative Approximation for LIkelihoods (DALI) - higher-order expansions of the likelihood surface, going beyond the Gaussian shape approximation; (d) extensions of the Fisher-like formalism, to treat model selection problems with Bayesian evidence.

  11. VanderLaan Circulant Type Matrices

    Directory of Open Access Journals (Sweden)

    Hongyan Pan

    2015-01-01

    Full Text Available Circulant matrices have become satisfactory tools in control methods for modern complex systems. In this paper, VanderLaan circulant type matrices are presented, which include VanderLaan circulant, left circulant, and g-circulant matrices. The nonsingularity of these special matrices is discussed using the surprising properties of VanderLaan numbers. The exact determinants of VanderLaan circulant type matrices are given by structuring transformation matrices, determinants of well-known tridiagonal matrices, and tridiagonal-like matrices. The explicit inverses of these special matrices are obtained by structuring transformation matrices, inverses of known tridiagonal matrices, and quasi-tridiagonal matrices. Three kinds of norms and lower bounds for the spread of VanderLaan circulant and left circulant matrices are given separately. We also obtain the spectral norm of the VanderLaan g-circulant matrix.

  12. Incidence and cause of acute confusion in elderly patients

    Directory of Open Access Journals (Sweden)

    Rejeki A. Rahayu

    2002-03-01

    Full Text Available Acute confusion is a clinical syndrome in the elderly whose diagnosis is made by acute onset of disturbance of consciousness, impairment of cognition, and fluctuating perception, and which has an underlying medical cause, usually a serious medical illness. Acute confusion carries high morbidity and mortality; patients need to stay longer in the hospital and have a higher risk of institutionalization and immobilization. The aim of this study is to determine the incidence of acute confusion in elderly patients and the medical illnesses that most often cause it, through a retrospective study based on the medical records of elderly patients hospitalized in Dr Kariadi hospital from 1998 to 1999. 5407 elderly patients were hospitalized, but only 5191 were analyzed and included in this study. 35% (992 men and 846 women) of the elderly patients had acute confusion on first arrival, and in 7% (197 men and 176 women) acute confusion appeared in the ward. Total acute confusion was 40.89%. The mortality rate was 29% (263 women and 381 men). The three most frequent causes of death were sepsis (10.04%), hemorrhagic stroke (5.11%), and multifactorial causes (4.16%). The top ten diseases causing acute confusion were hepatic encephalopathy, hemorrhagic stroke, sepsis, moderate dehydration due to gastroenteritis, hyponatremia, acute myocardial infarction, pneumonia, urinary tract infection, congestive heart failure, and arrhythmia cordis. (Med J Indones 2002; 11: 30-35) Keywords: acute confusional state, geriatric patients, hospital study

  13. Preprocessing Techniques for High-Efficiency Data Compression in Wireless Multimedia Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junho Park

    2015-01-01

    Full Text Available We have proposed preprocessing techniques for high-efficiency data compression in wireless multimedia sensor networks. To do this, we analyzed the characteristics of multimedia data in the environment of wireless multimedia sensor networks. The proposed techniques use the characteristics of the sensed multimedia data in a first preprocessing stage that deletes the low-priority bits that do not affect image quality. A second preprocessing stage is then performed on the remaining high-priority bits. By performing these two preprocessing stages, it is possible to greatly reduce the multimedia data size. To show the superiority of our techniques, we simulated an existing multimedia data compression scheme with and without our preprocessing techniques. Our experimental results show that the proposed techniques increase the compression ratio while reducing compression operations compared to the existing compression scheme without preprocessing.
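    The first-stage idea (discarding low-priority bits that barely affect perceived quality but hurt compressibility) can be sketched as masking the least significant bits of each 8-bit pixel. The image values and the choice of dropping 2 bits are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def drop_low_priority_bits(pixels, n_bits):
    """Zero out the n least significant bits of each 8-bit sample.
    The result has longer runs of equal values, so it compresses better."""
    mask = 0xFF & ~((1 << n_bits) - 1)
    return pixels & mask

img = np.array([[17, 200], [255, 3]], dtype=np.uint8)
print(drop_low_priority_bits(img, 2))  # 17->16, 200->200, 255->252, 3->0
```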

  14. Polynomial Fibonacci-Hessenberg matrices

    Energy Technology Data Exchange (ETDEWEB)

    Esmaeili, Morteza [Dept. of Mathematical Sciences, Isfahan University of Technology, 84156-83111 Isfahan (Iran, Islamic Republic of)], E-mail: emorteza@cc.iut.ac.ir; Esmaeili, Mostafa [Dept. of Electrical and Computer Engineering, Isfahan University of Technology, 84156-83111 Isfahan (Iran, Islamic Republic of)

    2009-09-15

    A Fibonacci-Hessenberg matrix with Fibonacci polynomial determinant is referred to as a polynomial Fibonacci-Hessenberg matrix. Several classes of polynomial Fibonacci-Hessenberg matrices are introduced. The notion of two-dimensional Fibonacci polynomial array is introduced and three classes of polynomial Fibonacci-Hessenberg matrices satisfying this property are given.
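    One classical Fibonacci-Hessenberg construction (not necessarily one of the paper's polynomial families) is the tridiagonal matrix with 1s on the diagonal and subdiagonal and -1 on the superdiagonal, whose determinant satisfies the Fibonacci recurrence D_n = D_{n-1} + D_{n-2}. A numpy check:

```python
import numpy as np

def fib_hessenberg(n):
    """Tridiagonal (hence Hessenberg) matrix with det equal to F_{n+1}."""
    A = np.eye(n)
    for i in range(n - 1):
        A[i, i + 1] = -1.0
        A[i + 1, i] = 1.0
    return A

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 8):
    det = round(np.linalg.det(fib_hessenberg(n)))
    print(n, det, fib(n + 1))  # determinant equals F_{n+1}
```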

  15. Enhancing Understanding of Transformation Matrices

    Science.gov (United States)

    Dick, Jonathan; Childrey, Maria

    2012-01-01

    With the Common Core State Standards' emphasis on transformations, teachers need a variety of approaches to increase student understanding. Teaching matrix transformations by focusing on row vectors gives students tools to create matrices to perform transformations. This empowerment opens many doors: Students are able to create the matrices for…
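    The row-vector approach described here means points are stored as rows and transformed by right-multiplication, v' = vR. A small numpy sketch with an illustrative 90-degree rotation (the triangle's vertices are made up):

```python
import numpy as np

# Rotation by 90 degrees counterclockwise, written so that ROW vectors
# are transformed by right-multiplication: v' = v @ R maps (x, y) to (-y, x).
R = np.array([[0, 1],
              [-1, 0]])

triangle = np.array([[1, 0],   # each row is one vertex (x, y)
                     [0, 1],
                     [2, 2]])
print(triangle @ R)  # rows become (0,1), (-1,0), (-2,2)
```

    Storing points as rows lets students transform a whole figure with a single matrix product, which is the empowerment the article describes.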

  17. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...
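    The compression hierarchical matrices rely on can be illustrated on a single admissible block: when the source and target intervals are well separated, a smooth kernel block is numerically low-rank and a truncated SVD stores it cheaply. The kernel 1/(x - y), the interval choices, and the rank are illustrative assumptions:

```python
import numpy as np

def truncated_svd_approx(A, rank):
    """Rank-k approximation of a matrix block: store U (m x k) and
    s*V (n x k) instead of the full m x n block."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# A smooth kernel block 1/(x - y) on well-separated intervals is
# numerically low-rank, which is why admissible blocks compress.
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(3.0, 4.0, 60)
A = 1.0 / (x[:, None] - y[None, :])
err = np.linalg.norm(A - truncated_svd_approx(A, 8)) / np.linalg.norm(A)
print(err)  # small relative error despite rank 8 << 60
```

    A full hierarchical-matrix code applies this blockwise, keeping near-diagonal blocks dense, which is what yields the almost-linear storage and arithmetic cost.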

  18. Preprocessing and parameterizing bioimpedance spectroscopy measurements by singular value decomposition.

    Science.gov (United States)

    Nejadgholi, Isar; Caytak, Herschel; Bolic, Miodrag; Batkin, Izmail; Shirmohammadi, Shervin

    2015-05-01

    In several applications of bioimpedance spectroscopy, the measured spectrum is parameterized by being fitted to the Cole equation. However, the extracted Cole parameters seem to be inconsistent from one measurement session to another, which leads to a high standard deviation of the extracted parameters. This inconsistency is modeled with a source of random variations added to the voltage measurement carried out in the time domain. These random variations may originate from biological variations that are irrelevant to the evidence that we are investigating. Yet, they affect the voltage measured by a bioimpedance device, from which the magnitude and phase of impedance are calculated. By means of simulated data, we showed that Cole parameters are highly affected by this type of variation. We further showed that singular value decomposition (SVD) is an effective tool for parameterizing bioimpedance measurements, which results in more consistent parameters than Cole parameters. We propose to apply SVD as a preprocessing method to reconstruct denoised bioimpedance measurements. In order to evaluate the method, we calculated the relative difference between parameters extracted from noisy and clean simulated bioimpedance spectra. Both the mean and standard deviation of this relative difference are shown to effectively decrease when Cole parameters are extracted from preprocessed data rather than from raw measurements. We evaluated the performance of the proposed method in distinguishing three arm positions, in a set of experiments including eight subjects. It is shown that the Cole parameters of different positions are not distinguishable when extracted from raw measurements. However, one arm position can be distinguished based on SVD scores. Moreover, all three positions are shown to be distinguished by two parameters, R0/R∞ and Fc, when Cole parameters are extracted from preprocessed measurements. These results suggest that SVD could be considered as an effective tool for preprocessing and parameterizing bioimpedance spectroscopy measurements.
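    The SVD-based denoising proposed here amounts to reconstructing the stacked measurements from their leading singular components. A numpy sketch on synthetic "spectra" (the rank-2 signal model, noise level, and seed are assumptions for illustration):

```python
import numpy as np

def svd_denoise(X, rank):
    """Reconstruct measurements from the leading singular components,
    discarding the small ones that mostly carry noise."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(2)
# Toy 'spectra': 30 repeated measurements of a rank-2 signal plus noise.
freq = np.linspace(0, 1, 100)
signal = np.outer(rng.random(30), np.sin(2 * np.pi * freq)) \
       + np.outer(rng.random(30), np.cos(2 * np.pi * freq))
noisy = signal + 0.05 * rng.standard_normal(signal.shape)
clean = svd_denoise(noisy, rank=2)
print(np.linalg.norm(clean - signal), np.linalg.norm(noisy - signal))
```

    The reconstruction error of the truncated version is smaller than the raw noise level because the discarded components lie almost entirely in the noise subspace; parameters fitted to `clean` are correspondingly more consistent.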

  19. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract the heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contours detection to reduce heavy noise and get better image quality. To perform that, we combine many pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance low contrast of echocardiograph images, after implementing these techniques we can get legible detection for heart boundaries and valves movement by traditional edge detection methods.

  20. Preprocessing and Analysis of LC-MS-Based Proteomic Data.

    Science.gov (United States)

    Tsai, Tsung-Heng; Wang, Minkun; Ressom, Habtom W

    2016-01-01

    Liquid chromatography coupled with mass spectrometry (LC-MS) has been widely used for profiling protein expression levels. This chapter is focused on LC-MS data preprocessing, which is a crucial step in the analysis of LC-MS based proteomics. We provide a high-level overview, highlight associated challenges, and present a step-by-step example for analysis of data from LC-MS based untargeted proteomic study. Furthermore, key procedures and relevant issues with the subsequent analysis by multiple reaction monitoring (MRM) are discussed.

  1. Effects of preprocessing Landsat MSS data on derived features

    Science.gov (United States)

    Parris, T. M.; Cicone, R. C.

    1983-01-01

    Important to the use of multitemporal Landsat MSS data for earth resources monitoring, such as agricultural inventories, is the ability to minimize the effects of varying atmospheric and satellite viewing conditions while extracting physically meaningful features from the data. In general, approaches to the preprocessing problem have been derived from either physical or statistical models. This paper compares three proposed algorithms: XSTAR haze correction, Color Normalization, and Multiple Acquisition Mean Level Adjustment. These techniques represent physical, statistical, and hybrid physical-statistical models, respectively. The comparisons are made in the context of three feature extraction techniques: the Tasseled Cap, the Cate Color Cube, and the Normalized Difference.
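    Of the three feature extraction techniques named, the Normalized Difference is the simplest to state: a band ratio of the form (NIR - red)/(NIR + red), as in NDVI, which suppresses multiplicative illumination effects. A numpy sketch with made-up band values:

```python
import numpy as np

def normalized_difference(nir, red):
    """Normalized Difference feature: (NIR - red) / (NIR + red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

red = np.array([[30, 60], [90, 120]])
nir = np.array([[90, 60], [30, 240]])
print(normalized_difference(nir, red))  # 0.5, 0.0, -0.5, 1/3
```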

  2. Confusion between vascular malformations and hemangiomas-practical issues

    Directory of Open Access Journals (Sweden)

    Anca Chiriac

    2014-04-01

    Full Text Available A lot of confusion exists in daily practice regarding the terminology of vascular anomalies diagnosed in infants. A hemangioma is a vascular tumor; it is NOT a vascular malformation!

  3. ADDRESSING POLITICAL "CONFUSION SYNDROME" DISCOURSES: A CRITICAL APPLIED LINGUISTICS PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Joseph Ernest Mambu

    2008-01-01

    Full Text Available This paper aims at extending our understanding of a problematizing practice in Critical Applied Linguistics by exploring issues pertaining to political "confusion syndrome" Discourses. Central to this practice is how EFL teachers and learners move past their reluctance to explore political issues. Scaffolded with a working model of such Discourses and a suggested simulation practice, they can, it is hoped, learn how to sympathize with politicians' confusion.

  4. Right word making sense of the words that confuse

    CERN Document Server

    Morrison, Elizabeth

    2012-01-01

    'Affect' or 'effect'? 'Right', 'write' or 'rite'? English can certainly be a confusing language, whether you're a native speaker or learning it as a second language. 'The Right Word' is the essential reference to help people master its subtleties and avoid making mistakes. Divided into three sections, it first examines homophones - those tricky words that sound the same but are spelled differently - then looks at words that often confuse before providing a list of commonly misspelled words.

  5. Estimating sparse precision matrices

    Science.gov (United States)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced in the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity to converge more quickly to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are a factor of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
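    For context, here is the baseline that such methods improve upon: inverting the sample covariance of simulated draws from a Gaussian with a sparse (here tridiagonal) true precision matrix. This sketch shows only the naive estimator and its sampling error, not the paper's sparsity-exploiting method; dimensions, coupling strength, and sample count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# True sparse precision matrix: tridiagonal (nearest-neighbour couplings).
p = 8
Prec = np.eye(p) * 2.0
Prec += np.diag(np.full(p - 1, -0.5), 1) + np.diag(np.full(p - 1, -0.5), -1)
Cov = np.linalg.inv(Prec)

# Draw mock 'simulations' and invert the sample covariance.
n = 500
X = rng.multivariate_normal(np.zeros(p), Cov, size=n)
S = np.cov(X, rowvar=False)
Prec_hat = np.linalg.inv(S)
err = np.linalg.norm(Prec_hat - Prec) / np.linalg.norm(Prec)
print(err)  # shrinks roughly like 1/sqrt(n); sparsity-aware estimators do better
```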

  6. Generating random density matrices

    CERN Document Server

    Zyczkowski, Karol; Nechita, Ion; Collins, Benoit

    2010-01-01

    We study various methods to generate ensembles of quantum density matrices of a fixed size N and analyze the corresponding probability distributions P(x), where x denotes the rescaled eigenvalue, x = Nλ. Taking a random pure state of a two-partite system and performing the partial trace over one subsystem, one obtains a mixed state represented by a Wishart-like matrix W = GG†, distributed according to the induced measure and characterized asymptotically, as N → ∞, by the Marchenko-Pastur distribution. Superposition of k random maximally entangled states leads to another family of explicitly derived distributions, describing singular values of the sum of k independent random unitaries. Taking a larger system composed of 2s particles, constructing s random bi-partite states, performing the measurement into a product of s-1 maximally entangled states and performing the partial trace over the remaining subsystem, we arrive at a random state characterized by the Fuss-Catalan distribution of order...
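    The first construction described (a Wishart-like matrix W = GG† from a complex Ginibre matrix G, normalized to unit trace) is straightforward to sample. The size N = 16 is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_density_matrix(N, rng):
    """Induced-measure random density matrix: rho = G G† / Tr(G G†),
    with G a complex Ginibre matrix; equivalent to partial-tracing a
    random bipartite pure state."""
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    W = G @ G.conj().T
    return W / np.trace(W)

rho = random_density_matrix(16, rng)
eigs = np.linalg.eigvalsh(rho)
print(np.trace(rho).real, eigs.min())  # trace 1, all eigenvalues nonnegative
```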

  7. Graph-theoretical matrices in chemistry

    CERN Document Server

    Janezic, Dusanka; Nikolic, Sonja; Trinajstic, Nenad

    2015-01-01

    Graph-Theoretical Matrices in Chemistry presents a systematic survey of graph-theoretical matrices and highlights their potential uses. This comprehensive volume is an updated, extended version of a former bestseller featured in a series of mathematical chemistry monographs. In this edition, nearly 200 graph-theoretical matrices are included. This second edition is organized like the previous one. After an introduction, graph-theoretical matrices are presented in five chapters: The Adjacency Matrix and Related Matrices, Incidence Matrices, The Distance Matrix and Related Matrices, Special Matrices

  8. Preprocessing of GPR data for syntactic landmine detection and classification

    Science.gov (United States)

    Nasif, Ahmed O.; Hintz, Kenneth J.; Peixoto, Nathalia

    2010-04-01

    Syntactic pattern recognition is being used to detect and classify non-metallic landmines in terms of their range impedance discontinuity profile. This profile, extracted from the ground penetrating radar's return signal, constitutes a high-range-resolution and unique description of the inner structure of a landmine. In this paper, we discuss two preprocessing steps necessary to extract such a profile, namely, inverse filtering (deconvolving) and binarization. We validate the use of an inverse filter to effectively decompose the observed composite signal resulting from the different layers of dielectric materials of a landmine. It is demonstrated that the transmitted radar waveform undergoing multiple reflections with different materials does not change appreciably, and mainly depends on the transmit and receive processing chains of the particular radar being used. Then, a new inversion approach for the inverse filter is presented based on the cumulative contribution of the different frequency components to the original Fourier spectrum. We discuss the tradeoffs and challenges involved in such a filter design. The purpose of the binarization scheme is to localize the impedance discontinuities in range, by assigning a '1' to the peaks of the inverse filtered output, and '0' to all other values. The paper is concluded with simulation results showing the effectiveness of the proposed preprocessing technique.
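
The binarization step described above (assign '1' to peaks of the inverse-filtered output, '0' elsewhere) might be sketched as follows; the threshold value and the toy trace are invented for illustration and are not from the paper:

```python
import numpy as np

def binarize(signal, threshold):
    """Mark local maxima above a threshold with 1, everything else 0 —
    a simple stand-in for peak-based localization of impedance
    discontinuities in range."""
    s = np.asarray(signal, dtype=float)
    out = np.zeros(len(s), dtype=int)
    for i in range(1, len(s) - 1):
        if s[i] > threshold and s[i] >= s[i - 1] and s[i] > s[i + 1]:
            out[i] = 1
    return out

# Toy deconvolved trace with two impedance discontinuities.
trace = np.array([0.1, 0.2, 1.0, 0.3, 0.1, 0.2, 0.9, 0.2, 0.1])
print(binarize(trace, 0.5))  # → [0 0 1 0 0 0 1 0 0]
```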

  9. Hadamard Matrices and Their Applications

    CERN Document Server

    Horadam, K J

    2011-01-01

    In Hadamard Matrices and Their Applications, K. J. Horadam provides the first unified account of cocyclic Hadamard matrices and their applications in signal and data processing. This original work is based on the development of an algebraic link between Hadamard matrices and the cohomology of finite groups that was discovered fifteen years ago. The book translates physical applications into terms a pure mathematician will appreciate, and theoretical structures into ones an applied mathematician, computer scientist, or communications engineer can adapt and use. The first half of the book expl

  10. Source confusion as an explanation of cultivation: a test of the mechanisms underlying confusion of fiction with reality on television.

    Science.gov (United States)

    Koolstra, Cees M

    2007-02-01

    Cultivation studies have found evidence that heavy television viewers adopt a world view congruent with how the world is portrayed in fictional television programs. An explanation is that viewers may remember fictional TV stories as realistic stories or news (fiction-to-news confusion). Until now, fiction-to-news confusion was found only if at least a week elapsed between watching TV and asking viewers what was remembered. The present study, conducted with a purposive sample of students and employees of a college in The Netherlands (N=96; M age = 28.6 yr., SD = 10.9), indicates that fiction-to-news confusions can also occur almost immediately after watching. In addition, whereas earlier research suggests that fiction-to-news confusions are associated with heavy viewing, i.e., more confusion when more hours per day are spent on TV viewing in leisure time, and faulty memory, the present study more specifically suggests that participants make many fiction-to-news confusions when they are exposed to relatively many fictional TV fragments that contain threatening, violent events.

  11. Simple and Effective Way for Data Preprocessing Selection Based on Design of Experiments.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Jansen, Jeroen J; Bart, Jacob; van Manen, Henk-Jan; van den Heuvel, Edwin R; Buydens, Lutgarde M C

    2015-12-15

    The selection of optimal preprocessing is among the main bottlenecks in chemometric data analysis. Preprocessing currently is a burden, since a multitude of different preprocessing methods is available for, e.g., baseline correction, smoothing, and alignment, but it is not clear beforehand which method(s) should be used for which data set. The process of preprocessing selection is often limited to trial-and-error and is therefore considered somewhat subjective. In this paper, we present a novel, simple, and effective approach for preprocessing selection. The defining feature of this approach is a design of experiments. On the basis of the design, model performance of a few well-chosen preprocessing methods, and combinations thereof (called strategies) is evaluated. Interpretation of the main effects and interactions subsequently enables the selection of an optimal preprocessing strategy. The presented approach is applied to eight different spectroscopic data sets, covering both calibration and classification challenges. We show that the approach is able to select a preprocessing strategy which improves model performance by at least 50% compared to the raw data; in most cases, it leads to a strategy very close to the true optimum. Our approach makes preprocessing selection fast, insightful, and objective.
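
A toy version of the design-of-experiments idea (not the authors' actual procedure) can be sketched as a two-factor full-factorial design over hypothetical preprocessing steps, baseline removal and smoothing, applied to synthetic spectra:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(0.0, 1.0, 50)                       # wavelength axis
n = 40
y = rng.uniform(1.0, 2.0, n)                         # hypothetical concentrations
X = np.outer(y, np.exp(-((wl - 0.5) / 0.05) ** 2))   # analyte peak
X += np.outer(rng.uniform(0.0, 5.0, n), wl)          # sloping baselines (nuisance)
X += 0.01 * rng.normal(size=X.shape)                 # measurement noise

def detrend(X):
    """Baseline correction: subtract a straight-line fit from each spectrum."""
    A = np.vstack([wl, np.ones_like(wl)]).T
    coef, *_ = np.linalg.lstsq(A, X.T, rcond=None)
    return X - (A @ coef).T

def smooth(X):
    """5-point moving-average smoothing."""
    k = np.ones(5) / 5
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, X)

peak_idx = np.argmin(np.abs(wl - 0.5))

def rmse(Xp):
    """Calibrate y against the peak-channel intensity; report fit RMSE."""
    A = np.c_[Xp[:, peak_idx], np.ones(n)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

# Full-factorial design over the two preprocessing factors.
results = {}
for use_detrend, use_smooth in itertools.product([False, True], repeat=2):
    Xp = X
    if use_detrend:
        Xp = detrend(Xp)
    if use_smooth:
        Xp = smooth(Xp)
    results[(use_detrend, use_smooth)] = rmse(Xp)
print(results)
```

Interpreting main effects and interactions from such a table is what lets the design single out the dominant preprocessing factor (here, by construction, the baseline removal).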

  12. The Pre-Processing of Images Technique for the Materia

    Directory of Open Access Journals (Sweden)

    Yevgeniy P. Putyatin

    2016-08-01

    The image processing analysis is one of the most powerful tools in various research fields, especially in material/polymer science. In the present article an attempt has therefore been made to study pre-processing techniques for images of material samples taken by a Scanning Electron Microscope (SEM). First we prepared the material samples with coir fibre (natural) and its polymer composite; after that the image analysis was performed by the SEM technique, and the said studies were conducted. The results presented here were found satisfactory and are also in good agreement with our earlier work and that of other workers in the same field.

  13. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

    Modeling tools and operators help the user/developer to identify the processing field at the top of the sequence and to send into the computing module only the data related to the requested result. The remaining data are not relevant and will slow down the processing. The biggest challenge nowadays is to obtain high-quality processing results with reduced computing time and costs. To do so, we must review the processing sequence by adding several modeling tools. The existing processing models do not take this aspect into consideration and focus on raw calculation performance, which increases the computing time and costs. In this paper we provide a study of the main modeling tools for BigData and a new model based on pre-processing.

  14. Pre-processing in AI based Prediction of QSARs

    CERN Document Server

    Patri, Om Prasad

    2009-01-01

    Machine learning, data mining and artificial intelligence (AI) based methods have been used to determine the relations between chemical structure and biological activity, called quantitative structure activity relationships (QSARs) for the compounds. Pre-processing of the dataset, which includes the mapping from a large number of molecular descriptors in the original high dimensional space to a small number of components in the lower dimensional space while retaining the features of the original data, is the first step in this process. A common practice is to use a mapping method for a dataset without prior analysis. This pre-analysis has been stressed in our work by applying it to two important classes of QSAR prediction problems: drug design (predicting anti-HIV-1 activity) and predictive toxicology (estimating hepatocarcinogenicity of chemicals). We apply one linear and two nonlinear mapping methods on each of the datasets. Based on this analysis, we conclude the nature of the inherent relationships betwee...

  15. Digital soil mapping: strategy for data pre-processing

    Directory of Open Access Journals (Sweden)

    Alexandre ten Caten

    2012-08-01

    The region of greatest variability on soil maps is along the edges of their polygons, causing disagreement among pedologists about the appropriate description of soil classes at these locations. The objective of this work was to propose a strategy for data pre-processing applied to digital soil mapping (DSM). Soil polygons on a training map were shrunk by 100 and 160 m. This strategy prevented the use of covariates located near the edges of the soil classes in the Decision Tree (DT) models. Three DT models, derived from eight predictive covariates related to relief and organism factors, sampled on the original polygons of a soil map and on polygons shrunk by 100 and 160 m, were used to predict soil classes. The DT model derived from observations 160 m away from the edges of the polygons on the original map is less complex and has better predictive performance.

  16. Real-Time Rendering of Teeth with No Preprocessing

    DEFF Research Database (Denmark)

    Larsen, Christian Thode; Frisvad, Jeppe Revall; Jensen, Peter Dahl Ejby

    2012-01-01

    We present a technique for real-time rendering of teeth with no need for computational or artistic preprocessing. Teeth constitute a translucent material consisting of several layers: a highly scattering material (dentine) beneath a semitransparent layer (enamel) with a transparent coating (saliva). In this study we examine how light interacts with this multilayered structure. In the past, rendering of teeth has mostly been done using image-based texturing or volumetric scans. We work with surface scans and have therefore developed a simple way of estimating layer thicknesses. We use scattering properties based on measurements reported in the optics literature, and we compare rendered results qualitatively to images of ceramic teeth created by denturists.

  17. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, as well as methods for data preprocessing are covered.

  18. Preprocessing in a Tiered Sensor Network for Habitat Monitoring

    Directory of Open Access Journals (Sweden)

    Hanbiao Wang

    2003-03-01

    We investigate task decomposition and collaboration in a two-tiered sensor network for habitat monitoring. The system recognizes and localizes a specified type of birdcall. The system has a few powerful macronodes in the first tier and many less powerful micronodes in the second tier. Each macronode combines data collected by multiple micronodes for target classification and localization. We describe two types of lightweight preprocessing which significantly reduce data transmission from micronodes to macronodes. Micronodes classify events according to their cross-zero rates and discard irrelevant events. Data about events of interest are reduced and compressed before being transmitted to macronodes for target localization. Preliminary experiments illustrate the effectiveness of event filtering and data reduction at micronodes.
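
The cross-zero-rate filtering idea can be sketched as follows; the acceptance band and the test signals are invented, not taken from the paper:

```python
import numpy as np

def cross_zero_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    frame = np.asarray(frame, dtype=float)
    return float(np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:])))

def is_candidate(frame, lo=0.05, hi=0.35):
    """Keep only events whose cross-zero rate falls in the band expected
    for the target birdcall (band edges here are made-up numbers)."""
    return lo <= cross_zero_rate(frame) <= hi

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
call = np.sin(2 * np.pi * 100 * t)            # ~100 Hz tone: rate near 0.2
hiss = np.sign(np.sin(2 * np.pi * 450 * t))   # high-frequency stand-in: rate near 0.9
print(is_candidate(call), is_candidate(hiss))
```

A micronode would apply such a cheap test per frame and transmit only the surviving events upstream.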

  19. Data acquisition and preprocessing techniques for remote sensing field research

    Science.gov (United States)

    Biehl, L. L.; Robinson, B. F.

    1983-01-01

    A crops and soils data base has been developed at Purdue University's Laboratory for Applications of Remote Sensing using spectral and agronomic measurements made by several government and university researchers. The data are being used to (1) quantitatively determine the relationships of spectral and agronomic characteristics of crops and soils, (2) define future sensor systems, and (3) develop advanced data analysis techniques. Researchers follow defined data acquisition and preprocessing techniques to provide fully annotated and calibrated sets of spectral, agronomic, and meteorological data. These procedures enable the researcher to combine his data with that acquired by other researchers for remote sensing research. The key elements or requirements for developing a field research data base of spectral data that can be transported across sites and years are appropriate experiment design, accurate spectral data calibration, defined field procedures, and thorough experiment documentation.

  20. Radar image preprocessing. [of SEASAT-A SAR data

    Science.gov (United States)

    Frost, V. S.; Stiles, J. A.; Holtzman, J. C.; Held, D. N.

    1980-01-01

    Standard image processing techniques are not applicable to radar images because of the coherent nature of the sensor. Therefore there is a need to develop preprocessing techniques for radar images which will then allow these standard methods to be applied. A random field model for radar image data is developed. This model describes the image data as the result of a multiplicative-convolved process. Standard techniques, i.e., those based on additive noise and homomorphic processing, are not directly applicable to this class of sensor data. Therefore, a minimum mean square error (MMSE) filter was designed to treat this class of sensor data. The resulting filter was implemented in an adaptive format to account for changes in local statistics and edges. A radar image processing technique which provides the MMSE estimate inside homogeneous areas and tends to preserve edge structure was the result of this study. Digitally correlated Seasat-A synthetic aperture radar (SAR) imagery was used to test the technique.
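
A minimal adaptive MMSE despeckling filter in the spirit described above (a generic Lee-type filter, not the authors' exact design) might look like:

```python
import numpy as np

def mmse_despeckle(img, size=3, cv_noise=0.5):
    """Generic Lee-type adaptive MMSE filter for multiplicative speckle.
    cv_noise is the assumed coefficient of variation of the speckle.
    The gain vanishes in homogeneous areas (strong smoothing) and grows
    toward 1 near edges, so edge structure tends to be preserved."""
    img = np.asarray(img, dtype=float)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + size, j:j + size]
            m, v = win.mean(), win.var()
            gain = max(0.0, 1.0 - (cv_noise * m) ** 2 / v) if v > 0 else 0.0
            out[i, j] = m + gain * (img[i, j] - m)
    return out

rng = np.random.default_rng(3)
scene = np.ones((16, 16)); scene[:, 8:] = 4.0            # step edge
speckled = scene * rng.gamma(4.0, 0.25, scene.shape)     # multiplicative noise
filtered = mmse_despeckle(speckled)
```

Inside the homogeneous left half the local variance is mostly noise, so the filter collapses to a local mean; across the step edge the gain rises and the discontinuity survives.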

  1. Multiple Criteria Decision-Making Preprocessing Using Data Mining Tools

    CERN Document Server

    Mosavi, A

    2010-01-01

    Real-life engineering optimization problems need Multiobjective Optimization (MOO) tools. These problems are highly nonlinear. As the process of Multiple Criteria Decision-Making (MCDM) has expanded greatly, most MOO problems in different disciplines can be classified on the basis of it, and MCDM methods have thus gained wide popularity in different sciences and applications. Meanwhile, the increasing number of components, variables, parameters, constraints and objectives involved in the process has made the process very complicated. The new generation of MOO tools has made the optimization process more automated, but initializing the process, setting the initial values of simulation tools, and identifying the effective input variables and objectives in order to reach a smaller design space are still complicated. In this situation, adding a preprocessing step to the MCDM procedure could make a huge difference in terms of organizing the input variables according to their effects on the optimizati...

  2. Preprocessing Solar Images while Preserving their Latent Structure

    CERN Document Server

    Stein, Nathan M; Kashyap, Vinay L

    2015-01-01

    Telescopes such as the Atmospheric Imaging Assembly aboard the Solar Dynamics Observatory, a NASA satellite, collect massive streams of high resolution images of the Sun through multiple wavelength filters. Reconstructing pixel-by-pixel thermal properties based on these images can be framed as an ill-posed inverse problem with Poisson noise, but this reconstruction is computationally expensive and there is disagreement among researchers about what regularization or prior assumptions are most appropriate. This article presents an image segmentation framework for preprocessing such images in order to reduce the data volume while preserving as much thermal information as possible for later downstream analyses. The resulting segmented images reflect thermal properties but do not depend on solving the ill-posed inverse problem. This allows users to avoid the Poisson inverse problem altogether or to tackle it on each of ~10 segments rather than on each of ~10^7 pixels, reducing computing time by a facto...

  3. Prediction of speech intelligibility based on an auditory preprocessing model

    DEFF Research Database (Denmark)

    Christiansen, Claus Forup Corlin; Pedersen, Michael Syskind; Dau, Torsten

    2010-01-01

    Classical speech intelligibility models, such as the speech transmission index (STI) and the speech intelligibility index (SII), are based on calculations on the physical acoustic signals. The present study predicts speech intelligibility by combining a psychoacoustically validated model of auditory preprocessing [Dau et al., 1997. J. Acoust. Soc. Am. 102, 2892-2905] with a simple central stage that describes the similarity of the test signal with the corresponding reference signal at the level of the internal representation of the signals. The model was compared with previous approaches, whereby a speech in noise experiment was used for training and an ideal binary mask experiment was used for evaluation. All three models were able to capture the trends in the speech in noise training data well, but the proposed model provides a better prediction of the binary mask test data, particularly when the binary...

  4. Bayes linear adjustment for variance matrices

    CERN Document Server

    Wilkinson, Darren J

    2008-01-01

    We examine the problem of covariance belief revision using a geometric approach. We exhibit an inner-product space where covariance matrices live naturally --- a space of random real symmetric matrices. The inner-product on this space captures aspects of our beliefs about the relationship between covariance matrices of interest to us, providing a structure rich enough for us to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices, exploiting second-order exchangeability specifications.

  5. Addressing the Philosophical Confusion Regarding Constructivism in Chemical Education

    Science.gov (United States)

    Bernal, Pedro J.

    2006-02-01

    In the Chemical Education Today section of the May 2003 issue of this Journal , Eric Scerri wrote about the consequences of what he regards as a philosophical confusion in the work of constructivist chemical education researchers. This issue has important implications for both the teaching and practice of science. I offer a view of the confusion that places the emphasis on the careless use of philosophical terms that Scerri noted and on the tendency of psychological constructivists to go from psychological premises to unwarranted epistemological conclusions.

  6. Performance of Pre-processing Schemes with Imperfect Channel State Information

    DEFF Research Database (Denmark)

    Christensen, Søren Skovgaard; Kyritsi, Persa; De Carvalho, Elisabeth

    2006-01-01

    Pre-processing techniques have several benefits when the CSI is perfect. In this work we investigate three linear pre-processing filters, assuming imperfect CSI caused by noise degradation and channel temporal variation. Results indicate that the LMMSE filter achieves the lowest BER and the high...

  7. A New Indicator for Optimal Preprocessing and Wavelengths Selection of Near-Infrared Spectra

    NARCIS (Netherlands)

    Skibsted, E.; Boelens, H.F.M.; Westerhuis, J.A.; Witte, D.T.; Smilde, A.K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing

  9. Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration.

    Science.gov (United States)

    Xu, Lu; Zhou, Yan-Ping; Tang, Li-Juan; Wu, Hai-Long; Jiang, Jian-Hui; Shen, Guo-Li; Yu, Ru-Qin

    2008-06-01

    Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noise, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields where researchers are not very familiar with the characteristics of the many preprocessing methods unique to chemometrics and have difficulty selecting the most suitable ones. Another problem is that many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, the ensemble preprocessing method, in which partial least squares (PLS) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information about the data and expertise are required. Moreover, fusion of complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method.
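
A toy version of the ensemble idea, with ordinary least squares standing in for PLS and invented synthetic "preprocessing variants", might look like:

```python
import numpy as np

rng = np.random.default_rng(4)

def fit_ls(X, y):
    """Least-squares fit with an intercept column appended."""
    coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return coef

def predict_ls(coef, X):
    return np.c_[X, np.ones(len(X))] @ coef

# Stand-ins for the same signal after two different preprocessing methods:
# variant A preserves the information well, variant B much less so.
n = 200
y = rng.normal(size=n)
prep_a = (y + 0.3 * rng.normal(size=n)).reshape(-1, 1)
prep_b = (y + 1.0 * rng.normal(size=n)).reshape(-1, 1)

# Monte Carlo cross-validation: accumulate out-of-sample predictions from a
# model per preprocessing variant, then learn stacking weights on them.
P, counts = np.zeros((n, 2)), np.zeros(n)
for _ in range(30):
    test = rng.choice(n, size=n // 4, replace=False)
    train = np.setdiff1d(np.arange(n), test)
    for k, Xp in enumerate((prep_a, prep_b)):
        P[test, k] += predict_ls(fit_ls(Xp[train], y[train]), Xp[test])
    counts[test] += 1
seen = counts > 0
P[seen] /= counts[seen, None]

w = fit_ls(P[seen], y[seen])          # stacking weights (plus intercept)
mse_stack = np.mean((predict_ls(w, P[seen]) - y[seen]) ** 2)
mse_b = np.mean((predict_ls(fit_ls(prep_b, y), prep_b) - y) ** 2)
print(mse_stack, mse_b)
```

The stacking regression automatically down-weights the poorly preprocessed variant, which is the "little prior information required" property the abstract emphasizes.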

  11. Multiplicative equations over commuting matrices

    Energy Technology Data Exchange (ETDEWEB)

    Babai, L. (Univ. of Chicago, IL, United States; Eotvos Univ., Budapest, Hungary); Beals, R. (Rutgers Univ., Piscataway, NJ, United States); Cai, Jin-Yi (SUNY, Buffalo, NY, United States); and others

    1996-12-31

    We consider the solvability of the equation A_1^{x_1} A_2^{x_2} ... A_k^{x_k} = B and generalizations, where the A_i and B are given commuting matrices over an algebraic number field F. In the semigroup membership problem, the variables x_i are constrained to be nonnegative integers. While this problem is NP-complete for variable k, we give a polynomial time algorithm if k is fixed. In the group membership problem, the matrices are assumed to be invertible, and the variables x_i may take on negative values. In this case we give a polynomial time algorithm for variable k and give an explicit description of the set of all solutions (as an affine lattice). The special case of 1 x 1 matrices was recently solved by Guoqiang Ge; we heavily rely on his results.

  12. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  13. Automatic selection of preprocessing methods for improving predictions on mass spectrometry protein profiles.

    Science.gov (United States)

    Pelikan, Richard C; Hauskrecht, Milos

    2010-11-13

    Mass spectrometry proteomic profiling has potential to be a useful clinical screening tool. One obstacle is providing a standardized method for preprocessing the noisy raw data. We have developed a system for automatically determining a set of preprocessing methods among several candidates. Our system's automated nature relieves the analyst of the need to be knowledgeable about which methods to use on any given dataset. Each stage of preprocessing is approached with many competing methods. We introduce metrics which are used to balance each method's attempts to correct noise versus preserving valuable discriminative information. We demonstrate the benefit of our preprocessing system on several SELDI and MALDI mass spectrometry datasets. Downstream classification is improved when using our system to preprocess the data.

  14. Immanant Conversion on Symmetric Matrices

    Directory of Open Access Journals (Sweden)

    Purificação Coelho M.

    2014-01-01

    Let Σ_n(C) denote the space of all n × n symmetric matrices over the complex field C. The main objective of this paper is to prove that the maps Φ : Σ_n(C) → Σ_n(C) satisfying, for any fixed irreducible characters χ, χ', the condition d_χ(A + αB) = d_χ'(Φ(A) + αΦ(B)) for all matrices A, B ∈ Σ_n(C) and all scalars α ∈ C, are automatically linear and bijective. As a corollary of the above result we characterize all such maps Φ acting on Σ_n(C).

  15. Iterative methods for Toeplitz-like matrices

    Energy Technology Data Exchange (ETDEWEB)

    Huckle, T. (Universitaet Wurzburg, Germany)

    1994-12-31

    In this paper the author will give a survey on iterative methods for solving linear equations with Toeplitz matrices, block Toeplitz matrices, Toeplitz-plus-Hankel matrices, and matrices with low displacement rank. He will treat the following subjects: (1) optimal (w)-circulant preconditioners as a generalization of circulant preconditioners; (2) optimal implementation of circulant-like preconditioners in the complex and real case; (3) preconditioning of near-singular matrices: what kind of preconditioners can be used in this case; (4) circulant preconditioning for more general classes of Toeplitz matrices: what can be said about matrices with coefficients that are not l_1-sequences; (5) preconditioners for Toeplitz least squares problems, for block Toeplitz matrices, and for Toeplitz-plus-Hankel matrices.
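
Point (1) builds on circulant preconditioning; a minimal numpy sketch of Strang's circulant preconditioner for a symmetric Toeplitz matrix (the matrix entries here are arbitrary, chosen only to give a decaying, positive-definite example) is:

```python
import numpy as np

def strang_circulant(t_col):
    """First column of Strang's circulant preconditioner for a symmetric
    Toeplitz matrix whose first column is t_col: keep the central
    diagonals, wrap the outer ones around."""
    n = len(t_col)
    c = np.empty(n)
    for k in range(n):
        c[k] = t_col[k] if k <= n // 2 else t_col[n - k]
    return c

def circulant_solve(c, b):
    # Circulants are diagonalized by the FFT, so C x = b costs O(n log n).
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

n = 64
col = np.array([2.0] + [1.0 / (1 + k) ** 2 for k in range(1, n)])
T = np.array([[col[abs(i - j)] for j in range(n)] for i in range(n)])
c = strang_circulant(col)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

b = np.ones(n)
x = circulant_solve(c, b)   # one cheap application of C^{-1}
# The preconditioned matrix should be much better conditioned than T itself.
print(np.linalg.cond(T), np.linalg.cond(np.linalg.inv(C) @ T))
```

In practice C^{-1} would be applied inside a conjugate-gradient iteration; the dense matrices here exist only to check the conditioning.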

  16. Towards less confusing terminology in reproductive medicine: a proposal.

    NARCIS (Netherlands)

    J.D.F. Habbema (Dik); J.A. Collins (John); H. Leridon (Henri); J.L.H. Evers (Johannes); B. LunenFeld; E.R. te Velde (Egbert)

    2004-01-01

    The use of the term "infertility" and related terms in reproductive medicine is reviewed. Current terminology is found to be ambiguous, confusing and misleading. We recommend that the fertility investigation report of a couple should consist of statements concerning description, diagnosis and prognosis.

  18. Confusion in the Periodic Table of the Elements.

    Science.gov (United States)

    Fernelius, W. C.; Powell, W. H.

    1982-01-01

    Discusses long (expanded), short (condensed), and pyramidal periodic table formats and documents events leading to a periodic table in which subgroups (families) are designated with the letters A and B, suggesting that this format is confusing for those consulting the table. (JN)

  19. RTI Confusion in the Case Law and the Legal Commentary

    Science.gov (United States)

    Zirkel, Perry A.

    2011-01-01

    This article expresses the position that the current legal commentary and cases do not sufficiently differentiate response to intervention (RTI) from the various forms of general education interventions that preceded it, thus compounding confusion in professional practice as to legally defensible procedures for identifying children as having a…

  20. Isolated-Word Confusion Metrics and the PGPfone Alphabet

    CERN Document Server

    Juola, P

    1996-01-01

    Although the confusion of individual phonemes and features has been studied and analyzed since Miller and Nicely (1955), there has been little work on extending this to a predictive theory of word-level confusions. The PGPfone alphabet is a good touchstone problem for developing such word-level confusion metrics. This paper presents some difficulties incurred, along with their proposed solutions, in the extension of phonetic confusion results to a theoretical whole-word phonetic distance metric. The proposed solutions have been used, in conjunction with a set of selection filters, in a genetic algorithm to automatically generate appropriate word lists for a radio alphabet. This work illustrates some principles and pitfalls that should be addressed in any numeric theory of isolated word perception.
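
One simple form such a whole-word metric could take, purely as an illustration and not the paper's actual metric, is an edit distance whose substitution costs are reduced for acoustically confusable pairs (the cost table below is made up, loosely inspired by the kinds of confusions Miller and Nicely reported):

```python
from functools import lru_cache

# Hypothetical per-letter substitution costs: confusable pairs are cheap.
CONFUSABLE = {frozenset("mn"): 0.3, frozenset("bd"): 0.4, frozenset("fs"): 0.5}

def sub_cost(a, b):
    if a == b:
        return 0.0
    return CONFUSABLE.get(frozenset((a, b)), 1.0)

def word_distance(w1, w2):
    """Weighted edit distance as a crude whole-word confusability metric:
    low values mean the words are easy to mistake for one another."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0:
            return float(j)
        if j == 0:
            return float(i)
        return min(d(i - 1, j) + 1.0,                     # deletion
                   d(i, j - 1) + 1.0,                     # insertion
                   d(i - 1, j - 1) + sub_cost(w1[i - 1], w2[j - 1]))
    return d(len(w1), len(w2))

# A radio alphabet wants words that are far apart under such a metric.
print(word_distance("mike", "nike"))   # cheap m/n confusion → 0.3
print(word_distance("mike", "golf"))   # no confusable pairs → 4.0
```

A genetic algorithm like the one described above would then maximize pairwise distances of this kind across the candidate word list.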

  1. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon, BP

    2015-07-01

    Presented at IGARSS 2015, Milan, Italy, 26-31 July 2015. B.P. Salmon, W. Kleynhans, C.P. Schwegmann and J.C. Olivier; School of Engineering and ICT, University of Tasmania, Australia; ...

  2. Calvarial Mass Confused With Trichilemmal Cyst: Hepatocellular Cancer Metastasis.

    Science.gov (United States)

    Polat, Gökhan; Sade, Recep

    2017-03-01

    Hepatocellular cancer metastasis to the calvaria is a rare condition that commonly presents as cranial swelling. Calvarial swelling may therefore be confused with more common lesions of the scalp. The authors' patient was operated on for a presumed trichilemmal cyst, but intracranial extension was seen during the operation. Calvarial metastasis of hepatocellular cancer was diagnosed on examination of the patient.

  3. Confusion in the Periodic Table of the Elements.

    Science.gov (United States)

    Fernelius, W. C.; Powell, W. H.

    1982-01-01

    Discusses long (expanded), short (condensed), and pyramidal periodic table formats and documents events leading to a periodic table in which subgroups (families) are designated with the letters A and B, suggesting that this format is confusing for those consulting the table. (JN)

  4. Sign pattern matrices that admit M-, N-, P- or inverse M-matrices

    OpenAIRE

    Araújo, C. Mendes; Torregrosa, Juan R.

    2009-01-01

    In this paper we identify the sign pattern matrices that occur among the N-matrices, the P-matrices and the M-matrices. We also address the class of inverse M-matrices and the related admissibility problem for sign pattern matrices. Fundação para a Ciência e a Tecnologia (FCT) Spanish DGI grant number MTM2007-64477

  5. Hamiltonian formalism and symplectic matrices; Formalisme Hamiltonien et Matrices symplectiques

    Energy Technology Data Exchange (ETDEWEB)

    Bertrand, P. [Project SPIRAL, Grand Accelerateur National d'Ions Lourds, BP 5027, Bd. H. Becquerel, 14076 Caen cedex 5 (France)

    1997-12-31

    This work consists of five sections. The first one introduces the Lagrangian formalism starting from the fundamental equation of dynamics. Sections 2 to 4 are devoted to the Hamiltonian formalism and to symplectic matrices. Lie algebras and groups were avoided, although these notions are very useful if higher order effects have to be investigated. The paper deals with the properties of the transfer matrices describing different electromagnetic objects such as dipoles, quadrupoles, cyclotrons, electrostatic deflectors, spiral inflectors, etc. A remarkable property of the first order exact transfer matrices is symplecticity, which in the case of a 3-D object, described in 6-D phase space, provides 15 non-linear equations relating the matrix coefficients. The symplectic matrices form a non-commutative group under multiplication; consequently the product of n symplectic matrices is still a symplectic matrix. This permits the global description of a system of n objects. Thus, the notion of symplecticity is fundamental for the selection of a given electromagnetic object, for its optimization and for its insertion in a beam transfer line. The symplectic relations indicate that if a given beam characteristic is modified, then another characteristic will be affected, and as a result spurious effects can be limited when a line is to be adjusted. The last section is devoted to the application of the elaborated procedure to describe the drift of non-relativistic and relativistic particles, the dipole and the Muller inflector. Hopefully, this elementary Hamiltonian formalism will help in the familiarization with the symplectic matrices extensively utilized at GANIL. 10 refs.

  6. Preprocessing: A Step in Automating Early Detection of Cervical Cancer

    CERN Document Server

    Das, Abhishek; Bhattacharyya, Debasis

    2011-01-01

    Uterine cervical cancer is one of the most common forms of cancer in women worldwide. Most cases of cervical cancer can be prevented through screening programs aimed at detecting precancerous lesions. During digital colposcopy, colposcopic images or cervigrams are acquired in raw form. They contain specular reflections (SR) which appear as bright spots heavily saturated with white light and occur due to the presence of moisture on the uneven cervix surface. The cervix region occupies about half of the raw cervigram image. Other parts of the image contain irrelevant information, such as equipment, frames, text and non-cervix tissues. This irrelevant information can confuse automatic identification of the tissues within the cervix. Therefore we focus on the cervical borders, so that we have a geometric boundary on the relevant image area. Our novel technique eliminates the SR, identifies the region of interest and makes the cervigram ready for segmentation algorithms.
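The specular-reflection removal step described above can be sketched with a simple brightness/saturation rule: SR pixels are bright and nearly unsaturated (white). The thresholds and the tiny synthetic image below are illustrative assumptions, not the authors' method:

```python
import numpy as np

# Hypothetical specular-reflection (SR) mask: flag pixels that are
# bright (high HSV "value") and nearly colourless (low saturation).
def specular_mask(rgb, v_thresh=0.85, s_thresh=0.15):
    rgb = rgb.astype(float) / 255.0
    v = rgb.max(axis=2)                                        # brightness
    s = (v - rgb.min(axis=2)) / np.maximum(v, 1e-9)            # saturation
    return (v > v_thresh) & (s < s_thresh)

img = np.full((4, 4, 3), 120, dtype=np.uint8)   # dull gray tissue-like patch
img[1, 1] = [250, 248, 252]                     # one near-white glint
mask = specular_mask(img)
print(mask.sum())   # -> 1
```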

  7. Preprocessing for Automating Early Detection of Cervical Cancer

    CERN Document Server

    Das, Abhishek; Bhattacharyya, Debasis

    2011-01-01

    Uterine cervical cancer is one of the most common forms of cancer in women worldwide. Most cases of cervical cancer can be prevented through screening programs aimed at detecting precancerous lesions. During digital colposcopy, colposcopic images or cervigrams are acquired in raw form. They contain specular reflections (SR) which appear as bright spots heavily saturated with white light and occur due to the presence of moisture on the uneven cervix surface. The cervix region occupies about half of the raw cervigram image. Other parts of the image contain irrelevant information, such as equipment, frames, text and non-cervix tissues. This irrelevant information can confuse automatic identification of the tissues within the cervix. Therefore we focus on the cervical borders, so that we have a geometric boundary on the relevant image area. Our novel technique eliminates the SR, identifies the region of interest and makes the cervigram ready for segmentation algorithms.

  8. ASAP: an environment for automated preprocessing of sequencing data

    Directory of Open Access Journals (Sweden)

    Torstenson Eric S

    2013-01-01

    Full Text Available Abstract Background Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. It is a daunting task to process the data from raw sequence reads to variant calls, and manually processing these data can significantly delay downstream analysis and increase the possibility of human error. The research community has produced tools to properly prepare sequence data for analysis and established guidelines on how to apply those tools to achieve the best results; however, existing pipeline programs to automate the process through its entirety are either inaccessible to investigators, or web-based and require a certain amount of administrative expertise to set up. Findings Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls with the goal of minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines with minimal human involvement and maintains high data integrity, while allowing complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs. It also provides tools for quality checking and for dividing jobs into pieces for maximum throughput. Conclusions ASAP provides an environment for building an automated pipeline for NGS data preprocessing. This environment is flexible for use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.

  9. Breast image pre-processing for mammographic tissue segmentation.

    Science.gov (United States)

    He, Wenda; Hogg, Peter; Juette, Arne; Denton, Erika R E; Zwiggelaar, Reyer

    2015-12-01

    During mammographic image acquisition, a compression paddle is used to even the breast thickness in order to obtain optimal image quality. Clinical observation has indicated that some mammograms may exhibit abrupt intensity change and low visibility of tissue structures in the breast peripheral areas. Such appearance discrepancies can affect image interpretation and may not be desirable for computer aided mammography, leading to incorrect diagnosis and/or detection which can have a negative impact on sensitivity and specificity of screening mammography. This paper describes a novel mammographic image pre-processing method to improve image quality for analysis. An image selection process is incorporated to better target problematic images. The processed images show improved mammographic appearances not only in the breast periphery but also across the mammograms. Mammographic segmentation and risk/density classification were performed to facilitate a quantitative and qualitative evaluation. When using the processed images, the results indicated more anatomically correct segmentation in tissue specific areas, and subsequently better classification accuracies were achieved. Visual assessments were conducted in a clinical environment to determine the quality of the processed images and the resultant segmentation. The developed method has shown promising results. It is expected to be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  10. Adaptive preprocessing algorithms of corneal topography in polar coordinate system

    Institute of Scientific and Technical Information of China (English)

    郭雁文

    2014-01-01

    New adaptive preprocessing algorithms based on the polar coordinate system were put forward to obtain high-precision corneal topography calculation results. Adaptive locating algorithms for the concentric circle center were created to accurately capture the circle center of the original Placido-based image, expand the image into a matrix centered on the circle center, and convert the matrix into the polar coordinate system with the circle center as pole. Adaptive image smoothing was then applied, and the characteristics of useful circles were extracted via horizontal edge detection, based on the fact that useful circles appear as approximately horizontal lines while noise signals appear as vertical lines or lines at other angles. Effective combinations of different morphological operators were designed to remedy data loss caused by noise disturbances and to obtain complete circle-edge detection images that satisfy the requirements of precise calculation of the follow-up parameters. The experimental data show that the algorithms meet the requirements of practical detection, with the characteristics of less data loss, higher data accuracy and easier availability.
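The expansion of the image around the detected center into polar coordinates can be sketched as below; the nearest-neighbour sampling and the synthetic ring image are illustrative assumptions, not the paper's exact algorithm. A concentric ring maps to a roughly horizontal line, which is what the horizontal edge detection above relies on:

```python
import numpy as np

# Resample an image onto a (radius x angle) polar grid around a given
# center using nearest-neighbour lookup, "unrolling" Placido rings.
def to_polar(img, center, n_r, n_theta):
    cy, cx = center
    r = np.linspace(0, min(img.shape) // 2 - 1, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[y, x]                      # shape (n_r, n_theta)

# A bright concentric ring of radius 20 becomes a horizontal stripe.
yy, xx = np.mgrid[0:64, 0:64]
ring = (np.abs(np.hypot(yy - 32, xx - 32) - 20) < 1.5).astype(float)
polar = to_polar(ring, (32, 32), 31, 90)
print(polar.shape)
```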

  11. Multimodal image fusion with SIMS: Preprocessing with image registration.

    Science.gov (United States)

    Tarolli, Jay Gage; Bloom, Anna; Winograd, Nicholas

    2016-06-14

    In order to utilize complementary imaging techniques to supply higher resolution data for fusion with secondary ion mass spectrometry (SIMS) chemical images, there are a number of aspects that, if not given proper consideration, could produce results which are easy to misinterpret. One of the most critical aspects is that the two input images must be of the exact same analysis area. With the desire to explore new higher resolution data sources that exist outside of the mass spectrometer, this requirement becomes even more important. To ensure that two input images are of the same region, an implementation of the insight segmentation and registration toolkit (ITK) was developed to act as a preprocessing step before performing image fusion. This implementation of ITK allows for several degrees of movement between two input images to be accounted for, including translation, rotation, and scale transforms. First, the implementation was confirmed to accurately register two multimodal images by supplying a known transform. Once validated, two model systems, a copper mesh grid and a group of RAW 264.7 cells, were used to demonstrate the use of the ITK implementation to register a SIMS image with a microscopy image for the purpose of performing image fusion.
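As a toy stand-in for the registration preprocessing described above (ITK additionally handles rotation and scale), translation-only alignment can be sketched with phase correlation:

```python
import numpy as np

# Phase correlation: the normalized cross-power spectrum of two images
# that differ by a circular shift is a pure phase ramp, whose inverse
# FFT is a delta peak at the shift. Returns the (dy, dx) by which to
# roll `moving` so that it lands back on `fixed`.
def register_translation(fixed, moving):
    F = np.fft.fft2(fixed)
    G = np.fft.fft2(moving)
    cross = F * np.conj(G)
    corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), 1e-12))
    shift = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Interpret shifts beyond the midpoint as negative (wrap-around).
    return tuple(s - n if s > n // 2 else s for s, n in zip(shift, corr.shape))

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(fixed, shift=(-3, 5), axis=(0, 1))  # known displacement
print(register_translation(fixed, moving))           # -> (3, -5)
```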

  12. Software for Preprocessing Data from Rocket-Engine Tests

    Science.gov (United States)

    Cheng, Chiu-Fu

    2004-01-01

    Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris). EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PV-WAVE based plotting software.
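The EUGEN step can be sketched as applying per-channel calibration polynomials to raw voltages; the channel names and coefficient values below are invented for illustration, not SSC's actual calibration data:

```python
import numpy as np

# Hypothetical per-channel calibration table: polynomial coefficients,
# low-order term first, mapping volts to engineering units.
CAL = {
    "chamber_pressure": [0.0, 500.0],       # psi  = 0.0 + 500.0 * V
    "fuel_temp": [-40.0, 25.0, 0.1],        # degC = -40 + 25*V + 0.1*V^2
}

def to_engineering_units(channel, volts):
    coeffs = CAL[channel]
    v = np.asarray(volts, dtype=float)
    return sum(c * v ** i for i, c in enumerate(coeffs))

print(to_engineering_units("chamber_pressure", [0.0, 1.0, 2.5]))
```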

  13. Nonlinear preprocessing method for detecting peaks from gas chromatograms

    Directory of Open Access Journals (Sweden)

    Min Hyeyoung

    2009-11-01

    Full Text Available Abstract Background The problem of locating valid peaks in data corrupted by noise frequently arises while analyzing experimental data. In various biological and chemical data analysis tasks, peak detection thus constitutes a critical preprocessing step that greatly affects downstream analysis and the eventual quality of experiments. Many existing techniques require the users to adjust parameters by trial and error, which is error-prone, time-consuming and often leads to incorrect analysis results. Worse, conventional approaches tend to report an excessive number of false alarms by finding fictitious peaks generated by mere noise. Results We have designed a novel peak detection method that can significantly reduce parameter sensitivity, yet provide excellent peak detection performance and negligible false alarm rates on gas chromatographic data. The key feature of our new algorithm is the successive use of peak enhancement algorithms that are deliberately designed for a gradual improvement of peak detection quality. We tested our approach with real gas chromatograms as well as intentionally contaminated spectra that contain Gaussian or speckle-type noise. Conclusion Our results demonstrate that the proposed method can achieve near perfect peak detection performance while maintaining very small false alarm probabilities in the case of gas chromatograms. Given that biological signals appear in the form of peaks in various experimental data and that the proposed method can easily be extended to such data, our approach will be a useful and robust tool that can help researchers highlight valid signals in their noisy measurements.
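The smoothing-then-thresholding idea can be sketched in a single pass (the paper applies successive enhancement steps; the window size and noise-based threshold rule here are illustrative assumptions):

```python
import numpy as np

# Smooth a noisy trace with a moving average, then keep local maxima
# that rise above mean + n_sigma * std of the smoothed signal.
def find_peaks(signal, window=5, n_sigma=3.0):
    kernel = np.ones(window) / window
    smooth = np.convolve(signal, kernel, mode="same")
    thresh = smooth.mean() + n_sigma * smooth.std()
    interior = smooth[1:-1]
    is_max = (interior > smooth[:-2]) & (interior > smooth[2:])
    return np.where(is_max & (interior > thresh))[0] + 1

# Synthetic chromatogram: one Gaussian peak at x = 3 plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 500)
trace = np.exp(-0.5 * ((x - 3) / 0.1) ** 2) + 0.02 * rng.standard_normal(500)
peaks = find_peaks(trace)
print(peaks)   # indices clustered near sample 150 (x = 3)
```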

  14. Visualisation and pre-processing of peptide microarray data.

    Science.gov (United States)

    Reilly, Marie; Valentini, Davide

    2009-01-01

    The data files produced by digitising peptide microarray images contain detailed information on the location, feature, response parameters and quality of each spot on each array. In this chapter, we will describe how such peptide microarray data can be read into the R statistical package and pre-processed in preparation for subsequent comparative or predictive analysis. We illustrate how the information in the data can be visualised using images and graphical displays that highlight the main features, enabling the quality of the data to be assessed and invalid data points to be identified and excluded. The log-ratio of the foreground to background signal is used as a response index. Negative control responses serve as a reference against which "detectable" responses can be defined, and slides incubated with only buffer and secondary antibody help identify false-positive responses from peptides. For peptides that have a detectable response on at least one subarray, and no false-positive response, we use linear mixed models to remove artefacts due to the arrays and their architecture. The resulting normalized responses provide the input data for further analysis.
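The response index and detectability rule described above can be sketched as follows; the mean-plus-two-standard-deviations cutoff is an illustrative convention, and the signal values are invented:

```python
import numpy as np

# Response index: log-ratio of foreground to background signal.
def response_index(fg, bg):
    return np.log2(np.asarray(fg, float) / np.asarray(bg, float))

# Negative controls define the reference distribution of responses.
neg_controls = response_index([210, 195, 205, 200], [200, 200, 200, 200])
cutoff = neg_controls.mean() + 2 * neg_controls.std()

# Flag peptide spots whose response exceeds the control-based cutoff.
peptides = response_index([800, 205, 1600], [200, 200, 200])
detectable = peptides > cutoff
print(detectable)   # -> [ True False  True]
```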

  15. Fractal Structure of Random Matrices

    CERN Document Server

    Hussein, M S

    2000-01-01

    A multifractal analysis is performed on the universality classes of random matrices and the transition ones. Our results indicate that the eigenvector probability distribution is a linear sum of two chi-squared distributions throughout the transition between the universality ensembles of random matrix theory and the Poisson ensemble.

  16. Open string fields as matrices

    Science.gov (United States)

    Kishimoto, Isao; Masuda, Toru; Takahashi, Tomohiko; Takemoto, Shoko

    2015-03-01

    We show that the action expanded around Erler-Maccaferri's N D-brane solution describes the N+1 D-brane system where one D-brane disappears due to tachyon condensation. String fields on multi-branes can be regarded as block matrices of a string field on a single D-brane in the same way as matrix theories.

  17. Open String Fields as Matrices

    CERN Document Server

    Kishimoto, Isao; Takahashi, Tomohiko; Takemoto, Shoko

    2014-01-01

    We show that the action expanded around Erler-Maccaferri's N D-brane solution describes the N+1 D-brane system where one D-brane disappears due to tachyon condensation. String fields on the multi-branes can be regarded as block matrices of a string field on a single D-brane in the same way as matrix theories.

  18. Arnold's Projective Plane and r-Matrices

    Directory of Open Access Journals (Sweden)

    K. Uchino

    2010-01-01

    Full Text Available We will explain Arnold's 2-dimensional (2D, for short) projective geometry (Arnold, 2005) by means of lattice theory. It will be shown that the projection of the set of nontrivial triangular r-matrices is the pencil of tangent lines of a quadratic curve on Arnold's projective plane.

  19. Fibonacci Identities, Matrices, and Graphs

    Science.gov (United States)

    Huang, Danrun

    2005-01-01

    General strategies used to help discover, prove, and generalize identities for Fibonacci numbers are described along with some properties about the determinants of square matrices. A matrix proof for identity (2) that has received immense attention from many branches of mathematics, like linear algebra, dynamical systems, graph theory and others…
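The kind of matrix proof the article describes rests on the classical fact that powers of the Fibonacci Q-matrix contain consecutive Fibonacci numbers; taking determinants then yields Cassini's identity:

```python
import numpy as np

# Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so det(Q)^n = det(Q^n)
# gives Cassini's identity F(n+1)F(n-1) - F(n)^2 = (-1)^n.
Q = np.array([[1, 1], [1, 0]])

M = np.linalg.matrix_power(Q, 10)
print(M)            # [[89 55] [55 34]] = [[F(11) F(10)] [F(10) F(9)]]
cassini = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
print(cassini)      # (-1)^10 = 1
```

For large n, Python integers (dtype=object) avoid the int64 overflow that sets in around n = 92.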

  20. Scattering matrices with block symmetries

    OpenAIRE

    Życzkowski, Karol

    1997-01-01

    Scattering matrices with block symmetry, which correspond to scattering processes in cavities with geometrical symmetry, are analyzed. The distribution of the transmission coefficient is computed for different numbers of channels for systems with and without time reversal invariance. An interpolating formula for the case of gradual time reversal symmetry breaking is proposed.

  1. Making almost commuting matrices commute

    Energy Technology Data Exchange (ETDEWEB)

    Hastings, Matthew B [Los Alamos National Laboratory

    2008-01-01

    Suppose two Hermitian matrices A, B almost commute (‖[A,B]‖ ≤ δ). Are they close to a commuting pair of Hermitian matrices, A', B', with ‖A−A'‖, ‖B−B'‖ ≤ ε? A theorem of H. Lin shows that this is uniformly true, in that for every ε > 0 there exists a δ > 0, independent of the size N of the matrices, for which almost commuting implies being close to a commuting pair. However, this theorem does not specify how δ depends on ε. We give uniform bounds relating δ and ε. The proof is constructive, giving an explicit algorithm to construct A' and B'. We provide tighter bounds in the case of block tridiagonal and tridiagonal matrices. Within the context of quantum measurement, this implies an algorithm to construct a basis in which we can make a projective measurement that approximately measures two approximately commuting operators simultaneously. Finally, we comment briefly on the case of approximately measuring three or more approximately commuting operators using POVMs (positive operator-valued measures) instead of projective measurements.
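The objects in the abstract are easy to probe numerically. The sketch below is not Hastings' algorithm: it uses the simpler "pinching" construction, which works here because A is chosen with well-separated eigenvalues:

```python
import numpy as np

# For Hermitian A, B with small ||[A, B]||, averaging B over the spectral
# projectors of A ("pinching") yields B' that exactly commutes with A and,
# when A's eigenvalue gaps are large, stays close to B.
rng = np.random.default_rng(2)
n = 6
A = np.diag(np.arange(n, dtype=float))            # well-separated spectrum
E = rng.standard_normal((n, n))
E = (E + E.T) / 2                                 # Hermitian perturbation
B = np.diag(rng.standard_normal(n)) + 1e-3 * E    # nearly commutes with A

print(np.linalg.norm(A @ B - B @ A, 2))           # small commutator norm

w, V = np.linalg.eigh(A)
Bp = V @ np.diag(np.diag(V.conj().T @ B @ V)) @ V.conj().T  # pinched B
print(np.linalg.norm(A @ Bp - Bp @ A, 2))         # exactly zero commutator
print(np.linalg.norm(B - Bp, 2))                  # B' remains close to B
```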

  2. Skills Underlying Coloured Progressive Matrices

    Science.gov (United States)

    Kirby, J. R.; Das, J. P.

    1978-01-01

    Raven's Coloured Progressive Matrices and a battery of ability tests were administered to a sample of 104 male fourth graders for purposes of investigating the relationships between 2 previously identified subscales of the Raven and the ability tests. Results indicated use of a spatial strategy and to a lesser extent, use of reasoning, indicating…

  3. The diagonalization of cubic matrices

    Science.gov (United States)

    Cocolicchio, D.; Viggiano, M.

    2000-08-01

    This paper is devoted to analysing the problem of the diagonalization of cubic matrices. We extend the familiar algebraic approach which is based on the Cardano formulae. We rewrite the complex roots of the associated resolvent secular equation in terms of transcendental functions and we derive the diagonalizing matrix.

  4. Spectral problems for operator matrices

    NARCIS (Netherlands)

    Bátkai, A.; Binding, P.; Dijksma, A.; Hryniv, R.; Langer, H.

    2005-01-01

    We study spectral properties of 2 × 2 block operator matrices whose entries are unbounded operators between Banach spaces and with domains consisting of vectors satisfying certain relations between their components. We investigate closability in the product space, essential spectra and generation of

  5. 数据挖掘中的数据预处理%Data Preprocessing in Data Mining

    Institute of Scientific and Technical Information of China (English)

    刘明吉; 王秀峰; 黄亚楼

    2000-01-01

    Data Mining (DM) is a new hot research point in the database area. Because real-world data are not ideal, it is necessary to do some data preprocessing to meet the requirements of DM algorithms. In this paper, we discuss the procedure of data preprocessing and present the work of data preprocessing in detail. We also discuss the methods and technologies used in data preprocessing.
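Two preprocessing steps commonly covered in such surveys, mean imputation of missing values and min-max normalization, can be sketched as follows (the toy data and the choice of these two particular steps are illustrative):

```python
import numpy as np

# Fill NaNs with the column mean, then rescale each column to [0, 1].
def preprocess(X):
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(X))
    X[nan_rows, nan_cols] = np.take(col_means, nan_cols)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

raw = [[1.0, 200.0], [2.0, np.nan], [3.0, 400.0]]
print(preprocess(raw))   # -> [[0. 0.] [0.5 0.5] [1. 1.]]
```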

  6. F-matrices%F-矩阵

    Institute of Scientific and Technical Information of China (English)

    张晓东; 杨尚骏

    2001-01-01

    We investigate a class of P0-matrices, called F-matrices, which contains three classes of matrices that are important in both theory and applications and satisfy Hadamard's inequality and Fischer's inequality: positive semidefinite symmetric matrices, M-matrices and totally nonnegative matrices. We first prove some interesting properties of F-matrices; in particular, we give the necessary and sufficient condition for an n×n F-matrix A to satisfy det A = a11…ann. We then investigate inverse F-matrices and prove that both inverse M-matrices and inverse totally nonnegative matrices are F-matrices, and hence satisfy Fischer's inequality. Finally, we introduce a new subclass of F-matrices, the W-matrices, and prove that both W-matrices and inverse W-matrices are also F-matrices.

  7. STABILITY FOR SEVERAL TYPES OF INTERVAL MATRICES

    Institute of Scientific and Technical Information of China (English)

    Nian Xiaohong; Gao Jintai

    1999-01-01

    The robust stability of some types of time-varying interval matrices and nonlinear time-varying interval matrices is considered, and some sufficient conditions for robust stability of such interval matrices are given. The main results of this paper depend only on the vertex set of the interval matrices and can therefore be easily applied to test the robust stability of interval matrices. Finally, some examples are given to illustrate the results.

  8. Eigenvalue variance bounds for covariance matrices

    OpenAIRE

    Dallaporta, Sandrine

    2013-01-01

    This work is concerned with finite range bounds on the variance of individual eigenvalues of random covariance matrices, both in the bulk and at the edge of the spectrum. In a preceding paper, the author established analogous results for Wigner matrices and stated the results for covariance matrices. They are proved in the present paper. Relying on the LUE example, which needs to be investigated first, the main bounds are extended to complex covariance matrices by means of the Tao, Vu and Wan...

  9. The Bessel Numbers and Bessel Matrices

    Institute of Scientific and Technical Information of China (English)

    Sheng Liang YANG; Zhan Ke QIAO

    2011-01-01

    In this paper, using exponential Riordan arrays, we investigate the Bessel numbers and Bessel matrices. By exploring links between the Bessel matrices, the Stirling matrices and the degenerate Stirling matrices, we show that the Bessel numbers are a special case of the degenerate Stirling numbers, and derive explicit formulas for the Bessel numbers in terms of the Stirling numbers and binomial coefficients.

  10. Automated Pre-processing for NMR Assignments with Reduced Tedium

    Energy Technology Data Exchange (ETDEWEB)

    2004-05-11

    An important rate-limiting step in the resonance assignment process is accurate identification of resonance peaks in NMR spectra. NMR spectra are noisy. Hence, automatic peak-picking programs must navigate between the Scylla of reliable but incomplete picking, and the Charybdis of noisy but complete picking. Each of these extremes complicates the assignment process: incomplete peak-picking results in the loss of essential connectivities, while noisy picking conceals the true connectivities under a combinatorial explosion of false positives. Intermediate processing can simplify the assignment process by preferentially removing false peaks from noisy peak lists. This is accomplished by requiring consensus between multiple NMR experiments, exploiting a priori information about NMR spectra, and drawing on empirical statistical distributions of chemical shifts extracted from the BioMagResBank. Experienced NMR practitioners currently apply many of these techniques "by hand", which is tedious, and may appear arbitrary to the novice. To increase efficiency, we have created a systematic and automated approach to this process, known as APART. Automated pre-processing has three main advantages: reduced tedium, standardization, and pedagogy. In the hands of experienced spectroscopists, the main advantage is reduced tedium (a rapid increase in the ratio of true peaks to false peaks with minimal effort). When a project is passed from hand to hand, the main advantage is standardization. APART automatically documents the peak filtering process by archiving its original recommendations, the accompanying justifications, and whether a user accepted or overrode a given filtering recommendation. In the hands of a novice, this tool can reduce the stumbling block of learning to differentiate between real peaks and noise, by providing real-time examples of how such decisions are made.

  11. Spatial-spectral preprocessing for endmember extraction on GPU's

    Science.gov (United States)

    Jimenez, Luis I.; Plaza, Javier; Plaza, Antonio; Li, Jun

    2016-10-01

    Spectral unmixing is focused on the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. Mainly focused on the spectral information contained in the hyperspectral images, endmember extraction techniques have recently included spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including the spectral-spatial endmember extraction (SSEE) where, within a preprocessing step in the technique, both sources of information are extracted from the hyperspectral image and equally used for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) local eigenvector calculation in each sub-region into which the original hyperspectral image is divided; 2) computation of the maxima and minima projections of all eigenvectors over the entire hyperspectral image in order to obtain a set of candidate pixels; 3) expansion and averaging of the signatures of the candidate set; 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA) or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest-generation hardware accelerators such as commodity graphics processing units (GPUs) offer a good chance of improving the computational performance in this context. In this paper, we develop two different implementations of the SSEE algorithm using GPUs. Both are based on the eigenvector computation within each sub-region of the first step, one using the singular value decomposition (SVD) and another one using principal component analysis (PCA). Based ...
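The first SSEE step described above, local eigenvector calculation per sub-region, can be sketched with the SVD variant (toy sizes, CPU-only numpy; the paper's contribution is mapping this computation to GPUs):

```python
import numpy as np

# For each spatial block of a hyperspectral cube (rows x cols x bands),
# compute the top-k local spectral eigenvectors from the SVD of the
# mean-centred pixel matrix.
def local_eigenvectors(cube, block=8, k=2):
    rows, cols, bands = cube.shape
    out = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            pixels = cube[r:r + block, c:c + block].reshape(-1, bands)
            centred = pixels - pixels.mean(axis=0)
            # Right singular vectors = eigenvectors of the local covariance.
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            out.append(vt[:k])
    return np.array(out)                  # (n_blocks, k, bands)

cube = np.random.default_rng(3).random((16, 16, 20))
eigs = local_eigenvectors(cube)
print(eigs.shape)                         # -> (4, 2, 20)
```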

  12. Spectral Line De-confusion in an Intensity Mapping Survey

    CERN Document Server

    Cheng, Yun-Ting; Bock, James; Bradford, C Matt; Cooray, Asantha

    2016-01-01

    Spectral line intensity mapping has been proposed as a promising tool to efficiently probe the cosmic reionization and the large-scale structure. Without detecting individual sources, line intensity mapping makes use of all available photons and measures the integrated light in the source confusion limit, to efficiently map the three-dimensional matter distribution on large scales as traced by a given emission line. One particular challenge is the separation of desired signals from astrophysical continuum foregrounds and line interlopers. Here we present a technique to extract large-scale structure information traced by emission lines from different redshifts, embedded in a three-dimensional intensity mapping data cube. The line redshifts are distinguished by the anisotropic shape of the power spectra when projected onto a common coordinate frame. We consider the case where high-redshift [CII] lines are confused with multiple low-redshift CO rotational lines. We present a semi-analytic model for [CII] and CO ...

  13. Quantum Hilbert matrices and orthogonal polynomials

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Berg, Christian

    2009-01-01

    Using the notion of quantum integers associated with a complex number q ≠ 0, we define the quantum Hilbert matrix and various extensions. They are Hankel matrices corresponding to certain little q-Jacobi polynomials when |q| < 1, and for a special value of q they are closely related to Hankel matrices of reciprocal Fibonacci numbers called Filbert matrices. We find a formula for the entries of the inverse quantum Hilbert matrix.
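At q = 1 the quantum Hilbert matrix reduces to the classical Hilbert matrix H[i, j] = 1/(i + j + 1), whose inverse famously has integer entries; a quick numerical check of that special case (the paper's formula covers general q):

```python
import numpy as np

# Build the 4x4 Hilbert matrix and verify its inverse is integral.
n = 4
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
Hinv = np.linalg.inv(H)
print(np.round(Hinv).astype(int))   # all entries are integers
```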

  14. Simultaneous diagonalization of two quaternion matrices

    Institute of Scientific and Technical Information of China (English)

    Zhou Jianhua

    2003-01-01

    The simultaneous diagonalization by congruence of pairs of Hermitian quaternion matrices is discussed. The problem is reduced to a parallel one on complex matrices by using the complex adjoint matrix related to each quaternion matrix. It is proved that any two semi-positive definite Hermitian quaternion matrices can be simultaneously diagonalized by congruence.

  15. A Confusing Coincidence: Neonatal Hypoglycemic Seizures and Hyperekplexia

    Directory of Open Access Journals (Sweden)

    Nihat Demir

    2014-01-01

    Full Text Available Hyperekplexia is a rare, nonepileptic, genetic or sporadic neurologic disorder characterized by startle responses to acoustic, optic, or tactile stimuli. Genetic defects in glycine receptors, as well as encephalitis, tumors, inflammation, and dysgenesis, are among the etiologic causes of the disease. The main problem in hyperekplexia is the incomplete development of inhibitory mechanisms or exaggerated stimulation of excitatory mediators. Hyperekplexia is often confused with epileptic seizures. Here we present a case with hypoglycemic convulsions coexisting with hyperekplexia, causing diagnostic difficulty.

  16. Spectral Confusion for Cosmological Surveys of Redshifted C II Emission

    Science.gov (United States)

    Kogut, A.; Dwek, E.; Moseley, S. H.

    2015-01-01

    Far-infrared cooling lines are ubiquitous features in the spectra of star-forming galaxies. Surveys of redshifted fine-structure lines provide a promising new tool to study structure formation and galactic evolution at redshifts including the epoch of reionization as well as the peak of star formation. Unlike neutral hydrogen surveys, where the 21 cm line is the only bright line, surveys of redshifted fine-structure lines suffer from confusion generated by line broadening, spectral overlap of different lines, and the crowding of sources with redshift. We use simulations to investigate the resulting spectral confusion and derive observing parameters to minimize these effects in pencil-beam surveys of redshifted far-IR line emission. We generate simulated spectra of the 17 brightest far-IR lines in galaxies, covering the 150-1300 µm wavelength region corresponding to redshifts 0 < z < 7, and develop a simple iterative algorithm that successfully identifies the 158 µm [C II] line and other lines. Although the [C II] line is a principal coolant for the interstellar medium, the assumption that the brightest observed lines in a given line of sight are always [C II] lines is a poor approximation to the simulated spectra once other lines are included. Blind line identification requires detection of fainter companion lines from the same host galaxies, driving survey sensitivity requirements. The observations require moderate spectral resolution 700 < R < 4000 with angular resolution between 20″ and 10′, sufficiently narrow to minimize confusion yet sufficiently large to include a statistically meaningful number of sources.

  17. Spectral Line De-confusion in an Intensity Mapping Survey

    Science.gov (United States)

    Cheng, Yun-Ting; Chang, Tzu-Ching; Bock, James; Bradford, C. Matt; Cooray, Asantha

    2016-12-01

    Spectral line intensity mapping (LIM) has been proposed as a promising tool to efficiently probe the cosmic reionization and the large-scale structure. Without detecting individual sources, LIM makes use of all available photons and measures the integrated light in the source confusion limit to efficiently map the three-dimensional matter distribution on large scales as traced by a given emission line. One particular challenge is the separation of desired signals from astrophysical continuum foregrounds and line interlopers. Here we present a technique to extract large-scale structure information traced by emission lines from different redshifts, embedded in a three-dimensional intensity mapping data cube. The line redshifts are distinguished by the anisotropic shape of the power spectra when projected onto a common coordinate frame. We consider the case where high-redshift [C ii] lines are confused with multiple low-redshift CO rotational lines. We present a semi-analytic model for [C ii] and CO line estimates based on the cosmic infrared background measurements, and show that with a modest instrumental noise level and survey geometry, the large-scale [C ii] and CO power spectrum amplitudes can be successfully extracted from a confusion-limited data set, without external information. We discuss the implications and limits of this technique for possible LIM experiments.
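    The interloper problem described above can be sketched with a toy calculation (not the authors' method): a single observed frequency is consistent with [C II] at one redshift and with several CO rotational lines at lower redshifts. The rest frequencies below are standard values, and the rigid-ladder approximation ν_J ≈ J · ν_CO(1-0) is a simplification.

```python
# Toy illustration (not the paper's technique): lines from different
# redshifts land at the same observed frequency. For a channel where
# [C II] from z_CII appears, list the low-z CO lines that alias onto it.
NU_CII = 1900.537        # [C II] rest frequency, GHz
NU_CO_LADDER = 115.271   # CO(1-0) rest frequency, GHz; CO(J->J-1) ~ J * this

def co_interlopers(nu_obs_ghz, j_max=8):
    """Redshifts z >= 0 at which CO(J->J-1) lands at nu_obs_ghz."""
    out = {}
    for j in range(1, j_max + 1):
        z = j * NU_CO_LADDER / nu_obs_ghz - 1.0
        if z >= 0.0:
            out[j] = z
    return out

nu_obs = NU_CII / (1.0 + 6.0)     # channel containing [C II] at z = 6
print(co_interlopers(nu_obs))      # e.g. CO(3-2) aliases from z ~ 0.27
```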

  18. S-matrices and integrability

    Science.gov (United States)

    Bombardelli, Diego

    2016-08-01

    In these notes we review the S-matrix theory in (1+1)-dimensional integrable models, focusing mainly on the relativistic case. Once the main definitions and physical properties are introduced, we discuss the factorization of scattering processes due to integrability. We then focus on the analytic properties of the two-particle scattering amplitude and illustrate the derivation of the S-matrices for all the possible bound states using the so-called bootstrap principle. General algebraic structures underlying the S-matrix theory and its relation with the form factors axioms are briefly mentioned. Finally, we discuss the S-matrices of sine-Gordon and SU(2), SU(3) chiral Gross-Neveu models. In loving memory of Lilia Grandi.

  19. Ultrafast spectroscopy of free-base N-confused tetraphenylporphyrins.

    Science.gov (United States)

    Alemán, Elvin A; Rajesh, Cheruvallil S; Ziegler, Christopher J; Modarelli, David A

    2006-07-20

    The photophysical characterization of the two tautomers (1e and 1i) of 5,10,15,20-tetraphenyl N-confused free-base porphyrin, as well as the tautomer-locked 2-methyl 5,10,15,20-tetraphenyl N-confused free-base porphyrin, was carried out using a combination of steady state and time-resolved optical techniques. N-Confused porphyrins, alternatively called 2-aza-21-carba-porphyrins or inverted porphyrins, are of great interest for their potential as building blocks in assemblies designed for artificial photosynthesis, and understanding their excited-state properties is paramount to future studies in multicomponent arrays. Femtosecond resolved transient absorption experiments reveal spectra that are similar to those of tetraphenylporphyrin (H2TPP) with either Soret or Q-band excitation, with an extinction coefficient for the major absorbing band of 1e that was about a factor of 5 larger than that of H2TPP. The lifetime of the S1 state was determined at a variety of absorption wavelengths for each compound and was found to be consistent with time-resolved fluorescence experiments. These experiments reveal that the externally protonated tautomer (1e) is longer lived (tau = 1.84 ns) than the internally protonated form (1i, tau = 1.47 ns) by approximately 369 ps and that the N-methyl N-confused porphyrin was shorter lived than the tautomeric forms by approximately 317 ps (DMAc) and approximately 396 ps (benzene). Steady-state fluorescence experiments on tautomers 1e and 1i and the N-methyl analogues corroborate these results, with fluorescence quantum yields (Phi(Fl)) of 0.046 (1e, DMAc) and 0.023 (1i, benzene), and 0.025 (DMAc) and 0.018 (benzene) for the N-methyl N-confused porphyrin. The lifetime and quantum yield data was interpreted in terms of structural changes that influence the rate of internal conversion. The absorption and transient absorption spectra of these porphyrins were also examined in the context of DFT calculations at the B3LYP/6-31G(d)//B3LYP/3-21G

  20. Taste Quality Confusions: Influences of Age, Smoking, PTC Taster Status, and other Subject Characteristics.

    Science.gov (United States)

    Doty, Richard L; Chen, Jonathan H; Overend, Jane

    2017-01-01

    Many persons misidentify the quality of taste stimuli, a phenomenon termed "taste confusion." In this study of 1000 persons, we examined the influences of age, sex, causes of chemosensory disturbances, and genetically determined phenylthiocarbamide (PTC) taster status on taste quality confusions for four tastants (sucrose, citric acid, sodium chloride, caffeine). Overall, sour-bitter confusions were most common (19.3%), followed by bitter-sour (11.4%), salty-bitter (7.3%), salty-sour (7.0%), bitter-salty (3.5%), bitter-sweet (3.4%), and sour-salty (2.4%) confusions. Confusions for sweet were comparatively rare. PTC tasters had fewer confusions than non-tasters except for salty-bitter confusions. Confusions typically increased monotonically with age. Current smokers exhibited more sour-bitter confusions than never smokers (48.9% vs. 32.2%), whereas past smokers had more bitter-sour confusions than never smokers (23.8% vs. 14.2%). Previous head trauma was associated with higher bitter-salty and salty-bitter confusions relative to those of some other etiologies. This study demonstrates, for the first time, that multiple subject factors influence taste confusions and, along with literature accounts, supports the view that there are both biological and psychological determinants of taste quality confusions.
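    Confusion rates of the kind reported here come from tabulating presented versus reported taste qualities, i.e., a confusion matrix. A minimal sketch with made-up trial data (the study's actual data are not reproduced):

```python
from collections import Counter

QUALITIES = ["sweet", "sour", "salty", "bitter"]

def confusion_rates(trials):
    """trials: list of (presented, reported) pairs. Returns row-normalized
    confusion rates, e.g. rates[('sour', 'bitter')] = fraction of sour
    presentations that were reported as bitter."""
    counts = Counter(trials)
    presented_totals = Counter(p for p, _ in trials)
    return {(p, r): counts[(p, r)] / presented_totals[p]
            for p in QUALITIES for r in QUALITIES if presented_totals[p]}

# Hypothetical data, not the study's: 10 sour trials, 2 misreported as bitter.
trials = [("sour", "sour")] * 8 + [("sour", "bitter")] * 2
rates = confusion_rates(trials)
print(rates[("sour", "bitter")])   # 0.2
```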

  1. Rotationally invariant ensembles of integrable matrices.

    Science.gov (United States)

    Yuzbashyan, Emil A; Shastry, B Sriram; Scaramazza, Jasen A

    2016-05-01

    We construct ensembles of random integrable matrices with any prescribed number of nontrivial integrals and formulate integrable matrix theory (IMT)-a counterpart of random matrix theory (RMT) for quantum integrable models. A type-M family of integrable matrices consists of exactly N-M independent commuting N×N matrices linear in a real parameter. We first develop a rotationally invariant parametrization of such matrices, previously only constructed in a preferred basis. For example, an arbitrary choice of a vector and two commuting Hermitian matrices defines a type-1 family and vice versa. Higher types similarly involve a random vector and two matrices. The basis-independent formulation allows us to derive the joint probability density for integrable matrices, similar to the construction of Gaussian ensembles in the RMT.

  3. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Directory of Open Access Journals (Sweden)

    Nathan W Churchill

    Full Text Available BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.

  4. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Science.gov (United States)

    Churchill, Nathan W; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest, and between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.
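    The core idea, scoring each candidate pipeline on a data-driven metric and keeping the best, can be sketched on synthetic data (a toy stand-in, not the authors' framework; the "pipelines" here are just smoothing widths and the metric is a simple test-retest correlation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scans": a common signal plus independent noise in two splits.
signal = np.sin(np.linspace(0, 4 * np.pi, 200))
scan_a = signal + rng.normal(0, 1.0, 200)
scan_b = signal + rng.normal(0, 1.0, 200)   # test-retest split

def smooth(x, w):
    """Moving-average smoothing; w = 1 means no preprocessing."""
    if w <= 1:
        return x
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

def reproducibility(a, b):
    """Between-split correlation: a stand-in for the paper's metrics."""
    return float(np.corrcoef(a, b)[0, 1])

# Evaluate each candidate pipeline and keep the best-scoring one.
pipelines = [1, 5, 11, 21, 51]
scores = {w: reproducibility(smooth(scan_a, w), smooth(scan_b, w))
          for w in pipelines}
best = max(scores, key=scores.get)
print(best, scores[best])
```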

  5. Projection Matrices, Generalized Inverse Matrices, and Singular Value Decomposition

    CERN Document Server

    Yanai, Haruo; Takane, Yoshio

    2011-01-01

    Aside from distribution theory, projections and the singular value decomposition (SVD) are the two most important concepts for understanding the basic mechanism of multivariate analysis. The former underlies the least squares estimation in regression analysis, which is essentially a projection of one subspace onto another, and the latter underlies principal component analysis, which seeks to find a subspace that captures the largest variability in the original space. This book is about projections and SVD. A thorough discussion of generalized inverse (g-inverse) matrices is also given because
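    The link the book draws between projections, generalized inverses, and the SVD can be checked numerically: the orthogonal projector onto a column space is idempotent and symmetric, and can be built either from the SVD or from the Moore-Penrose inverse. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))            # data matrix, full column rank

# Orthogonal projector onto the column space of X, built from the SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
P = U @ U.T

# Projector identities: idempotent and symmetric.
print(np.allclose(P @ P, P), np.allclose(P, P.T))

# Same projector via the Moore-Penrose generalized inverse: P = X X^+.
P2 = X @ np.linalg.pinv(X)
print(np.allclose(P, P2))
```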

  6. The Effects of Pre-processing Strategies for Pediatric Cochlear Implant Recipients

    Science.gov (United States)

    Rakszawski, Bernadette; Wright, Rose; Cadieux, Jamie H.; Davidson, Lisa S.; Brenner, Christine

    2016-01-01

    Background Cochlear implants (CIs) have been shown to improve children’s speech recognition over traditional amplification when severe to profound sensorineural hearing loss is present. Despite improvements, understanding speech at low-level intensities or in the presence of background noise remains difficult. In an effort to improve speech understanding in challenging environments, Cochlear Ltd. offers pre-processing strategies that apply various algorithms prior to mapping the signal to the internal array. Two of these strategies include Autosensitivity Control™ (ASC) and Adaptive Dynamic Range Optimization (ADRO®). Based on previous research, the manufacturer’s default pre-processing strategy for pediatrics’ everyday programs combines ASC+ADRO®. Purpose The purpose of this study is to compare pediatric speech perception performance across various pre-processing strategies while applying a specific programming protocol utilizing increased threshold (T) levels to ensure access to very low-level sounds. Research Design This was a prospective, cross-sectional, observational study. Participants completed speech perception tasks in four pre-processing conditions: no pre-processing, ADRO®, ASC, ASC+ADRO®. Study Sample Eleven pediatric Cochlear Ltd. cochlear implant users were recruited: six bilateral, one unilateral, and four bimodal. Intervention Four programs, with the participants’ everyday map, were loaded into the processor with different pre-processing strategies applied in each of the four positions: no pre-processing, ADRO®, ASC, and ASC+ADRO®. Data Collection and Analysis Participants repeated CNC words presented at 50 and 70 dB SPL in quiet and HINT sentences presented adaptively with competing R-Space noise at 60 and 70 dB SPL. Each measure was completed as participants listened with each of the four pre-processing strategies listed above. Test order and condition were randomized. A repeated-measures analysis of variance (ANOVA) was used to

  7. How to Confuse with Statistics or: The Use and Misuse of Conditional Probabilities

    OpenAIRE

    Gigerenzer, Gerd; Krämer, Walter

    2004-01-01

    The article shows by various examples how consumers of statistical information may be confused when this information is presented in terms of conditional probabilities. It also shows how this confusion helps others to lie with statistics, and it suggests how either confusion or lies can be avoided by using alternative modes of conveying statistical information.
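    A standard example of the confusion the authors discuss: restating P(disease | positive test) in natural frequencies. The numbers below are illustrative, not taken from the article:

```python
# Natural-frequencies restatement of a conditional probability, the kind of
# reframing the article advocates. Numbers are illustrative only.
population = 10_000
prevalence = 0.01       # P(disease)
sensitivity = 0.90      # P(positive | disease)
false_positive = 0.09   # P(positive | no disease)

sick = population * prevalence                       # 100 people
true_pos = sick * sensitivity                        # 90 test positive
false_pos = (population - sick) * false_positive     # 891 test positive
ppv = true_pos / (true_pos + false_pos)              # P(disease | positive)
print(round(ppv, 3))   # 0.092 -- far lower than the 0.90 many readers guess
```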

  8. SPECTRAL CONFUSION FOR COSMOLOGICAL SURVEYS OF REDSHIFTED C II EMISSION

    Energy Technology Data Exchange (ETDEWEB)

    Kogut, A.; Dwek, E.; Moseley, S. H., E-mail: Alan.J.Kogut@nasa.gov [Code 665, Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2015-06-20

    Far-infrared cooling lines are ubiquitous features in the spectra of star-forming galaxies. Surveys of redshifted fine-structure lines provide a promising new tool to study structure formation and galactic evolution at redshifts including the epoch of reionization as well as the peak of star formation. Unlike neutral hydrogen surveys, where the 21 cm line is the only bright line, surveys of redshifted fine-structure lines suffer from confusion generated by line broadening, spectral overlap of different lines, and the crowding of sources with redshift. We use simulations to investigate the resulting spectral confusion and derive observing parameters to minimize these effects in pencil-beam surveys of redshifted far-IR line emission. We generate simulated spectra of the 17 brightest far-IR lines in galaxies, covering the 150–1300 μm wavelength region corresponding to redshifts 0 < z < 7, and develop a simple iterative algorithm that successfully identifies the 158 μm [C ii] line and other lines. Although the [C ii] line is a principal coolant for the interstellar medium, the assumption that the brightest observed lines in a given line of sight are always [C ii] lines is a poor approximation to the simulated spectra once other lines are included. Blind line identification requires detection of fainter companion lines from the same host galaxies, driving survey sensitivity requirements. The observations require moderate spectral resolution 700 < R < 4000 with angular resolution between 20″ and 10′, sufficiently narrow to minimize confusion yet sufficiently large to include a statistically meaningful number of sources.

  10. MALPOSITIONED LMA CONFUSED AS FOREIGN BODY IN NASAL CAVITY.

    Science.gov (United States)

    Verma, Sidharth; Mehta, Nitika; Mehta, Nandita; Mehta, Satish; Verma, Jayeeta

    2015-10-01

    We present a case of a confusing white foreign body in the nasal cavity, detected during endoscopic sinus surgery (ESS) in a 35-year-old male, which turned out to be a malpositioned classic laryngeal mask airway (LMA). Although malposition of the LMA is a known entity to the anesthesiologist, if ventilation is adequate a back-folded LMA in the nasal cavity might not be recognized by the surgeon and can lead to catastrophic consequences during endoscopic sinus surgery. In principle, misfolding and malpositioning can be reduced by pre-usage testing, using appropriate sizes, minimizing cuff volume, and early identification and correction of malposition.

  11. Random matrices and Riemann hypothesis

    CERN Document Server

    Pierre, Christian

    2011-01-01

    The curious connection between the spacings of the eigenvalues of random matrices and the corresponding spacings of the nontrivial zeros of the Riemann zeta function is analyzed on the basis of the geometric dynamical global program of Langlands, whose fundamental structures are shifted quantized conjugacy class representatives of bilinear algebraic semigroups. The considered symmetry behind this phenomenology is the differential bilinear Galois semigroup shifting the product, right by left, of automorphism semigroups of cofunctions and functions on compact transcendental quanta.

  12. Sparse Matrices in Frame Theory

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Krahmer, Felix; Kutyniok, Gitta

    2014-01-01

    Frame theory is closely intertwined with signal processing through a canon of methodologies for the analysis of signals using (redundant) linear measurements. The canonical dual frame associated with a frame provides a means for reconstruction by a least squares approach, but other dual frames yield alternative reconstruction procedures. The novel paradigm of sparsity has recently entered the area of frame theory in various ways. Of those different sparsity perspectives, we will focus on the situations where frames and (not necessarily canonical) dual frames can be written as sparse matrices...
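    The canonical dual frame and its least-squares reconstruction, mentioned above, can be illustrated with a small redundant frame for R² (an illustrative sketch, not the paper's sparse constructions):

```python
import numpy as np

# A redundant frame for R^2 (three vectors) and its canonical dual,
# illustrating the least-squares reconstruction the abstract mentions.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # rows are the frame vectors f_k

S = F.T @ F                          # frame operator S = sum_k f_k f_k^T
F_dual = F @ np.linalg.inv(S)        # rows are canonical duals S^-1 f_k

x = np.array([2.0, -1.0])
coeffs = F @ x                       # analysis: <x, f_k>
x_rec = F_dual.T @ coeffs            # synthesis with the dual frame
print(np.allclose(x_rec, x))
```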

  13. Cosmetic crossings and Seifert matrices

    CERN Document Server

    Balm, Cheryl; Kalfagianni, Efstratia; Powell, Mark

    2011-01-01

    We study cosmetic crossings in knots of genus one and obtain obstructions to such crossings in terms of knot invariants determined by Seifert matrices. In particular, we prove that for genus one knots the Alexander polynomial and the homology of the double cover branched over the knot provide obstructions to cosmetic crossings. As an application we prove the nugatory crossing conjecture for twisted Whitehead doubles of non-cable knots. We also verify the conjecture for several families of pretzel knots and all genus one knots with up to 12 crossings.

  14. Superalgebraic representation of Dirac matrices

    Science.gov (United States)

    Monakhov, V. V.

    2016-01-01

    We consider a Clifford extension of the Grassmann algebra in which operators are constructed from products of Grassmann variables and derivatives with respect to them. We show that this algebra contains a subalgebra isomorphic to a matrix algebra and that it additionally contains operators of a generalized matrix algebra that mix states with different numbers of Grassmann variables. We show that these operators are extensions of spin-tensors to the case of superspace. We construct a representation of Dirac matrices in the form of operators of a generalized matrix algebra.
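    The defining property any representation of the Dirac matrices must satisfy is the anticommutation relation {γ^μ, γ^ν} = 2η^{μν}I. A quick numerical check in the standard Dirac representation (illustrative; the record's Grassmann-algebra construction is not implemented here):

```python
import numpy as np

# Dirac gamma matrices in the standard (Dirac) representation, built from
# Pauli matrices; check {gamma^mu, gamma^nu} = 2 eta^{mu nu} I.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

gammas = [block(I2, Z, Z, -I2)] + [block(Z, s, -s, Z) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

ok = all(
    np.allclose(gammas[m] @ gammas[n] + gammas[n] @ gammas[m],
                2 * eta[m, n] * np.eye(4))
    for m in range(4) for n in range(4)
)
print(ok)   # True
```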

  15. Orthogonal polynomials and random matrices

    CERN Document Server

    Deift, Percy

    2000-01-01

    This volume expands on a set of lectures held at the Courant Institute on Riemann-Hilbert problems, orthogonal polynomials, and random matrix theory. The goal of the course was to prove universality for a variety of statistical quantities arising in the theory of random matrix models. The central question was the following: Why do very general ensembles of random n × n matrices exhibit universal behavior as n → ∞? The main ingredient in the proof is the steepest descent method for oscillatory Riemann-Hilbert problems.

  16. The Cook Mountain problem: Stratigraphic reality and semantic confusion

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, T.E. [Frontera Exploration Consultants, San Antonio, TX (United States); Venus Oil Co., San Antonio, TX (United States)]

    1994-12-31

    Historical inconsistency as to what constitutes the Cook Mountain Formation illustrates the semantic confusion resulting from extending surface-derived stratigraphic names into the subsurface without a full understanding of basin architecture. At the surface, the Cook Mountain Formation consists of fossiliferous marine shale, glaucony and marl, and marginal-marine sandstone and shale between the nonmarine Sparta Formation sandstones below and the nonmarine Yegua Formation sandstones and lignitic shales above. Fossils are abundant, including the benthic foraminifer Ceratobulimina eximia. As subsurface exploration began, the first occurrence of Ceratobulimina eximia ("Cerat") was used as the top of the marine "Cook Mountain Shale" below the Yegua section. Downdip, the overlying Yegua was found to become a sequence of marine shales and marginal-marine sandstones, the lower part of which yielded "Cerat". Because of this, the lower sandstones were called "Cook Mountain" in many fields. At the Yegua shelf margin, "Cerat" is absent. Different exploration teams have used their own definitions for "Cook Mountain", leading to substantial confusion.

  17. Countering Climate Confusion in the Classroom: New Methods and Initiatives

    Science.gov (United States)

    McCaffrey, M.; Berbeco, M.; Reid, A. H.

    2014-12-01

    Politicians and ideologues blocking climate education through legislative manipulation. Free marketeers promoting the teaching of doubt and controversy to head off regulation. Education standards and curricula that skim over, omit, or misrepresent the causes, effects, risks and possible responses to climate change. Teachers who unknowingly foster confusion by presenting "both sides" of a phony scientific controversy. All of these contribute to dramatic differences in the quality and quantity of climate education received by U.S. students. Most U.S. adults and teens fail basic quizzes on energy and climate, in large part because climate science has never been fully accepted as a vital component of a 21st-century science education. Often skipped or skimmed over, human contributions to climate change are sometimes taught as controversy or through debate, perpetuating a climate of confusion in many classrooms. This paper will review the recent history of opposition to climate science education, and explore initial findings from a new survey of science teachers on whether, where, and how climate change is being taught. It will highlight emerging effective pedagogical practices identified in McCaffrey's Climate Smart & Energy Wise, including the role of new initiatives such as the Next Generation Science Standards and Green Schools, and detail efforts of the Science League of America in countering denial and doubt so that educators can teach consistently and confidently about climate change.

  18. Characterizing source confusion in HI spectral line stacking experiments

    Science.gov (United States)

    Baker, Andrew J.; Elson, Edward C.; Blyth, Sarah

    2017-01-01

    Forthcoming studies like the Looking At the Distant Universe with the MeerKAT Array (LADUMA) deep HI survey will rely in part on stacking experiments to detect the mean level of HI emission from populations of galaxies that are too faint to be detected individually. Preparations for such experiments benefit from the use of synthetic data cubes built from mock galaxy catalogs and containing model galaxies with realistic spatial and spectral HI distributions over large cosmological volumes. I will present a new set of such synthetic data cubes and show the results of stacking experiments with them. Because the stacked spectra can be accurately decomposed into contributions from target and non-target galaxies, it is possible to characterize the large fractions of contaminant mass that are included in stacked totals due to source confusion. Consistent with estimates extrapolated from z = 0 observational data, we find that the amount of confused mass in a stacked spectrum grows almost linearly with the size of the observational beam, suggesting potential overestimates of the cosmic neutral gas density by some recent HI stacking experiments.

  19. Searching for partial Hadamard matrices

    CERN Document Server

    Álvarez, Víctor; Frau, María-Dolores; Gudiel, Félix; Güemes, María-Belén; Martín, Elena; Osuna, Amparo

    2012-01-01

    Three algorithms looking for pretty large partial Hadamard matrices are described. Here "large" means that hopefully about a third of a Hadamard matrix (which is the best asymptotic result known so far, [dLa00]) is achieved. The first one performs some kind of local exhaustive search, and consequently is expensive from the time-consuming point of view. The second one comes from the adaptation of the best genetic algorithm known so far searching for cliques in a graph, due to Singh and Gupta [SG06]. The last one consists of another heuristic search, which prioritizes the required processing time over the final size of the partial Hadamard matrix to be obtained. In all cases, the key idea is characterizing the adjacency properties of vertices in a particular subgraph G_t of Ito's Hadamard Graph Delta(4t) [Ito85], since cliques of order m in G_t can be seen as (m+3) × 4t partial Hadamard matrices.

  20. A concise guide to complex Hadamard matrices

    CERN Document Server

    Tadej, Wojciech; Zyczkowski, Karol

    2005-01-01

    Complex Hadamard matrices, consisting of unimodular entries with arbitrary phases, play an important role in the theory of quantum information. We review basic properties of complex Hadamard matrices and present a catalogue of inequivalent cases known for dimension N=2,...,16. In particular, we explicitly write down some families of complex Hadamard matrices for N=12,14 and 16, which we could not find in the existing literature.
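    The Fourier matrix is the standard example of a complex Hadamard matrix and gives a quick check of the defining conditions (unimodular entries and H H* = N·I); a minimal sketch:

```python
import numpy as np

# The Fourier matrix F_N is the canonical example of a complex Hadamard
# matrix: every entry is unimodular and H H* = N I.
def fourier_matrix(n):
    j, k = np.indices((n, n))
    return np.exp(2j * np.pi * j * k / n)

H = fourier_matrix(6)
print(np.allclose(np.abs(H), 1.0))                  # unimodular entries
print(np.allclose(H @ H.conj().T, 6 * np.eye(6)))   # Hadamard condition
```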

  1. Lambda-matrices and vibrating systems

    CERN Document Server

    Lancaster, Peter; Stark, M; Kahane, J P

    1966-01-01

    Lambda-Matrices and Vibrating Systems presents aspects and solutions to problems concerned with linear vibrating systems with a finite number of degrees of freedom and the theory of matrices. The book discusses some parts of the theory of matrices that will account for the solutions of the problems. The text starts with an outline of matrix theory, and some theorems are proved. The Jordan canonical form is also applied to understand the structure of square matrices. Classical theorems are discussed further by applying the Jordan canonical form, the Rayleigh quotient, and simple matrix pencils with late
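    A quadratic lambda-matrix L(λ) = λ²M + λC + K from a small damped vibrating system can be solved via the standard companion linearization, illustrating the matrix-pencil machinery the book covers (an illustrative sketch with made-up M, C, K):

```python
import numpy as np

# A quadratic lambda-matrix L(lam) = lam^2 M + lam C + K from a small
# damped vibrating system, solved via the standard companion linearization.
M = np.diag([1.0, 2.0])
C = np.array([[0.4, -0.1], [-0.1, 0.3]])
K = np.array([[5.0, -2.0], [-2.0, 3.0]])

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])
lams = np.linalg.eigvals(A)          # latent roots of L(lam)

# Each eigenvalue of A makes L(lam) singular.
residuals = [abs(np.linalg.det(lam**2 * M + lam * C + K)) for lam in lams]
print(max(residuals) < 1e-8)
```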

  2. Matrices with totally positive powers and their generalizations

    OpenAIRE

    Kushel, Olga Y.

    2013-01-01

    In this paper, eventually totally positive matrices (i.e. matrices all of whose powers, from some point on, are totally positive) are studied. We present a new approach to eventual total positivity which is based on the theory of eventually positive matrices. We mainly focus on the spectral properties of such matrices. We also study eventually J-sign-symmetric matrices and matrices whose powers are P-matrices.

  3. A NOTE ON THE STOCHASTIC ROOTS OF STOCHASTIC MATRICES

    Institute of Scientific and Technical Information of China (English)

    Qi-Ming HE; Eldon GUNN

    2003-01-01

    In this paper, we study the stochastic root matrices of stochastic matrices. All stochastic roots of 2×2 stochastic matrices are found explicitly. A method based on the characteristic polynomial of the matrix is developed to find all real root matrices that are functions of the original 3×3 matrix, including all possible (function) stochastic root matrices. In addition, we comment on some numerical methods for computing stochastic root matrices of stochastic matrices.
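In the 2×2 case a stochastic square root can be written down explicitly: P = [[1-a, a], [b, 1-b]] has eigenvalues 1 and lam = 1-a-b, and when lam >= 0 a stochastic root of the same form exists with the off-diagonal entries scaled by (1-sqrt(lam))/(a+b). A numerical sketch under that parametrization (ours, not necessarily the paper's notation):

```python
import numpy as np

def stochastic_sqrt_2x2(P):
    """Stochastic square root of P = [[1-a, a], [b, 1-b]], assuming the
    second eigenvalue lam = 1 - a - b is nonnegative."""
    a, b = P[0, 1], P[1, 0]
    if a + b == 0:
        return np.eye(2)  # P is the identity, its own root
    lam = 1.0 - a - b
    if lam < 0:
        raise ValueError("no real root of this form: 1 - a - b < 0")
    s = (1.0 - np.sqrt(lam)) / (a + b)  # shrink the off-diagonal mass
    ap, bp = a * s, b * s
    return np.array([[1 - ap, ap], [bp, 1 - bp]])

P = np.array([[0.7, 0.3], [0.2, 0.8]])
R = stochastic_sqrt_2x2(P)
print(np.allclose(R @ R, P))  # True
```

The construction works because R shares the eigenvectors of P and replaces the eigenvalue lam by sqrt(lam), while keeping rows nonnegative and summing to one.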

  4. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    Full Text Available In the analysis of minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real-time systems, and especially for databases holding large amounts of fingerprint information. Parallel processing and parallel CPU computing can be considered as the distribution of processes over a multi-core processor, done by using parallel programming techniques. Reducing the execution time is the main objective in parallel processing. In this study, pre-processing of a minutiae-based fingerprint system is implemented by parallel processing on multi-core computers using OpenMP, and on a graphics processor using CUDA, to improve execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that by using parallel processing, execution time is substantially improved. The improvement ratios obtained for different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.

  5. On the orders of transformation matrices (mod n) and two types of generalized Arnold transformation matrices

    Institute of Scientific and Technical Information of China (English)

    YANG Lizhen; CHEN Kefei

    2004-01-01

    In this paper, we analyze the structure of the orders of matrices (mod n), and present the relation between the orders of matrices over finite fields and their Jordan normal forms. We then generalize the 2-dimensional Arnold transformation matrix to two types of n-dimensional Arnold transformation matrices, the A-type and B-type Arnold transformation matrices, and analyze their orders and other properties based on our earlier results about the orders of matrices.
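The order of a transformation matrix (mod n) is the smallest k with A^k ≡ I (mod n). A brute-force sketch (ours, for illustration only) computes it for the classical 2-D Arnold cat map matrix:

```python
import numpy as np

def matrix_order_mod(A, n, max_iter=10**6):
    """Smallest k >= 1 with A^k congruent to I (mod n), or None if not
    found within max_iter steps (A must be invertible mod n for one to exist)."""
    A = np.asarray(A, dtype=np.int64) % n
    I = np.eye(A.shape[0], dtype=np.int64)
    M = A.copy()
    for k in range(1, max_iter + 1):
        if np.array_equal(M, I):
            return k
        M = (M @ A) % n  # stay reduced mod n to avoid overflow
    return None

# The classical 2-D Arnold transformation matrix.
arnold = np.array([[1, 1], [1, 2]])
print(matrix_order_mod(arnold, 5))  # 10
```

The order determines after how many applications the map returns an image of side n to its original state, which is the property of interest for Arnold-type image scrambling.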

  6. The lower bounds for the rank of matrices and some sufficient conditions for nonsingular matrices.

    Science.gov (United States)

    Wang, Dafei; Zhang, Xumei

    2017-01-01

    The paper mainly discusses the lower bounds for the rank of matrices and sufficient conditions for nonsingular matrices. We first present a new estimation for [Formula: see text] ([Formula: see text] is an eigenvalue of a matrix) by using the partitioned matrices. By using this estimation and inequality theory, the new and more accurate estimations for the lower bounds for the rank are deduced. Furthermore, based on the estimation for the rank, some sufficient conditions for nonsingular matrices are obtained.

  7. A note on "Block H-matrices and spectrum of block matrices"

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-zhou; HUANG Ze-jun

    2008-01-01

    In this paper, we make further discussions and improvements on the results presented in the previously published work "Block H-matrices and spectrum of block matrices". Furthermore, a new bound for eigenvalues of block matrices is given with examples to show advantages of the new result.

  8. A partial classification of primes in the positive matrices and in the doubly stochastic matrices

    NARCIS (Netherlands)

    G. Picci; J.M. van den Hof; J.H. van Schuppen (Jan)

    1995-01-01

    The algebraic structure of the set of square positive matrices is that of a semi-ring. The concept of a prime in the positive matrices has been introduced. A few examples of primes in the positive matrices are known, but there is no general classification. In this paper a partial

  9. Pathological rate matrices: from primates to pathogens

    Directory of Open Access Journals (Sweden)

    Knight Rob

    2008-12-01

    Full Text Available Abstract Background Continuous-time Markov models allow flexible, parametrically succinct descriptions of sequence divergence. Non-reversible forms of these models are more biologically realistic but are challenging to develop. The instantaneous rate matrices defined for these models are typically transformed into substitution probability matrices using a matrix exponentiation algorithm that employs eigendecomposition, but this algorithm has characteristic vulnerabilities that lead to significant errors when a rate matrix possesses certain 'pathological' properties. Here we tested whether pathological rate matrices exist in nature, and consider the suitability of different algorithms for their computation. Results We used concatenated protein coding gene alignments from microbial genomes, primate genomes and independent intron alignments from primate genomes. The Taylor series expansion and eigendecomposition matrix exponentiation algorithms were compared to the less widely employed, but more robust, Padé with scaling and squaring algorithm for nucleotide, dinucleotide, codon and trinucleotide rate matrices. Pathological dinucleotide and trinucleotide matrices were evident in the microbial data set, affecting the eigendecomposition and Taylor algorithms respectively. Even using a conservative estimate of matrix error (occurrence of an invalid probability), both Taylor and eigendecomposition algorithms exhibited substantial error rates: ~100% of all exonic trinucleotide matrices were pathological to the Taylor algorithm, while ~10% of codon positions 1 and 2 dinucleotide matrices and intronic trinucleotide matrices, and ~30% of codon matrices, were pathological to eigendecomposition. The majority of Taylor algorithm errors derived from the occurrence of multiple unobserved states. A small number of negative probabilities were detected from the Padé algorithm on trinucleotide matrices that were attributable to machine precision. Although the Padé
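The two fragile algorithms compared in this study are easy to sketch in a few lines. The version below is our simplification, not the paper's code, and omits the robust Padé-with-scaling-and-squaring variant (available as, e.g., scipy.linalg.expm); on a small well-behaved rate matrix both approaches agree and yield a valid substitution probability matrix:

```python
import numpy as np

def taylor_expm(Q, terms=60):
    """Truncated Taylor series exp(Q) = sum_k Q^k / k!; simple, but it can
    fail badly for matrices with large or widely spread entries."""
    P = np.eye(Q.shape[0])
    term = np.eye(Q.shape[0])
    for k in range(1, terms):
        term = term @ Q / k
        P = P + term
    return P

def eig_expm(Q):
    """exp(Q) via eigendecomposition; fragile when Q is defective or has
    an ill-conditioned eigenvector matrix."""
    w, V = np.linalg.eig(Q)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# A well-behaved 2-state rate matrix (rows sum to zero).
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
P1, P2 = taylor_expm(Q), eig_expm(Q)
# Both agree here, and exp(Q) is a valid substitution probability matrix:
# nonnegative entries with every row summing to one.
print(np.allclose(P1, P2), np.allclose(P1.sum(axis=1), 1.0))  # True True
```

The pathologies described in the abstract show up precisely when these assumptions break: catastrophic cancellation in the Taylor sum, or near-defective eigenvector matrices in the eigendecomposition route.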

  10. Dynamical invariance for random matrices

    CERN Document Server

    Unterberger, Jeremie

    2016-01-01

    We consider a general Langevin dynamics for the one-dimensional N-particle Coulomb gas with confining potential $V$ at temperature $\\beta$. These dynamics describe for $\\beta=2$ the time evolution of the eigenvalues of $N\\times N$ random Hermitian matrices. The equilibrium partition function -- equal to the normalization constant of the Laughlin wave function in fractional quantum Hall effect -- is known to satisfy an infinite number of constraints called Virasoro or loop constraints. We introduce here a dynamical generating function on the space of random trajectories which satisfies a large class of constraints of geometric origin. We focus in this article on a subclass induced by the invariance under the Schr\\"odinger-Virasoro algebra.

  11. Study on preprocessing of surface defect images of cold steel strip

    Directory of Open Access Journals (Sweden)

    Xiaoye GE

    2016-06-01

    Full Text Available Image preprocessing is an important part of the field of digital image processing, and it is also a prerequisite for image-based detection of cold steel strip surface defects. Factors including the complicated on-site environment and the distortion of the optical system cause image degradation, which directly affects the feature extraction and classification of the images. Aiming at these problems, a method combining an adaptive median filter and a homomorphic filter is proposed to preprocess the image. The adaptive median filter is effective for image denoising, and the Gaussian homomorphic filter can steadily remove the nonuniform illumination of images. Finally, the original and preprocessed images and their features are analyzed and compared. The results show that this method can improve image quality effectively.
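A minimal adaptive median filter along these lines can be sketched as follows (our simplified variant, assuming a grayscale image as a 2-D array; the window-growing rule is one common formulation, not necessarily the one used in the paper):

```python
import numpy as np

def adaptive_median_filter(img, max_size=7):
    """Adaptive median filter (simplified): at each pixel, grow the window
    until its median is not an extreme value, then replace impulse pixels."""
    pad = max_size // 2
    padded = np.pad(img, pad, mode="edge")
    out = img.copy()
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            for size in range(3, max_size + 1, 2):
                r = size // 2
                win = padded[y + pad - r:y + pad + r + 1,
                             x + pad - r:x + pad + r + 1]
                med, lo, hi = np.median(win), win.min(), win.max()
                if lo < med < hi:  # window median is not itself an impulse
                    if not (lo < img[y, x] < hi):
                        out[y, x] = med  # pixel looks like an impulse
                    break
            else:
                out[y, x] = med  # window maxed out: fall back to the median
    return out

# Usage: a uniform patch containing one "salt" impulse is cleaned up.
img = np.full((9, 9), 100.0)
img[4, 4] = 255.0
print(np.all(adaptive_median_filter(img) == 100.0))  # True
```

Unlike a fixed-size median filter, the adaptive variant preserves fine detail in clean regions because it only replaces pixels that look like impulses relative to their local window.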

  12. High levels of confusion for cholesterol awareness campaigns.

    Science.gov (United States)

    Hall, Danika V

    2008-09-15

    Earlier this year, two industry-sponsored advertising campaigns for cholesterol awareness that target the general public were launched in Australia. These campaigns aimed to alert the public to the risks associated with having high cholesterol and encouraged cholesterol testing for wider groups than those specified by the National Heart Foundation. General practitioners should be aware of the potential for the two campaigns to confuse the general public as to who should be tested, and where. The campaign sponsors (Unilever Australasia and Pfizer) each have the potential to benefit by increased market share for their products, and increased profits. These disease awareness campaigns are examples of what is increasingly being termed "condition branding" by pharmaceutical marketing experts.

  13. A subconjunctival foreign body confused with uveal prolapse

    Directory of Open Access Journals (Sweden)

    Young Min Park

    2014-01-01

    Full Text Available There are cases in which the presence of a foreign body (FB) is difficult to diagnose based on history taking or clinical examination. We report a case of a subconjunctival FB confused with uveal prolapse. A 68-year-old man, who had a history of pterygium excision in his right eye, complained of irritation and congestion in that same eye. He also had a history of growing vegetables in a plastic greenhouse. The lesion appeared to be a uveal mass bulging through a focal site of scleral thinning. Under slit-lamp magnification, the lesion was presumed to be a hard, black keratinized mass embedded under the conjunctiva. Histopathologically, the removed mass was revealed to be a seed of a dicotyledon. In patients who show signs of prolapsed uvea or scleral thinning, the possibility of a subconjunctival FB should be considered in the differential diagnosis. In addition, a removed unknown FB should be examined histopathologically.

  14. [Pleasure and confusion. A footnote to Freud's translations of Mill].

    Science.gov (United States)

    Molnar, Michael

    2014-01-01

    In 1863 Theodor Gomperz came to England to propose to Helen Taylor Mill, step-daughter of J. S. Mill. For several months he delayed the proposal while studying transcripts of the Philodemus papyri in the Bodleian Library. There a threatening note, supposedly left on his desk, triggered an attack of paranoia. My study of this incident, initially a mere footnote, expanded into an examination of the obscure causes of this attack. The philosophical question of the nature of desire and the researcher's passion to reconstruct a fragmented classical text are related to Gomperz's unfocussed relationship with both Mill and his step-daughter, and his ensuing confusion between reality and fantasy. The incident is considered paradigmatic of the perils of scholarly research, when the desire to possess knowledge becomes entangled with transferential relationships.

  15. Persistent Confusions about Hypothesis Testing in the Social Sciences

    Directory of Open Access Journals (Sweden)

    Christopher Thron

    2015-05-01

    Full Text Available This paper analyzes common confusions involving basic concepts in statistical hypothesis testing. One-third of the social science statistics textbooks examined in the study contained false statements about significance level and/or p-value. We infer that a large proportion of social scientists are being miseducated about these concepts. We analyze the causes of these persistent misunderstandings, and conclude that the conventional terminology is prone to abuse because it does not clearly represent the conditional nature of probabilities and events involved. We argue that modifications in terminology, as well as the explicit introduction of conditional probability concepts and notation into the statistics curriculum in the social sciences, are necessary to prevent the persistence of these errors.
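The conditional nature the authors emphasize is easy to demonstrate by simulation: under the null hypothesis the p-value is uniformly distributed, so the probability of observing p <= alpha given H0 is exactly the significance level alpha. A small sketch (ours, using a one-sample z-test):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)
alpha, n_sim, n = 0.05, 20000, 30

# Simulate one-sample z-tests under the null hypothesis N(0, 1).
z = rng.normal(size=(n_sim, n)).mean(axis=1) * sqrt(n)

# Two-sided p-value: p = 2 * (1 - Phi(|z|)) = 1 - erf(|z| / sqrt(2)).
p = np.array([1.0 - erf(abs(zi) / sqrt(2.0)) for zi in z])

# Under H0, p-values are uniform, so P(p <= alpha | H0) = alpha: the
# significance level is a pre-specified conditional error rate, not a
# property of any particular data set.
rejection_rate = (p <= alpha).mean()
print(abs(rejection_rate - alpha) < 0.01)  # True
```

The point of the simulation is the conditioning: alpha describes the long-run rejection rate given that H0 is true, while the p-value is a random quantity computed from the observed data.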

  16. Psychiatric diagnoses are not mental processes: Wittgenstein on conceptual confusion.

    Science.gov (United States)

    Rosenman, Stephen; Nasti, Julian

    2012-11-01

    Empirical explanation and treatment repeatedly fail for psychiatric diagnoses. Diagnosis is mired in conceptual confusion that is illuminated by Ludwig Wittgenstein's later critique of philosophy (Philosophical Investigations). This paper examines conceptual confusions in the foundation of psychiatric diagnosis from some of Wittgenstein's important critical viewpoints. Diagnostic terms are words whose meanings are given by usages not definitions. Diagnoses, by Wittgenstein's analogy with 'games', have various and evolving usages that are connected by family relationships, and no essence or core phenomenon connects them. Their usages will change according to the demands and contexts in which they are employed. Diagnoses, like many psychological terms, such as 'reading' or 'understanding', are concepts that refer not to fixed behavioural or mental states but to complex apprehensions of the relationship of a variety of behavioural phenomena with the world. A diagnosis is a sort of concept that cannot be located in or explained by a mental process. A diagnosis is an exercise in language and its usage changes according to the context and the needs it addresses. Diagnoses have important uses but they are irreducibly heterogeneous and cannot be identified with or connected to particular mental processes or even with a unity of phenomena that can be addressed empirically. This makes understandable not only the repeated failure of empirical science to replicate or illuminate genetic, neurophysiologic, psychic or social processes underlying diagnoses but also the emptiness of a succession of explanatory theories and treatment effects that cannot be repeated or stubbornly regress to the mean. Attempts to fix the meanings of diagnoses to allow empirical explanation will and should fail as there is no foundation on which a fixed meaning can be built and it can only be done at the cost of the relevance and usefulness of diagnosis.

  17. Optimization of Preprocessing and Densification of Sorghum Stover at Full-scale Operation

    Energy Technology Data Exchange (ETDEWEB)

    Neal A. Yancey; Jaya Shankar Tumuluru; Craig C. Conner; Christopher T. Wright

    2011-08-01

    Transportation costs can be a prohibitive step in bringing biomass to a preprocessing location or biofuel refinery. One alternative to transporting biomass in baled or loose format to a preprocessing location is to utilize a mobile preprocessing system that can be relocated to various locations where biomass is stored, preprocess and densify the biomass, then ship it to the refinery as needed. The Idaho National Laboratory has a full-scale Process Demonstration Unit (PDU), which includes a stage 1 grinder, hammer mill, drier, pellet mill, and cooler with the associated conveyance system components. Testing at bench and pilot scale has been conducted to determine the effects of moisture on preprocessing and of crop varieties on preprocessing efficiency and product quality. The INL's PDU provides an opportunity to test the conclusions made at the bench and pilot scale on full industrial-scale systems. Each component of the PDU is operated from a central operating station where data are collected to determine power consumption rates for each step in the process. The power for each electrical motor in the system is monitored from the control station to watch for problems and determine optimal conditions for system performance. The data can then be viewed to observe how changes in biomass input parameters (moisture and crop type, for example), mechanical changes (screen size, biomass drying, pellet size, grinding speed, etc.), or other variations affect the power consumption of the system. Sorghum in four-foot round bales was tested in the system using a series of 6 different screen sizes: 3/16 in., 1 in., 2 in., 3 in., 4 in., and 6 in. The effects on power consumption, product quality, and production rate were measured to determine optimal conditions.

  18. Tensor Products of Random Unitary Matrices

    CERN Document Server

    Tkocz, Tomasz; Kus, Marek; Zeitouni, Ofer; Zyczkowski, Karol

    2012-01-01

    Tensor products of M random unitary matrices of size N from the circular unitary ensemble are investigated. We show that the spectral statistics of the tensor product of random matrices become Poissonian if M=2 and N becomes large, or if M becomes large and N=2.
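A quick way to experiment with such ensembles: sample CUE matrices via the QR decomposition of a complex Ginibre matrix (with the standard phase correction), then form the tensor product as a Kronecker product. A sketch under those assumptions (ours, not the authors' code):

```python
import numpy as np

def haar_unitary(n, rng):
    """CUE sample: QR of a complex Ginibre matrix, with the phases of the
    R diagonal fixed so the resulting distribution is exactly Haar."""
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(1)
N = 4
U, V = haar_unitary(N, rng), haar_unitary(N, rng)
T = np.kron(U, V)  # tensor product of M = 2 independent CUE matrices

# The tensor product is itself unitary, so its eigenvalues lie on the unit
# circle; it is their spacing statistics that become Poissonian.
print(np.allclose(np.abs(np.linalg.eigvals(T)), 1.0))  # True
```

The eigenphases of T are sums (mod 2*pi) of independent eigenphases of U and V, which is the mechanism behind the loss of level repulsion described in the abstract.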

  19. Products of Generalized Stochastic Sarymsakov Matrices

    NARCIS (Netherlands)

    Xia, Weiguo; Liu, Ji; Cao, Ming; Johansson, Karl; Basar, Tamer

    2015-01-01

    In the set of stochastic, indecomposable, aperiodic (SIA) matrices, the class of stochastic Sarymsakov matrices is the largest known subset (i) that is closed under matrix multiplication and (ii) the infinitely long left-product of the elements from a compact subset converges to a rank-one matrix. In
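The convergence property in (ii) is easy to observe numerically: strictly positive stochastic matrices are stochastic Sarymsakov matrices, so long left-products of them collapse to a rank-one matrix with identical rows. An illustrative sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def random_positive_stochastic(n):
    """Row-stochastic matrix with strictly positive entries; positive
    stochastic matrices are Sarymsakov matrices, hence SIA."""
    A = rng.uniform(0.1, 1.0, size=(n, n))
    return A / A.sum(axis=1, keepdims=True)

n = 4
P = np.eye(n)
for _ in range(200):
    P = random_positive_stochastic(n) @ P  # left-multiply by each new factor

# The long left-product converges to a rank-one matrix: all rows coincide.
print(np.allclose(P, P[0], atol=1e-6), np.allclose(P.sum(axis=1), 1.0))  # True True
```

Each positive factor strictly contracts the spread between rows (its Dobrushin coefficient is below one), which is why the product approaches rank one geometrically fast.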

  20. Boosting model performance and interpretation by entangling preprocessing selection and variable selection.

    Science.gov (United States)

    Gerretzen, Jan; Szymańska, Ewa; Bart, Jacob; Davies, Antony N; van Manen, Henk-Jan; van den Heuvel, Edwin R; Jansen, Jeroen J; Buydens, Lutgarde M C

    2016-09-28

    The aim of data preprocessing is to remove data artifacts, such as a baseline, scatter effects or noise, and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but it is difficult to select which method or combination of methods should be used for the specific data being analyzed. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the truly relevant variables. The choice of preprocessing therefore has a huge impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the preprocessing selection approach that is based on DoE. We show that the entanglement of preprocessing selection and variable selection improves not only the interpretation, but also the predictive performance of the model. This is achieved by analyzing several experimental data sets for which the truly relevant variables are available as prior knowledge. We show that a selection of variables is provided that complies more with the true informative variables compared to individual optimization of both model aspects. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as well as different variable selection methods and different preprocessing methods, according to the taste and experience of

  1. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    Science.gov (United States)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.

  2. Performance of Pre-processing Schemes with Imperfect Channel State Information

    DEFF Research Database (Denmark)

    Christensen, Søren Skovgaard; Kyritsi, Persa; De Carvalho, Elisabeth

    2006-01-01

    Pre-processing techniques have several benefits when the CSI is perfect. In this work we investigate three linear pre-processing filters, assuming imperfect CSI caused by noise degradation and channel temporal variation. Results indicate that the LMMSE filter achieves the lowest BER and the highest SINR when the CSI is perfect, whereas the simple matched filter may be a good choice when the CSI is imperfect. Additionally, the results give insight into the inherent trade-off between robustness against CSI imperfections and spatial focusing ability.
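Using the textbook definitions of these filters (our illustration; the paper's system model and parameters are not reproduced here), the perfect-CSI behaviour is easy to see: the LMMSE filter W = (H^H H + sigma^2 I)^{-1} H^H comes much closer to equalizing the channel than the matched filter W = H^H:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
sigma2 = 0.1  # assumed noise variance

# A random complex Gaussian channel matrix (illustrative).
H = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)

# Textbook linear filters, assuming perfect channel state information:
W_mf = H.conj().T                                              # matched filter
W_lmmse = np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(n), H.conj().T)

# Residual inter-stream interference after filtering (distance from identity).
err_mf = np.linalg.norm(W_mf @ H - np.eye(n))
err_lmmse = np.linalg.norm(W_lmmse @ H - np.eye(n))
print(err_lmmse < err_mf)  # True
```

The matched filter, by contrast, only maximizes per-stream signal energy, which is exactly the robustness-versus-focusing trade-off the abstract refers to when the CSI becomes unreliable.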

  3. ACTS (Advanced Communications Technology Satellite) Propagation Experiment: Preprocessing Software User's Manual

    Science.gov (United States)

    Crane, Robert K.; Wang, Xuhe; Westenhaver, David

    1996-01-01

    The preprocessing software manual describes the Actspp program, originally developed to observe and diagnose Advanced Communications Technology Satellite (ACTS) propagation terminal/receiver problems. However, it has been quite useful for automating the preprocessing functions needed to convert the terminal output to useful attenuation estimates. Before the data are acceptable for archival functions, the individual receiver system must be calibrated and the power level shifts caused by ranging tone modulation must be removed. Actspp provides three output files: the daylog, the diurnal coefficient file, and the file that contains calibration information.

  4. Data acquisition, preprocessing and analysis for the Virginia Tech OLYMPUS experiment

    Science.gov (United States)

    Remaklus, P. Will

    1991-01-01

    Virginia Tech is conducting a slant path propagation experiment using the 12, 20, and 30 GHz OLYMPUS beacons. Beacon signal measurements are made using separate terminals for each frequency. In addition, short baseline diversity measurements are collected through a mobile 20 GHz terminal. Data collection is performed with a custom data acquisition and control system. Raw data are preprocessed to remove equipment biases and discontinuities prior to analysis. Preprocessed data are then statistically analyzed to investigate parameters such as frequency scaling, fade slope and duration, and scintillation intensity.

  5. Preprocessing of Tandem Mass Spectrometric Data Based on Decision Tree Classification

    Institute of Scientific and Technical Information of China (English)

    Jing-Fen Zhang; Si-Min He; Jin-Jin Cai; Xing-Jun Cao; Rui-Xiang Sun; Yan Fu; Rong Zeng; Wen Gao

    2005-01-01

    In this study, we present a preprocessing method for quadrupole time-of-flight (Q-TOF) tandem mass spectra to increase the accuracy of database searching for peptide (protein) identification. Based on the natural isotopic information inherent in tandem mass spectra, we construct a decision tree after feature selection to classify the noise and ion peaks in tandem spectra. Furthermore, we recognize overlapping peaks to find the monoisotopic masses of ions for the following identification process. The experimental results show that this preprocessing method increases the search speed and the reliability of peptide identification.

  6. Influence of Hemp Fibers Pre-processing on Low Density Polyethylene Matrix Composites Properties

    Science.gov (United States)

    Kukle, S.; Vidzickis, R.; Zelca, Z.; Belakova, D.; Kajaks, J.

    2016-04-01

    In the present research, LLDPE matrix composites reinforced with short hemp fibres, with fibre contents ranging from 30 to 50 wt% and subjected to four different pre-processing technologies, were produced, and their properties, such as tensile strength and elongation at break, tensile modulus, melt flow index, micro hardness and water absorption dynamics, were investigated. Capillary viscosimetry was used for fluidity evaluation, and the melt flow index (MFI) was evaluated for all variants. The MFI of fibres from two pre-processing variants was high enough to allow the hemp fibre content to be increased from 30 to 50 wt% with only a moderate increase in water sorption capability.

  7. Abel-Grassmann's Groupoids of Modulo Matrices

    Directory of Open Access Journals (Sweden)

    Muhammad Rashad

    2016-01-01

    Full Text Available The binary operation of usual addition is associative in all matrices over R. However, binary operations of addition on matrices over Z_n yielding the nonassociative structures of AG-groupoids and AG-groups are defined and investigated here. It is shown that both these structures exist for every integer n > 3. Various properties of these structures are explored, such as: (i) every AG-groupoid of matrices over Z_n is a transitively commutative AG-groupoid, and is a cancellative AG-groupoid if n is prime; (ii) every AG-groupoid of matrices over Z_n of Type-II is a T3-AG-groupoid; (iii) an AG-groupoid of matrices over Z_n, G_nAG(t,u), is an AG-band if t + u = 1 (mod n).

  8. On Decompositions of Matrices over Distributive Lattices

    Directory of Open Access Journals (Sweden)

    Yizhi Chen

    2014-01-01

    Full Text Available Let L be a distributive lattice and Mn,q(L) (Mn(L), resp.) the semigroup (semiring, resp.) of n × q (n × n, resp.) matrices over L. In this paper, we show that if there is a subdirect embedding from the distributive lattice L to the direct product L1 × L2 × ⋯ × Lm of distributive lattices L1, L2, …, Lm, then there will be a corresponding subdirect embedding from the matrix semigroup Mn,q(L) (semiring Mn(L), resp.) to the semigroup Mn,q(L1) × ⋯ × Mn,q(Lm) (semiring Mn(L1) × ⋯ × Mn(Lm), resp.). Further, it is proved that a matrix over a distributive lattice can be decomposed into the sum of matrices over some of its special subchains. This generalizes and extends the decomposition theorems of matrices over finite distributive lattices, chain semirings, fuzzy semirings, and so forth. Finally, as some applications, we present a method to calculate the indices and periods of matrices over a distributive lattice and characterize the structures of idempotent and nilpotent matrices over it. We translate the characterizations of idempotent and nilpotent matrices over a distributive lattice into the corresponding ones of the binary Boolean cases, which also generalize the corresponding structures of idempotent and nilpotent matrices over general Boolean algebras, chain semirings, fuzzy semirings, and so forth.
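Matrix arithmetic over such lattices replaces the usual sum and product with join (max) and meet (min). A sketch over the fuzzy semiring ([0, 1], max, min) (our illustration, with a hypothetical function name) checks an idempotent matrix:

```python
import numpy as np

def lattice_matmul(A, B):
    """Matrix product over the fuzzy semiring ([0, 1], max, min):
    (A*B)[i, j] = max_k min(A[i, k], B[k, j])."""
    # Broadcast to shape (i, k, j), take entrywise min, then join over k.
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

A = np.array([[1.0, 0.4],
              [0.4, 1.0]])
print(np.array_equal(lattice_matmul(A, A), A))  # True: A is idempotent
```

Because every entry of an idempotent fuzzy matrix reappears unchanged under the max-min product, structural results for such matrices reduce, as the abstract notes, to the binary Boolean case applied cut-by-cut.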

  9. Compressed Adjacency Matrices: Untangling Gene Regulatory Networks.

    Science.gov (United States)

    Dinkla, K; Westenberg, M A; van Wijk, J J

    2012-12-01

    We present a novel technique, Compressed Adjacency Matrices, for visualizing gene regulatory networks. These directed networks have strong structural characteristics: out-degrees with a scale-free distribution, in-degrees bound by a low maximum, and few and small cycles. Standard visualization techniques, such as node-link diagrams and adjacency matrices, are impeded by these network characteristics. The scale-free distribution of out-degrees causes a high number of intersecting edges in node-link diagrams. Adjacency matrices become space-inefficient due to the low in-degrees and the resulting sparse network. Compressed adjacency matrices, however, exploit these structural characteristics. By cutting open and rearranging an adjacency matrix, we achieve a compact and neatly arranged visualization. Compressed adjacency matrices allow for easy detection of subnetworks with a specific structure, so-called motifs, which provide important knowledge about gene regulatory networks to domain experts. We summarize motifs commonly referred to in the literature, and relate them to network analysis tasks common to the visualization domain. We show that a user can easily find the important motifs in compressed adjacency matrices, and that this is hard in standard adjacency matrix and node-link diagrams. We also demonstrate that interaction techniques for standard adjacency matrices can be used for our compressed variant. These techniques include rearrangement clustering, highlighting, and filtering.

  10. Limits of noise and confusion in the MWA GLEAM year 1 survey

    CERN Document Server

    Franzen, T M O; Callingham, J R; Ekers, R D; Hancock, P J; Hurley-Walker, N; Morgan, J; Seymour, N; Wayth, R B; White, S V; Bell, M E; Dwarakanath, K S; For, B; Gaensler, B M; Hindson, L; Johnston-Hollitt, M; Kapinska, A D; Lenc, E; McKinley, B; Offringa, A R; Procopio, P; Staveley-Smith, L; Wu, C; Zheng, Q

    2016-01-01

    The GaLactic and Extragalactic All-sky MWA survey (GLEAM) is a new, relatively low-resolution, contiguous 72-231 MHz survey of the entire sky south of declination +25 deg. In this paper, we outline one approach to determining the relative contributions of system noise, classical confusion and sidelobe confusion in GLEAM images. An understanding of the noise and confusion properties of GLEAM is essential if we are to fully exploit GLEAM data and improve the design of future low-frequency surveys. Our early results indicate that sidelobe confusion dominates over the entire frequency range, implying that enhancements in data processing have the potential to further reduce the noise.

  11. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low-level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400 Mbit/s). The system is used...

  12. Evaluation of Microarray Preprocessing Algorithms Based on Concordance with RT-PCR in Clinical Samples

    DEFF Research Database (Denmark)

    Hansen, Kasper Lage; Szallasi, Zoltan Imre; Eklund, Aron Charles

    2009-01-01

    evaluated consistency using the Pearson correlation between measurements obtained on the two platforms. Also, we introduce the log-ratio discrepancy as a more relevant measure of discordance between gene expression platforms. Of nine preprocessing algorithms tested, PLIER+16 produced expression values...

  13. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly, and the effect is especially remarkable for low-S/N image pairs.

  14. A New Endmember Preprocessing Method for the Hyperspectral Unmixing of Imagery Containing Marine Oil Spills

    Directory of Open Access Journals (Sweden)

    Can Cui

    2017-09-01

    Methods that use hyperspectral remote sensing imagery to extract and monitor marine oil spills are currently quite popular. However, the automatic extraction of endmembers from hyperspectral imagery remains a challenge. This paper proposes a data field-spectral preprocessing (DSPP) algorithm for endmember extraction. The method first derives a set of extreme points from the data field of an image. At the same time, it identifies a set of spectrally pure points in the spectral space. Finally, the preprocessing algorithm fuses the data field with the spectral calculation to generate a new subset of endmember candidates for the subsequent endmember extraction. Compared with directly applying endmember extraction algorithms, the processing time is greatly shortened. The proposed algorithm provides accurate endmember detection, including the detection of anomalous endmembers; it therefore offers greater accuracy and stronger noise resistance while being less time-consuming. Using both synthetic hyperspectral images and real airborne hyperspectral images, we combined the proposed preprocessing algorithm with several endmember extraction algorithms to compare it with existing endmember extraction preprocessing algorithms. The experimental results show that the proposed method can effectively extract marine oil spill data.

  15. affyPara-a Bioconductor Package for Parallelized Preprocessing Algorithms of Affymetrix Microarray Data.

    Science.gov (United States)

    Schmidberger, Markus; Vicedo, Esmeralda; Mansmann, Ulrich

    2009-07-22

    Microarray data repositories as well as large clinical applications of gene expression allow the analysis of several hundred microarrays at one time. The preprocessing of large numbers of microarrays is still a challenge, as the algorithms are limited by the available computer hardware. For example, building classification or prognostic rules from large microarray sets will be very time consuming. Here, preprocessing has to be part of the cross-validation and resampling strategy that is necessary to estimate the rule's prediction quality honestly. This paper proposes the new Bioconductor package affyPara for parallelized preprocessing of Affymetrix microarray data. Partition of data can be applied on arrays, and parallelization of algorithms is a straightforward consequence. The partition of data and distribution to several nodes solves the main memory problems and accelerates preprocessing by up to a factor of 20 for 200 or more arrays. affyPara is a free and open source package, under the GPL license, available from the Bioconductor project at www.bioconductor.org. A user guide and examples are provided with the package.

  16. Pre-processing filter design at transmitters for IBI mitigation in an OFDM system

    Institute of Scientific and Technical Information of China (English)

    Xia Wang; Lei Wang

    2013-01-01

    In order to meet the demands for high transmission rates and high service quality in broadband wireless communication systems, orthogonal frequency division multiplexing (OFDM) has been adopted in some standards. However, inter-block interference (IBI) and inter-carrier interference (ICI) in an OFDM system degrade performance. To mitigate IBI and ICI, some pre-processing approaches have been proposed based on full channel state information (CSI), which improve system performance. Here, a pre-processing filter based on partial CSI at the transmitter is designed and investigated. The filter coefficients are obtained by an optimization procedure, the symbol error rate (SER) is tested, and the computational complexity of the proposed scheme is analyzed. Computer simulation results show that the proposed pre-processing filter can effectively mitigate IBI and ICI and improve performance. Compared with transmitter pre-processing approaches based on full CSI, the proposed scheme has high spectral efficiency, limited CSI feedback, and low computational complexity.

  17. ZPC Matrices and Zero Cycles

    Directory of Open Access Journals (Sweden)

    Marina Arav

    2009-01-01

    Let H be an m×n real matrix and let Z_i be the set of column indices of the zero entries of row i of H. Then the conditions |Z_k ∩ (Z_1 ∪ ⋯ ∪ Z_{k−1})| ≤ 1 for all k (2 ≤ k ≤ m) are called the (row) Zero Position Conditions (ZPC). If H satisfies the ZPC, then H is said to be a (row) ZPC matrix. If H^T satisfies the ZPC, then H is said to be a column ZPC matrix. The real matrix H is said to have a zero cycle if H has a sequence of at least four zero entries of the form h_{i1,j1}, h_{i1,j2}, h_{i2,j2}, h_{i2,j3}, …, h_{ik,jk}, h_{ik,j1} in which consecutive entries alternately share the same row or column index (but not both), and the last entry has one common index with the first entry. Several connections between the ZPC and the nonexistence of zero cycles are established. In particular, it is proved that a matrix H has no zero cycle if and only if there are permutation matrices P and Q such that PHQ is a row ZPC matrix and a column ZPC matrix.
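
    The row ZPC is easy to check mechanically. The following Python sketch (illustrative code, not from the paper; the function name and matrix layout are assumptions) tests the condition for a NumPy array:

```python
import numpy as np

def is_row_zpc(H):
    """Row Zero Position Condition: for each row k (k >= 2), the zero
    positions of row k share at most one column with the union of the
    zero positions of all earlier rows."""
    zero_sets = [set(np.flatnonzero(row == 0)) for row in H]
    union = set(zero_sets[0])
    for k in range(1, len(zero_sets)):
        if len(zero_sets[k] & union) > 1:
            return False
        union |= zero_sets[k]
    return True

H = np.array([[0, 1, 2],
              [3, 0, 4],
              [0, 5, 0]])   # each new row reuses at most one old zero column
print(is_row_zpc(H))        # True
```

    A matrix whose first two rows both have zeros in the same two columns fails the condition, since the intersection then has size 2.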

  18. Inter-Rater Reliability of Preprocessing EEG Data: Impact of Subjective Artifact Removal on Associative Memory Task ERP Results

    Directory of Open Access Journals (Sweden)

    Steven D. Shirk

    2017-06-01

    The processing of EEG data routinely involves subjective removal of artifacts during a preprocessing stage. Preprocessing inter-rater reliability (IRR) and how differences in preprocessing may affect outcomes of primary event-related potential (ERP) analyses have not been previously assessed. Three raters independently preprocessed EEG data of 16 cognitively healthy adult participants (ages 18-39 years) who performed a memory task. Using intraclass correlations (ICCs), IRR was assessed for Early-frontal, Late-frontal, and Parietal Old/new memory effect contrasts across eight regions of interest (ROIs). IRR was good to excellent for all ROIs; 22 of 26 ICCs were above 0.80. Raters were highly consistent in preprocessing across ROIs, although the frontal pole ROI (ICC range 0.60-0.90) showed less consistency. Old/new parietal effects had the highest ICCs with the lowest variability. Rater preprocessing differences did not alter primary ERP results. IRR for EEG preprocessing was good to excellent, and subjective rater removal of EEG artifacts did not alter primary memory-task ERP results. Findings provide preliminary support for the robustness of cognitive/memory task-related ERP results against significant inter-rater preprocessing variability and suggest the reliability of EEG for assessing cognitive-neurophysiological processes when multiple preprocessors are involved.

  19. Predictive modeling of colorectal cancer using a dedicated pre-processing pipeline on routine electronic medical records

    NARCIS (Netherlands)

    Kop, Reinier; Hoogendoorn, Mark; Teije, Annette Ten; Büchner, Frederike L; Slottje, Pauline; Moons, Leon M G; Numans, Mattijs E

    2016-01-01

    Over the past years, research utilizing routine care data extracted from Electronic Medical Records (EMRs) has increased tremendously. Yet there are no straightforward, standardized strategies for pre-processing these data. We propose a dedicated medical pre-processing pipeline aimed at taking on

  20. Random Matrices and Lyapunov Coefficients Regularity

    Science.gov (United States)

    Gallavotti, Giovanni

    2017-02-01

    Analyticity and other properties of the largest or smallest Lyapunov exponent of a product of real matrices with a "cone property" are studied as functions of the matrix entries, as long as they vary without destroying the cone property. The result is applied to stability directions, Lyapunov coefficients and Lyapunov exponents of a class of products of random matrices and to dynamical systems. The results are not new; the method is the main point of this work: it is based on the classical theory of the Mayer series in Statistical Mechanics of rarefied gases.

  1. Statistical properties of random density matrices

    CERN Document Server

    Sommers, H J; Sommers, Hans-Juergen; Zyczkowski, Karol

    2004-01-01

    Statistical properties of ensembles of random density matrices are investigated. We compute traces and von Neumann entropies averaged over ensembles of random density matrices distributed according to the Bures measure. The eigenvalues of the random density matrices are analyzed: we derive the eigenvalue distribution for the Bures ensemble, which is shown to be broader than the quarter-circle distribution characteristic of the Hilbert-Schmidt ensemble. For measures induced by partial tracing over the environment we compute exactly the two-point eigenvalue correlation function.

  2. Statistical properties of random density matrices

    Energy Technology Data Exchange (ETDEWEB)

    Sommers, Hans-Juergen [Fachbereich Physik, Universitaet Duisburg-Essen, Campus Essen, 45117 Essen (Germany); Zyczkowski, Karol [Instytut Fizyki im. Smoluchowskiego, Uniwersytet Jagiellonski, ul. Reymonta 4, 30-059 Cracow (Poland)

    2004-09-03

    Statistical properties of ensembles of random density matrices are investigated. We compute traces and von Neumann entropies averaged over ensembles of random density matrices distributed according to the Bures measure. The eigenvalues of the random density matrices are analysed: we derive the eigenvalue distribution for the Bures ensemble, which is shown to be broader than the quarter-circle distribution characteristic of the Hilbert-Schmidt ensemble. For measures induced by partial tracing over the environment we compute exactly the two-point eigenvalue correlation function.

  3. Direct dialling of Haar random unitary matrices

    Science.gov (United States)

    Russell, Nicholas J.; Chakhmakhchyan, Levon; O’Brien, Jeremy L.; Laing, Anthony

    2017-03-01

    Random unitary matrices find a number of applications in quantum information science, and are central to the recently defined boson sampling algorithm for photons in linear optics. We describe an operationally simple method to directly implement Haar random unitary matrices in optical circuits, with no requirement for prior or explicit matrix calculations. Our physically motivated and compact representation directly maps independent probability density functions for parameters in Haar random unitary matrices, to optical circuit components. We go on to extend the results to the case of random unitaries for qubits.
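
    Although the paper's contribution is a direct optical-circuit parametrization, a standard numerical route to sampling a Haar random unitary, QR decomposition of a complex Ginibre matrix with a phase correction, can serve as a reference point. The sketch below is illustrative and is not the authors' method:

```python
import numpy as np

def haar_unitary(n, seed=0):
    """Sample an n x n Haar-distributed unitary: QR-decompose a complex
    Ginibre matrix, then fix the phases of R's diagonal so the resulting
    Q is Haar-distributed rather than merely unitary."""
    rng = np.random.default_rng(seed)
    z = (rng.standard_normal((n, n))
         + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))   # rescale each column by a unit phase

U = haar_unitary(4)
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
```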

  4. A method for generating realistic correlation matrices

    CERN Document Server

    Garcia, Stephan Ramon

    2011-01-01

    Simulating sample correlation matrices is important in many areas of statistics. Approaches such as generating normal data and finding their sample correlation matrix or generating random uniform $[-1,1]$ deviates as pairwise correlations both have drawbacks. We develop an algorithm for adding noise, in a highly controlled manner, to general correlation matrices. In many instances, our method yields results which are superior to those obtained by simply simulating normal data. Moreover, we demonstrate how our general algorithm can be tailored to a number of different correlation models. Finally, using our results with an existing clustering algorithm, we show that simulating correlation matrices can help assess statistical methodology.
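
    The baseline approach the abstract mentions, taking the sample correlation matrix of simulated normal data, can be sketched as follows (illustrative code; this is the baseline being improved upon, not the paper's controlled noise-adding algorithm):

```python
import numpy as np

def sample_correlation_matrix(p, n, seed=1):
    """Draw n observations of a p-variate standard normal and return
    their sample correlation matrix."""
    X = np.random.default_rng(seed).standard_normal((n, p))
    return np.corrcoef(X, rowvar=False)   # variables are columns

R = sample_correlation_matrix(5, 50)
print(R.shape)                        # (5, 5)
print(np.allclose(np.diag(R), 1.0))   # True: unit diagonal
```

    One drawback of this baseline, which motivates more controlled constructions, is that the off-diagonal structure is dictated entirely by the identity population correlation and the sample size n.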

  5. The Antitriangular Factorization of Saddle Point Matrices

    KAUST Repository

    Pestana, J.

    2014-01-01

    Mastronardi and Van Dooren [SIAM J. Matrix Anal. Appl., 34 (2013), pp. 173-196] recently introduced the block antitriangular ("Batman") decomposition for symmetric indefinite matrices. Here we show the simplification of this factorization for saddle point matrices and demonstrate how it represents the common nullspace method. We show that rank-1 updates to the saddle point matrix can be easily incorporated into the factorization and give bounds on the eigenvalues of matrices important in saddle point theory. We show the relation of this factorization to constraint preconditioning and how it transforms but preserves the structure of block diagonal and block triangular preconditioners. © 2014 Society for Industrial and Applied Mathematics.

  6. Reproducible cancer biomarker discovery in SELDI-TOF MS using different pre-processing algorithms.

    Directory of Open Access Journals (Sweden)

    Jinfeng Zou

    BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS) studies. However, biomarker identification for specific diseases has been hindered by irreproducibility. Specifically, a peak profile extracted from a dataset for biomarker identification depends on the data pre-processing algorithm, and until now no widely accepted agreement has been reached. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE) peaks from peak profiles produced by three widely used average spectrum-dependent pre-processing algorithms based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two important factors that affect the consistency of DE peak identification using different algorithms. One factor is that some DE peaks selected from one peak profile were not detected as peaks in other profiles, and the second factor is that the statistical power of identifying DE peaks in large peak profiles with many peaks may be low due to the large scale of the tests and the small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles could be improved by the stratified false discovery rate (FDR) control approach and that the reproducibility of DE peak detection could thereby be increased. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationship among different algorithms and also help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks for a dataset tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should be able to produce peaks sufficient for identifying useful and reproducible biomarkers.

  7. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    Science.gov (United States)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

    This recent work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. Hence, this work investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling, and single scaling with the Standard Normal Variate (SNV). The combinations of these scaling methods affect exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000-14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method, with the size of each class kept equal to 2/3 of the class with the minimum number of samples. The t-statistic was then employed as a variable selection method to determine which variables are significant for the classification models. The data pre-processing was evaluated using the modified silhouette width (mSW), PCA, and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance, as indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
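
    The Standard Normal Variate transform named above is simple to state: each spectrum is centred by its own mean and scaled by its own standard deviation. A minimal sketch, assuming spectra are stored row-wise in a NumPy array:

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum (row)
    by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Two spectra differing only by an overall scale factor become identical
# after SNV, which removes multiplicative scatter effects.
X = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
Xs = snv(X)
print(np.allclose(Xs[0], Xs[1]))  # True
```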

  8. Value of Distributed Preprocessing of Biomass Feedstocks to a Bioenergy Industry

    Energy Technology Data Exchange (ETDEWEB)

    Christopher T Wright

    2006-07-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system and the front-end of a biorefinery. Its purpose is to chop, grind, or otherwise format the biomass into a suitable feedstock for conversion to ethanol and other bioproducts. Many variables such as equipment cost and efficiency, and feedstock moisture content, particle size, bulk density, compressibility, and flowability affect the location and implementation of this unit operation. Previous conceptual designs show this operation to be located at the front-end of the biorefinery. However, data are presented that show distributed preprocessing at the field-side or in a fixed preprocessing facility can provide significant cost benefits by producing a higher value feedstock with improved handling, transporting, and merchandising potential. In addition, data supporting the preferential deconstruction of feedstock materials due to their bio-composite structure identify the potential for significant improvements in equipment efficiencies and compositional quality upgrades. These data are collected from full-scale low and high capacity hammermill grinders with various screen sizes. Multiple feedstock varieties with a range of moisture values were used in the preprocessing tests. The comparative values of the different grinding configurations, feedstock varieties, and moisture levels are assessed through post-grinding analysis of the different particle fractions separated with a medium-scale forage particle separator and a Rototap separator. The results show that distributed preprocessing produces a material that has bulk flowable properties and fractionation benefits that can improve the ease of transporting, handling and conveying the material to the biorefinery and improve the biochemical and thermochemical conversion processes.

  9. NUTRITIONAL STUDIES ON THE CONFUSED FLOUR BEETLE, TRIBOLIUM CONFUSUM DUVAL

    Science.gov (United States)

    Chapman, Royal N.

    1924-01-01

    The confused flour beetle (Tribolium confusum) was chosen for this study because it lives in a food which ordinarily contains no living organisms. The death rates are greater in cultures which are handled daily than in those which are not handled but when all are handled alike the results are comparable. The results from experiments with individual beetles in various kinds of flour were plotted with instars (larval stages) on the ordinate and time in days on the abscissa, using the results from control experiments in wheat flour to determine the length of the various instars from an "x = y" formula. The curves of development were found to be straight lines throughout all but the last instar. The curve for the last instar during which the larva transformed deviated from the straight line in certain foods, notably rice flour. When mass cultures were used the death and transformation curves were plotted for each synthetic food. A comparison of the curves from wheat flour and the synthetic foods shows that the first parts of the curves are very much alike in all cases and that a few resemble the control in every respect except that the transformation curve has been moved back for a considerable time. The death curves for the mass cultures are not smooth but show sudden increase in death at approximately the times of molting. These curves may therefore be compared with the records from individual beetles. PMID:19872096

  10. Anthropology in an era of confusion

    Directory of Open Access Journals (Sweden)

    Maybury-Lewis David

    2002-01-01

    Anthropology has always sought to understand human nature and the varieties of human culture. This ambitious task has constantly faced theoretical and methodological difficulties. The theory of social evolution was shown to be prejudiced and based on scant evidence, including a racist inference in an age of European domination. The antidotes, rigorous fieldwork inspired by functionalism, structuralism, or culturalism, were also seen as contaminated by the hierarchies of a colonialist world order. Postmodern attention to this "orientalism" in a postcolonial world produced texts intended to alert anthropologists to these questions, which led to a decline in anthropological production and understanding. This lecture considers these dilemmas, the current debates about "culture" and "cultural survival," and how anthropologists should proceed in this new era of confusion, produced by globalization and the emergence of the nation-state.

  11. Delusional confusion of dreaming and reality in narcolepsy.

    Science.gov (United States)

    Wamsley, Erin; Donjacour, Claire E H M; Scammell, Thomas E; Lammers, Gert Jan; Stickgold, Robert

    2014-02-01

    We investigated a generally unappreciated feature of the sleep disorder narcolepsy, in which patients mistake the memory of a dream for a real experience and form sustained delusions about significant events. We interviewed patients with narcolepsy and healthy controls to establish the prevalence of this complaint and identify its predictors. Academic medical centers in Boston, Massachusetts and Leiden, The Netherlands. Patients (n = 46) with a diagnosis of narcolepsy with cataplexy, and age-matched healthy controls (n = 41). "Dream delusions" were surprisingly common in narcolepsy and were often striking in their severity. As opposed to fleeting hypnagogic and hypnopompic hallucinations of the sleep/wake transition, dream delusions were false memories induced by the experience of a vivid dream, which led to false beliefs that could persist for days or weeks. The delusional confusion of dreamed events with reality is a prominent feature of narcolepsy, and suggests the possibility of source memory deficits in this disorder that have not yet been fully characterized.

  12. Triple confusion: An interesting case of proteinuria in pregnancy

    Directory of Open Access Journals (Sweden)

    Pramod K Guru

    2016-01-01

    Pregnancy-related renal diseases are unique and need special attention, both for diagnosis and management. The major confounding factors for diagnosis are the physiological multiorgan changes that occur throughout the gestational period. Proper diagnosis of the renal disease is also important, given the impact of the various management options on both maternal and fetal health. A middle-aged female with a long-standing history of diabetes presented to the hospital with worsening proteinuria in her second trimester of pregnancy. Clinical history, examination, and laboratory analysis did not give any clues to the diagnosis of a specific disease entity. This led us to take the risk of renal biopsy for a tissue diagnosis. The odds of renal biopsy favored the management decision in her case, thereby resolving the pre-biopsy diagnostic confusion. The pathological diagnosis was a surprise, though not a unique entity on its own (minimal change disease in pregnancy). The case illustrates the disparity between clinical presentation and pathology, and the importance of renal biopsy in pregnant patients in particular.

  13. Coherence, competence, and confusion in narratives of middle childhood.

    Science.gov (United States)

    Weinstein, Lissa; Shustorovich, Ellen

    2011-01-01

    Middle childhood is a pivotal time in character development during which enduring internal structures are formed. Fiction can offer insights into the cognitive and affective shifts of this developmental phase and how they are transformed in adulthood. While the success of beloved books for latency age children lies in the solutions they offer to the conflict between the pull toward independence and the pull back to the safety of childhood, the enduring stories for adults about children in their middle years can be seen as works of mourning for the relationship with the parents and the childhood self, but more importantly as attempts to transform their experience of middle childhood through the retrospective creation of a coherence that was initially absent. Thematic and structural elements distinguish two groups of stories for adults: the first appears to solve the conflicts of this period by importing adult knowledge and perspective into the narrative of childhood; the second describes the unconscious disorganizing aspects of this period, thereby offering readers a chance to reorganize their own memories, to make a coherent whole out of the fragmented, the confusing, and the unresolved.

  14. A Case of Systemic Lupus Erythematosus Confused with Infective Endocarditis

    Directory of Open Access Journals (Sweden)

    Sibel Serin

    2014-09-01

    Systemic lupus erythematosus (SLE) is a multisystemic autoimmune disease resulting from immune system-mediated tissue damage. Clinical findings of SLE can involve the skin, kidneys, central nervous system, cardiovascular system, serosal membranes, and the hematologic and immune systems. In the differential diagnosis, other connective tissue diseases, infective endocarditis, infections such as viral hepatitis, endocrine disorders such as hypothyroidism, sarcoidosis, and some malignant tumors should be considered. Infective endocarditis can imitate all the symptoms of SLE through glomerulonephritis caused by immune complex deposition. Hemolytic anemia, skin lesions, arthralgia, arthritis, decreased complement levels, and autoantibody positivity, including antinuclear antibody (ANA) positivity, can be seen. Therefore, high fever, blood cultures, eye examination, and echocardiographic findings are of particular value. Here, we present a case of SLE that was confused with infective endocarditis (IE) due to the presence of high fever associated with autoimmune hemolytic anemia (AHA) and proteinuria, as well as increased erythrocyte sedimentation rate (ESR), cardiac murmur, and Roth's spots. (The Medical Bulletin of Haseki 2014; 52: 212-15)

  15. A comprehensive analysis about the influence of low-level preprocessing techniques on mass spectrometry data for sample classification.

    Science.gov (United States)

    López-Fernández, Hugo; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Fernández-Riverola, Florentino

    2014-01-01

    Matrix-Assisted Laser Desorption Ionisation Time-of-Flight (MALDI-TOF) is one of the high-throughput mass spectrometry technologies that produce data requiring extensive preprocessing before subsequent analyses. In this context, several low-level preprocessing techniques have been successfully developed for different tasks, including baseline correction, smoothing, normalisation, peak detection and peak alignment. In this work, we present a systematic comparison of different software packages aiding in the compulsory preprocessing of MALDI-TOF data. In order to guarantee the validity of our study, we test multiple configurations of each preprocessing technique, which are subsequently used to train a set of classifiers whose performance (kappa and accuracy) provides us with accurate information for the final comparison. Results from experiments show the real impact of preprocessing techniques on classification, evidencing that MassSpecWavelet provides the best performance and Support Vector Machines (SVM) are one of the most accurate classifiers.
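
    The preprocessing stages named above (smoothing, baseline correction, normalisation) can be illustrated with a deliberately simple pipeline. This is a sketch under stated assumptions, not one of the packages compared in the paper: the window sizes, moving-average smoothing, and rolling-minimum baseline are all illustrative choices.

```python
import numpy as np

def preprocess_spectrum(y, smooth_win=5, base_win=51):
    """Toy MALDI-style preprocessing: moving-average smoothing,
    rolling-minimum baseline subtraction, then total-ion-current
    (TIC) normalisation so intensities sum to 1."""
    y = np.asarray(y, dtype=float)
    # 1. Smoothing: simple moving average.
    kernel = np.ones(smooth_win) / smooth_win
    smoothed = np.convolve(y, kernel, mode="same")
    # 2. Baseline: rolling minimum over a wide window.
    half = base_win // 2
    padded = np.pad(smoothed, half, mode="edge")
    baseline = np.array([padded[i:i + base_win].min()
                         for i in range(len(smoothed))])
    corrected = np.clip(smoothed - baseline, 0.0, None)
    # 3. Normalisation: divide by the total ion current.
    return corrected / corrected.sum()

y = np.abs(np.random.default_rng(0).normal(size=300)) + 1.0
p = preprocess_spectrum(y)
print(abs(p.sum() - 1.0) < 1e-9)  # True: normalised to unit TIC
```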

  16. Synchronous correlation matrices and Connes’ embedding conjecture

    Energy Technology Data Exchange (ETDEWEB)

    Dykema, Kenneth J., E-mail: kdykema@math.tamu.edu [Department of Mathematics, Texas A&M University, College Station, Texas 77843-3368 (United States); Paulsen, Vern, E-mail: vern@math.uh.edu [Department of Mathematics, University of Houston, Houston, Texas 77204 (United States)

    2016-01-15

    In the work of Paulsen et al. [J. Funct. Anal. (in press); preprint arXiv:1407.6918], the concept of synchronous quantum correlation matrices was introduced and these were shown to correspond to traces on certain C*-algebras. In particular, synchronous correlation matrices arose in their study of various versions of quantum chromatic numbers of graphs and other quantum versions of graph theoretic parameters. In this paper, we develop these ideas further, focusing on the relations between synchronous correlation matrices and microstates. We prove that Connes’ embedding conjecture is equivalent to the equality of two families of synchronous quantum correlation matrices. We prove that if Connes’ embedding conjecture has a positive answer, then the tracial rank and projective rank are equal for every graph. We then apply these results to more general non-local games.

  17. THE EIGENVALUE PERTURBATION BOUND FOR ARBITRARY MATRICES

    Institute of Scientific and Technical Information of China (English)

    Wen Li; Jian-xin Chen

    2006-01-01

    In this paper we present some new absolute and relative perturbation bounds for the eigenvalues of arbitrary matrices, which improve some recent results. The eigenvalue inclusion region is also discussed.

  18. Sufficient Conditions of Nonsingular H-matrices

    Institute of Scientific and Technical Information of China (English)

    王广彬; 洪振杰; 高中喜

    2004-01-01

    From the concept of a diagonally dominant matrix, two sufficient conditions of nonsingular H-matrices were obtained in this paper. An example was given to show that these results improve the known results.

  19. Optimizing the Evaluation of Finite Element Matrices

    CERN Document Server

    Kirby, Robert C; Logg, Anders; Scott, L Ridgway; 10.1137/040607824

    2012-01-01

    Assembling stiffness matrices represents a significant cost in many finite element computations. We address the question of optimizing the evaluation of these matrices. By finding redundant computations, we are able to significantly reduce the cost of building local stiffness matrices for the Laplace operator and for the trilinear form for Navier-Stokes. For the Laplace operator in two space dimensions, we have developed a heuristic graph algorithm that searches for such redundancies and generates code for computing the local stiffness matrices. Up to cubics, we are able to build the stiffness matrix on any triangle in less than one multiply-add pair per entry. Up to sixth degree, we can do it in less than about two. Preliminary low-degree results for Poisson and Navier-Stokes operators in three dimensions are also promising.
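
    For context, the local stiffness matrix the abstract discusses can be computed directly in the simplest case: linear (P1) elements for the Laplace operator on a triangle. This generic sketch does not reproduce the authors' optimized code generation; it is the straightforward evaluation their optimization improves upon:

```python
import numpy as np

def p1_stiffness(tri):
    """Local stiffness matrix for the Laplace operator with linear (P1)
    elements on a triangle given by its 3x2 vertex coordinate array.
    K[i, j] = area * grad(lambda_i) . grad(lambda_j), where lambda_i are
    the barycentric basis functions (constant gradients on the element)."""
    B = np.hstack([np.ones((3, 1)), tri])   # rows: [1, x_i, y_i]
    area = abs(np.linalg.det(B)) / 2.0
    G = np.linalg.inv(B)[1:, :]             # columns: grad(lambda_i)
    return area * G.T @ G

# Reference (unit right) triangle.
K = p1_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
print(np.allclose(K.sum(axis=0), 0.0))  # True: rows/columns sum to zero
```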

  20. Orthogonal Polynomials from Hermitian Matrices II

    CERN Document Server

    Odake, Satoru

    2016-01-01

    This is the second part of the project `unified theory of classical orthogonal polynomials of a discrete variable derived from the eigenvalue problems of hermitian matrices.' In a previous paper, orthogonal polynomials having Jackson integral measures were not included, since such measures cannot be obtained from single infinite dimensional hermitian matrices. Here we show that Jackson integral measures for the polynomials of the big $q$-Jacobi family are the consequence of the recovery of self-adjointness of the unbounded Jacobi matrices governing the difference equations of these polynomials. The recovery of self-adjointness is achieved in an extended $\ell^2$ Hilbert space on which a direct sum of two unbounded Jacobi matrices acts as a Hamiltonian or a difference Schrödinger operator for an infinite dimensional eigenvalue problem. The polynomial appearing at the upper/lower end of the Jackson integral constitutes the eigenvector of each of the two unbounded Jacobi matrices of the direct sum. We also point out...

  1. A Few Applications of Imprecise Matrices

    Directory of Open Access Journals (Sweden)

    Sahalad Borgoyary

    2015-07-01

    Full Text Available This article introduces a generalized form of the extension definition of the fuzzy set and its complement in the sense of reference function, namely the imprecise set and its complement. It discusses partial presence of an element and the membership value of an imprecise number in normal and subnormal imprecise numbers. Further, on the basis of the reference function, the usual matrix is defined in imprecise form with new notation. With the help of maximum and minimum operators, some new matrices are obtained, such as reducing imprecise matrices and complements of reducing imprecise matrices. Some classical matrix properties that also hold for imprecise matrices are discussed. Finally, examples are given of applications of the addition and subtraction of imprecise matrices in the field of transportation problems.

  2. Balanced random Toeplitz and Hankel Matrices

    CERN Document Server

    Basak, Anirban

    2010-01-01

    Except for the Toeplitz and Hankel matrices, the common patterned matrices for which the limiting spectral distribution (LSD) is known to exist share a common property: the number of times each random variable appears in the matrix is (more or less) the same across the variables. Thus it seems natural to ask what happens to the spectrum of the Toeplitz and Hankel matrices when each entry is scaled by the square root of the number of times that entry appears in the matrix, instead of the uniform scaling by $n^{-1/2}$. We show that the LSD of these balanced matrices exist and derive integral formulae for the moments of the limit distribution. Curiously, it is not clear if these moments define a unique distribution.
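The balanced scaling can be sketched directly: in an n-by-n Toeplitz matrix the variable x_{i-j} appears n - |i-j| times, so each entry is divided by the square root of that count rather than by sqrt(n). A small numpy illustration (the random inputs and the order n=4 are illustrative, not from the paper):

```python
import numpy as np

def balanced_toeplitz(x, n):
    """Balanced random Toeplitz matrix: entry (i, j) holds x[i-j] scaled by
    1/sqrt(n - |i-j|), the square root of the number of times the variable
    x[i-j] appears in the matrix, instead of the uniform 1/sqrt(n)."""
    T = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            k = i - j
            T[i, j] = x[k] / np.sqrt(n - abs(k))
    return T

rng = np.random.default_rng(0)
n = 4
# one random variable per diagonal offset, -(n-1) .. n-1
x = {k: rng.standard_normal() for k in range(-(n - 1), n)}
T = balanced_toeplitz(x, n)
```

The corner entries, which appear only once, are left unscaled, while the main diagonal is divided by sqrt(n).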

  3. Boolean Inner product Spaces and Boolean Matrices

    OpenAIRE

    Gudder, Stan; Latremoliere, Frederic

    2009-01-01

    This article discusses the concept of Boolean spaces endowed with a Boolean valued inner product and their matrices. A natural inner product structure for the space of Boolean n-tuples is introduced. Stochastic Boolean vectors and stochastic and unitary Boolean matrices are studied. A dimension theorem for orthonormal bases of a Boolean space is proven. We characterize the invariant stochastic Boolean vectors for a Boolean stochastic matrix and show that they can be used to reduce a unitary m...

  4. Generalized Inverses of Matrices over Rings

    Institute of Scientific and Technical Information of China (English)

    韩瑞珠; 陈建龙

    1992-01-01

    Let R be a ring and * be an involutory function of the set of all finite matrices over R. In this paper, necessary and sufficient conditions are given for a matrix to have a (1,3)-inverse, (1,4)-inverse, or Moore-Penrose inverse, relative to *. Some results about generalized inverses of matrices over division rings are generalized and improved.

  5. A Euclidean algorithm for integer matrices

    DEFF Research Database (Denmark)

    Lauritzen, Niels; Thomsen, Jesper Funch

    2015-01-01

    We present a Euclidean algorithm for computing a greatest common right divisor of two integer matrices. The algorithm is derived from elementary properties of finitely generated modules over the ring of integers.
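The module-theoretic idea can be sketched as stacking the two matrices and row-reducing over the integers: the nonzero rows of the reduced stack form a greatest common right divisor. This is a hedged illustration of that idea, not necessarily the authors' exact algorithm:

```python
import numpy as np

def gcrd(A, B):
    """Greatest common right divisor of integer matrices A and B (same
    column count), via Euclidean row reduction of the stacked matrix.
    Sketch of the stack-and-reduce idea; exact arithmetic via object dtype."""
    M = np.vstack([A, B]).astype(object)
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        # Euclidean algorithm on column c, restricted to rows r..end
        while True:
            nz = [i for i in range(r, rows) if M[i, c] != 0]
            if not nz:
                break
            p = min(nz, key=lambda i: abs(M[i, c]))   # smallest pivot first
            M[[r, p]] = M[[p, r]]
            done = True
            for i in range(r + 1, rows):
                q = M[i, c] // M[r, c]
                M[i] = M[i] - q * M[r]
                if M[i, c] != 0:
                    done = False
            if done:
                break
        if M[r, c] != 0:
            r += 1
        if r == rows:
            break
    return M[:min(r, cols)]

G = gcrd(np.array([[2, 0], [0, 2]]), np.array([[3, 0], [0, 3]]))
```

For A = 2I and B = 3I the GCRD is the identity, since 2 and 3 are coprime.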

  6. Infinite Products of Random Isotropically Distributed Matrices

    CERN Document Server

    Il'yn, A S; Zybin, K P

    2016-01-01

    Statistical properties of infinite products of random isotropically distributed matrices are investigated. Both for continuous processes with finite correlation time and for discrete sequences of independent matrices, a formalism is developed that allows one to easily calculate the Lyapunov spectrum and generalized Lyapunov exponents. This problem is of interest to probability theory; statistical characteristics of matrix T-exponentials are also needed for turbulent transport problems, dynamical chaos and other parts of statistical physics.

  7. A Wegner estimate for Wigner matrices

    CERN Document Server

    Maltsev, Anna

    2011-01-01

    In the first part of these notes, we review some of the recent developments in the study of the spectral properties of Wigner matrices. In the second part, we present a new proof of a Wegner estimate for the eigenvalues of a large class of Wigner matrices. The Wegner estimate gives an upper bound for the probability to find an eigenvalue in an interval $I$, proportional to the size $|I|$ of the interval.

  8. Matrices related to some Fock space operators

    Directory of Open Access Journals (Sweden)

    Krzysztof Rudol

    2011-01-01

    Full Text Available Matrices of operators with respect to frames are sometimes more natural and easier to compute than the ones related to bases. The present work investigates such operators on the Segal-Bargmann space, known also as the Fock space. We consider in particular some properties of matrices related to Toeplitz and Hankel operators. The underlying frame is provided by normalised reproducing kernel functions at some lattice points.

  9. Linear algebra for skew-polynomial matrices

    OpenAIRE

    Abramov, Sergei; Bronstein, Manuel

    2002-01-01

    We describe an algorithm for transforming skew-polynomial matrices over an Ore domain in row-reduced form, and show that this algorithm can be used to perform the standard calculations of linear algebra on such matrices (ranks, kernels, linear dependences, inhomogeneous solving). The main application of our algorithm is to desingularize recurrences and to compute the rational solutions of a large class of linear functional systems. It also turns out to be efficient when applied to ordinary co...

  10. Moment matrices, border bases and radical computation

    OpenAIRE

    Mourrain, B.; J. B. Lasserre; Laurent, Monique; Rostalski, P.; Trebuchet, Philippe

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods that involve moment matrices and semidefinite programming. While the border basis algorithms of [17] are efficient and numerically stable for computing complex roots, algorithms based on moment matrices [12] allow the incorporation of additional polynomials, ...

  11. Infinite Products of Random Isotropically Distributed Matrices

    Science.gov (United States)

    Il'yn, A. S.; Sirota, V. A.; Zybin, K. P.

    2017-01-01

    Statistical properties of infinite products of random isotropically distributed matrices are investigated. Both for continuous processes with finite correlation time and for discrete sequences of independent matrices, a formalism is developed that allows one to easily calculate the Lyapunov spectrum and generalized Lyapunov exponents. This problem is of interest to probability theory; statistical characteristics of matrix T-exponentials are also needed for turbulent transport problems, dynamical chaos and other parts of statistical physics.

  12. The Lost Lamb: A Literature Review on the Confusion of College Students in China

    Science.gov (United States)

    Dong, Jianmei; Han, Fubin

    2010-01-01

    With the development of mass higher education in China, confusion--a contradictory state between college students' awareness of employment, learning, morality, and their own behavior and societal requirements--is proving a ubiquitous problem among college students. This confusion has garnered much social attention. In this paper, the origins of…

  13. Annual Percentage Rate and Annual Effective Rate: Resolving Confusion in Intermediate Accounting Textbooks

    Science.gov (United States)

    Vicknair, David; Wright, Jeffrey

    2015-01-01

    Evidence of confusion in intermediate accounting textbooks regarding the annual percentage rate (APR) and annual effective rate (AER) is presented. The APR and AER are briefly discussed in the context of a note payable, and correct formulas for computing each are provided. Representative examples of the types of confusion that we found are presented…

  14. MERSENNE AND HADAMARD MATRICES CALCULATION BY SCARPIS METHOD

    Directory of Open Access Journals (Sweden)

    N. A. Balonin

    2014-05-01

    Full Text Available Purpose. The paper deals with the problem of basic generalizations of Hadamard matrices associated with maximum determinant matrices or determinant-suboptimal matrices with orthogonal columns (weighing matrices, Mersenne and Euler matrices, etc.); calculation methods for the quasi-orthogonal local maximum determinant Mersenne matrices have not been studied sufficiently. The goal of this paper is to develop the theory of Mersenne and Hadamard matrices on the basis of research into the generalized Scarpis method. Methods. Extreme solutions are found in general by minimizing the maximum of the absolute values of the elements of the studied matrices, followed by their classification according to the number of levels and their values depending on the order. Less universal but more effective methods are based on structural invariants of quasi-orthogonal matrices (Sylvester, Paley and Scarpis methods, etc.). Results. Generalizations of Hadamard and Belevitch matrices as a family of quasi-orthogonal matrices of odd orders are examined; they include, in particular, two-level Mersenne matrices. Definitions of section and layer on the set of generalized matrices are proposed. Calculation algorithms for matrices of adjacent layers and sections from matrices of lower orders are described. Examples are given of approximating the Belevitch matrix structures up to the 22nd critical order by a Mersenne matrix of the third order. A new formulation of the modified Scarpis method to approximate Hadamard matrices of high orders by lower-order Mersenne matrices is proposed. The Williamson method is described by the example of approximating one-modular-level matrices by matrices with a small number of levels. Practical relevance. The efficiency of this direction of development for band-pass filter creation is justified. Algorithms for Mersenne matrix design by the Scarpis method are used in developing the software of the research program complex. Mersenne filters are based on the suboptimal by
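Of the structural methods named in the abstract, Sylvester's is the simplest to illustrate: a Hadamard matrix of order n is doubled to order 2n via the block pattern [[H, H], [H, -H]]. A minimal sketch (the Paley and Scarpis constructions used in the paper are not reproduced here):

```python
import numpy as np

def sylvester_hadamard(k):
    """Hadamard matrix of order 2**k via Sylvester's doubling construction:
    H_{2n} = [[H_n, H_n], [H_n, -H_n]], starting from H_1 = [1]."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)   # order 8
```

The rows are pairwise orthogonal, so H @ H.T equals n times the identity, which is the defining property of a Hadamard matrix.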

  15. A Brief Historical Introduction to Matrices and Their Applications

    Science.gov (United States)

    Debnath, L.

    2014-01-01

    This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…

  17. Representation-independent manipulations with Dirac matrices and spinors

    OpenAIRE

    2007-01-01

    Dirac matrices, also known as gamma matrices, are defined only up to a similarity transformation. Usually, some explicit representation of these matrices is assumed in order to deal with them. In this article, we show how it is possible to proceed without any such assumption. Various important identities involving Dirac matrices and spinors have been derived without assuming any representation at any stage.

  18. Influence of data preprocessing on the quantitative determination of nutrient content in poultry manure by near infrared spectroscopy.

    Science.gov (United States)

    Chen, L J; Xing, L; Han, L J

    2010-01-01

    With increasing concern over potential pollution from farm wastes, there is a need for rapid and robust methods that can analyze livestock manure nutrient content. The near infrared spectroscopy (NIRS) method was used to determine nutrient content in diverse poultry manure samples (n=91). Various standard preprocessing methods (derivatives, multiplicative scatter correction, Savitzky-Golay smoothing, and standard normal variate) were applied to reduce systemic noise in the data. In addition, a new preprocessing method known as direct orthogonal signal correction (DOSC) was tested. Calibration models for ammonium nitrogen, total potassium, total nitrogen, and total phosphorus were developed with the partial least squares (PLS) method. The results showed that all preprocessed data improved prediction results compared with no preprocessing. Compared with the other preprocessing methods, the DOSC method gave the best results, achieving moderately successful prediction for ammonium nitrogen, total nitrogen, and total phosphorus. However, none of the preprocessing methods provided reliable prediction for total potassium. This indicates the DOSC method, especially combined with other preprocessing methods, needs further study to allow a more complete predictive analysis of manure nutrient content.
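Two of the standard preprocessing steps mentioned, Savitzky-Golay smoothing and standard normal variate (SNV), are easy to sketch; DOSC itself requires reference values and is omitted. The window length and polynomial order below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectra(X):
    """Savitzky-Golay smoothing followed by standard normal variate (SNV),
    applied row-wise to a matrix of spectra (samples x wavelengths)."""
    # Savitzky-Golay: least-squares polynomial smoothing in a sliding window
    Xs = savgol_filter(X, window_length=11, polyorder=2, axis=1)
    # SNV: center and scale each spectrum (row) individually
    mu = Xs.mean(axis=1, keepdims=True)
    sd = Xs.std(axis=1, keepdims=True)
    return (Xs - mu) / sd

rng = np.random.default_rng(1)
X = rng.random((5, 100))   # 5 toy spectra, 100 wavelengths
Z = preprocess_spectra(X)
```

After SNV each spectrum has zero mean and unit standard deviation, which removes multiplicative scatter differences between samples.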

  19. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    Science.gov (United States)

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation based on a Gaussian Mixture Model (GMM) to separate the person from the background, masking to reduce the dimensions of the images, and binarization to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.

  20. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Lara del Val

    2015-06-01

    Full Text Available Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation—based on a Gaussian Mixture Model (GMM—to separate the person from the background, masking—to reduce the dimensions of images—and binarization—to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.

  1. Pre-Processing for Video Coding with Rate-Distortion Optimization Decision

    Institute of Scientific and Technical Information of China (English)

    QI Yi; HUANG Yong-gui; QI Hong-gang

    2006-01-01

    This paper proposes an adaptive video pre-processing algorithm for video coding. The algorithm works on the original image before intra- or inter-prediction. It adopts a Gaussian filter to remove noise and insignificant features in video images. Detection and restoration of edges follow, to restore edges that are excessively filtered out in the filtered images. Rate-Distortion Optimization (RDO) is employed to adaptively decide whether a processed block or an unprocessed block is coded into the bit-stream, for more efficient coding. Our experimental results show that the algorithm achieves good coding performance in both subjective and objective terms. In addition, the proposed pre-processing algorithm is transparent to the decoder, and thus can be compliant with any video coding standard without modifying the decoder.

  2. PREPROCESSING IN LUNG AND HEART IMAGE SEGMENTATION USING AN ANISOTROPIC DIFFUSION FILTER

    Directory of Open Access Journals (Sweden)

    A. T. A Prawira Kusuma

    2015-12-01

    Full Text Available This paper proposes a preprocessing technique for a lung and heart segmentation scheme using an Anisotropic Diffusion filter. The aim is to improve the accuracy, sensitivity and specificity of the segmentation results. This method was chosen because of its edge-detecting ability: while smoothing, it can blur noise yet maintain the edges of objects in the image. Such a characteristic is needed for filtering medical images, where the boundary between organ and background is not very clear. The segmentation itself is done by K-means clustering and active contours to segment the lungs. The segmentation results were validated using the Receiver Operating Characteristic (ROC) and showed increased accuracy, sensitivity and specificity compared with the results of segmentation in the previous paper, in which the preprocessing method used was a Gaussian lowpass filter.
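The Perona-Malik formulation of anisotropic diffusion conveys the edge-preserving behavior the paper relies on: the conduction coefficient shrinks where the image gradient is large, so noise in flat regions is smoothed while organ boundaries survive. A minimal sketch with illustrative parameters (the paper's exact filter settings are not reproduced; boundaries here wrap around via np.roll):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: explicit update with four-neighbor
    differences; conduction exp(-(|grad|/kappa)^2) stops flux across edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic boundaries)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Small gradients (noise) diffuse; large gradients (edges) do not
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

rng = np.random.default_rng(4)
img = np.zeros((32, 32))
img[:, 16:] = 100.0                      # step edge, like an organ boundary
noisy = img + rng.normal(0.0, 5.0, img.shape)
smoothed = anisotropic_diffusion(noisy)
```

With lam <= 0.25 the explicit scheme is stable; the step edge (gradient 100 >> kappa) is essentially untouched while the flat-region noise (gradients ~7) is strongly smoothed.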

  3. A Study on Pre-processing Algorithms for Metal Parts Inspection

    Directory of Open Access Journals (Sweden)

    Haider Sh. Hashim

    2011-06-01

    Full Text Available Pre-processing is very useful in a variety of situations, since it helps to suppress information that is not related to the exact image processing or analysis task. Mathematical morphology is used for analysis, understanding and image processing; it is an influential method in geometric morphological analysis and image understanding, and has become a new theory in the digital image processing domain. Edge detection and noise reduction are crucial and very important pre-processing steps. Classical edge detection and filtering methods are less accurate at detecting complex edges and filtering various types of noise. This paper proposes some useful mathematical morphological techniques to detect edges and to filter noise in metal part images. The experimental results showed that the proposed algorithm helps to increase the accuracy of a metal parts inspection system.

  4. Analog preprocessing in a SNS 2 micrometers low-noise CMOS folding ADC

    Science.gov (United States)

    Carr, Richard D.

    1994-12-01

    Significant research in high performance analog-to-digital converters (ADC's) has been directed at retaining part of the high-speed flash ADC architecture, while reducing the total number of comparators in the circuit. The symmetrical number system (SNS) can be used to preprocess the analog input signal, reducing the number of comparators and thus reducing the chip area and power consumption of the ADC. This thesis examines a Very Large Scale Integrated (VLSI) design for a folding circuit for a SNS analog preprocessing architecture in a 9-bit folding ADC with a total of 23 comparators. The analog folding circuit layout uses the Orbit 2 micrometers CMOS N-well double-metal, double-poly low-noise analog process. The effects of Spice level 2 parameter tolerances during fabrication on the operation of the folding circuit are investigated numerically. The frequency response of the circuit is also quantified. An Application Specific Integrated Circuit (ASIC) is designed.

  5. Radar signal pre-processing to suppress surface bounce and multipath

    Science.gov (United States)

    Paglieroni, David W; Mast, Jeffrey E; Beer, N. Reginald

    2013-12-31

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes that return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  6. Preprocessing, classification modeling and feature selection using flow injection electrospray mass spectrometry metabolite fingerprint data.

    Science.gov (United States)

    Enot, David P; Lin, Wanchang; Beckmann, Manfred; Parker, David; Overy, David P; Draper, John

    2008-01-01

    Metabolome analysis by flow injection electrospray mass spectrometry (FIE-MS) fingerprinting generates measurements relating to large numbers of m/z signals. Such data sets often exhibit high variance with a paucity of replicates, thus providing a challenge for data mining. We describe data preprocessing and modeling methods that have proved reliable in projects involving samples from a range of organisms. The protocols interact with software resources specifically for metabolomics provided in a Web-accessible data analysis package FIEmspro (http://users.aber.ac.uk/jhd) written in the R environment and requiring a moderate knowledge of R command-line usage. Specific emphasis is placed on describing the outcome of modeling experiments using FIE-MS data that require further preprocessing to improve quality. The salient features of both poor and robust (i.e., highly generalizable) multivariate models are outlined together with advice on validating classifiers and avoiding false discovery when seeking explanatory variables.

  7. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

    Science.gov (United States)

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano

    2015-01-01

    Drawing on the results of an acoustic biometric system based on a MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation—based on a Gaussian Mixture Model (GMM) to separate the person from the background, masking—to reduce the dimensions of images—and binarization—to reduce the size of each image. An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392
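The back end of the pipeline, binarization followed by a linear SVM, can be sketched with scikit-learn on toy data. The beamforming, spatial filtering and GMM segmentation stages are omitted, and all data below are synthetic stand-ins, not the paper's acoustic images:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-ins for acoustic images of two speakers (8x8, values in [0, 1]);
# real inputs would come from the beamforming front end described above.
imgs = rng.random((40, 8, 8))
labels = np.repeat([0, 1], 20)
imgs[labels == 1] += 0.15          # make class 1 slightly "louder"

# Binarization: threshold each image to shrink its representation
binary = (imgs > 0.5).astype(np.uint8)

# Flatten each binary image into a feature vector and train a linear SVM
X = binary.reshape(len(binary), -1)
clf = LinearSVC(C=1.0).fit(X, labels)
acc = clf.score(X, labels)         # training accuracy on the toy data
```

Binarization trades discriminative detail for storage and computation, which is exactly the error-versus-burden sensitivity the paper analyses.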

  8. A Hybrid System based on Multi-Agent System in the Data Preprocessing Stage

    CERN Document Server

    Kularbphettong, Kobkul; Meesad, Phayung

    2010-01-01

    We describe the usage of a multi-agent system (MAS) in the data preprocessing stage of an on-going project called e-Wedding. The aim of this project is to utilize MAS and various approaches, like Web services, ontology, and data mining techniques, in e-Business applications that want to improve the responsiveness and efficiency of systems, so as to extract customer behavior models for wedding businesses. In this paper, however, we propose and implement a multi-agent system, based on JADE, to handle only the data preprocessing stage, focusing on missing-value handling techniques. JADE is quite easy to learn and use. Moreover, it supports many agent approaches such as agent communication, protocols, behaviors and ontologies. This framework has been experimented with and evaluated in a simple but realistic realization. The results, though still preliminary, are quite.

  9. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

    Full Text Available The aim of this paper is to present a method for neural network input parameter selection and preprocessing. The purpose of this network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters which are used in the analysis. Reduction of these parameters within each category was performed using the principal component analysis method. Component interdependencies are established and relations between them are formed. The newly formed relations were used to create the input vectors of a neural network. A multilayer feed-forward neural network is formed and trained using batch training. Finally, simulation results are presented and it is concluded that the input data preparation method is an effective way of preprocessing neural network data. [Project of the Ministry of Science of the Republic of Serbia, nos. TR 35005, III 43007 and III 44006]
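The parameter-reduction step, principal component analysis over the 70 economic parameters, can be sketched with a plain SVD; the paper's category structure and component-relation construction are not reproduced, and the data below are synthetic:

```python
import numpy as np

def pca_reduce(X, k):
    """Project centered data onto its first k principal components,
    computed via SVD, yielding k uncorrelated inputs per observation."""
    Xc = X - X.mean(axis=0)               # center each parameter
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                  # scores on the first k components

rng = np.random.default_rng(2)
X = rng.random((200, 70))                  # 200 observations, 70 parameters
Z = pca_reduce(X, 10)                      # 10 network inputs per observation
```

The resulting columns are mutually uncorrelated, which is what makes PCA scores convenient neural network inputs.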

  10. The Role of GRAIL Orbit Determination in Preprocessing of Gravity Science Measurements

    Science.gov (United States)

    Kruizinga, Gerhard; Asmar, Sami; Fahnestock, Eugene; Harvey, Nate; Kahan, Daniel; Konopliv, Alex; Oudrhiri, Kamal; Paik, Meegyeong; Park, Ryan; Strekalov, Dmitry; Watkins, Michael; Yuan, Dah-Ning

    2013-01-01

    The Gravity Recovery And Interior Laboratory (GRAIL) mission has constructed a lunar gravity field with unprecedented uniform accuracy on the farside and nearside of the Moon. GRAIL lunar gravity field determination begins with preprocessing of the gravity science measurements by applying corrections for time tag error, general relativity, measurement noise and biases. Gravity field determination requires the generation of spacecraft ephemerides of an accuracy not attainable with the pre-GRAIL lunar gravity fields. Therefore, a bootstrapping strategy was developed, iterating between science data preprocessing and lunar gravity field estimation in order to construct sufficiently accurate orbit ephemerides. This paper describes the GRAIL measurements, their dependence on the spacecraft ephemerides and the role of orbit determination in the bootstrapping strategy. Simulation results will be presented that validate the bootstrapping strategy, followed by bootstrapping results for flight data, which have led to the latest GRAIL lunar gravity fields.

  11. The impact of data preprocessing in traumatic brain injury detection using functional magnetic resonance imaging.

    Science.gov (United States)

    Vergara, Victor M; Damaraju, Eswar; Mayer, Andrew B; Miller, Robyn; Cetin, Mustafa S; Calhoun, Vince

    2015-01-01

    Traumatic brain injury (TBI) can adversely affect a person's thinking, memory, personality and behavior. For this reason new and better biomarkers are being investigated. Resting state functional network connectivity (rsFNC) derived from functional magnetic resonance (fMRI) imaging is emerging as a possible biomarker. One of the main concerns with this technique is the appropriateness of methods used to correct for subject movement. In this work we used 50 mild TBI patients and matched healthy controls to explore the outcomes obtained from different fMRI data preprocessing. Results suggest that correction for motion variance before spatial smoothing is the best alternative. Following this preprocessing option a significant group difference was found between cerebellum and supplementary motor area/paracentral lobule. In this case the mTBI group exhibits an increase in rsFNC.

  12. KONFIG and REKONFIG: Two interactive preprocessors for the Navy/NASA Engine Program (NNEP)

    Science.gov (United States)

    Fishbach, L. H.

    1981-01-01

    The NNEP is a computer program that is currently being used to simulate the thermodynamic cycle performance of almost all types of turbine engines by many government, industry, and university personnel. The NNEP uses arrays of input data to set up the engine simulation and component matching method as well as to describe the characteristics of the components. A preprocessing program (KONFIG) is described in which the user at a terminal on a time-shared computer can interactively prepare the arrays of data required. It is intended to make it easier for the occasional or new user to operate NNEP. Another preprocessing program (REKONFIG), in which the user can modify the component specifications of a previously configured NNEP dataset, is also described. It is intended to aid in preparing data for parametric studies and/or studies of similar engines such as mixed-flow turbofans, turboshafts, etc.

  13. Effective automated prediction of vertebral column pathologies based on logistic model tree with SMOTE preprocessing.

    Science.gov (United States)

    Karabulut, Esra Mahsereci; Ibrikci, Turgay

    2014-05-01

    This study develops a logistic model tree based automated system for accurate recognition of types of vertebral column pathologies. Six biomechanical measures are used for this purpose: pelvic incidence, pelvic tilt, lumbar lordosis angle, sacral slope, pelvic radius and grade of spondylolisthesis. A two-phase classification model is employed in which the first step is preprocessing the data by use of the Synthetic Minority Over-sampling Technique (SMOTE), and the second is feeding the classifier Logistic Model Tree (LMT) with the preprocessed data. We have achieved an accuracy of 89.73%, and 0.964 Area Under Curve (AUC), in computer-based automatic detection of the pathology. This was validated via a 10-fold cross-validation experiment conducted on clinical records of 310 patients. The study also presents a comparative analysis of the vertebral column data with the use of several machine learning algorithms.
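The SMOTE step can be sketched in a few lines: each synthetic minority sample is drawn on the segment between a minority point and one of its k nearest minority neighbours. A minimal numpy sketch on synthetic data (the study's six biomechanical features and the LMT classifier are not reproduced here):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: each synthetic sample interpolates between a
    minority sample and one of its k nearest minority neighbours."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the point itself
        j = rng.choice(nbrs)
        gap = rng.random()                 # random position on the segment
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(3)
X_min = rng.random((20, 6))    # 20 minority samples, 6 illustrative features
X_syn = smote(X_min, n_new=30, rng=rng)
```

Because every synthetic point is a convex combination of two minority points, the oversampled class stays inside the convex hull of the original minority samples.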

  14. The Combined Effect of Filters in ECG Signals for Pre-Processing

    Directory of Open Access Journals (Sweden)

    Isha V. Upganlawar

    2014-05-01

    Full Text Available The ECG signal is abruptly changing and continuous in nature. Diagnosis of heart conditions such as paroxysmal arrhythmia feeds into intelligent health-care decisions, so the ECG signal needs to be pre-processed accurately before further processing, such as feature extraction, wavelet decomposition, locating the QRS complexes in ECG recordings and related information such as heart rate and RR interval, and classification of the signal by various classifiers. Filters play a very important role in analyzing the low-frequency components of the ECG signal. Since biomedical signals are of low frequency, the removal of power-line interference and baseline wander is a very important step at the pre-processing stage of ECG. In this paper we study median filtering and FIR (Finite Impulse Response) filtering of ECG signals under noisy conditions.
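The two filters discussed, a median filter for baseline wander and an FIR low-pass for power-line interference, can be sketched on a synthetic signal; all rates, cutoffs and kernel sizes below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.signal import medfilt, firwin, filtfilt

fs = 360                                   # sampling rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)
ecg = np.sin(2 * np.pi * 8 * t)            # toy stand-in for the cardiac signal
noisy = ecg + 0.5 * np.sin(2 * np.pi * 0.3 * t)   # baseline wander
noisy = noisy + 0.2 * np.sin(2 * np.pi * 50 * t)  # power-line interference

# Median filtering: estimate the slowly varying baseline, then subtract it
baseline = medfilt(noisy, kernel_size=251)
detrended = noisy - baseline

# FIR low-pass (cutoff 35 Hz) to suppress the 50 Hz power-line component;
# filtfilt applies it forward and backward for zero phase distortion
taps = firwin(numtaps=101, cutoff=35.0, fs=fs)
clean = filtfilt(taps, [1.0], detrended)
```

The median filter removes the drift without smearing sharp features the way a long moving average would, which is why it is a common baseline-wander choice for ECG.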

  15. Condition number estimation of preconditioned matrices.

    Science.gov (United States)

    Kushida, Noriyuki

    2015-01-01

    The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even for a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated with the finite element method.
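    The paper's exact formulation is not reproduced here, but Hager's 1-norm estimator, on which the proposed method is based, can be sketched using only matrix-vector products — precisely what makes it attractive when the preconditioned matrix cannot be formed explicitly.

```python
import numpy as np

def hager_norm1(matvec, rmatvec, n, itmax=10):
    """Hager's method: estimate ||B||_1 using only products with B and B^T,
    so B (e.g. a preconditioned matrix M^{-1}A) never needs to be formed.
    The returned value is always a lower bound on the true 1-norm."""
    x = np.full(n, 1.0 / n)
    est = 0.0
    for _ in range(itmax):
        y = matvec(x)
        y1 = np.linalg.norm(y, 1)
        if y1 <= est:                     # no improvement: stop
            break
        est = y1
        xi = np.where(y >= 0, 1.0, -1.0)  # subgradient of the 1-norm at y
        z = rmatvec(xi)
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x:         # optimality condition met
            break
        x = np.zeros(n)                   # restart from the most promising
        x[j] = 1.0                        # unit vector
    return est
```

    A 1-norm condition number estimate follows by applying the same routine to products with the inverse (via a solver) and multiplying the two estimates.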

  16. Condition number estimation of preconditioned matrices.

    Directory of Open Access Journals (Sweden)

    Noriyuki Kushida

    Full Text Available The present paper introduces a condition number estimation method for preconditioned matrices. The newly developed method provides reasonable results, whereas the conventional method, which is based on the Lanczos connection, gives meaningless results. The Lanczos connection based method provides the condition numbers of coefficient matrices of systems of linear equations with information obtained through the preconditioned conjugate gradient method. Estimating the condition number of preconditioned matrices is sometimes important when describing the effectiveness of new preconditioners or selecting adequate preconditioners. Operating a preconditioner on a coefficient matrix is the simplest method of estimation. However, this is not possible for large-scale computing, especially if computation is performed on distributed memory parallel computers, because the preconditioned matrices become dense even if the original matrices are sparse. Although the Lanczos connection method can be used to calculate the condition number of preconditioned matrices, it is not considered applicable to large-scale problems because of its weakness with respect to numerical errors. Therefore, we have developed a robust and parallelizable method based on Hager's method. Feasibility studies are carried out for the diagonal scaling preconditioner and the SSOR preconditioner with a diagonal matrix, a tridiagonal matrix and Pei's matrix. As a result, the Lanczos connection method contains around 10% error in the results even for a simple problem, whereas the new method contains negligible errors. In addition, the newly developed method returns reasonable solutions when the Lanczos connection method fails with Pei's matrix and with matrices generated with the finite element method.

  17. The Combined Effect of Filters in ECG Signals for Pre-Processing

    OpenAIRE

    Isha V. Upganlawar; Harshal Chowhan

    2014-01-01

    The ECG signal is abruptly changing and continuous in nature. Heart diseases such as paroxysmal arrhythmia are diagnosed through intelligent health care decisions, so the ECG signal must be pre-processed accurately before further actions such as feature extraction, wavelet decomposition, detection of QRS complexes in ECG recordings and related information such as heart rate and RR interval, and classification of the signal with various classifiers. Filters p...

  18. Data preprocessing for a vehicle-based localization system used in road traffic applications

    Science.gov (United States)

    Patelczyk, Timo; Löffler, Andreas; Biebl, Erwin

    2016-09-01

    This paper presents a fixed-point implementation of the preprocessing using a field programmable gate array (FPGA), which is required for a multipath joint angle and delay estimation (JADE) used in road traffic applications. This paper lays the foundation for many model-based parameter estimation methods. Here, a simulation of a vehicle-based localization system application for protecting vulnerable road users, who are equipped with appropriate transponders, is considered. For such safety-critical applications, the robustness and real-time capability of the localization are particularly important. Additionally, a motivation to use a fixed-point implementation for the data preprocessing is the limited computing power of the head unit of a vehicle. This study aims to process the raw data provided by the localization system used in this paper. The data preprocessing applied includes a wideband calibration of the physical localization system, separation of relevant information from the received sampled signal, and preparation of the incoming data via further processing. Further, a channel matrix estimation was implemented to complete the data preprocessing, which contains information on channel parameters, e.g., the positions of the objects to be located. In the presented case of a vehicle-based localization system application we assume an urban environment, in which multipath propagation occurs. Since most localization methods assume uncorrelated signals, this must be addressed. Hence, a decorrelation of the incoming data stream is required before further localization. This decorrelation was accomplished by considering several snapshots in different time slots. As a final aspect of the use of fixed-point arithmetic, quantization errors are considered. In addition, the resources and runtime of the presented implementation are discussed; these factors are strongly linked to a practical implementation.
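    The decorrelation-by-snapshots idea can be illustrated with a toy array-signal model (the antenna count, arrival angles and noise level below are arbitrary assumptions): a covariance matrix built from a single snapshot is rank one, while averaging over several time slots restores the multi-source structure that subspace-based JADE-type estimators require.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_snap = 6, 128                        # array size / snapshot count (assumed)
angles = np.array([0.3, -0.5])                # two arrival directions in radians (assumed)
A = np.exp(1j * np.pi * np.outer(np.arange(n_ant), np.sin(angles)))  # steering matrix
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.05 * (rng.standard_normal((n_ant, n_snap))
                + 1j * rng.standard_normal((n_ant, n_snap)))
X = A @ S + noise                             # received data, one column per snapshot

R_single = np.outer(X[:, 0], X[:, 0].conj())  # one snapshot: rank-one covariance
R_avg = (X @ X.conj().T) / n_snap             # averaged over several time slots

ev_single = np.sort(np.linalg.eigvalsh(R_single))[::-1]
ev_avg = np.sort(np.linalg.eigvalsh(R_avg))[::-1]
# R_single cannot separate the two paths; R_avg shows two dominant eigenvalues.
```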

  19. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    Energy Technology Data Exchange (ETDEWEB)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-06-01

    Forty-one patients (25 male, 16 female) were studied by Radionuclide Cardioangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. Both preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior by the consensus of two independent observers in 65 of 66 studies (98%) using Fourier filtered data. The LVEF differed between the two processes by greater than 0.05 in 17 of 66 studies (26%), including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smoothing preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technological advance in the development of the RNCA study.
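    A minimal sketch of the two preprocessing options compared above, applied to a synthetic time-activity curve; the Fourier cut-off harmonic is an assumption, as the clinical filter design is not specified in the abstract.

```python
import numpy as np

def fourier_filter(curve, keep=4):
    """Low-pass Fourier filter: zero every harmonic above `keep`.
    (The cut-off is an assumption; the clinical filter design is not given.)"""
    F = np.fft.rfft(curve)
    F[keep + 1:] = 0.0
    return np.fft.irfft(F, n=len(curve))

def nine_point_smooth(curve):
    """Standard nine-point moving-average temporal smooth, applied
    cyclically (appropriate for a periodic cardiac cycle)."""
    pad = np.concatenate([curve[-4:], curve, curve[:4]])
    return np.convolve(pad, np.ones(9) / 9.0, mode="valid")
```

    Both methods suppress frame-to-frame noise; the Fourier filter preserves the low harmonics of the volume curve exactly, which is why it tends to give cleaner borders and a more reliable LVEF.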

  20. Pre-Processing and Re-Weighting Jet Images with Different Substructure Variables

    CERN Document Server

    Huynh, Lynn

    2016-01-01

    This work is an extension of Monte Carlo simulation based studies in tagging boosted, hadronically decaying W bosons at a center-of-mass energy of √s = 13 TeV. Two pre-processing techniques used with jet images, translation and rotation, are first examined. The generated jet images for W signal jets and QCD background jets are then rescaled and weighted with five different substructure variables for visual comparison.
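    The translation step can be sketched as follows (centering on the hottest pixel; real analyses typically centre on the pT-weighted centroid and additionally rotate the principal axis vertical, which is omitted here).

```python
import numpy as np

def center_jet_image(img):
    """Translate (cyclically) so the hottest pixel sits at the image centre.
    This is only the translation part of the pre-processing; rotation by the
    principal-axis angle requires interpolation and is not shown."""
    r, c = np.unravel_index(np.argmax(img), img.shape)
    return np.roll(np.roll(img, img.shape[0] // 2 - r, axis=0),
                   img.shape[1] // 2 - c, axis=1)

def normalize(img):
    """Rescale so pixel intensities (pT deposits) sum to one."""
    return img / img.sum()
```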

  1. Preprocessing techniques to reduce atmospheric and sensor variability in multispectral scanner data.

    Science.gov (United States)

    Crane, R. B.

    1971-01-01

    Multispectral scanner data are potentially useful in a variety of remote sensing applications. Large-area surveys of earth resources carried out by automated recognition processing of these data are particularly important. However, the practical realization of such surveys is limited by a variability in the scanner signals that results in improper recognition of the data. This paper discusses ways by which some of this variability can be removed from the data by preprocessing with resultant improvements in recognition results.

  2. Performance evaluation of preprocessing techniques utilizing expert information in multivariate calibration.

    Science.gov (United States)

    Sharma, Sandeep; Goodarzi, Mohammad; Ramon, Herman; Saeys, Wouter

    2014-04-01

    Partial Least Squares (PLS) regression is one of the most used methods for extracting chemical information from Near Infrared (NIR) spectroscopic measurements. The success of a PLS calibration relies largely on the representativeness of the calibration data set. This is not trivial, because not only the expected variation in the analyte of interest, but also the variation of other contributing factors (interferents) should be included in the calibration data. This also implies that changes in interferent concentrations not covered in the calibration step can deteriorate the prediction ability of the calibration model. Several researchers have suggested that PLS models can be robustified against changes in the interferent structure by incorporating expert knowledge in the preprocessing step, with the aim of efficiently filtering out the influence of the spectral interferents. However, these methods have not yet been compared against each other. Therefore, in the present study, various preprocessing techniques exploiting expert knowledge were compared on two experimental data sets. In both data sets, the calibration and test set were designed to have a different interferent concentration range. The performance of these techniques was compared to that of preprocessing techniques which do not use any expert knowledge. Using expert knowledge was found to improve the prediction performance for both data sets. For data set 1, the prediction error improved by nearly 32% when pure component spectra of the analyte and the interferents were used in the Extended Multiplicative Signal Correction framework. Similarly, for data set 2, an improvement of nearly 63% in the prediction error was observed when the interferent information was utilized in Spectral Interferent Subtraction preprocessing.
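    The Extended Multiplicative Signal Correction framework mentioned above can be sketched as a least-squares fit of the measured spectrum to the pure analyte spectrum, the interferent spectra and a polynomial baseline; the polynomial order and the synthetic spectra used below are assumptions for illustration.

```python
import numpy as np

def emsc_correct(spectrum, reference, interferents, poly_order=2):
    """EMSC sketch: model the measured spectrum as
        b * reference + sum_i c_i * interferent_i + polynomial baseline,
    fit the coefficients by least squares, then remove everything except
    the reference contribution and rescale by the multiplicative factor b."""
    n = len(spectrum)
    x = np.linspace(-1.0, 1.0, n)
    baseline = np.vander(x, poly_order + 1, increasing=True).T  # rows: 1, x, x^2, ...
    D = np.vstack([reference, np.atleast_2d(interferents), baseline])
    coef, *_ = np.linalg.lstsq(D.T, spectrum, rcond=None)
    b = coef[0]
    corrected = (spectrum - D[1:].T @ coef[1:]) / b  # strip interferents + baseline
    return corrected, coef
```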

  3. Pre-Processing Noise Cross-Correlations with Equalizing the Network Covariance Matrix Eigen-Spectrum

    Science.gov (United States)

    Seydoux, L.; de Rosny, J.; Shapiro, N.

    2016-12-01

    Theoretically, the extraction of Green functions from noise cross-correlation requires the ambient seismic wavefield to be generated by uncorrelated sources evenly distributed in the medium. Yet, this condition is often not verified. Strong events such as earthquakes often produce highly coherent transient signals. Also, the microseismic noise is generated at specific places on the Earth's surface, with source regions often very localized in space. Different localized and persistent seismic sources may contaminate the cross-correlations of continuous records, resulting in spurious arrivals or asymmetry and, finally, in biased travel-time measurements. Pre-processing techniques therefore must be applied to the seismic data in order to reduce the effect of noise anisotropy and the influence of strong localized events. Here we describe a pre-processing approach that uses the covariance matrix computed from signals recorded by a network of seismographs. We extend the widely used time and spectral equalization pre-processing to the equalization of the covariance matrix spectrum (i.e., its ordered eigenvalues). This approach can be considered as a spatial equalization. This method allows us to correct for the wavefield anisotropy in two ways: (1) the influence of strong directive sources is substantially attenuated, and (2) the weakly excited modes are reinforced, allowing us to partially recover the conditions required for the Green's function retrieval. We also present an eigenvector-based spatial filter used to distinguish between surface and body waves. This last filter is used together with the equalization of the eigenvalue spectrum. We simulate a two-dimensional wavefield in a heterogeneous medium with a strongly dominating source. We show that our method greatly improves the travel-time measurements obtained from the inter-station cross-correlation functions.
Also, we apply the developed method to the USArray data and pre-process the continuous records strongly influenced
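    A minimal sketch of the covariance-spectrum equalization idea (the network size and noise model below are arbitrary assumptions): the sample covariance of the array records is eigendecomposed and its eigenvalue spectrum flattened, attenuating the dominant source while reinforcing weakly excited modes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sta, n_t = 8, 2000                         # stations / samples (assumed sizes)
strong = rng.standard_normal(n_t)            # one strong, directive source
gains = rng.uniform(0.5, 2.0, n_sta)         # its amplitude at each station
records = np.outer(gains, strong) + 0.3 * rng.standard_normal((n_sta, n_t))

C = records @ records.T / n_t                # network covariance matrix
w, V = np.linalg.eigh(C)                     # its eigen-spectrum (ascending)

# Spatial equalization: keep the eigenvectors, flatten the eigenvalue spectrum.
whitener = V @ np.diag(w ** -0.5) @ V.T
records_eq = whitener @ records
C_eq = records_eq @ records_eq.T / n_t       # now proportional to the identity
```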

  4. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. A proposed application and a sample result of hyperspectral image analysis are also presented.

  5. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of relevant research topic for Latvian deaf society, which is the development of the Latvian Sign Language Recognition System. More specifically the data preprocessing methods are discussed in the paper and several approaches are shown with a focus on systems based on artificial neural networks, which are one of the most successful solutions for sign language recognition task.

  6. Fifty Years of Climate Curricular Confusion and Pedagogical Gaps

    Science.gov (United States)

    McCaffrey, M. S.; Buhr, S. S.; Niepold, F.

    2008-12-01

    The processes of weather and climate, including the greenhouse effect and the potential for significant, even catastrophic human impacts on the climate system, were sufficiently understood in 1958 during the International Geophysical Year that the authors of the science education booklet Planet Earth, The Mystery with 100,000 Clues, published by the U.S. National Academy of Sciences, confidently predicted that continued emissions of carbon dioxide into the atmosphere could, in time, melt icecaps and glaciers and raise sea levels. This important scientific insight was further studied by climatologists, but is largely missing as an integral, important component of science education. Now, fifty years later, with a global population that has doubled, fossil fuel emissions that have tripled, and current energy consumption and emission trajectories that are above the IPCC Business as Usual scenario, leading politicians still doubt that our global economy can impact the climate system. The NRC estimates that up to 40 percent of the approximately $10 trillion U.S. economy is affected by weather and climate events annually, making climate a crucial if not dominant factor in our economic well-being, particularly for future generations. Despite the long-term and short-term importance of climate in our lives, society is essentially illiterate about climate science and confused about the connections between energy, economy and climate, as numerous public opinion polls and studies have shown. A key reason is that education programs and pedagogical content knowledge focusing on the basics of climate, including natural variability as well as human-induced climate change, are largely missing from K-12 and undergraduate education. Climate has fallen through disciplinary cracks, been avoided because of perceived controversy, and neglected because most educators lack training or expertise in the subject matter.
With a focus on climate in formal education, this paper will provide an overview

  7. Data Cleaning In Data Warehouse: A Survey of Data Pre-processing Techniques and Tools

    Directory of Open Access Journals (Sweden)

    Anosh Fatima

    2017-03-01

    Full Text Available A Data Warehouse is a computer system designed for storing and analyzing an organization's historical data from day-to-day operations in an Online Transaction Processing System (OLTP). Usually, an organization summarizes and copies information from its operational systems to the data warehouse on a regular schedule, and management performs complex queries and analysis on the information without slowing down the operational systems. Data need to be pre-processed to improve their quality before being stored in the data warehouse. This survey paper presents data cleaning problems and the approaches currently in use for preprocessing. The main goal of this paper is to determine which preprocessing technique is best in which scenario to improve the performance of a data warehouse. Many techniques have been analyzed for data cleansing, using certain evaluation attributes, and tested on different kinds of data sets. Data quality tools such as YALE, ALTERYX, and WEKA have been used for conclusive results to ready the data for the data warehouse and ensure that only cleaned data populates the warehouse, thus enhancing its usability. The results of this paper can be useful in many future activities such as cleansing, standardizing, correction, matching and transformation. This research can help in data auditing and pattern detection in the data.

  8. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio

    2013-02-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks allow the learning of models with several interrelated variables to be forecasted simultaneously. These models are known as multi-dimensional Bayesian network classifiers (MDBNs). Pre-processing steps are critical for the posterior learning of the model in these kinds of domains. Therefore, in the present study, a set of 'state-of-the-art' uni-dimensional pre-processing methods, within the categories of missing data imputation, feature discretization and feature subset selection, are adapted to be used with MDBNs. A framework that includes the proposed multi-dimensional supervised pre-processing methods, coupled with a MDBN classifier, is tested with synthetic datasets and the real domain of fish recruitment forecasting. The rate of correctly forecasting three fish species (anchovy, sardine and hake) simultaneously is nearly doubled (from 17.3% to 29.5%) using the multi-dimensional approach in comparison to mono-species models. The probability assessments also show a marked improvement, reducing the average error (estimated by means of the Brier score) from 0.35 to 0.27. Finally, these results are also superior to forecasting the species in pairs. © 2012 Elsevier Ltd.
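    One of the uni-dimensional pre-processing categories mentioned above, feature discretization, can be sketched as equal-frequency (quantile) binning; the bin count is a free parameter, not taken from the study.

```python
import numpy as np

def equal_freq_discretize(x, n_bins):
    """Equal-frequency (quantile) discretization: cut points are placed at
    the empirical quantiles so each bin receives roughly the same count."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    return np.searchsorted(edges, x, side="right")
```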

  9. Super-resolution algorithm based on sparse representation and wavelet preprocessing for remote sensing imagery

    Science.gov (United States)

    Ren, Ruizhi; Gu, Lingjia; Fu, Haoyang; Sun, Chenglin

    2017-04-01

    An effective super-resolution (SR) algorithm is proposed for actual spectral remote sensing images based on sparse representation and wavelet preprocessing. The proposed SR algorithm mainly consists of dictionary training and image reconstruction. Wavelet preprocessing is used to establish four subbands, i.e., low frequency, horizontal, vertical, and diagonal high frequency, for an input image. As compared to the traditional approaches involving the direct training of image patches, the proposed approach focuses on the training of features derived from these four subbands. The proposed algorithm is verified using different spectral remote sensing images, e.g., moderate-resolution imaging spectroradiometer (MODIS) images with different bands, and the latest Chinese Jilin-1 satellite images with high spatial resolution. According to the visual experimental results obtained from the MODIS remote sensing data, the SR images using the proposed SR algorithm are superior to those using a conventional bicubic interpolation algorithm or traditional SR algorithms without preprocessing. Fusion algorithms, e.g., standard intensity-hue-saturation, principal component analysis, wavelet transform, and the proposed SR algorithms are utilized to merge the multispectral and panchromatic images acquired by the Jilin-1 satellite. The effectiveness of the proposed SR algorithm is assessed by parameters such as peak signal-to-noise ratio, structural similarity index, correlation coefficient, root-mean-square error, relative dimensionless global error in synthesis, relative average spectral error, spectral angle mapper, and the quality index Q4, and its performance is better than that of the standard image fusion algorithms.
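    The four-subband decomposition described above can be illustrated with a one-level 2-D Haar transform; the paper does not state which wavelet family was used, and Haar is chosen here only for brevity (subband naming conventions also vary between references).

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform producing the four subbands used for
    training (LL: low frequency; LH/HL/HH: horizontal/vertical/diagonal)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    out = np.empty((2 * ll.shape[0], 2 * ll.shape[1]))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```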

  10. Desktop Software for Patch-Clamp Raw Binary Data Conversion and Preprocessing

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2011-01-01

    Full Text Available Since raw data recorded by patch-clamp systems are always stored in binary format, electrophysiologists may experience difficulties with patch-clamp data preprocessing, especially when they want to analyze the data with custom-designed algorithms. In this study, we present desktop software, called PCDReader, which can be an effective and convenient solution for patch-clamp data preprocessing in daily laboratory use. We designed a novel class module, called clsPulseData, to directly read the raw data along with the parameters recorded from HEKA instruments without any other program support. Through a graphical user interface, raw binary data files can be converted into several kinds of ASCII text files for further analysis, with several preprocessing options. The parameters can also be viewed, modified and exported into ASCII files via a user-friendly Explorer-style window. The real-time data loading technique and optimized memory management make PCDReader a fast and efficient tool. The compiled software, along with the source code of the clsPulseData class module, is freely available to academic and nonprofit users.

  11. Learning-based image preprocessing for robust computer-aided detection

    Science.gov (United States)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstructions (IR) improve LDCT diagnostic quality, they significantly degrade CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice due to the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost an existing CAD performance. This not only enhances their robustness but also their applicability in clinical workflows. Our solution consists of applying a suitable pre-processing filter automatically on the given image based on its characteristics. This requires the preparation of ground truth (GT) for choosing an appropriate filter resulting in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which then uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrated our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  12. Data pre-processing for web log mining: Case study of commercial bank website usage analysis

    Directory of Open Access Journals (Sweden)

    Jozef Kapusta

    2013-01-01

    Full Text Available We use data cleaning, integration, reduction and data conversion methods in the pre-processing level of data analysis. Data pre-processing techniques improve the overall quality of the patterns mined. The paper describes the use of standard pre-processing methods for preparing data of a commercial bank website in the form of a log file obtained from the web server. Data cleaning, as the simplest step of data pre-processing, is non-trivial here because the analysed content is highly specific. We had to deal with the problem of frequent changes of the content and even frequent changes of the structure. Regular changes in the structure make the use of the sitemap impossible. We present approaches to deal with this problem; we were able to create the sitemap dynamically, based solely on the content of the log file. In this case study, we also examined just one part of the website rather than performing the standard analysis of an entire website, as we did not have access to all log files for security reasons. As a result, the traditional practices had to be adapted for this special case. Analysing just a small fraction of the website resulted in short session times for regular visitors, and we were not able to use the recommended methods to determine the optimal value of the session timeout. Therefore, we propose new methods based on outlier identification to raise the accuracy of the session length in this paper.
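    Session reconstruction with a fixed timeout, the standard method whose threshold the paper argues must be re-estimated for partial-site logs, can be sketched as follows (the entry format and 30-minute default are assumptions).

```python
from datetime import timedelta

def sessionize(entries, timeout_minutes=30):
    """Split per-visitor log entries into sessions: a new session starts
    whenever the gap to the previous request exceeds the timeout. Entries
    are (visitor_id, timestamp) pairs; 30 min is the usual heuristic whose
    validity the paper questions for single-section logs."""
    sessions = {}
    for visitor, ts in sorted(entries):
        user = sessions.setdefault(visitor, [])
        if user and ts - user[-1][-1] <= timedelta(minutes=timeout_minutes):
            user[-1].append(ts)          # same session: append the request
        else:
            user.append([ts])            # gap too long: open a new session
    return sessions
```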

  13. Flexibility and utility of pre-processing methods in converting STXM setups for ptychography - Final Paper

    Energy Technology Data Exchange (ETDEWEB)

    Fromm, Catherine [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2015-08-20

    Ptychography is an advanced diffraction-based imaging technique that can achieve resolution of 5 nm and below. It is done by scanning a sample through a beam of focused x-rays using discrete yet overlapping scan steps. Scattering data are collected on a CCD camera, and the phase of the scattered light is reconstructed with sophisticated iterative algorithms. Because the experimental setup is similar, ptychography setups can be created by retrofitting existing STXM beamlines with new hardware. The other challenge comes in the reconstruction of the collected scattering images. Scattering data must be adjusted and packaged with experimental parameters to calibrate the reconstruction software. The necessary pre-processing of data prior to reconstruction is unique to each beamline setup, and even to the optical alignments used on that particular day. Pre-processing software must be developed to be flexible and efficient in order to allow experimenters appropriate control and freedom in the analysis of their hard-won data. This paper describes the implementation of pre-processing software which successfully connects data collection steps to reconstruction steps, letting the user accomplish accurate and reliable ptychography.

  14. Evaluating the validity of spectral calibration models for quantitative analysis following signal preprocessing.

    Science.gov (United States)

    Chen, Da; Grant, Edward

    2012-11-01

    When paired with high-powered chemometric analysis, spectrometric methods offer great promise for the high-throughput analysis of complex systems. Effective classification or quantification often relies on signal preprocessing to reduce spectral interference and optimize the apparent performance of a calibration model. However, less frequently addressed by systematic research is the effect of preprocessing on the statistical accuracy of a calibration result. The present work demonstrates the effectiveness of two criteria for validating the performance of signal preprocessing in multivariate models in the important dimensions of bias and precision. To assess the extent of bias, we explore the applicability of the elliptic joint confidence region (EJCR) test and devise a new means to evaluate precision by a bias-corrected root mean square error of prediction. We show how these criteria can effectively gauge the success of signal pretreatments in suppressing spectral interference while providing a straightforward means to determine the optimal level of model complexity. This methodology offers a graphical diagnostic by which to visualize the consequences of pretreatment on complex multivariate models, enabling optimization with greater confidence. To demonstrate the application of the EJCR criterion in this context, we evaluate the validity of representative calibration models using standard pretreatment strategies on three spectral data sets. The results indicate that the proposed methodology facilitates the reliable optimization of a well-validated calibration model, thus improving the capability of spectrophotometric analysis.
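    The bias/precision decomposition underlying the criteria above can be sketched with the standard RMSEP/SEP definitions (the EJCR test itself is not reproduced here).

```python
import numpy as np

def prediction_errors(y_true, y_pred):
    """Decompose prediction error into accuracy and precision terms:
    RMSEP (overall), bias (mean error), and SEP (bias-corrected standard
    error of prediction); RMSEP^2 ~= bias^2 + SEP^2 up to the n/(n-1) factor."""
    e = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    bias = e.mean()
    rmsep = np.sqrt(np.mean(e ** 2))
    sep = np.sqrt(np.sum((e - bias) ** 2) / (len(e) - 1))
    return rmsep, bias, sep
```

    A systematic offset inflates RMSEP but not SEP, which is what makes the bias-corrected term a cleaner precision criterion.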

  15. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction.

    Science.gov (United States)

    Wilson, Scott; Bowyer, Andrea; Harrap, Stephen B

    2015-01-01

    The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications from diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption by both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients were preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.
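    The paper's exact filter is not specified in the abstract; a common FIR-median-hybrid form (the median of the left-window mean, the current sample, and the right-window mean) illustrates the idea of rejecting impulsive artifacts while passing smooth hemodynamic trends.

```python
import numpy as np

def median_hybrid_filter(x, k):
    """FIR-median-hybrid filter sketch (one common MHF form): the output is
    the median of the left-window mean, the current sample, and the
    right-window mean. Impulsive spikes are rejected like a median filter,
    while smooth trends pass like a linear (mean) filter. Edge samples
    (fewer than k neighbours) are left unchanged."""
    n = len(x)
    y = x.astype(float).copy()
    for i in range(k, n - k):
        left = x[i - k:i].mean()
        right = x[i + 1:i + k + 1].mean()
        y[i] = np.median([left, x[i], right])
    return y
```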

  16. Foveal processing difficulty does not affect parafoveal preprocessing in young readers

    Science.gov (United States)

    Marx, Christina; Hawelka, Stefan; Schuster, Sarah; Hutzler, Florian

    2017-01-01

    Recent evidence suggested that parafoveal preprocessing develops early during reading acquisition, that is, young readers profit from valid parafoveal information and exhibit a resultant preview benefit. For young readers, however, it is unknown whether the processing demands of the currently fixated word modulate the extent to which the upcoming word is parafoveally preprocessed – as it has been postulated (for adult readers) by the foveal load hypothesis. The present study used the novel incremental boundary technique to assess whether 4th and 6th Graders exhibit an effect of foveal load. Furthermore, we attempted to distinguish the foveal load effect from the spillover effect. These effects are hard to differentiate with respect to the expected pattern of results, but are conceptually different. The foveal load effect is supposed to reflect modulations of the extent of parafoveal preprocessing, whereas the spillover effect reflects the ongoing processing of the previous word whilst the reader’s fixation is already on the next word. The findings revealed that the young readers did not exhibit an effect of foveal load, but a substantial spillover effect. The implications for previous studies with adult readers and for models of eye movement control in reading are discussed. PMID:28139718

  17. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.
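
    The log-determinant (Burg) matrix divergence mentioned above can be sketched as follows; the SPD sampling scheme is an illustrative assumption, not the paper's experimental setup.

```python
import numpy as np

def logdet_divergence(X, Y):
    """Burg (log-determinant) matrix divergence between SPD matrices:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n.
    Nonnegative, zero iff X == Y; not symmetric in its arguments."""
    M = X @ np.linalg.inv(Y)
    n = X.shape[0]
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - n

def random_spd(n, rng):
    """Sample a random SPD matrix as A A^T + n I (an illustrative choice)."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

rng = np.random.default_rng(1)
X = random_spd(3, rng)
Y = random_spd(3, rng)
d_xy = logdet_divergence(X, Y)  # positive for distinct matrices
d_xx = logdet_divergence(X, X)  # zero up to round-off
```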

  18. Using Elimination Theory to construct Rigid Matrices

    CERN Document Server

    Kumar, Abhinav; Patankar, Vijay M; Sarma, Jayalal M N

    2009-01-01

    The rigidity of a matrix A for target rank r is the minimum number of entries of A that must be changed to ensure that the rank of the altered matrix is at most r. Since its introduction by Valiant (1977), rigidity and similar rank-robustness functions of matrices have found numerous applications in circuit complexity, communication complexity, and learning complexity. Almost all n×n matrices over an infinite field have a rigidity of (n-r)^2. It is a long-standing open question to construct infinite families of explicit matrices even with superlinear rigidity when r=Omega(n). In this paper, we construct an infinite family of complex matrices with the largest possible, i.e., (n-r)^2, rigidity. The entries of an n×n matrix in this family are distinct primitive roots of unity of orders roughly exp(n^4 log n). To the best of our knowledge, this is the first family of concrete (but not entirely explicit) matrices having maximal rigidity and a succinct algebraic description. Our construction is based on elimination...

  19. Mirror-Symmetric Matrices and Their Application

    Institute of Scientific and Technical Information of China (English)

    李国林; 冯正和

    2002-01-01

    The well-known centrosymmetric matrices correctly reflect mirror-symmetry with no component or only one component on the mirror plane. Mirror-symmetric matrices defined in this paper can represent mirror-symmetric structures with various components on the mirror plane. Some basic properties of mirror-symmetric matrices were studied and applied to interconnection analysis. A generalized odd/even-mode decomposition scheme was developed based on the mirror reflection relationship for mirror-symmetric multiconductor transmission lines (MTLs). The per-unit-length (PUL) impedance matrix Z and admittance matrix Y can be divided into odd-mode and even-mode PUL matrices. Thus the order of the MTL system is reduced from n to k and k+p, where p(≥0)is the conductor number on the mirror plane. The analysis of mirror-symmetric matrices is related to the theory of symmetric group, which is the most effective tool for the study of symmetry.

  20. The use of confusion patterns to evaluate the neural basis for concurrent vowel identification

    Science.gov (United States)

    Chintanpalli, Ananthakrishna; Heinz, Michael G.

    2013-01-01

    Normal-hearing listeners take advantage of differences in fundamental frequency (F0) to segregate competing talkers. Computational modeling using an F0-based segregation algorithm and auditory-nerve temporal responses captures the gradual improvement in concurrent-vowel identification with increasing F0 difference. This result has been taken to suggest that F0-based segregation is the basis for this improvement; however, evidence suggests that other factors may also contribute. The present study further tested models of concurrent-vowel identification by evaluating their ability to predict the specific confusions made by listeners. Measured human confusions consisted of at most one to three confusions per vowel pair, typically from an error in only one of the two vowels. An improvement due to F0 difference was correlated with spectral differences between vowels; however, simple models based on acoustic and cochlear spectral patterns predicted some confusions not made by human listeners. In contrast, a neural temporal model was better at predicting listener confusion patterns. However, the full F0-based segregation algorithm using these neural temporal analyses was inconsistent across F0 difference in capturing listener confusions, being worse for smaller differences. The inability of this commonly accepted model to fully account for listener confusions suggests that other factors besides F0 segregation are likely to contribute. PMID:24116434

  1. New Construction Approach of Basic Belief Assignment Function Based on Confusion Matrix

    Directory of Open Access Journals (Sweden)

    Jing Zhu

    2012-08-01

    Full Text Available In the application of belief function theory, the first problem is the construction of the basic belief assignment. This study presents a new construction approach based on the confusion matrix. The method starts from the output of the confusion matrix and then designs a construction strategy for basic belief assignment functions based on the expectation vector of the confusion matrix. Comparative tests against several other construction methods on the U.C.I database show that the proposed method achieves higher target classification accuracy and lower computational complexity, which strongly promotes its application.
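
    One simple way to turn a confusion matrix into basic belief assignments (an illustration, not necessarily the authors' exact construction) is to normalize the column of the predicted class into singleton masses and reserve a fixed mass for the full frame of discernment:

```python
import numpy as np

def bba_from_confusion(conf, predicted, ignorance=0.1):
    """Turn one column of a confusion matrix into a basic belief
    assignment (BBA). conf[i, j] = count of true class i predicted as j.
    Mass on each singleton is proportional to how often predicted label
    j really came from that class; the remainder goes to Theta."""
    col = conf[:, predicted].astype(float)
    probs = col / col.sum()                 # P(true class | predicted = j)
    masses = {(i,): (1.0 - ignorance) * p for i, p in enumerate(probs)}
    masses[tuple(range(conf.shape[0]))] = ignorance  # mass on the frame Theta
    return masses

# Illustrative 3-class confusion matrix from a validation run
conf = np.array([[50,  5,  0],
                 [ 4, 40,  6],
                 [ 1,  5, 44]])
m = bba_from_confusion(conf, predicted=1)
total = sum(m.values())
```

    The `ignorance` parameter is a hypothetical knob for this sketch; real constructions derive the discounting from the classifier's reliability.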

  2. Geometry of 2×2 hermitian matrices

    Institute of Scientific and Technical Information of China (English)

    HUANG Liping (黄礼平); WAN Zhexian (万哲先)

    2002-01-01

    Let D be a division ring which possesses an involution a→ā. Assume that F = {a∈D | a=ā} is a proper subfield of D and is contained in the center of D. It is pointed out that if D is of characteristic not two, D is either a separable quadratic extension of F or a division ring of generalized quaternions over F, and that if D is of characteristic two, D is a separable quadratic extension of F. Thus the trace map Tr: D→F is surjective, and a hypothesis previously assumed in the fundamental theorem of the geometry of hermitian matrices over D when n≥3 can now be deleted. When D is a field, the fundamental theorem of 2×2 hermitian matrices over D has already been proved. This paper proves the fundamental theorem of 2×2 hermitian matrices over any division ring of generalized quaternions of characteristic not two.

  3. INERTIA SETS OF SYMMETRIC SIGN PATTERN MATRICES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A sign pattern matrix is a matrix whose entries are from the set {+, -, 0}. The symmetric sign pattern matrices that require unique inertia have recently been characterized. The purpose of this paper is to investigate more generally the inertia sets of symmetric sign pattern matrices. In particular, nonnegative tri-diagonal sign patterns and the square sign pattern with all + entries are examined. An algorithm is given for generating nonnegative real symmetric Toeplitz matrices with zero diagonal of orders n≥3 which have exactly two negative eigenvalues. The inertia set of the square pattern with all + off-diagonal entries and zero diagonal entries is then analyzed. The types of inertias which can be in the inertia set of any sign pattern are also obtained in the paper. Specifically, certain compatibility and consecutiveness properties are established.

  4. Generalized Inverse Eigenvalue Problem for Centrohermitian Matrices

    Institute of Scientific and Technical Information of China (English)

    刘仲云; 谭艳祥; 田兆录

    2004-01-01

    In this paper we first consider the existence and the general form of the solution to the following generalized inverse eigenvalue problem (GIEP): given a set of n-dimensional complex vectors {x_j}_{j=1}^m and a set of complex numbers {λ_j}_{j=1}^m, find two n×n centrohermitian matrices A, B such that {x_j}_{j=1}^m and {λ_j}_{j=1}^m are the generalized eigenvectors and generalized eigenvalues of Ax = λBx, respectively. We then discuss the optimal approximation problem for the GIEP. More concretely, given two arbitrary matrices Ā, B̄ ∈ C^(n×n), we find two matrices A* and B* such that the pair (A*, B*) is closest to (Ā, B̄) in the Frobenius norm, where (A*, B*) is a solution to the GIEP. We show that the expression of the solution of the optimal approximation is unique and derive the expression for it.
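
    The centrohermitian property used above has a direct computational test: A is centrohermitian iff A = J·conj(A)·J, where J is the exchange (anti-identity) matrix. A minimal sketch, with an example matrix chosen purely for illustration:

```python
import numpy as np

def is_centrohermitian(A, tol=1e-12):
    """A is centrohermitian iff A = J conj(A) J, i.e., flipping both
    indices (i, j) -> (n-1-i, n-1-j) conjugates every entry."""
    n = A.shape[0]
    J = np.fliplr(np.eye(n))          # exchange matrix
    return np.allclose(A, J @ np.conj(A) @ J, atol=tol)

# A small centrohermitian example: A[i, j] == conj(A[n-1-i, n-1-j])
A = np.array([[1 + 2j, 3 - 1j],
              [3 + 1j, 1 - 2j]])
```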

  5. PRM: A database of planetary reflection matrices

    Science.gov (United States)

    Stam, D. M.; Batista, S. F. A.

    2014-04-01

    We present the PRM database with reflection matrices of various types of planets. With the matrices, users can calculate the total, and the linearly and circularly polarized fluxes of incident unpolarized light that is reflected by a planet for arbitrary illumination and viewing geometries. To allow for flexibility in these geometries, the database does not contain the elements of reflection matrices, but the coefficients of their Fourier series expansion. We describe how to sum these coefficients for given illumination and viewing geometries to obtain the local reflection matrix. The coefficients in the database can also be used to calculate flux and polarization signals of exoplanets, by integrating, for a given planetary phase angle, locally reflected fluxes across the visible part of the planetary disk. Algorithms for evaluating the summation for locally reflected fluxes, as applicable to spatially resolved observations of planets, and the subsequent integration for disk-integrated fluxes, as applicable to spatially unresolved exoplanets, are also included in the database.

  6. On classification of dynamical r-matrices

    CERN Document Server

    Schiffmann, O

    1997-01-01

    Using recent results of P. Etingof and A. Varchenko on the Classical Dynamical Yang-Baxter equation, we reduce the classification of dynamical r-matrices on a commutative subalgebra l of a Lie algebra g to a purely algebraic problem when l admits a g^l-invariant complement, where g^l is the centralizer of l in g. Using this, we then classify all non skew-symmetric dynamical r-matrices when g is a simple Lie algebra and l a commutative subalgebra containing a regular semisimple element. This partially answers an open problem in [EV] q-alg/9703040, and generalizes the Belavin-Drinfeld classification of constant r-matrices. This classification is similar to, and in some sense simpler than, the Belavin-Drinfeld classification.

  7. Octonion generalization of Pauli and Dirac matrices

    Science.gov (United States)

    Chanyal, B. C.

    2015-10-01

    Starting with octonion algebra and its 4 × 4 matrix representation, we have made an attempt to write the extension of Pauli's matrices in terms of division algebra (octonion). The octonion generalization of Pauli's matrices shows the counterpart of Pauli's spin and isospin matrices. In this paper, we also have obtained the relationship between Clifford algebras and the division algebras, i.e. a relation between octonion basis elements with Dirac (gamma), Weyl and Majorana representations. The division algebra structure leads to nice representations of the corresponding Clifford algebras. We have made an attempt to investigate the octonion formulation of Dirac wave equations, conserved current and weak isospin in simple, compact, consistent and manifestly covariant manner.

  8. A Multipath Connection Model for Traffic Matrices

    Directory of Open Access Journals (Sweden)

    Mr. M. V. Prabhakaran

    2015-02-01

    Full Text Available Peer-to-Peer (P2P) applications have witnessed an increasing popularity in recent years, which brings new challenges to network management and traffic engineering (TE). As basic input information, P2P traffic matrices are of significant importance for TE because of the excessively high cost of direct measurement. This paper presents a multipath connection model for traffic matrices in operational networks, in which media files can be shared peer to peer and the localization ratio of P2P traffic is taken into account. Its performance is evaluated using traffic traces collected from both real P2P video-on-demand and file-sharing applications. The estimated general traffic matrix (TM) is then used to deliver media files without extra traffic: when a media file is shared locally, no additional source-to-destination traffic occurs, so the approach yields high performance and short processing time.

  9. Block TERM factorization of block matrices

    Institute of Scientific and Technical Information of China (English)

    SHE Yiyuan; HAO Pengwei

    2004-01-01

    Reversible integer mapping (or integer transform) is a useful way to realize Iossless coding, and this technique has been used for multi-component image compression in the new international image compression standard JPEG 2000. For any nonsingular linear transform of finite dimension, its integer transform can be implemented by factorizing the transform matrix into 3 triangular elementary reversible matrices (TERMs) or a series of single-row elementary reversible matrices (SERMs). To speed up and parallelize integer transforms, we study block TERM and SERM factorizations in this paper. First, to guarantee flexible scaling manners, the classical determinant (det) is generalized to a matrix function, DET, which is shown to have many important properties analogous to those of det. Then based on DET, a generic block TERM factorization,BLUS, is presented for any nonsingular block matrix. Our conclusions can cover the early optimal point factorizations and provide an efficient way to implement integer transforms for large matrices.

  10. Advanced incomplete factorization algorithms for Stieltjes matrices

    Energy Technology Data Exchange (ETDEWEB)

    Il'in, V.P. [Siberian Division RAS, Novosibirsk (Russian Federation)]

    1996-12-31

    The modern numerical methods for solving the linear algebraic systems Au = f with high-order sparse matrices A, which arise in grid approximations of multidimensional boundary value problems, are based mainly on accelerated iterative processes with easily invertible preconditioning matrices presented in the form of approximate (incomplete) factorizations of the original matrix A. We consider some recent algorithmic approaches, theoretical foundations, experimental data and open questions for incomplete factorization of Stieltjes matrices, which are "the best" ones in the sense that they have the most advanced results. Special attention is given to solving elliptic differential equations with strongly variable coefficients, singularly perturbed diffusion-convection equations, and parabolic equations.

  11. Infinite matrices and their recent applications

    CERN Document Server

    Shivakumar, P N; Zhang, Yang

    2016-01-01

    This monograph covers the theory of finite and infinite matrices over the fields of real numbers, complex numbers and over quaternions. Emphasizing topics such as sections or truncations and their relationship to the linear operator theory on certain specific separable and sequence spaces, the authors explore techniques like conformal mapping, iterations and truncations that are used to derive precise estimates in some cases and explicit lower and upper bounds for solutions in the other cases. Most of the matrices considered in this monograph have typically special structures like being diagonally dominated or tridiagonal, possess certain sign distributions and are frequently nonsingular. Such matrices arise, for instance, from solution methods for elliptic partial differential equations. The authors focus on both theoretical and computational aspects concerning infinite linear algebraic equations, differential systems and infinite linear programming, among others. Additionally, the authors cover topics such ...

  12. [Confusing clinical presentations and differential diagnosis of bipolar disorder].

    Science.gov (United States)

    Gorwood, P

    2004-01-01

    euthymia periods may also increase the risk to shift from bipolar to schizophrenia diagnosis. Schizophreniform disorder ("bouffée délirante aiguë" in France) is a frequent form of bipolar disorder onset when major dissociative features are not obvious. The borderline personality is also a problem for the diagnosis of bipolar disorder, some authors proposing that bipolar disorder is a mood-related personality disorder, sometimes improved by mood-stabilizers. A phasic rather than reactional course, episodes lasting weeks rather than days, and clear-cut onset and recovery versus hard-to-delimit mood episodes may help to adjust the diagnosis. Organic disorders may lead to diagnostic confusion, but it is generally proposed that bipolar disorder should be treated the same way, whether or not an organic condition is detected (with special focus on treatment tolerance). Addictive disorders are frequent comorbid conditions in bipolar disorders. Psychostimulant (such as amphetamine or cocaine) intoxications sometimes mimic manic episodes. As these drugs are preferentially chosen by subjects with bipolar disorder, the latter diagnosis should be systematically assessed. Puerperal psychosis is a frequent type of onset in female bipolar disorder. The systematic prescription of mood-stabilizers for and after such episodes, when mood elation is a major symptom, is generally proposed. Attention deficit-hyperactivity disorder also has an unclear border with bipolar disorder, as a quarter of childhood hyperactivity cases may later be associated with bipolar disorder. The assessment of mood cycling and follow-up into adulthood may thus be particularly important. Lastly, the presence of some anxiety disorders may delay the diagnosis of comorbid bipolar disorder.

  13. Edge fluctuations of eigenvalues of Wigner matrices

    CERN Document Server

    Döring, Hanna

    2012-01-01

    We establish a moderate deviation principle (MDP) for the number of eigenvalues of a Wigner matrix in an interval close to the edge of the spectrum. Moreover, we prove an MDP for the i-th largest eigenvalue close to the edge. The proof relies on fine asymptotics of the variance of the eigenvalue counting function of GUE matrices due to Gustavsson. The extension to large families of Wigner matrices is based on the Tao and Vu Four Moment Theorem. Possible extensions to other random matrix ensembles are discussed.

  14. Forecasting Covariance Matrices: A Mixed Frequency Approach

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

    This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.

  15. Almost Hadamard matrices: general theory and examples

    CERN Document Server

    Banica, Teodor; Zyczkowski, Karol

    2012-01-01

    We develop a general theory of "almost Hadamard matrices". These are by definition the matrices $H\in M_N(\mathbb R)$ having the property that $U=H/\sqrt{N}$ is orthogonal, and is a local maximum of the 1-norm on O(N). Our study includes a detailed discussion of the circulant case ($H_{ij}=\gamma_{j-i}$) and of the two-entry case ($H_{ij}\in\{x,y\}$), with the construction of several families of examples, and some 1-norm computations.
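
    The defining quantity above, the entrywise 1-norm of U = H/sqrt(N) on O(N), is straightforward to compute; for a true Hadamard matrix it attains the maximal value N^(3/2). A small sketch using the Sylvester construction:

```python
import numpy as np

def one_norm_on_ON(H):
    """For H with U = H/sqrt(N) orthogonal, return sum_ij |U_ij|, the
    quantity locally maximized by almost Hadamard matrices."""
    N = H.shape[0]
    U = H / np.sqrt(N)
    assert np.allclose(U @ U.T, np.eye(N)), "H/sqrt(N) must be orthogonal"
    return np.abs(U).sum()

# Sylvester (tensor-power) construction of a 4x4 Hadamard matrix
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H4 = np.kron(H2, H2)
norm1 = one_norm_on_ON(H4)  # equals N^(3/2) = 8 for a Hadamard matrix
```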

  16. Extremal spacings of random unitary matrices

    CERN Document Server

    Smaczynski, Marek; Kus, Marek; Zyczkowski, Karol

    2012-01-01

    Extremal spacings between unimodular eigenvalues of random unitary matrices of size N pertaining to circular ensembles are investigated. Probability distributions for the minimal spacing for various ensembles are derived for N=4. We show that for large matrices the average minimal spacing s_min of a random unitary matrix behaves as N^(-1/(1+B)), with B equal to 0, 1, and 2 for the circular Poisson, orthogonal, and unitary ensembles, respectively. For these ensembles, asymptotic probability distributions P(s_min) are also obtained, and the statistics of the largest spacing s_max are investigated.
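
    A sketch of the quantities studied above: eigenphase spacings of a Haar-random unitary matrix, generated with the standard QR-with-phase-fix recipe (a generic construction, not the paper's code). The spacings wrap around the unit circle and always sum to 2*pi.

```python
import numpy as np

def eigenphase_spacings(U):
    """Sorted angular gaps between the unimodular eigenvalues of a
    unitary matrix, including the gap across the branch cut."""
    phases = np.sort(np.angle(np.linalg.eigvals(U)))
    gaps = np.diff(phases)
    wrap = 2 * np.pi - (phases[-1] - phases[0])  # gap across the cut
    return np.sort(np.append(gaps, wrap))

def haar_unitary(N, rng):
    """Haar-distributed random unitary: QR of a complex Ginibre matrix
    with the diagonal phases of R fixed (Mezzadri's recipe)."""
    Z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

rng = np.random.default_rng(2)
s = eigenphase_spacings(haar_unitary(4, rng))
s_min, s_max = s[0], s[-1]
```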

  17. Age differences on Raven's Coloured Progressive Matrices.

    Science.gov (United States)

    Panek, P E; Stoner, S B

    1980-06-01

    Raven's Coloured Progressive Matrices was administered to 150 subjects (75 males, 75 females) ranging in age from 20 to 86 yr. Subjects were placed into one of three age groups: adult (M age = 27.04 yr.), middle-age (M age = 53.36 yr.), old (M age = 73.78 yr.), with 25 males and 25 females in each age group. Significant differences between age groups on the matrices were obtained after partialing out the effects of educational level, while sex of subject was not significant.

  18. Super Special Codes using Super Matrices

    CERN Document Server

    Kandasamy, W B Vasantha; Ilanthenral, K

    2010-01-01

    The new classes of super special codes are constructed in this book using the specially constructed super special vector spaces. These codes mainly use the super matrices. These codes can be realized as a special type of concatenated codes. This book has four chapters. In chapter one basic properties of codes and super matrices are given. A new type of super special vector space is constructed in chapter two of this book. Three new classes of super special codes namely, super special row code, super special column code and super special codes are introduced in chapter three. Applications of these codes are given in the final chapter.

  19. HEp-2 Cell Classification: The Role of Gaussian Scale Space Theory as A Pre-processing Approach

    OpenAIRE

    Qi, Xianbiao; Zhao, Guoying; Chen, Jie; Pietikäinen, Matti

    2015-01-01

    Indirect Immunofluorescence Imaging of Human Epithelial Type 2 (HEp-2) cells is an effective way to identify the presence of Anti-Nuclear Antibody (ANA). Most existing works on HEp-2 cell classification mainly focus on feature extraction, feature encoding and classifier design. Very few efforts have been devoted to study the importance of the pre-processing techniques. In this paper, we analyze the importance of the pre-processing, and investigate the role of Gaussian Scale Space (GS...

  20. Pre-Processing Effect on the Accuracy of Event-Based Activity Segmentation and Classification through Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Benish Fida

    2015-09-01

    Full Text Available Inertial sensors are increasingly being used to recognize and classify physical activities in a variety of applications. For monitoring and fitness applications, it is crucial to develop methods able to segment each activity cycle, e.g., a gait cycle, so that the successive classification step may be more accurate. To increase detection accuracy, pre-processing is often used, with a concurrent increase in computational cost. In this paper, the effect of pre-processing operations on the detection and classification of locomotion activities was investigated, to check whether the presence of pre-processing significantly contributes to an increase in accuracy. The pre-processing stages evaluated in this study were inclination correction and de-noising. Level walking, step ascending, descending and running were monitored by using a shank-mounted inertial sensor. Raw and filtered segments, obtained from a modified version of a rule-based gait detection algorithm optimized for sequential processing, were processed to extract time- and frequency-based features for physical activity classification through a support vector machine classifier. The proposed method accurately detected >99% of gait cycles from raw data and produced >98% accuracy on these segmented gait cycles. Pre-processing did not substantially increase classification accuracy, thus highlighting the possibility of reducing the amount of pre-processing for real-time applications.
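
    The time- and frequency-based features mentioned above can be illustrated on a synthetic segment; the particular feature set and the 2 Hz test signal are assumptions for the example, not the paper's actual feature list.

```python
import numpy as np

def segment_features(seg, fs):
    """Time- and frequency-domain features for one segmented cycle:
    mean, standard deviation, and dominant frequency from the FFT."""
    seg = np.asarray(seg, dtype=float)
    freqs = np.fft.rfftfreq(seg.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(seg - seg.mean()))  # remove DC first
    dom_freq = freqs[np.argmax(spectrum)]
    return {"mean": seg.mean(), "std": seg.std(), "dom_freq": dom_freq}

# Synthetic shank-acceleration segment: a 2 Hz oscillation, 100 Hz sampling
fs = 100.0
t = np.arange(0, 2.0, 1.0 / fs)
seg = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
feats = segment_features(seg, fs)
```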

  1. Complex and magnitude-only preprocessing of 2D and 3D BOLD fMRI data at 7 T.

    Science.gov (United States)

    Barry, Robert L; Strother, Stephen C; Gore, John C

    2012-03-01

    A challenge to ultra high field functional magnetic resonance imaging is the predominance of noise associated with physiological processes unrelated to tasks of interest. This degradation in data quality may be partially reversed using a series of preprocessing algorithms designed to retrospectively estimate and remove the effects of these noise sources. However, such algorithms are routinely validated only in isolation, and thus consideration of their efficacies within realistic preprocessing pipelines and on different data sets is often overlooked. We investigate the application of eight possible combinations of three pseudo-complementary preprocessing algorithms - phase regression, Stockwell transform filtering, and retrospective image correction - to suppress physiological noise in 2D and 3D functional data at 7 T. The performance of each preprocessing pipeline was evaluated using data-driven metrics of reproducibility and prediction. The optimal preprocessing pipeline for both 2D and 3D functional data included phase regression, Stockwell transform filtering, and retrospective image correction. This result supports the hypothesis that a complex preprocessing pipeline is preferable to a magnitude-only pipeline, and suggests that functional magnetic resonance imaging studies should retain complex images and externally monitor subjects' respiratory and cardiac cycles so that these supplementary data may be used to retrospectively reduce noise and enhance overall data quality.

  2. Confusion and nausea in a man who appeared to be drunk.

    Science.gov (United States)

    Breckenridge, M B; Larry, J A; Mazzaferri, E L

    1996-02-15

    A 51-year-old man with no significant medical history presented to the emergency department with acute onset of confusion, nausea, and vomiting. He denied ethanol abuse and was not taking any medications.

  3. Monosyllable speech audiometry in noise-exposed workers—consonant and vowel confusion

    Science.gov (United States)

    Miyakita, T.; Miura, H.

    1988-12-01

    To obtain basic data for evaluating the hearing handicaps experienced by workers with noise-induced hearing loss, the ability to distinguish monosyllables was examined by speech audiometry. The percentage of correct scores for each monosyllable varied widely in 88 male workers, depending on the presentation level and the severity of hearing loss. A 67-S word list (prepared by the Japan Audiological Society), consisting of 20 Japanese monosyllables (17 consonant-vowel (CV) syllables and three vowel syllables), was used to evaluate consonant and vowel confusion at the level of 20 to 90 dB (re HL at 1000 Hz [9]). Regarding the confusion among five subsequent vowel nuclei, we observed particular confusion patterns resulting from the similarity of the first formant (F1). Analysis of the tendency toward confusion among individual monosyllables together with the audiometric configuration will provide useful information for evaluating noise-induced hearing loss.
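
    A confusion matrix of the kind analyzed above can be tabulated directly from stimulus/response pairs; the toy vowel data below are invented for illustration only.

```python
import numpy as np

def confusion_matrix(stimuli, responses, labels):
    """Count matrix C[i, j]: stimulus labels[i] identified as labels[j];
    the diagonal holds correct identifications."""
    index = {lab: k for k, lab in enumerate(labels)}
    C = np.zeros((len(labels), len(labels)), dtype=int)
    for s, r in zip(stimuli, responses):
        C[index[s], index[r]] += 1
    return C

# Toy identification data in which /e/ and /i/ (similar F1) get confused
labels = ["a", "e", "i"]
stimuli =  ["a", "a", "e", "e", "e", "i", "i", "i", "a", "e"]
responses = ["a", "a", "e", "i", "e", "i", "e", "i", "a", "e"]
C = confusion_matrix(stimuli, responses, labels)
percent_correct = 100.0 * np.trace(C) / C.sum()
```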

  4. Determinación y propiedades de H-matrices

    OpenAIRE

    SCOTT GUILLEARD, JOSÉ ANTONIO

    2015-01-01

    [EN] The essential topic of this thesis is the study of H-matrices as they were introduced by Ostrowski and subsequently extended and developed by different authors. In this study three strands are outlined: 1) the iterative or automatic determination of H-matrices, 2) the properties inherent to H-matrices and 3) the matrices related to H-matrices. H-matrices are becoming ever more relevant due to the fact that they arise in numerous applications, both in Mathematics,...

  5. Universal portfolios generated by Toeplitz matrices

    Science.gov (United States)

    Tan, Choon Peng; Chu, Sin Yen; Pan, Wei Yeing

    2014-06-01

    Performance of universal portfolios generated by Toeplitz matrices is studied in this paper. The general structure of the companion matrix of the generating Toeplitz matrix is determined. Empirical performance of the three-band and nine-band Toeplitz universal portfolios on real stock data is presented. Pseudo-Toeplitz universal portfolios are studied, with promising empirical wealth achievement demonstrated.

  6. Parametrizations of Positive Matrices With Applications

    CERN Document Server

    Tseng, M C; Ramakrishna, V; Zhou, Hong

    2006-01-01

    This paper reviews some characterizations of positive matrices and discusses which lead to useful parametrizations. It is argued that one of them, which we dub the Schur-Constantinescu parametrization is particularly useful. Two new applications of it are given. One shows all block-Toeplitz states are PPT. The other application is to relaxation rates.

  7. Generation Speed in Raven's Progressive Matrices Test.

    Science.gov (United States)

    Verguts, Tom; De Boeck, Paul; Maris, Eric

    1999-01-01

    Studied the role of response fluency on results of the Raven's Advanced Progressive Matrices (APM) Test by comparing scores on a test of generation speed (speed of generating rules that govern the items) with APM test performance for 127 Belgian undergraduates. Discusses the importance of generation speed in intelligence. (SLD)

  8. Deconvolution and Regularization with Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Per Christian

    2002-01-01

    of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show...

  9. Extremal norms of graphs and matrices

    CERN Document Server

    Nikiforov, Vladimir

    2010-01-01

    In recent years, the trace norm of graphs has been extensively studied under the name of graph energy. In this paper some of this research is extended to more general matrix norms, like the Schatten p-norms and the Ky Fan k-norms. Whenever possible the results are given both for graphs and general matrices.

  10. Numerical Methods for Structured Matrices and Applications

    CERN Document Server

    Bini, Dario A; Olshevsky, Vadim; Tyrtyshnikov, Eugene; van Barel, Marc

    2010-01-01

    This cross-disciplinary volume brings together theoretical mathematicians, engineers and numerical analysts and publishes surveys and research articles related to the topics where Georg Heinig had made outstanding achievements. In particular, this includes contributions from the fields of structured matrices, fast algorithms, operator theory, and applications to system theory and signal processing.

  11. Generation speed in Raven's Progressive Matrices Test

    NARCIS (Netherlands)

    Verguts, T.; Boeck, P. De; Maris, E.G.G.

    1999-01-01

    In this paper, we investigate the role of response fluency on a well-known intelligence test, Raven's (1962) Advanced Progressive Matrices (APM) test. Critical in solving this test is finding rules that govern the items. Response fluency is conceptualized as generation speed or the speed at which a

  12. Positivity of Matrices with Generalized Matrix Functions

    Institute of Scientific and Technical Information of China (English)

    Fuzhen ZHANG

    2012-01-01

    Using an elementary fact on matrices we show by a unified approach the positivity of a partitioned positive semidefinite matrix with each square block replaced by a compound matrix, an elementary symmetric function or a generalized matrix function. In addition, we present a refined version of the Thompson determinant compression theorem.

  13. Robust stability of interval parameter matrices

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This note is devoted to the problem of robust stability of interval parameter matrices. Based on some basic facts relating the H∞ norm of a transfer function to the Riccati matrix inequality and the Hamiltonian matrix, several test conditions with parameter perturbation bounds are obtained.

  14. Constructing random matrices to represent real ecosystems.

    Science.gov (United States)

    James, Alex; Plank, Michael J; Rossberg, Axel G; Beecham, Jonathan; Emmerson, Mark; Pitchford, Jonathan W

    2015-05-01

    Models of complex systems with n components typically have order n² parameters because each component can potentially interact with every other. When it is impractical to measure these parameters, one may choose random parameter values and study the emergent statistical properties at the system level. Many influential results in theoretical ecology have been derived from two key assumptions: that species interact with random partners at random intensities and that intraspecific competition is comparable between species. Under these assumptions, community dynamics can be described by a community matrix that is often amenable to mathematical analysis. We combine empirical data with mathematical theory to show that both of these assumptions lead to results that must be interpreted with caution. We examine 21 empirically derived community matrices constructed using three established, independent methods. The empirically derived systems are more stable by orders of magnitude than results from random matrices. This consistent disparity is not explained by existing results on predator-prey interactions. We investigate the key properties of empirical community matrices that distinguish them from random matrices. We show that network topology is less important than the relationship between a species' trophic position within the food web and its interaction strengths. We identify key features of empirical networks that must be preserved if random matrix models are to capture the features of real ecosystems.
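    The random-matrix baseline the authors compare against can be sketched under the two assumptions the abstract names: random interaction partners and intensities, and uniform intraspecific competition. The community size and strengths below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, d = 50, 0.05, 1.0  # illustrative community size and strengths

# Random community matrix: off-diagonal interactions drawn at random,
# identical intraspecific competition -d on the diagonal.
A = sigma * rng.standard_normal((n, n))
np.fill_diagonal(A, -d)

# Linear stability: the equilibrium is stable iff every eigenvalue of the
# community matrix has negative real part.
lead = np.linalg.eigvals(A).real.max()
print("stable" if lead < 0 else "unstable")  # stable: sigma*sqrt(n) < d
```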

  15. Spectral averaging techniques for Jacobi matrices

    CERN Document Server

    del Rio, Rafael; Schulz-Baldes, Hermann

    2008-01-01

    Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.

  16. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
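    The idea behind the H-matrix format can be illustrated in miniature: off-diagonal blocks of a smooth covariance matrix between well-separated point clusters are numerically low-rank, so storing rank-k factors instead of full blocks yields the log-linear storage. The kernel, grid, and rank below are illustrative (a squared-exponential kernel stands in for the Matérn family).

```python
import numpy as np

# Smooth covariance kernel on a 1-D grid (stand-in for the Matérn family;
# grid size and length scale are illustrative choices).
x = np.linspace(0, 1, 200)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.3 ** 2))

# H-matrix idea in miniature: an off-diagonal block between two clusters of
# points admits an accurate rank-k approximation, so its factors cost
# O(kn) to store instead of O(n^2) for the dense block.
B = C[:100, 100:]
U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = 8
Bk = (U[:, :k] * s[:k]) @ Vt[:k]
err = np.linalg.norm(B - Bk) / np.linalg.norm(B)
print(err)  # small relative error at rank k << 100
```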

  17. Correspondence Analysis of Archeological Abundance Matrices

    OpenAIRE

    de Leeuw, Jan

    2007-01-01

    In this chapter we discuss the Correspondence Analysis (CA) techniques used in other chapters of this book. CA is presented as a multivariate exploratory technique, as a proximity analysis technique based on Benzecri distances, as a technique to decompose the total chi-square of frequency matrices, and as a least squares method to fit association or ordination models.

  18. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    Mourrain, B.; Lasserre, J.B.; Laurent, M.; Rostalski, P.; Trebuchet, P.

    2011-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming.

  19. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    Mourrain, B.; Lasserre, J.B.; Laurent, M.; Rostalski, P.; Trebuchet, P.

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and semi-definite programming.

  20. Spectral properties of random triangular matrices

    CERN Document Server

    Basu, Riddhipratim; Ganguly, Shirshendu; Hazra, Rajat Subhra

    2011-01-01

    We provide a relatively elementary proof of the existence of the limiting spectral distribution (LSD) of symmetric triangular patterned matrices and also show their joint convergence. We also derive the expressions for the moments of the LSD of the symmetric triangular Wigner matrix using properties of Catalan words.

  1. Affine processes on positive semidefinite matrices

    CERN Document Server

    Cuchiero, Christa; Mayerhofer, Eberhard; Teichmann, Josef

    2009-01-01

    This paper provides the mathematical foundation for stochastically continuous affine processes on the cone of positive semidefinite symmetric matrices. These matrix-valued affine processes have arisen from a large and growing range of useful applications in finance, including multi-asset option pricing with stochastic volatility and correlation structures, and fixed-income models with stochastically correlated risk factors and default intensities.

  2. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. In particular, the proposed methods are applicable to packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as function and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples; the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically, with accuracies of 0.9896 and 0.9732, respectively.
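    A toy version of the opcode-to-image idea can be sketched as follows; the hashing scheme, image size, and similarity measure are our own illustrative choices, not the encoding used in the paper.

```python
import hashlib
import numpy as np

def opcode_image(opcodes, side=8):
    """Map an opcode sequence to an RGB image matrix (illustrative scheme:
    each opcode is hashed to one RGB pixel)."""
    img = np.zeros((side * side, 3), dtype=np.uint8)
    for i, op in enumerate(opcodes[: side * side]):
        digest = hashlib.md5(op.encode()).digest()
        img[i] = digest[0], digest[1], digest[2]
    return img.reshape(side, side, 3)

def similarity(a, b):
    """Fraction of pixels whose RGB values match between two images."""
    return float((a == b).all(axis=-1).mean())

# Two toy opcode traces differing in one of every four instructions.
ops1 = ["push", "mov", "call", "ret"] * 16
ops2 = ["push", "mov", "jmp", "ret"] * 16
print(similarity(opcode_image(ops1), opcode_image(ops2)))  # 0.75
```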

  3. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    Science.gov (United States)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
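    One of the listed corrections, background subtraction, can be sketched on a synthetic profile; the signal model and all numbers are illustrative and not ELPP's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic raw lidar profile: a 1/r^2-decaying atmospheric return plus a
# constant atmospheric/electronic background and noise (numbers illustrative).
r = np.arange(1, 2001, dtype=float)          # range bins
background = 0.02
raw = 1e3 / r**2 + background + rng.normal(0, 1e-4, r.size)

# Background subtraction: estimate the offset from the far-range bins, where
# the atmospheric return is negligible, and subtract it from the profile.
bg_est = raw[-200:].mean()
signal = raw - bg_est
print(abs(bg_est - background) < 1e-3)  # the estimate recovers the offset
```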

  4. AN ENHANCED PRE-PROCESSING RESEARCH FRAMEWORK FOR WEB LOG DATA USING A LEARNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    V.V.R. Maheswara Rao

    2011-01-01

    With the continued growth and proliferation of Web services and Web-based information systems, the volumes of user data have reached astronomical proportions. Before analyzing such data using web mining techniques, the web log has to be pre-processed, integrated and transformed. As the World Wide Web is continuously and rapidly growing, web miners must use intelligent tools to find, extract, filter and evaluate the desired information. The data pre-processing stage is the most important phase for investigating web user usage behaviour; to do this, one must extract only the human user accesses from the web log data, which is a critical and complex task. The web log is incremental in nature, so conventional data pre-processing techniques have proved unsuitable, and an extensive learning algorithm is required to obtain the desired information. This paper introduces an extensive research framework capable of pre-processing web log data completely and efficiently. The learning algorithm of the proposed framework can separate human user and search engine accesses intelligently, in less time. In order to create suitable target data, the further essential pre-processing tasks of data cleansing, user identification, sessionization and path completion are designed collectively. The framework reduces the error rate and significantly improves the learning performance of the algorithm. The work ensures the goodness of splits by using popular measures such as entropy and the Gini index. This framework helps to investigate web user usage behaviour efficiently. Experimental results supporting this claim are given in this paper.
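    The cleansing and sessionization steps named in the abstract can be sketched as follows; the field layout, robot filter, and 30-minute timeout are illustrative assumptions, not the paper's algorithm.

```python
from datetime import datetime, timedelta

# Toy log records: (ip, timestamp, url, user_agent) in a simplified layout.
RAW = [
    ("10.0.0.1", "2011-01-01 10:00:00", "/index.html", "Mozilla/5.0"),
    ("10.0.0.1", "2011-01-01 10:05:00", "/style.css", "Mozilla/5.0"),
    ("10.0.0.2", "2011-01-01 10:06:00", "/a.html", "Googlebot/2.1"),
    ("10.0.0.1", "2011-01-01 11:00:00", "/news.html", "Mozilla/5.0"),
]

def preprocess(records, timeout=timedelta(minutes=30)):
    """Cleansing (drop robots and embedded resources), user identification
    (by IP here) and timeout-based sessionization."""
    sessions, last_seen = {}, {}
    for ip, ts, url, agent in records:
        if "bot" in agent.lower() or not url.endswith(".html"):
            continue  # cleansing step
        t = datetime.fromisoformat(ts)
        if ip not in last_seen or t - last_seen[ip] > timeout:
            sessions.setdefault(ip, []).append([])  # start a new session
        sessions[ip][-1].append(url)
        last_seen[ip] = t
    return sessions

print(preprocess(RAW))  # {'10.0.0.1': [['/index.html'], ['/news.html']]}
```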

  5. EARLINET Single Calculus Chain – technical – Part 1: Pre-processing of raw lidar data

    Directory of Open Access Journals (Sweden)

    G. D'Amico

    2015-10-01

    In this paper we describe an automatic tool for the pre-processing of lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. The ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, the ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. The ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of the ELPP module, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of the ELPP module is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of the ELPP module. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. The ELPP module has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.

  6. Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

    Directory of Open Access Journals (Sweden)

    Hoefsloot Huub CJ

    2009-05-01

    Background: Mass spectrometry is increasingly being used to discover proteins or protein profiles associated with disease. Experimental design of mass-spectrometry studies has come under close scrutiny and the importance of strict protocols for sample collection is now understood. However, the question of how best to process the large quantities of data generated is still unanswered. Main challenges for the analysis are the choice of proper pre-processing and classification methods. While these two issues have been investigated in isolation, we propose to use the classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods.
    Results: Two in-house generated clinical SELDI-TOF MS datasets are used in this study as an example of high throughput mass-spectrometry data. We perform a systematic comparison of two commonly used pre-processing methods as implemented in Ciphergen ProteinChip Software and in the Cromwell package. With respect to reproducibility, Ciphergen and Cromwell pre-processing are largely comparable. We find that the overlap between peaks detected by either Ciphergen ProteinChip Software or Cromwell is large. This is especially the case for the more stringent peak detection settings. Moreover, similarity of the estimated intensities between matched peaks is high. We evaluate the pre-processing methods using five different classification methods. Classification is done in a double cross-validation protocol using repeated random sampling to obtain an unbiased estimate of classification accuracy. No pre-processing method significantly outperforms the other for all peak detection settings evaluated.
    Conclusion: We use classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Both pre-processing methods lead to similar classification results on an ovarian cancer and a Gaucher disease dataset. However, the settings for pre-processing

  7. Comparative Evaluation of Preprocessing Freeware on Chromatography/Mass Spectrometry Data for Signature Discovery

    Energy Technology Data Exchange (ETDEWEB)

    Coble, Jamie B.; Fraga, Carlos G.

    2014-07-07

    Preprocessing software is crucial for the discovery of chemical signatures in metabolomics, chemical forensics, and other signature-focused disciplines that involve analyzing large data sets from chemical instruments. Here, four freely available and published preprocessing tools known as metAlign, MZmine, SpectConnect, and XCMS were evaluated for impurity profiling using nominal mass GC/MS data and accurate mass LC/MS data. Both data sets were previously collected from the analysis of replicate samples from multiple stocks of a nerve-agent precursor. Each of the four tools had their parameters set for the untargeted detection of chromatographic peaks from impurities present in the stocks. The peak table generated by each preprocessing tool was analyzed to determine the number of impurity components detected in all replicate samples per stock. A cumulative set of impurity components was then generated using all available peak tables and used as a reference to calculate the percent of component detections for each tool, in which 100% indicated the detection of every component. For the nominal mass GC/MS data, metAlign performed the best followed by MZmine, SpectConnect, and XCMS with detection percentages of 83, 60, 47, and 42%, respectively. For the accurate mass LC/MS data, the order was metAlign, XCMS, and MZmine with detection percentages of 80, 45, and 35%, respectively. SpectConnect did not function for the accurate mass LC/MS data. Larger detection percentages were obtained by combining the top performer with at least one of the other tools such as 96% by combining metAlign with MZmine for the GC/MS data and 93% by combining metAlign with XCMS for the LC/MS data. In terms of quantitative performance, the reported peak intensities had average absolute biases of 41, 4.4, 1.3 and 1.3% for SpectConnect, metAlign, XCMS, and MZmine, respectively, for the GC/MS data. 
For the LC/MS data, the average absolute biases were 22, 4.5, and 3.1% for metAlign, MZmine, and XCMS

  8. A Multi-channel Pre-processing Circuit for Signals from Thermocouple/Thermister

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper briefly introduces a newly developed multi-channel pre-processing circuit for signals from temperature sensors. The circuit was developed to collect and amplify the signals from a temperature sensor. It is a universal circuit: it can be used to process signals from thermocouples as well as signals from thermistors. The circuit was mounted in a standard box (440W × 405D × 125H mm) as an instrument. The

  9. Experimental examination of similarity measures and preprocessing methods used for image registration

    Science.gov (United States)

    Svedlow, M.; Mcgillem, C. D.; Anuta, P. E.

    1976-01-01

    The criterion used to measure the similarity between images and thus find the position where the images are registered is examined. The three similarity measures considered are the correlation coefficient, the sum of the absolute differences, and the correlation function. Three basic types of preprocessing are then discussed: taking the magnitude of the gradient of the images, thresholding the images at their medians, and thresholding the magnitude of the gradient of the images at an arbitrary level to be determined experimentally. These multitemporal registration techniques are applied to remote imagery of agricultural areas.
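    Two of the similarity measures named above, the correlation coefficient and the sum of absolute differences, can be sketched on a synthetic misregistered image pair; the image size, shift, and search window are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, (2, 3), axis=(0, 1))  # synthetic misregistration

def corr_coef(a, b):
    """Correlation coefficient between two images."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def sad(a, b):
    """Sum of absolute differences (smaller means more similar)."""
    return np.abs(a - b).sum()

# Scan candidate offsets; the registered position maximizes the correlation
# coefficient (equivalently, it minimizes the SAD here).
offsets = [(dy, dx) for dy in range(5) for dx in range(5)]
best = max(offsets, key=lambda o: corr_coef(
    ref, np.roll(shifted, (-o[0], -o[1]), axis=(0, 1))))
print(best)  # (2, 3)
```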

  10. Preprocessing for Optimization of Probabilistic-Logic Models for Sequence Analysis

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    2009-01-01

    … and approximation are needed. The first steps are taken towards a methodology for optimizing such models by approximations using auxiliary models for preprocessing or splitting them into submodels. Evaluation of such approximating models is challenging as authoritative test data may be sparse. On the other hand, the original complex models may be used for generating artificial evaluation data by efficient sampling, which can be used in the evaluation, although it does not constitute a foolproof test procedure. These models and evaluation processes are illustrated in the PRISM system developed by other authors, and we …

  11. Combined principal component preprocessing and n-tuple neural networks for improved classification

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Linneberg, Christian

    2000-01-01

    We present a combined principal component analysis/neural network scheme for classification. The data used to illustrate the method consist of spectral fluorescence recordings from seven different production facilities, and the task is to relate an unknown sample to one of these seven factories. The data are first preprocessed by performing an individual principal component analysis on each of the seven groups of data. The components found are then used for classifying the data, but instead of making a single multiclass classifier, we follow the ideas of turning a multiclass problem into a number...
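    The per-group PCA preprocessing can be sketched with a nearest-subspace classifier; this is a simplification, since the paper feeds the components to n-tuple neural networks, which we replace here with reconstruction-error assignment, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the seven factories: each group's spectra lie in a
# group-specific two-dimensional subspace of a 10-dimensional space.
groups = [rng.standard_normal((50, 2)) @ rng.standard_normal((2, 10))
          for _ in range(7)]

def fit_pca(X, k=2):
    """Mean and top-k principal directions of one group."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

models = [fit_pca(X) for X in groups]

def classify(x):
    """Assign x to the group whose principal subspace reconstructs it best."""
    errs = [np.linalg.norm(x - ((x - mu) @ V.T @ V + mu))
            for mu, V in models]
    return int(np.argmin(errs))

print(classify(groups[3][0]))  # 3: the sample lies in group 3's subspace
```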

  12. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result of d = 2 dimensions by proving this procedure to take expected time close to O(n{sup 1/(d+1)}) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.

  13. Interest rate prediction: a neuro-hybrid approach with data preprocessing

    Science.gov (United States)

    Mehdiyev, Nijat; Enke, David

    2014-07-01

    The following research implements a differential evolution-based fuzzy-type clustering method with a fuzzy inference neural network, after input preprocessing with regression analysis, in order to predict future interest rates, particularly 3-month T-bill rates. The empirical results of the proposed model are compared against nonparametric models, such as locally weighted regression and least squares support vector machines, along with two linear benchmark models, the autoregressive model and the random walk model. The root mean square error is reported for comparison.
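    The root-mean-square-error comparison against the random-walk benchmark can be sketched as follows; the series is synthetic, standing in for 3-month T-bill rates.

```python
import numpy as np

rng = np.random.default_rng(3)

def rmse(pred, actual):
    """Root mean square error between forecasts and outcomes."""
    pred, actual = np.asarray(pred), np.asarray(actual)
    return float(np.sqrt(np.mean((pred - actual) ** 2)))

# Synthetic rate series (illustrative, not real T-bill data).
rates = 5.0 + np.cumsum(rng.normal(0, 0.05, 100))

# Random-walk benchmark: the forecast for tomorrow is today's value.
rw_pred, actual = rates[:-1], rates[1:]
print(round(rmse(rw_pred, actual), 3))
```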

  14. Reservoir computing with a slowly modulated mask signal for preprocessing using a mutually coupled optoelectronic system

    Science.gov (United States)

    Tezuka, Miwa; Kanno, Kazutaka; Bunsen, Masatoshi

    2016-08-01

    Reservoir computing is a machine-learning paradigm based on information processing in the human brain. We numerically demonstrate reservoir computing with a slowly modulated mask signal for preprocessing by using a mutually coupled optoelectronic system. The performance of our system is quantitatively evaluated by a chaotic time series prediction task. Our system can produce performance comparable to that of reservoir computing with a single feedback system and a fast modulated mask signal. We show that it is possible to slow down the modulation speed of the mask signal by using the mutually coupled system in reservoir computing.

  15. Comparative evaluation of preprocessing freeware on chromatography/mass spectrometry data for signature discovery.

    Science.gov (United States)

    Coble, Jamie B; Fraga, Carlos G

    2014-09-01

    Preprocessing software, which converts large instrumental data sets into a manageable format for data analysis, is crucial for the discovery of chemical signatures in metabolomics, chemical forensics, and other signature-focused disciplines. Here, four freely available and published preprocessing tools known as MetAlign, MZmine, SpectConnect, and XCMS were evaluated for impurity profiling using nominal mass GC/MS data and accurate mass LC/MS data. Both data sets were previously collected from the analysis of replicate samples from multiple stocks of a nerve-agent precursor and method blanks. Parameters were optimized for each of the four tools for the untargeted detection, matching, and cataloging of chromatographic peaks from impurities present in the stock samples. The peak table generated by each preprocessing tool was analyzed to determine the number of impurity components detected in all replicate samples per stock and absent in the method blanks. A cumulative set of impurity components was then generated using all available peak tables and used as a reference to calculate the percent of component detections for each tool, in which 100% indicated the detection of every known component present in a stock. For the nominal mass GC/MS data, MetAlign had the most component detections followed by MZmine, SpectConnect, and XCMS with detection percentages of 83, 60, 47, and 41%, respectively. For the accurate mass LC/MS data, the order was MetAlign, XCMS, and MZmine with detection percentages of 80, 45, and 35%, respectively. SpectConnect did not function for the accurate mass LC/MS data. Larger detection percentages were obtained by combining the top performer with at least one of the other tools such as 96% by combining MetAlign with MZmine for the GC/MS data and 93% by combining MetAlign with XCMS for the LC/MS data. 
In terms of quantitative performance, the reported peak intensities from each tool had averaged absolute biases (relative to peak intensities obtained

  16. Computer-assisted bone age assessment: image preprocessing and epiphyseal/metaphyseal ROI extraction.

    Science.gov (United States)

    Pietka, E; Gertych, A; Pospiech, S; Cao, F; Huang, H K; Gilsanz, V

    2001-08-01

    Clinical assessment of skeletal maturity is based on a visual comparison of a left-hand wrist radiograph with atlas patterns. Using a new digital hand atlas, an image analysis methodology is being developed to assist radiologists in bone age estimation. The analysis starts with a preprocessing function yielding epiphyseal/metaphyseal regions of interest (EMROIs). Then, these regions are subjected to a feature extraction function. Accuracy has been measured independently at three stages of the image analysis: detection of the phalangeal tip, extraction of the EMROIs, and location of the diameters and lower edge of the EMROIs. The extracted features describe the stage of skeletal development more objectively than visual comparison.

  17. Mapping of electrical potentials from the chest surface - preprocessing and visualization

    Directory of Open Access Journals (Sweden)

    Vaclav Chudacek

    2005-01-01

    The aim of this paper is to present current research activity in the area of computer-supported ECG processing. Analysis of the heart's electric field based on the standard 12-lead system is at present the most frequently used method of heart disease diagnostics. However, body surface potential mapping (BSPM), which measures electric potentials from several tens to hundreds of electrodes placed on the thorax surface, has in certain cases a higher diagnostic value, given by data collection in areas that are inaccessible to the standard 12-lead ECG. For preprocessing, the wavelet transform is used; it allows detection of significant points in the ECG signal. Several types of maps are presented, namely immediate potential, integral, isochronous, and differential maps.

  18. Concentration of measure and spectra of random matrices: Applications to correlation matrices, elliptical distributions and beyond

    CERN Document Server

    Karoui, Noureddine El

    2009-01-01

    We place ourselves in the setting of high-dimensional statistical inference, where the number of variables $p$ in a data set of interest is of the same order of magnitude as the number of observations $n$. More formally, we study the asymptotic properties of correlation and covariance matrices, in the setting where $p/n\to\rho\in(0,\infty)$, for general population covariance. We show that, for a large class of models studied in random matrix theory, spectral properties of large-dimensional correlation matrices are similar to those of large-dimensional covariance matrices. We also derive a Marčenko–Pastur-type system of equations for the limiting spectral distribution of covariance matrices computed from data with elliptical distributions and generalizations of this family. The motivation for this study comes partly from the possible relevance of such distributional assumptions to problems in econometrics and portfolio optimization, as well as robustness questions for certain classical random matrix result...

  19. The primitive matrices of sandwich semigroups of generalized circulant Boolean matrices

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-ping; CHEN Jin-song

    2013-01-01

    Let Gn(C) be the sandwich semigroup of generalized circulant Boolean matrices with the sandwich matrix C and GC(Jn) the set of all primitive matrices in Gn(C). In this paper, some necessary and sufficient conditions for A in the semigroup Gn(C) to be primitive are given. We also show that GC(Jn) is a subsemigroup of Gn(C).

  20. Detailed assessment of homology detection using different substitution matrices

    Institute of Scientific and Technical Information of China (English)

    LI Jing; WANG Wei

    2006-01-01

    Homology detection plays a key role in bioinformatics, and the substitution matrix is one of the most important components in homology detection. Thus, besides the improvement of alignment algorithms, another effective way to enhance the accuracy of homology detection is to use proper substitution matrices or even to construct new matrices. A study of the features of various matrices, and a comparison of the performance of different matrices in homology detection, enable us to choose the most proper or optimal matrix for specific applications. In this paper, taking BLOSUM matrices as an example, some detailed features of matrices in homology detection are studied by calculating the distributions of numbers of recognized proteins over different sequence identities and sequence lengths. Our results clearly show that different matrices have different preferences and abilities for the recognition of remote homologous proteins. Furthermore, these detailed features of the various matrices can be used to improve the accuracy of homology detection.
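The role a substitution matrix plays in scoring can be made concrete with a small sketch. This is illustrative only, not the authors' method: the matrix values below are invented for a four-letter alphabet (they are not real BLOSUM entries), and the linear gap penalty is an arbitrary assumption.

```python
# Toy substitution matrix: each unordered residue pair is stored once.
# Values are illustrative, NOT real BLOSUM62 entries.
TOY_MATRIX = {
    ('A', 'A'): 4, ('A', 'C'): 0, ('A', 'D'): -2, ('A', 'E'): -1,
    ('C', 'C'): 9, ('C', 'D'): -3, ('C', 'E'): -4,
    ('D', 'D'): 6, ('D', 'E'): 2,
    ('E', 'E'): 5,
}

def sub_score(a, b, matrix=TOY_MATRIX):
    """Symmetric lookup: try (a, b), fall back to (b, a)."""
    return matrix.get((a, b), matrix.get((b, a)))

def alignment_score(seq1, seq2, gap=-4):
    """Score a gapped alignment of equal length; '-' marks a gap."""
    total = 0
    for a, b in zip(seq1, seq2):
        if a == '-' or b == '-':
            total += gap            # linear gap penalty (assumed)
        else:
            total += sub_score(a, b)
    return total

print(alignment_score("ACDE", "ACD-"))  # 4 + 9 + 6 - 4 = 15
```

Swapping in a different matrix (e.g. one tuned for remote homologs) changes only the lookup table, which is exactly the degree of freedom the study above evaluates.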

  1. Electrospun human keratin matrices as templates for tissue regeneration.

    Science.gov (United States)

    Sow, Wan Ting; Lui, Yuan Siang; Ng, Kee Woei

    2013-04-01

    The aim of this work was to study the feasibility of fabricating human hair keratin matrices through electrospinning and to evaluate the potential of these matrices for tissue regeneration. Keratin was extracted from human hair using Na2S and blended with poly(ethylene oxide) in the weight ratio of 60:1 for electrospinning. Physical morphology and chemical properties of the matrices were characterized using scanning electron microscopy and Fourier transform infrared spectroscopy, respectively. Cell viability and morphology of murine and human fibroblasts cultured on the matrices were evaluated through the Live/Dead(®) assay and scanning electron microscopy. Electrospun keratin matrices were successfully produced without affecting the chemical conformation of keratin. Fibroblasts cultured on keratin matrices showed healthy morphology and penetration into matrices at day 7. Electrospun human hair keratin matrices provide a bioinductive and structural environment for cell growth and are thus attractive as alternative templates for tissue regeneration.

  2. Higher-Order Singular Systems and Polynomial Matrices

    OpenAIRE

    2005-01-01

    There is a one-to-one correspondence between the set of quadruples of matrices defining singular linear time-invariant dynamical systems and a subset of the set of polynomial matrices. This correspondence preserves the equivalence relations introduced in both sets (feedback-similarity and strict equivalence): two quadruples of matrices are feedback-equivalent if, and only if, the polynomial matrices associated to them are also strictly equivalent.

  3. Decision Matrices: Tools to Enhance Middle School Engineering Instruction

    Science.gov (United States)

    Gonczi, Amanda L.; Bergman, Brenda G.; Huntoon, Jackie; Allen, Robin; McIntyre, Barb; Turner, Sheri; Davis, Jen; Handler, Rob

    2017-01-01

    Decision matrices are valuable engineering tools. They allow engineers to objectively examine solution options. Decision matrices can be incorporated in K-12 classrooms to support authentic engineering instruction. In this article we provide examples of how decision matrices have been incorporated into 6th and 7th grade classrooms as part of an…

  4. 19 CFR 10.90 - Master records and metal matrices.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 1 2010-04-01 2010-04-01 false Master records and metal matrices. 10.90 Section... Master Records, and Metal Matrices § 10.90 Master records and metal matrices. (a) Consumption entries... made, of each master record or metal matrix covered thereby. (c) A bond on Customs Form 301,...

  6. On Skew Circulant Type Matrices Involving Any Continuous Fibonacci Numbers

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    inverse matrices of them by constructing the transformation matrices. Furthermore, the maximum column sum matrix norm, the spectral norm, the Euclidean (or Frobenius) norm, the maximum row sum matrix norm, and bounds for the spread of these matrices are given, respectively.

  7. Fungible Correlation Matrices: A Method for Generating Nonsingular, Singular, and Improper Correlation Matrices for Monte Carlo Research.

    Science.gov (United States)

    Waller, Niels G

    2016-01-01

    For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.
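The PD/PSD/ID distinction that organizes the algorithm can be checked numerically from the smallest eigenvalue of a candidate matrix. A minimal sketch (this is not Waller's R code; the tolerance is an assumption):

```python
import numpy as np

def classify_matrix(R, tol=1e-10):
    """Classify a symmetric matrix as positive definite (PD), positive
    semidefinite (PSD), or indefinite (ID) by its smallest eigenvalue."""
    lam_min = np.linalg.eigvalsh(R)[0]  # eigvalsh returns eigenvalues in ascending order
    if lam_min > tol:
        return "PD"
    if lam_min > -tol:
        return "PSD"
    return "ID"

print(classify_matrix(np.array([[1.0, 0.5], [0.5, 1.0]])))  # PD
print(classify_matrix(np.array([[1.0, 1.0], [1.0, 1.0]])))  # PSD
print(classify_matrix(np.array([[1.0, 0.9, -0.9],
                                [0.9, 1.0, 0.9],
                                [-0.9, 0.9, 1.0]])))        # ID
```

The last matrix has unit diagonal and pairwise entries in [-1, 1] yet a negative eigenvalue, i.e. an "improper" correlation matrix of the kind the second simulation study feeds to matrix-smoothing algorithms.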

  8. ITSG-Grace2016 data preprocessing methodologies revisited: impact of using Level-1A data products

    Science.gov (United States)

    Klinger, Beate; Mayer-Gürr, Torsten

    2017-04-01

    For the ITSG-Grace2016 release, the gravity field recovery is based on the use of official GRACE (Gravity Recovery and Climate Experiment) Level-1B data products, generated by the Jet Propulsion Laboratory (JPL). Before gravity field recovery, the Level-1B instrument data are preprocessed. This data preprocessing step includes the combination of Level-1B star camera (SCA1B) and angular acceleration (ACC1B) data for an improved attitude determination (sensor fusion), instrument data screening and ACC1B data calibration. Based on a Level-1A test dataset, provided for individual months throughout the GRACE period by the Center for Space Research at the University of Texas at Austin (UTCSR), the impact of using Level-1A instead of Level-1B data products within the ITSG-Grace2016 processing chain is analyzed. We discuss (1) the attitude determination through an optimal combination of SCA1A and ACC1A data using our sensor fusion approach, (2) the impact of the new attitude product on temporal gravity field solutions, and (3) possible benefits of using Level-1A data for instrument data screening and calibration. As the GRACE mission is currently reaching its end of life, the presented work aims not only at a better understanding of GRACE science data to reduce the impact of possible error sources on the gravity field recovery, but also at preparing Level-1A data handling capabilities for the GRACE Follow-On mission.

  9. Experimental evaluation of video preprocessing algorithms for automatic target hand-off

    Science.gov (United States)

    McIngvale, P. H.; Guyton, R. D.

    The Automatic Target Hand-Off Correlator (ATHOC) hardware has been modified to permit operation in a nonreal-time mode as a programmable laboratory test unit, using video recordings as inputs and allowing several preprocessing algorithms to be software programmable. In parallel with this hardware modification effort, an analysis and simulation effort has been underway to help determine which of the many available preprocessing algorithms should be implemented in the ATHOC software. Videotapes from a current-technology airborne target acquisition system and an imaging infrared missile seeker were recorded and used in the laboratory experiments. These experiments are described and the results are presented. A set of standard parameters is found for each case. Consideration of the background in the target scene is found to be important. Analog filter cutoff frequencies of 2.5 MHz for low pass and 300 kHz for high pass are found to give the best results. EPNC = 1 is found to be slightly better than EPNC = 0, and trilevel gives better results than bilevel.

  10. Automated cleaning and pre-processing of immunoglobulin gene sequences from high-throughput sequencing

    Directory of Open Access Journals (Sweden)

    Miri eMichaeli

    2012-12-01

    Full Text Available High-throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion-Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.

  11. Preprocessing of A-scan GPR data based on energy features

    Science.gov (United States)

    Dogan, Mesut; Turhan-Sayan, Gonul

    2016-05-01

    There is an increasing demand for noninvasive real-time detection and classification of buried objects in various civil and military applications. The problem of detection and annihilation of landmines is particularly important due to strong safety concerns. The requirement for a fast real-time decision process is as important as the requirements for high detection rates and low false alarm rates. In this paper, we introduce and demonstrate a computationally simple, time-efficient, energy-based preprocessing approach that can be used in ground penetrating radar (GPR) applications to eliminate reflections from the air-ground boundary and to locate the buried objects simultaneously, in one simple step. The instantaneous power signals, the total energy values and the cumulative energy curves are extracted from the A-scan GPR data. The cumulative energy curves, in particular, are shown to be useful for detecting the presence and location of buried objects in a fast and simple way while preserving the spectral content of the original A-scan data for further steps of physics-based target classification. The proposed method is demonstrated using the GPR data collected at the facilities of IPA Defense, Ankara, at outdoor test lanes. Cylindrically shaped plastic containers were buried in fine-medium sand to simulate buried landmines. These plastic containers were half-filled with ammonium nitrate including metal pins. Results of this pilot study are highly promising and motivate further research on the use of energy-based preprocessing features in the landmine detection problem.
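The three energy features named above are cheap to compute from a single A-scan. A minimal pure-Python sketch (the normalisation of the cumulative curve to [0, 1] is our choice, not necessarily the authors'):

```python
def energy_features(ascan):
    """Instantaneous power, total energy, and normalised cumulative
    energy curve of one A-scan (a list of signal samples)."""
    power = [x * x for x in ascan]          # instantaneous power
    total = sum(power)                      # total energy
    cum, running = [], 0.0
    for p in power:
        running += p
        cum.append(running / total)         # cumulative energy in [0, 1]
    return power, total, cum
```

A buried scatterer shows up as a sharp step in the cumulative curve at the corresponding time index, which is how presence and location can be read off simultaneously.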

  12. Selections of data preprocessing methods and similarity metrics for gene cluster analysis

    Institute of Scientific and Technical Information of China (English)

    YANG Chunmei; WAN Baikun; GAO Xiaofeng

    2006-01-01

    Clustering is one of the major exploratory techniques for gene expression data analysis. Only with suitable similarity metrics and properly preprocessed datasets can results of high quality be obtained in cluster analysis. In this study, gene expression datasets with external evaluation criteria were preprocessed by normalization by line, normalization by column or logarithm transformation by base-2, and were subsequently clustered by hierarchical clustering, k-means clustering and self-organizing maps (SOMs) with the Pearson correlation coefficient or Euclidean distance as the similarity metric. Finally, the quality of clusters was evaluated by the adjusted Rand index. The results illustrate that k-means clustering and SOMs have distinct advantages over hierarchical clustering in gene clustering, and that SOMs perform slightly better than k-means when randomly initialized. They also show that hierarchical clustering prefers the Pearson correlation coefficient as the similarity metric and datasets normalized by line, whereas k-means clustering and SOMs produce better clusters with Euclidean distance and logarithm-transformed datasets. These results afford a valuable reference for the implementation of gene expression cluster analysis.
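The interaction between preprocessing and similarity metric noted above can be sketched directly: Pearson correlation is invariant to per-gene (line) normalization, whereas Euclidean distance is not. An illustrative sketch (the use of the sample standard deviation is an assumption):

```python
import math

def log2_transform(values):
    """Base-2 logarithm transformation of one expression profile."""
    return [math.log2(v) for v in values]

def normalize(values):
    """Normalization 'by line': zero mean, unit (sample) variance."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return [(v - m) / sd for v in values]

def pearson(x, y):
    """Pearson correlation coefficient between two profiles."""
    xn, yn = normalize(x), normalize(y)
    return sum(a * b for a, b in zip(xn, yn)) / (len(x) - 1)

def euclidean(x, y):
    """Euclidean distance between two profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
```

For instance, two profiles that differ only in scale have Pearson correlation 1 but a nonzero Euclidean distance, which is why the choice of metric and of normalization should be made jointly.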

  13. A Technical Review on Biomass Processing: Densification, Preprocessing, Modeling and Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jaya Shankar Tumuluru; Christopher T. Wright

    2010-06-01

    It is now a well-acclaimed fact that burning fossil fuels and deforestation are major contributors to climate change. Biomass from plants can serve as an alternative renewable and carbon-neutral raw material for the production of bioenergy. Low densities of 40–60 kg/m3 for lignocellulosic and 200–400 kg/m3 for woody biomass limit their application for energy purposes. Prior to use in energy applications these materials need to be densified. The densified biomass can have bulk densities over 10 times that of the raw material, helping to significantly reduce technical limitations associated with storage, loading and transportation. Pelleting, briquetting, or extrusion processing are commonly used methods for densification. The aim of the present research is to develop a comprehensive review of biomass processing that includes densification, preprocessing, modeling and optimization. The specific objectives include carrying out a technical review of (a) mechanisms of particle bonding during densification; (b) methods of densification including extrusion, briquetting, pelleting, and agglomeration; (c) effects of process and feedstock variables and biomass biochemical composition on densification; (d) effects of preprocessing such as grinding, preheating, steam explosion, and torrefaction on biomass quality and binding characteristics; (e) models for understanding the compression characteristics; and (f) procedures for response surface modeling and optimization.

  14. [Research on preprocessing method of near-infrared spectroscopy detection of coal ash calorific value].

    Science.gov (United States)

    Zhang, Lin; Lu, Hui-Shan; Yan, Hong-Wei; Gao, Qiang; Wang, Fu-Jie

    2013-12-01

    The calorific value of coal ash is an important indicator of coal quality. In the experiment, the effect of spectral preprocessing methods such as smoothing, differential processing, multiplicative scatter correction (MSC) and standard normal variate (SNV) transformation on improving the signal-to-noise ratio of the near-infrared diffuse reflection spectrum was analyzed first; then partial least squares (PLS) and principal component regression (PCR) were used to establish a calorific value model of coal ash for the spectra processed with each preprocessing method. It was found that model performance can be obviously improved with 5-point smoothing, MSC and SNV, of which 5-point smoothing has the best effect: the correlation coefficient, calibration standard deviation and prediction standard deviation are 0.9899, 0.00049 and 0.00052, respectively. When 25-point smoothing is adopted, over-smoothing occurs, which worsens model performance, while the model established with the spectra after differential preprocessing shows no obvious change, and the influence on the model is not large.
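Two of the preprocessing steps compared above, k-point smoothing and SNV, are simple to state in code. A sketch (the edge handling of the smoother and the use of the sample standard deviation are our assumptions):

```python
import math

def snv(spectrum):
    """Standard normal variate: centre and scale each spectrum individually."""
    m = sum(spectrum) / len(spectrum)
    sd = math.sqrt(sum((x - m) ** 2 for x in spectrum) / (len(spectrum) - 1))
    return [(x - m) / sd for x in spectrum]

def moving_average(spectrum, window=5):
    """Simple k-point smoothing; the window shrinks at the spectrum edges.
    Too wide a window (e.g. 25 points) over-smooths real spectral features."""
    half = window // 2
    out = []
    for i in range(len(spectrum)):
        lo, hi = max(0, i - half), min(len(spectrum), i + half + 1)
        out.append(sum(spectrum[lo:hi]) / (hi - lo))
    return out
```

The over-smoothing result reported above corresponds directly to widening `window`: the 5-point average suppresses noise, while the 25-point average starts averaging away the absorption features the calibration model needs.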

  15. Satellite Dwarf Galaxies in a Hierarchical Universe: Infall Histories, Group Preprocessing, and Reionization

    CERN Document Server

    Wetzel, Andrew R; Garrison-Kimmel, Shea

    2015-01-01

    In the Local Group, almost all satellite dwarf galaxies that are within the virial radius of the Milky Way (MW) and M31 exhibit strong environmental influence. The orbital histories of these satellites provide the key to understanding the role of the MW/M31 halo, lower-mass groups, and cosmic reionization on the evolution of dwarf galaxies. We examine the virial-infall histories of satellites with M_star = 10 ^ {3 - 9} M_sun using the ELVIS suite of cosmological zoom-in dissipationless simulations of 48 MW/M31-like halos. Satellites at z = 0 fell into the MW/M31 halos typically 5 - 8 Gyr ago at z = 0.5 - 1. However, they first fell into any host halo typically 7 - 10 Gyr ago at z = 0.7 - 1.5. This difference arises because many satellites experienced "group preprocessing" in another host halo, typically of M_vir ~ 10 ^ {10 - 12} M_sun, before falling into the MW/M31 halos. Lower-mass satellites and/or those closer to the MW/M31 fell in earlier and are more likely to have experienced group preprocessing; ...

  16. Tactile on-chip pre-processing with techniques from artificial retinas

    Science.gov (United States)

    Maldonado-Lopez, R.; Vidal-Verdu, F.; Linan, G.; Roca, E.; Rodriguez-Vazquez, A.

    2005-06-01

    Interest in tactile sensors is increasing as their use is demanded in complex unstructured environments, as in telepresence, minimally invasive surgery, robotics, etc. The matrix of pressure data these devices provide can be processed with many image processing algorithms to extract the required information. However, as in the case of vision chips or artificial retinas, problems arise when the array size and the computational complexity increase. Looking at the skin, the information collected by every mechanoreceptor is not carried to the brain for processing; instead, some complex pre-processing is performed to fit the limited throughput of the nervous system. This is especially important for tasks demanding high bandwidth. Experimental works report that the neural response of skin mechanoreceptors encodes the change in local shape from an offset level rather than the absolute force or pressure distributions. This is also the behavior of the retina, which implements a spatio-temporal averaging. We propose the same strategy in tactile preprocessing, and we show preliminary results when it faces the detection of slip, which involves fast real-time processing.
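The retina-like spatial averaging can be sketched as subtracting each taxel's neighbourhood mean, so that only local shape changes survive while the constant offset is suppressed. An illustrative sketch (the 3x3 neighbourhood and edge clamping are our choices, not the chip's actual circuit):

```python
def highpass_tactile(frame):
    """Subtract the local 3x3 neighbourhood mean from each taxel of a
    pressure frame (list of lists), mimicking retina-style spatial
    averaging; neighbourhoods are clamped at the array edges."""
    rows, cols = len(frame), len(frame[0])
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            neigh = [frame[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))]
            row.append(frame[i][j] - sum(neigh) / len(neigh))
        out.append(row)
    return out
```

A uniform pressure frame maps to all zeros (no information transmitted), while a local bump or a sliding edge, as in slip, produces a strong localized response, which is the bandwidth reduction argued for above.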

  17. Using a Web Crawler to Collect Tweets with a Text-Mining Pre-Processing Method

    Directory of Open Access Journals (Sweden)

    Bayu Rima Aditya

    2015-11-01

    Full Text Available The amount of data on social media is now very large, but little of it has been exploited or processed into something of practical value; tweets on the social medium Twitter are one example. This paper describes the results of using a web crawler engine with a text-mining pre-processing method. The web crawler engine itself serves to collect tweets through the Twitter API as unstructured text data, which is then re-presented in web form, while the pre-processing method filters the tweets in three stages: cleansing, case folding, and parsing. The application designed in this research was developed with the waterfall software development model and implemented in the PHP programming language, and black-box testing was used to check whether the design behaves as expected. The result of this research is an application that turns the collected tweets into data ready for further processing according to the user's needs, based on search keywords and dates. This was done because related studies show that data on social media, Twitter in particular, has become a target for companies and institutions seeking to understand public opinion.
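The three-stage pipeline (cleansing, case folding, parsing) can be sketched in a few lines. The regexes and the choice to drop mentions and hashtags entirely are our assumptions, and the paper's implementation is in PHP; the sketch below uses Python:

```python
import re

def cleanse(tweet):
    """Cleansing: strip URLs, @mentions, #hashtags and punctuation noise."""
    tweet = re.sub(r"https?://\S+", " ", tweet)     # remove URLs
    tweet = re.sub(r"[@#]\w+", " ", tweet)          # remove mentions/hashtags
    return re.sub(r"[^A-Za-z0-9\s]", " ", tweet)    # remove remaining symbols

def case_fold(text):
    """Case folding: lowercase everything."""
    return text.lower()

def parse(text):
    """Parsing: tokenise into words."""
    return text.split()

def preprocess(tweet):
    return parse(case_fold(cleanse(tweet)))

print(preprocess("Check THIS out @user #cool https://t.co/xyz!!!"))
# ['check', 'this', 'out']
```

Whether hashtags should be dropped or kept as keywords is a design choice; keeping them would only require removing the `#` branch of the second regex.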

  18. Review of Intelligent Techniques Applied for Classification and Preprocessing of Medical Image Data

    Directory of Open Access Journals (Sweden)

    H S Hota

    2013-01-01

    Full Text Available Medical image data such as ECG, EEG, MRI and CT-scan images are the most important means of diagnosing human disease precisely and are widely used by physicians. Problems can be clearly identified with the help of these medical images, and a robust model can classify medical image data well. In this paper, intelligent techniques such as neural networks and fuzzy logic are explored for MRI medical image data to identify tumors in the human brain, and the need for preprocessing of medical image data is examined. Classification techniques have been used extensively in the field of medical imaging. The conventional method in medical science for classifying medical image data is human inspection, which may result in misclassification; this kind of problem identification is impractical for large amounts of data and for noisy data. Noisy data may be produced by technical faults of the machine or by human error, and can lead to misclassification of medical image data. We have collected a number of papers based on neural network and fuzzy logic techniques, along with hybrid techniques, to explore the efficiency and robustness of such models for brain MRI data. Our analysis indicates that an intelligent model combined with data preprocessing using principal component analysis (PCA) and segmentation may be the most competitive model in this domain.

  19. Statistical Downscaling Output GCM Modeling with Continuum Regression and Pre-Processing PCA Approach

    Directory of Open Access Journals (Sweden)

    Sutikno Sutikno

    2010-08-01

    Full Text Available One of the climate models used to predict climatic conditions is the Global Circulation Model (GCM). GCM is a computer-based model that consists of different equations; it uses numerical, deterministic equations that follow the rules of physics. GCM is a main tool for predicting climate and weather, and serves as a primary information source for reviewing the effects of climate change. The Statistical Downscaling (SD) technique is used to bridge the large-scale GCM with the small scale (the study area). GCM data are spatial and temporal, so spatial correlation between data at different grid points in a single domain is likely. Multicollinearity problems require pre-processing of the predictor data X. Continuum Regression (CR) with Principal Component Analysis (PCA) pre-processing is an alternative for SD modelling. CR, developed by Stone and Brooks (1990), is a generalization of the Ordinary Least Squares (OLS), Principal Component Regression (PCR) and Partial Least Squares (PLS) methods, and is used to overcome multicollinearity problems. Data processing for the stations in Ambon, Pontianak, Losarang, Indramayu and Yuntinyuat shows that the RMSEP values and predictive R2 in the 8x8 and 12x12 domains obtained by the CR method are better than those produced by PCR and PLS.
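As a sketch of how PCA pre-processing tames multicollinearity before regression, consider principal component regression (the PCR baseline mentioned above; continuum regression generalizes this family): project the collinear predictors onto their top-k principal components, then run ordinary least squares on the decorrelated scores. The implementation details below are our own, not the paper's:

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: PCA pre-processing of X, then OLS
    on the first k component scores."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k].T                          # loadings of the top-k components
    Z = Xc @ W                            # scores: decorrelated regressors
    beta_z, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
    return W, beta_z, X.mean(axis=0), y.mean()

def pcr_predict(X, model):
    W, beta_z, x_mean, y_mean = model
    return (X - x_mean) @ W @ beta_z + y_mean
```

Because the component scores are mutually orthogonal, the OLS step no longer suffers from the inflated variance that near-duplicate GCM grid-point predictors would otherwise cause.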

  20. Fast data preprocessing with Graphics Processing Units for inverse problem solving in light-scattering measurements

    Science.gov (United States)

    Derkachov, G.; Jakubczyk, T.; Jakubczyk, D.; Archer, J.; Woźniak, M.

    2017-07-01

    Utilising Compute Unified Device Architecture (CUDA) platform for Graphics Processing Units (GPUs) enables significant reduction of computation time at a moderate cost, by means of parallel computing. In the paper [Jakubczyk et al., Opto-Electron. Rev., 2016] we reported using GPU for Mie scattering inverse problem solving (up to 800-fold speed-up). Here we report the development of two subroutines utilising GPU at data preprocessing stages for the inversion procedure: (i) A subroutine, based on ray tracing, for finding spherical aberration correction function. (ii) A subroutine performing the conversion of an image to a 1D distribution of light intensity versus azimuth angle (i.e. scattering diagram), fed from a movie-reading CPU subroutine running in parallel. All subroutines are incorporated in PikeReader application, which we make available on GitHub repository. PikeReader returns a sequence of intensity distributions versus a common azimuth angle vector, corresponding to the recorded movie. We obtained an overall ∼400-fold speed-up of calculations at data preprocessing stages using CUDA codes running on GPU in comparison to single thread MATLAB-only code running on CPU.
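The image-to-scattering-diagram conversion is essentially azimuthal binning of pixel intensities around the optical centre. A CPU reference sketch in NumPy (the bin convention and centre handling are our assumptions; the paper's subroutine runs on the GPU):

```python
import numpy as np

def azimuthal_profile(image, center, n_bins=360):
    """Average pixel intensity into azimuth-angle bins around `center`,
    yielding a 1D intensity-versus-azimuth curve (scattering diagram)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.arctan2(yy - center[0], xx - center[1])            # [-pi, pi]
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sums = np.bincount(bins.ravel(), weights=image.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return sums / np.maximum(counts, 1)                           # mean per bin
```

Each pixel contributes to exactly one angle bin, so the reduction is embarrassingly parallel, which is what makes the GPU implementation reported above so effective.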

  1. Evaluation of preprocessing, mapping and postprocessing algorithms for analyzing whole genome bisulfite sequencing data.

    Science.gov (United States)

    Tsuji, Junko; Weng, Zhiping

    2016-11-01

    Cytosine methylation regulates many biological processes such as gene expression, chromatin structure and chromosome stability. The whole genome bisulfite sequencing (WGBS) technique measures the methylation level at each cytosine throughout the genome. There are an increasing number of publicly available pipelines for analyzing WGBS data, reflecting many choices of read mapping algorithms as well as preprocessing and postprocessing methods. We simulated single-end and paired-end reads based on three experimental data sets, and comprehensively evaluated 192 combinations of three preprocessing, five postprocessing and five widely used read mapping algorithms. We also compared paired-end data with single-end data at the same sequencing depth for performance of read mapping and methylation level estimation. Bismark and LAST were the most robust mapping algorithms. We found that Mott trimming and quality filtering individually improved the performance of both read mapping and methylation level estimation, but combining them did not lead to further improvement. Furthermore, we confirmed that paired-end sequencing reduced error rate and enhanced sensitivity for both read mapping and methylation level estimation, especially for short reads and in repetitive regions of the human genome.

  2. Data Acquisition and Preprocessing in Studies on Humans: What Is Not Taught in Statistics Classes?

    Science.gov (United States)

    Zhu, Yeyi; Hernandez, Ladia M; Mueller, Peter; Dong, Yongquan; Forman, Michele R

    2013-01-01

    The aim of this paper is to address issues in research that may be missing from statistics classes and important for (bio-)statistics students. In the context of a case study, we discuss data acquisition and preprocessing steps that fill the gap between research questions posed by subject matter scientists and statistical methodology for formal inference. Issues include participant recruitment, data collection training and standardization, variable coding, data review and verification, data cleaning and editing, and documentation. Despite the critical importance of these details in research, most of these issues are rarely discussed in an applied statistics program. One reason for the lack of more formal training is the difficulty in addressing the many challenges that can possibly arise in the course of a study in a systematic way. This article can help to bridge this gap between research questions and formal statistical inference by using an illustrative case study for a discussion. We hope that reading and discussing this paper and practicing data preprocessing exercises will sensitize statistics students to these important issues and achieve optimal conduct, quality control, analysis, and interpretation of a study.

  3. A data preprocessing strategy for metabolomics to reduce the mask effect in data analysis.

    Science.gov (United States)

    Yang, Jun; Zhao, Xinjie; Lu, Xin; Lin, Xiaohui; Xu, Guowang

    2015-01-01

    Highlights: (1) A data preprocessing strategy was developed to cope with missing values and the mask effects in data analysis that arise from the high variation of abundant metabolites. (2) A new method, 'x-VAST', was developed to amend the measurement deviation enlargement. (3) Applying the above strategy, several low-abundance masked differential metabolites were rescued. Metabolomics is a booming research field. Its success highly relies on the discovery of differential metabolites by comparing different data sets (for example, patients vs. controls). One of the challenges is that differences in the low-abundance metabolites between groups are often masked by the high variation of abundant metabolites. In order to address this challenge, a novel data preprocessing strategy consisting of three steps was proposed in this study. In step 1, a 'modified 80% rule' was used to reduce the effect of missing values; in step 2, unit-variance and Pareto scaling methods were used to reduce the mask effect from the abundant metabolites; in step 3, in order to fix the adverse effect of scaling, stability information of the variables, deduced from intensity information and the class information, was used to assign suitable weights to the variables. When applied to an LC/MS-based metabolomics dataset from a chronic hepatitis B patient study and two simulated datasets, the mask effect was found to be partially eliminated and several new low-abundance differential metabolites were rescued.
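Steps 1 and 2 of the strategy can be sketched concretely. The '80% rule' variant below keeps a metabolite that is detected in at least 80% of the samples of at least one group, and Pareto scaling divides centred values by the square root of the standard deviation; the exact rule variant and the use of the sample standard deviation are our assumptions:

```python
import math

def eighty_percent_rule(values_by_group, threshold=0.8):
    """Keep a metabolite if it is detected (non-None) in at least
    `threshold` of the samples of at least one group."""
    for group in values_by_group:
        detected = sum(v is not None for v in group)
        if detected / len(group) >= threshold:
            return True
    return False

def pareto_scale(values):
    """Centre, then divide by the square root of the standard deviation.
    This shrinks the dominance of abundant metabolites less aggressively
    than unit-variance scaling, trading off noise inflation."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return [(v - m) / math.sqrt(sd) for v in values]
```

Step 3 (the stability-based weighting, 'x-VAST') is specific to the paper and is not reproduced here.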

  4. Effective Preprocessing Procedures Virtually Eliminate Distance-Dependent Motion Artifacts in Resting State FMRI.

    Science.gov (United States)

    Jo, Hang Joon; Gotts, Stephen J; Reynolds, Richard C; Bandettini, Peter A; Martin, Alex; Cox, Robert W; Saad, Ziad S

    2013-05-21

    Artifactual sources of resting-state (RS) FMRI can originate from head motion, physiology, and hardware. Of these sources, motion has received considerable attention and was found to induce corrupting effects by differentially biasing correlations between regions depending on their distance. Numerous corrective approaches have relied on the identification and censoring of high-motion time points and on the use of the brain-wide average time series as a nuisance regressor to which the data are orthogonalized (Global Signal Regression, GSReg). We first replicate the previously reported head-motion bias on correlation coefficients using data generously contributed by Power et al. (2012). We then show that while motion can be the source of artifact in correlations, the distance-dependent bias (taken to be a manifestation of the motion effect on correlation) is exacerbated by the use of GSReg. Put differently, correlation estimates obtained after GSReg are more susceptible to the presence of motion and, by extension, to the level of censoring. More generally, the effect of motion on correlation estimates depends on the preprocessing steps leading to the correlation estimate, with certain approaches performing markedly worse than others. For this purpose, we consider various models for RS FMRI preprocessing and show that the WMeLOCAL denoising approach, a subset of ANATICOR discussed by Jo et al. (2010), results in minimal sensitivity to motion and by extension reduces the dependence of correlation results on censoring.
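Global Signal Regression itself is a one-regressor orthogonalization. A minimal sketch (data as a voxels-by-timepoints array; detrending, censoring, and the alternative nuisance models compared in the paper are all omitted):

```python
import numpy as np

def global_signal_regression(data):
    """Orthogonalize each voxel time series against the brain-wide
    average time series (GSReg). `data` has shape (voxels, timepoints)."""
    g = data.mean(axis=0)
    g = g - g.mean()                  # centred global-signal regressor
    beta = data @ g / (g @ g)         # per-voxel regression coefficient
    return data - np.outer(beta, g)   # residuals, uncorrelated with g
```

After this step every voxel's residual series has zero covariance with the global signal, which is precisely the projection whose side effects on distance-dependent correlation bias the study above quantifies.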

  5. Data preprocessing method for liquid chromatography-mass spectrometry based metabolomics.

    Science.gov (United States)

    Wei, Xiaoli; Shi, Xue; Kim, Seongho; Zhang, Li; Patrick, Jeffrey S; Binkley, Joe; McClain, Craig; Zhang, Xiang

    2012-09-18

    A set of data preprocessing algorithms for peak detection and peak list alignment are reported for analysis of liquid chromatography-mass spectrometry (LC-MS)-based metabolomics data. For spectrum deconvolution, peak picking is achieved at the extracted ion chromatogram (XIC) level. To estimate and remove the noise in XICs, each XIC is first segmented into several peak groups based on the continuity of scan number, and the noise level is estimated from all XIC signals except the regions that potentially contain metabolite ion peaks. After removing noise, the peaks of molecular ions are detected using both the first and the second derivatives, followed by an efficient exponentially modified Gaussian-based peak deconvolution method for peak fitting. A two-stage alignment algorithm is also developed, where the retention times of all peaks are first transferred into the z-score domain and the peaks are aligned based on the measure of their mixture scores after retention time correction using a partial linear regression. Analysis of a set of spike-in LC-MS data from three groups of samples containing 16 metabolite standards mixed with metabolite extract from mouse livers demonstrates that the developed data preprocessing method performs better than two existing popular data analysis packages, MZmine 2.6 and XCMS2, for peak picking, peak list alignment, and quantification.
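
    The first alignment stage can be illustrated directly: z-scoring both peak lists cancels any purely affine retention-time drift, which is why nonlinear residuals can be deferred to the partial-linear-regression correction. A minimal sketch with hypothetical retention times:

```python
import numpy as np

# Retention times (s) of the same five peaks in a reference run and in a
# target run affected by a linear drift (hypothetical values).
rt_ref = np.array([60.0, 120.0, 180.0, 240.0, 300.0])
rt_tgt = 1.02 * rt_ref + 1.5   # 2% stretch plus a 1.5 s offset

def to_z(rt):
    """Map retention times into the z-score domain."""
    return (rt - rt.mean()) / rt.std(ddof=1)

# Any affine drift (scale + offset) cancels under z-scoring, so the two
# peak lists line up exactly in the z-score domain.
z_ref, z_tgt = to_z(rt_ref), to_z(rt_tgt)
```
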

  6. A Lightweight Data Preprocessing Strategy with Fast Contradiction Analysis for Incremental Classifier Learning

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2015-01-01

    Full Text Available A prime objective in constructing data streaming mining models is to achieve good accuracy, fast learning, and robustness to noise. Although many techniques have been proposed in the past, efforts to improve the accuracy of classification models have been somewhat disparate. These techniques include, but are not limited to, feature selection, dimensionality reduction, and the removal of noise from training data. One limitation common to all of these techniques is the assumption that the full training dataset must be applied. Although this has been effective for traditional batch training, it may not be practical for incremental classifier learning, also known as data stream mining, where only a single pass over the data stream is seen at a time. Because data streams can be effectively unbounded (the so-called big data phenomenon), data preprocessing time must be kept to a minimum. This paper introduces a new data preprocessing strategy suitable for the progressive purging of noisy data from the training dataset without the need to process the whole dataset at one time. This strategy is shown via a computer simulation to provide the significant benefit of allowing for the dynamic removal of bad records from the incremental classifier learning process.
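
    A minimal single-pass sketch of the idea, not the paper's actual contradiction-analysis algorithm: purge a record as "contradictory" when it sits closer to the running centroid of the opposite class than to its own, so no full-dataset pass is ever needed (hypothetical toy stream):

```python
import numpy as np

def stream_purge(stream):
    """Single-pass purge for a binary-labeled stream: a record is treated
    as contradictory (and dropped) if it lies closer to the running
    centroid of the opposite class than to that of its own class."""
    sums = {0: None, 1: None}
    counts = {0: 0, 1: 0}
    kept = []
    for x, y in stream:
        x = np.asarray(x, dtype=float)
        other = 1 - y
        if counts[y] > 0 and counts[other] > 0:
            mine = sums[y] / counts[y]
            theirs = sums[other] / counts[other]
            if np.linalg.norm(x - theirs) < np.linalg.norm(x - mine):
                continue  # contradicts its own label: purge, never revisit
        kept.append((x, y))
        sums[y] = x if sums[y] is None else sums[y] + x
        counts[y] += 1
    return kept

# Two well-separated clusters plus one mislabeled record (the last one).
data = [([0.0], 0), ([10.0], 1), ([0.5], 0), ([9.5], 1), ([9.8], 0)]
kept = stream_purge(data)
```

    Only running sums and counts are stored, so memory stays constant regardless of stream length, in keeping with the single-pass constraint described above.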

  7. Robust symmetrical number system preprocessing for minimizing encoding errors in photonic analog-to-digital converters

    Science.gov (United States)

    Arvizo, Mylene R.; Calusdian, James; Hollinger, Kenneth B.; Pace, Phillip E.

    2011-08-01

    A photonic analog-to-digital converter (ADC) preprocessing architecture based on the robust symmetrical number system (RSNS) is presented. The RSNS preprocessing architecture is a modular scheme in which a modulus number of comparators are used at the output of each Mach-Zehnder modulator channel. The number of comparators with a logic 1 in each channel represents the integer values within each RSNS modulus sequence. When considered together, the integers within each sequence change one at a time at the next code position, resulting in an integer Gray code property. The RSNS ADC has the feature that the maximum nonlinearity is less than a least significant bit (LSB). Although the observed dynamic range (greatest length of combined sequences that contain no ambiguities) of the RSNS ADC is less than that of the optimum symmetrical number system ADC, the integer Gray code properties make it attractive for error control. A prototype is presented to demonstrate the feasibility of the concept and to show the important RSNS property that the largest nonlinearity is always less than an LSB. Also discussed are practical considerations related to multi-gigahertz implementations.

  8. Effective Preprocessing Procedures Virtually Eliminate Distance-Dependent Motion Artifacts in Resting State FMRI

    Directory of Open Access Journals (Sweden)

    Hang Joon Jo

    2013-01-01

    Full Text Available Artifactual sources of resting-state (RS) FMRI can originate from head motion, physiology, and hardware. Of these sources, motion has received considerable attention and was found to induce corrupting effects by differentially biasing correlations between regions depending on their distance. Numerous corrective approaches have relied on the identification and censoring of high-motion time points and the use of the brain-wide average time series as a nuisance regressor to which the data are orthogonalized (Global Signal Regression, GSReg). We replicate the previously reported head-motion bias on correlation coefficients and then show that while motion can be the source of artifact in correlations, the distance-dependent bias is exacerbated by GSReg. Put differently, correlation estimates obtained after GSReg are more susceptible to the presence of motion and by extension to the levels of censoring. More generally, the effect of motion on correlation estimates depends on the preprocessing steps leading to the correlation estimate, with certain approaches performing markedly worse than others. For this purpose, we consider various models for RS FMRI preprocessing and show that the local white matter regressor (WMeLOCAL), a subset of ANATICOR, results in minimal sensitivity to motion and reduces by extension the dependence of correlation results on censoring.

  9. MODIStsp: An R package for automatic preprocessing of MODIS Land Products time series

    Science.gov (United States)

    Busetto, L.; Ranghetti, L.

    2016-12-01

    MODIStsp is a new R package that automates the creation of raster time series derived from MODIS Land Products. It performs several preprocessing steps (e.g., download, mosaicking, reprojection and resizing) on MODIS products for a selected time period and area. All processing parameters can be set with a user-friendly GUI, allowing users to select which specific layers of the original MODIS HDF files have to be processed and which Quality Indicators have to be extracted from the aggregated MODIS Quality Assurance layers. Moreover, the tool allows on-the-fly computation of time series of Spectral Indexes (either standard or custom-specified by the user through the GUI) from surface reflectance bands. Outputs are saved as single-band rasters corresponding to each available acquisition date and output layer. Virtual files allowing easy access to the entire time series as a single file using common image processing/GIS software or R scripts can also be created. Non-interactive execution within an R script and stand-alone execution outside an R environment exploiting a previously created Options File are also possible, the latter allowing MODIStsp to be scheduled to automatically update a time series when a new image is available. The proposed software constitutes a very useful tool for the Remote Sensing community, since it allows performing all the main preprocessing steps required for the creation of time series of MODIS data within a common framework, without requiring particular programming skills of its users.

  10. A preprocessing tool for removing artifact from cardiac RR interval recordings using three-dimensional spatial distribution mapping.

    Science.gov (United States)

    Stapelberg, Nicolas J C; Neumann, David L; Shum, David H K; McConnell, Harry; Hamilton-Craig, Ian

    2016-04-01

    Artifact is common in cardiac RR interval data that is recorded for heart rate variability (HRV) analysis. A novel algorithm for artifact detection and interpolation in RR interval data is described. It is based on spatial distribution mapping of RR interval magnitude and relationships to adjacent values in three dimensions. The characteristics of normal physiological RR intervals and artifact intervals were established using 24-h recordings from 20 technician-assessed human cardiac recordings. The algorithm was incorporated into a preprocessing tool and validated using 30 artificial RR (ARR) interval data files, to which known quantities of artifact (0.5%, 1%, 2%, 3%, 5%, 7%, 10%) were added. The impact of preprocessing ARR files with 1% added artifact was also assessed using 10 time domain and frequency domain HRV metrics. The preprocessing tool was also used to preprocess 69 24-h human cardiac recordings. The tool was able to remove artifact from technician-assessed human cardiac recordings (sensitivity 0.84, SD = 0.09, specificity of 1.00, SD = 0.01) and artificial data files. The removal of artifact had a low impact on time domain and frequency domain HRV metrics (ranging from 0% to 2.5% change in values). This novel preprocessing tool can be used with human 24-h cardiac recordings to remove artifact while minimally affecting physiological data and therefore having a low impact on HRV measures of that data.
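
    The paper's detector is based on three-dimensional spatial distribution mapping; as a simplified stand-in, the sketch below flags intervals that deviate from the recording median and linearly interpolates over them, illustrating the same detect-then-interpolate pipeline on a hypothetical RR series:

```python
import numpy as np

def clean_rr(rr, tol=0.3):
    """Flag an RR interval as artifact when it deviates from the recording
    median by more than `tol` (fractional), then linearly interpolate
    across the flagged samples. A crude stand-in for the paper's 3-D
    spatial-distribution detector."""
    rr = np.asarray(rr, dtype=float)
    med = np.median(rr)
    bad = np.abs(rr - med) > tol * med
    idx = np.arange(len(rr))
    cleaned = rr.copy()
    cleaned[bad] = np.interp(idx[bad], idx[~bad], rr[~bad])
    return cleaned, bad

# 800 ms beats with one spurious 1600 ms interval (a missed beat).
rr = [800.0, 810.0, 790.0, 1600.0, 805.0, 795.0, 800.0]
cleaned, bad = clean_rr(rr)
```

    Interpolating rather than deleting preserves the series length, which matters for the frequency-domain HRV metrics mentioned above.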

  11. Increasing conclusiveness of metabonomic studies by chem-informatic preprocessing of capillary electrophoretic data on urinary nucleoside profiles.

    Science.gov (United States)

    Szymańska, E; Markuszewski, M J; Capron, X; van Nederkassel, A-M; Heyden, Y Vander; Markuszewski, M; Krajka, K; Kaliszan, R

    2007-01-17

    Nowadays, bioinformatics offers advanced tools and procedures of data mining aimed at finding consistent patterns or systematic relationships between variables. Numerous metabolite concentrations can readily be determined in a given biological system by high-throughput analytical methods. However, such raw analytical data comprise noninformative components due to the many disturbances that normally occur in the analysis of biological samples. Advanced chemometric data preprocessing methods can help eliminate these unwanted components. Here, such methods are applied to electrophoretic nucleoside profiles in urine samples of cancer patients and healthy volunteers. The electrophoretic nucleoside profiles were obtained under the following conditions: 100 mM borate, 72.5 mM phosphate, 160 mM SDS, pH 6.7; 25 kV voltage, 30 degrees C temperature; untreated fused silica capillary of 70 cm effective length, 50 microm I.D. Several advanced preprocessing tools were applied for baseline correction, denoising and alignment of the electrophoretic data, and the approach was compared to the standard procedure of electrophoretic peak integration. The best preprocessing results were obtained after application of the so-called correlation optimized warping (COW) to align the data. The principal component analysis (PCA) of preprocessed data provides a clearly better consistency of the nucleoside electrophoretic profiles with the health status of subjects than PCA of peak areas of the original data (without preprocessing).

  12. Evaluating the reliability of different preprocessing steps to estimate graph theoretical measures in resting state fMRI data.

    Science.gov (United States)

    Aurich, Nathassia K; Alves Filho, José O; Marques da Silva, Ana M; Franco, Alexandre R

    2015-01-01

    With resting-state functional MRI (rs-fMRI) there are a variety of post-processing methods that can be used to quantify the human brain connectome. However, there is also a choice of which preprocessing steps will be used prior to calculating the functional connectivity of the brain. In this manuscript, we have tested seven different preprocessing schemes and assessed the reliability between and reproducibility within the various strategies by means of graph theoretical measures. Different preprocessing schemes were tested on a publicly available dataset, which includes rs-fMRI data of healthy controls. The brain was parcellated into 190 nodes and four graph theoretical (GT) measures were calculated: global efficiency (GEFF), characteristic path length (CPL), average clustering coefficient (ACC), and average local efficiency (ALE). Our findings indicate that results can differ significantly based on which preprocessing steps are selected. We also found dependence between motion and GT measurements in most preprocessing strategies. We conclude that censoring based on outliers within the functional time series, used as a preprocessing step, increases the reliability of GT measurements and reduces their dependency on head motion.
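
    The four GT measures can be computed from scratch on a toy graph (the study used a 190-node parcellation; a 4-node graph stands in here, and the definitions below are the standard unweighted ones):

```python
from collections import deque

# Toy undirected graph: a triangle {0, 1, 2} with a tail node 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

def bfs_dists(g, s):
    """Hop distances from s to every reachable node."""
    dist, q = {s: 0}, deque([s])
    while q:
        u = q.popleft()
        for v in g[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def global_efficiency(g):
    """GEFF: mean of 1/distance over ordered node pairs."""
    n = len(g)
    if n < 2:
        return 0.0
    return sum(1.0 / d for s in g for d in bfs_dists(g, s).values() if d) / (n * (n - 1))

def char_path_length(g):
    """CPL: mean shortest-path length over ordered node pairs."""
    n = len(g)
    return sum(d for s in g for d in bfs_dists(g, s).values()) / (n * (n - 1))

def avg_clustering(g):
    """ACC: mean fraction of closed triangles around each node."""
    total = 0.0
    for u, nbrs in g.items():
        k = len(nbrs)
        if k >= 2:
            links = sum(1 for v in nbrs for w in nbrs if v < w and w in g[v])
            total += 2.0 * links / (k * (k - 1))
    return total / len(g)

def avg_local_efficiency(g):
    """ALE: mean global efficiency of each node's neighbor subgraph."""
    total = 0.0
    for u, nbrs in g.items():
        if len(nbrs) >= 2:
            sub = {v: {w for w in g[v] if w in nbrs} for v in nbrs}
            total += global_efficiency(sub)
    return total / len(g)

geff, cpl = global_efficiency(adj), char_path_length(adj)
acc, ale = avg_clustering(adj), avg_local_efficiency(adj)
```
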

  13. Lectures on S-matrices and Integrability

    CERN Document Server

    Bombardelli, Diego

    2016-01-01

    In these notes we review the S-matrix theory in (1+1)-dimensional integrable models, focusing mainly on the relativistic case. Once the main definitions and physical properties are introduced, we discuss the factorization of scattering processes due to integrability. We then focus on the analytic properties of the 2-particle scattering amplitude and illustrate the derivation of the S-matrices for all the possible bound states using the so-called bootstrap principle. General algebraic structures underlying the S-matrix theory and its relation with the form factors axioms are briefly mentioned. Finally, we discuss the S-matrices of sine-Gordon and SU(2), SU(3) chiral Gross-Neveu models. This is part of a collection of lecture notes for the Young Researchers Integrability School, organised by the GATIS network at Durham University on 6-10 July 2015.

  14. Inferring Passenger Type from Commuter Eigentravel Matrices

    CERN Document Server

    Legara, Erika Fille

    2015-01-01

    A sufficient knowledge of the demographics of a commuting public is essential in formulating and implementing more targeted transportation policies, as commuters exhibit different ways of traveling. With the advent of the Automated Fare Collection system (AFC), probing the travel patterns of commuters has become less invasive and more accessible. Consequently, numerous transport studies related to human mobility have shown that these observed patterns allow one to pair individuals with locations and/or activities at certain times of the day. However, classifying commuters using their travel signatures is yet to be thoroughly examined. Here, we contribute to the literature by demonstrating a procedure to characterize passenger types (Adult, Child/Student, and Senior Citizen) based on their three-month travel patterns taken from a smart fare card system. We first establish a method to construct distinct commuter matrices, which we refer to as eigentravel matrices, which capture the characteristic travel routines...

  15. Astronomical Receiver Modelling Using Scattering Matrices

    CERN Document Server

    King, O G; Copley, C; Davis, R J; Leahy, J P; Leech, J; Muchovej, S J C; Pearson, T J; Taylor, Angela C

    2014-01-01

    Proper modelling of astronomical receivers is vital: it describes the systematic errors in the raw data, guides the receiver design process, and assists data calibration. In this paper we describe a method of analytically modelling the full signal and noise behaviour of arbitrarily complex radio receivers. We use electrical scattering matrices to describe the signal behaviour of individual components in the receiver, and noise correlation matrices to describe their noise behaviour. These are combined to produce the full receiver model. We apply this approach to a specified receiver architecture: a hybrid of a continuous comparison radiometer and correlation polarimeter designed for the C-Band All-Sky Survey. We produce analytic descriptions of the receiver Mueller matrix and noise temperature, and discuss how imperfections in crucial components affect the raw data. Many of the conclusions drawn are generally applicable to correlation polarimeters and continuous comparison radiometers.

  16. Approximate inverse preconditioners for general sparse matrices

    Energy Technology Data Exchange (ETDEWEB)

    Chow, E.; Saad, Y. [Univ. of Minnesota, Minneapolis, MN (United States)

    1994-12-31

    Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.

  17. Asymptotic properties of random matrices and pseudomatrices

    CERN Document Server

    Lenczewski, Romuald

    2010-01-01

    We study the asymptotics of sums of matricially free random variables called random pseudomatrices, and we compare it with that of random matrices with block-identical variances. For objects of both types we find the limit joint distributions of blocks and give their Hilbert space realizations, using operators called `matricially free Gaussian operators'. In particular, if the variance matrices are symmetric, the asymptotics of symmetric blocks of random pseudomatrices agrees with that of symmetric random blocks. We also show that blocks of random pseudomatrices are `asymptotically matricially free' whereas the corresponding symmetric random blocks are `asymptotically symmetrically matricially free', where symmetric matricial freeness is obtained from matricial freeness by an operation of symmetrization. Finally, we show that row blocks of square, lower-block-triangular and block-diagonal pseudomatrices are asymptotically free, monotone independent and boolean independent, respectively.

  18. Non-Hermitean Wishart random matrices (I)

    CERN Document Server

    Kanzieper, Eugene

    2010-01-01

    A non-Hermitean extension of paradigmatic Wishart random matrices is introduced to set up a theoretical framework for statistical analysis of (real, complex and real quaternion) stochastic time series representing two "remote" complex systems. The first paper in a series provides a detailed spectral theory of non-Hermitean Wishart random matrices composed of complex valued entries. The great emphasis is placed on an asymptotic analysis of the mean eigenvalue density for which we derive, among other results, a complex-plane analogue of the Marchenko-Pastur law. A surprising connection with a class of matrix models previously invented in the context of quantum chromodynamics is pointed out. This provides one more evidence of the ubiquity of Random Matrix Theory.

  19. Determinants of adjacency matrices of graphs

    Directory of Open Access Journals (Sweden)

    Alireza Abdollahi

    2012-12-01

    Full Text Available We study the set of all determinants of adjacency matrices of graphs with a given number of vertices. Using Brendan McKay's database of small graphs, determinants of graphs with at most $9$ vertices are computed, and the number of non-isomorphic graphs with a given number of vertices whose determinants are all equal to a given number is exhibited in a table. Using an idea of M. Newman, it is proved that if $G$ is a graph with $n$ vertices, $m$ edges and $\{d_1,\dots,d_n\}$ is the set of vertex degrees of $G$, then $\gcd(2m, d^2)$ divides the determinant of the adjacency matrix of $G$, where $d=\gcd(d_1,\dots,d_n)$. Possible determinants of adjacency matrices of graphs with exactly two cycles are obtained.
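
    The divisibility result can be checked numerically on small graphs such as cycles, where degrees and edge counts are easy to read off (a verification sketch, not the paper's computation):

```python
import numpy as np
from functools import reduce
from math import gcd

def cycle_adjacency(n):
    """Adjacency matrix of the cycle graph C_n."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

def newman_divisibility(A):
    """Check that gcd(2m, d^2) divides det(A), with m the number of edges
    and d the gcd of the vertex degrees."""
    degrees = [int(k) for k in A.sum(axis=0)]
    d = reduce(gcd, degrees)
    m = int(A.sum()) // 2
    det = int(round(np.linalg.det(A)))
    return det % gcd(2 * m, d * d) == 0
```

    For $C_5$, for instance, $d = 2$, $m = 5$ and $\det = 2$, so $\gcd(10, 4) = 2$ indeed divides the determinant.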

  20. A NOVEL TECHNIQUE TO IMPROVE PHOTOMETRY IN CONFUSED IMAGES USING GRAPHS AND BAYESIAN PRIORS

    Energy Technology Data Exchange (ETDEWEB)

    Safarzadeh, Mohammadtaher [Department of Physics and Astronomy, Johns Hopkins University, 366 Bloomberg Center, 3400 North Charles Street, Baltimore, MD 21218 (United States); Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Boulevard, Baltimore, MD 21218 (United States); Lu, Yu [Kavli Institute for Particle Astrophysics and Cosmology, Stanford, CA 94309 (United States); Inami, Hanae [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Somerville, Rachel S., E-mail: mts@pha.jhu.edu [Department of Physics and Astronomy, Rutgers, The State University of New Jersey, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States)

    2015-01-10

    We present a new technique for overcoming confusion noise in deep far-infrared Herschel space telescope images making use of prior information from shorter λ < 2 μm wavelengths. For the deepest images obtained by Herschel, the flux limit due to source confusion is about a factor of three brighter than the flux limit due to instrumental noise and (smooth) sky background. We have investigated the possibility of de-confusing simulated Herschel PACS 160 μm images by using strong Bayesian priors on the positions and weak priors on the flux of sources. We find the blended sources and group them together and simultaneously fit their fluxes. We derive the posterior probability distribution function of fluxes subject to these priors through Monte Carlo Markov Chain (MCMC) sampling by fitting the image. Assuming we can predict the FIR flux of sources based on the ultraviolet-optical part of their SEDs to within an order of magnitude, the simulations show that we can obtain reliable fluxes and uncertainties at least a factor of three fainter than the confusion noise limit of 3σ_c = 2.7 mJy in our simulated PACS-160 image. This technique could in principle be used to mitigate the effects of source confusion in any situation where one has prior information of positions and plausible fluxes of blended sources. For Herschel, application of this technique will improve our ability to constrain the dust content in normal galaxies at high redshift.

  1. MULTIFRACTAL STRUCTURE AND PRODUCT OF MATRICES

    Institute of Scientific and Technical Information of China (English)

    Lau Ka-sing

    2003-01-01

    There is a well established multifractal theory for self-similar measures generated by non-overlapping contractive similitudes. Our report here concerns those with overlaps. In particular, we restrict our attention to the important classes of self-similar measures that have matrix representations. The dimension spectra and the Lq-spectra are analyzed through the product of matrices. There are abnormal behaviors in the multifractal structure, and they will be discussed in detail.

  2. Ferrers Matrices Characterized by the Rook Polynomials

    Institute of Scientific and Technical Information of China (English)

    MAHai-cheng; HUSheng-biao

    2003-01-01

    In this paper, we show that there exist precisely W(A) Ferrers matrices F(c1, c2, …, cm) whose rook polynomial equals the rook polynomial of the Ferrers matrix F(b1, b2, …, bm), where A = {b1, b2 − 1, …, bm − m + 1} is a repeated set and W(A) is the weight of A.

  3. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  4. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute the inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  5. Connection matrices for ultradiscrete linear problems

    Energy Technology Data Exchange (ETDEWEB)

    Ormerod, Chris [School of Mathematics and Statistics F07, The University of Sydney, Sydney (Australia)

    2007-10-19

    We present theory outlining associated linear problems for ultradiscrete equations. The appropriate domain for these problems is the max-plus semiring. Our main result is that despite the restrictive nature of the max-plus semiring, it is still possible to define a theory of connection matrices analogous to that of Birkhoff and his school for systems of linear difference equations. We use such theory to provide evidence for the integrability of an ultradiscrete difference equation.

  6. Functional CLT for sample covariance matrices

    CERN Document Server

    Bai, Zhidong; Zhou, Wang; 10.3150/10-BEJ250

    2010-01-01

    Using Bernstein polynomial approximations, we prove the central limit theorem for linear spectral statistics of sample covariance matrices, indexed by a set of functions with continuous fourth order derivatives on an open interval including $[(1-\sqrt{y})^2,(1+\sqrt{y})^2]$, the support of the Marčenko–Pastur law. We also derive the explicit expressions for the asymptotic mean and covariance functions.
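
    The support $[(1-\sqrt{y})^2,(1+\sqrt{y})^2]$ is easy to observe numerically: for white data with aspect ratio $p/n = y$, the extreme eigenvalues of the sample covariance matrix settle near the Marčenko–Pastur edges (a simulation sketch, not related to the paper's CLT proof):

```python
import numpy as np

# Sample covariance of i.i.d. standard normal data with p/n = y = 0.25.
rng = np.random.default_rng(0)
n, p = 2000, 500
y = p / n
X = rng.standard_normal((n, p))
S = X.T @ X / n
evals = np.linalg.eigvalsh(S)

# Marchenko-Pastur support edges for identity population covariance.
lo, hi = (1 - np.sqrt(y)) ** 2, (1 + np.sqrt(y)) ** 2
```
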

  7. Index matrices towards an augmented matrix calculus

    CERN Document Server

    Atanassov, Krassimir T

    2014-01-01

    This book presents the very concept of an index matrix and its related augmented matrix calculus in a comprehensive form. It mostly illustrates the exposition with examples related to generalized nets and intuitionistic fuzzy sets, which are just two instances of an extremely wide array of possible application areas. The book contains the author's basic results on index matrices and some of the open problems in this area, with the aim of stimulating more researchers to start working on it.

  8. On the exponentials of some structured matrices

    Energy Technology Data Exchange (ETDEWEB)

    Ramakrishna, Viswanath; Costa, F [Department of Mathematical Sciences and Center for Signals, Systems and Communications, University of Texas at Dallas, PO Box 830688, Richardson, TX 75083 (United States)

    2004-12-03

    This paper provides explicit techniques to compute the exponentials of a variety of structured 4 x 4 matrices. The procedures are fully algorithmic and can be used to find the desired exponentials in closed form. With one exception, they require no spectral information about the matrix being exponentiated. They rely on a mixture of Lie theory and one particular Clifford algebra isomorphism. These can be extended, in some cases, to higher dimensions when combined with techniques such as Givens rotations.
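
    One instance of such a closed form, for the simplest structured case where the matrix is $aI + bJ$ with $J^2 = -I$ (the paper covers a wider variety of 4 x 4 structures via Lie theory and a Clifford algebra isomorphism; this sketch only illustrates the flavor of the result):

```python
import numpy as np

# J is a 4x4 real structure matrix with J @ J == -I (two rotation blocks).
J = np.kron(np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]]))
a, b = 0.3, 1.1
M = a * np.eye(4) + b * J

# Because J^2 = -I, the exponential series collapses exactly as for a
# complex number: exp(aI + bJ) = e^a (cos(b) I + sin(b) J).
closed = np.exp(a) * (np.cos(b) * np.eye(4) + np.sin(b) * J)

# Brute-force Taylor series of exp(M) for comparison.
E, term = np.eye(4), np.eye(4)
for k in range(1, 30):
    term = term @ M / k
    E = E + term
```

    As in the paper, the closed form needs no spectral information about M: only the algebraic relation $J^2 = -I$ is used.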

  9. The spectrum of kernel random matrices

    CERN Document Server

    Karoui, Noureddine El

    2010-01-01

    We place ourselves in the setting of high-dimensional statistical inference where the number of variables $p$ in a dataset of interest is of the same order of magnitude as the number of observations $n$. We consider the spectrum of certain kernel random matrices, in particular $n\times n$ matrices whose $(i,j)$th entry is $f(X_i'X_j/p)$ or $f(\Vert X_i-X_j\Vert^2/p)$ where $p$ is the dimension of the data, and $X_i$ are independent data vectors. Here $f$ is assumed to be a locally smooth function. The study is motivated by questions arising in statistics and computer science where these matrices are used to perform, among other things, nonlinear versions of principal component analysis. Surprisingly, we show that in high dimensions, and for the models we analyze, the problem becomes essentially linear, which is at odds with heuristics sometimes used to justify the usage of these methods. The analysis also highlights certain peculiarities of models widely studied in random matrix theory and raises some questio...

  10. Quark flavor mixings from hierarchical mass matrices

    Energy Technology Data Exchange (ETDEWEB)

    Verma, Rohit [Chinese Academy of Sciences, Institute of High Energy Physics, Beijing (China); Rayat Institute of Engineering and Information Technology, Ropar (India); Zhou, Shun [Chinese Academy of Sciences, Institute of High Energy Physics, Beijing (China); Peking University, Center for High Energy Physics, Beijing (China)

    2016-05-15

    In this paper, we extend the Fritzsch ansatz of quark mass matrices while retaining their hierarchical structures and show that the main features of the Cabibbo-Kobayashi-Maskawa (CKM) matrix V, including |V_us| ≅ |V_cd|, |V_cb| ≅ |V_ts| and |V_ub|/|V_cb| < |V_td|/|V_ts|, can be well understood. This agreement is observed especially when the mass matrices have non-vanishing (1, 3) and (3, 1) off-diagonal elements. The phenomenological consequences of these for the allowed texture content and gross structural features of 'hierarchical' quark mass matrices are addressed from a model-independent perspective, under the assumption of factorizable phases. The approximate and analytical expressions of the CKM matrix elements are derived, and a detailed analysis reveals that such structures are in good agreement with the observed quark flavor mixing angles and the CP-violating phase at the 1σ level, calling for further investigation of the realization of these structures from a top-down perspective. (orig.)

  11. Scattering Matrices and Conductances of Leaky Tori

    Science.gov (United States)

    Pnueli, A.

    1994-04-01

    Leaky tori are two-dimensional surfaces that extend to infinity but which have finite area. It is a tempting idea to regard them as models of mesoscopic systems connected to very long leads. Because of this analogy, scattering matrices on leaky tori are potentially interesting, and indeed the scattering matrix on one such object, "the" leaky torus, was studied by M. Gutzwiller, who showed that it has chaotic behavior. M. Antoine, A. Comtet and S. Ouvry generalized Gutzwiller's result by calculating the scattering matrix in the presence of a constant magnetic field B perpendicular to the surface. Motivated by these results, we generalize them further. We define scattering matrices for spinless electrons on a general leaky torus in the presence of a constant magnetic field "perpendicular" to the surface. From the properties of these matrices we show the following: (a) For integer values of B, the T_ij (the transition probability from cusp i to cusp j), and hence also the Büttiker conductances of the surfaces, are B-independent (this cannot be interpreted as a kind of Aharonov-Bohm effect, since a magnetic force is acting on the electrons). (b) The Wigner time-delay is a monotonically increasing function of B.

  12. Identification of irradiated insects: Alterations in total proteins of irradiated adults of the confused flour beetle, Tribolium confusum DuVal (Coleoptera: Tenebrionidae)

    Energy Technology Data Exchange (ETDEWEB)

    Ignatowicz, S. [Szkola Glowna Gospodarstwa Wiejskiego, Warsaw (Poland)

    1996-12-31

    The results of electrophoretic separation of proteins (SDS-PAGE) revealed several protein bands from homogenate samples of irradiated and control adults of the confused flour beetle, Tribolium confusum DuVal. However, it was not possible to detect protein bands showing any shifts or separations between the irradiated and control beetles. A remarkable reduction in the content of total proteins in irradiated adults was noted. Irradiation treatment altered the electrophoretic patterns and densities of proteins: the density of low molecular weight proteins (<16 kDa) increased, while that of high molecular weight proteins (>23 kDa) decreased. The alterations in total proteins of adult confused flour beetles are related to the dose of gamma radiation and the time elapsed after treatment. However, these clear changes in the electrophoretic pattern of protein fractions cannot be used to distinguish irradiated insects from non-irradiated ones, as the alterations are not specific to irradiation. (author). 28 refs, 6 figs, 2 tabs.

  13. On the Construction of Jointly Superregular Lower Triangular Toeplitz Matrices

    DEFF Research Database (Denmark)

    Hansen, Jonas; Østergaard, Jan; Kudahl, Johnny

    2016-01-01

    Superregular matrices have the property that all of their submatrices which can be full rank are so. Lower triangular superregular matrices are useful for, e.g., maximum distance separable convolutional codes as well as for (sequential) network codes. In this work, we provide an explicit design of superregular and product preserving jointly superregular matrices, and extend our explicit constructions of superregular matrices to these cases. Jointly superregular matrices are necessary to achieve optimal decoding capabilities for the case of codes with a rate lower than 1/2, and the product preserving...
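
    The superregularity property (every submatrix that can be full rank is full rank) can be checked by brute force for small lower triangular Toeplitz matrices. The sketch below interprets "can be full rank" probabilistically: a submatrix can be full rank if it becomes nonsingular for random coefficients with the same zero pattern. All function names are illustrative, not from the paper:

```python
from itertools import combinations
from fractions import Fraction
import random

def lt_toeplitz(coeffs):
    """Build an n x n lower triangular Toeplitz matrix from its first column."""
    n = len(coeffs)
    return [[coeffs[i - j] if i >= j else 0 for j in range(n)] for i in range(n)]

def det(m):
    """Determinant by cofactor expansion (fine for the tiny sizes used here)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def is_superregular(coeffs, trials=3):
    """Probabilistic superregularity check: every square submatrix that is
    nonsingular for random coefficients with the same support must also be
    nonsingular for the given coefficients."""
    n = len(coeffs)
    M = lt_toeplitz(coeffs)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[M[r][c] for c in cols] for r in rows]
                if det(sub) != 0:
                    continue
                # Singular submatrix: forced by the zero pattern, or a failure?
                for _ in range(trials):
                    rnd = [Fraction(random.randint(1, 10**6)) for _ in coeffs]
                    R = lt_toeplitz(rnd)
                    if det([[R[r][c] for c in cols] for r in rows]) != 0:
                        return False  # generically full rank, singular here
    return True
```

    For example, the first column [1, 1, 2] yields a superregular 3 x 3 matrix, while [1, 1, 1] does not (its lower-left 2 x 2 submatrix is singular despite being generically full rank).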

  14. The modern origin of matrices and their applications

    Science.gov (United States)

    Debnath, L.

    2014-05-01

    This paper deals with the modern development of matrices, linear transformations, quadratic forms and their applications to geometry and mechanics, eigenvalues, eigenvectors and characteristic equations with applications. Included are the representations of real and complex numbers, and quaternions by matrices, and isomorphism in order to show that matrices form a ring in abstract algebra. Some special matrices, including Hilbert's matrix, Toeplitz's matrix, Pauli's and Dirac's matrices in quantum mechanics, and Einstein's Pythagorean formula are discussed to illustrate diverse applications of matrix algebra. Included also is a modern piece of information that puts mathematics, science and mathematics education professionals at the forefront of advanced study and research on linear algebra and its applications.
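
    As an illustration of the abstract's point about representing quaternions by matrices, the sketch below uses one common 2 x 2 complex-matrix representation and checks that quaternion multiplication is preserved. The convention chosen is one of several in use, and the names are illustrative:

```python
def quat_to_matrix(a, b, c, d):
    """Represent the quaternion a + bi + cj + dk as a 2x2 complex matrix
    (one common convention)."""
    return [[complex(a, b), complex(c, d)],
            [complex(-c, d), complex(a, -b)]]

def matmul2(X, Y):
    """2x2 complex matrix product."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def quat_mul(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)
```

    Multiplying two quaternions and then mapping to matrices gives the same result as mapping first and multiplying the matrices, i.e., the representation is a ring homomorphism, which is the isomorphism idea the abstract alludes to.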

  15. Synthesis and spectroscopy of a series of substituted N-confused tetraphenylporphyrins.

    Science.gov (United States)

    Shaw, Janet L; Garrison, Shana A; Alemán, Elvin A; Ziegler, Christopher J; Modarelli, David A

    2004-10-29

    A series of N-confused tetraphenylporphyrins (H(2)NCTPPs) with substituents on either the para- or the 3,5-positions of the meso phenyl rings were prepared using Lindsey conditions. Both electron-withdrawing and electron-donating groups were chosen in order to probe the effects of peripheral substitution on the properties of the macrocycles. The series includes 5,10,15,20-tetra-(4-R-phenyl) N-confused porphyrins (where R = bromo (1), iodo (2), cyano (3), methoxy (4), 2',5'-dimethoxyphenyl (5), or ethynyl (6)) and 5,10,15,20-(3,5-di-tert-butylphenyl) N-confused porphyrin (7). Absorption and steady-state fluorescence measurements were carried out, and quantum yields were measured for all compounds in both dichloromethane (CH(2)Cl(2)) and dimethylacetamide (DMAc).

  16. [Reliability and validity of the pain assessment tool in confused older adults--IADIC].

    Science.gov (United States)

    Saurin, Gislaine; Crossetti, Maria da Graça Oliveira

    2013-12-01

    This is a methodological study whose objective was to conduct the pre-test and validate the psychometric properties of the Pain Assessment Tool in Confused Elderly (IADIC) in the immediate postoperative period. The sample consisted of 104 patients aged 60 years and over in the immediate postoperative period, admitted to the recovery room after surgery in a general hospital of Rio Grande do Sul, Brazil. Data were collected from April to August 2012. Patients included in the study were diagnosed as confused after application of the Confusion Assessment Method (CAM) and had a mean age of 71.51 +/- 8.81 years. The pre-test did not require modifications to the instrument. Upon validation of the psychometric properties, internal consistency showed a Cronbach's alpha of 0.88, and reproducibility assessed by the intraclass coefficient was 0.838. Internal consistency and reproducibility gave IADIC the validity and reliability for use in Brazil.
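
    Cronbach's alpha, the internal-consistency statistic reported above, is computed from k item-score columns as alpha = k/(k-1) * (1 - sum of item variances / variance of the respondents' totals). The sketch below uses made-up data, not the study's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item,
    aligned by respondent).  Uses population variances."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

    Two identical items give alpha = 1 (perfect consistency); as items diverge, alpha drops toward (and below) zero.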

  17. Deterministic sensing matrices in compressive sensing: a survey.

    Science.gov (United States)

    Nguyen, Thu L N; Shin, Yoan

    2013-01-01

    Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
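
    A standard figure of merit when comparing sensing matrices, deterministic or random, is the mutual coherence of their columns. The sketch below computes it directly; it is an illustrative implementation, not code from the survey:

```python
import math

def mutual_coherence(A):
    """Mutual coherence of a sensing matrix A (given as a list of rows):
    the largest absolute normalized inner product between distinct columns.
    Lower coherence gives better sparse-recovery guarantees."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    norms = [math.sqrt(sum(x * x for x in c)) for c in cols]
    mu = 0.0
    for j in range(n):
        for k in range(j + 1, n):
            dot = sum(a * b for a, b in zip(cols[j], cols[k]))
            mu = max(mu, abs(dot) / (norms[j] * norms[k]))
    return mu
```

    Deterministic constructions typically aim to push the coherence of an m x n matrix toward the Welch lower bound sqrt((n - m) / (m * (n - 1))).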

  18. Matrices with restricted entries and q-analogues of permutations

    CERN Document Server

    Lewis, Joel Brewster; Morales, Alejandro H; Panova, Greta; Sam, Steven V; Zhang, Yan

    2010-01-01

    We study the functions that count matrices of given rank over a finite field with specified positions equal to zero. We show that these matrices are $q$-analogues of permutations with certain restricted values. We obtain a simple closed formula for the number of invertible matrices with zero diagonal, a $q$-analogue of derangements, and a curious relationship between invertible skew-symmetric matrices and invertible symmetric matrices with zero diagonal. In addition, we provide recursions to enumerate matrices and symmetric matrices with zero diagonal by rank, and we frame some of our results in the context of Lie theory. Finally, we provide a brief exposition of polynomiality results for enumeration questions related to those mentioned, and give several open questions.
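
    The enumeration of invertible matrices with zero diagonal, the q-analogue of derangements mentioned above, can be checked by brute force for tiny cases; the function names below are illustrative:

```python
from itertools import product

def det_mod(M, q):
    """Determinant modulo a prime q via cofactor expansion (small n only)."""
    if len(M) == 1:
        return M[0][0] % q
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det_mod(minor, q)
    return total % q

def count_invertible_zero_diag(n, q):
    """Count n x n matrices over F_q with zero diagonal and nonzero
    determinant, by exhausting all q^(n^2 - n) off-diagonal fillings
    (prime q, tiny n only)."""
    free = n * n - n  # number of off-diagonal positions
    count = 0
    for vals in product(range(q), repeat=free):
        it = iter(vals)
        M = [[0 if i == j else next(it) for j in range(n)] for i in range(n)]
        if det_mod(M, q) != 0:
            count += 1
    return count
```

    For n = 2 the matrix [[0, b], [c, 0]] is invertible exactly when b and c are both nonzero, giving (q - 1)^2 matrices; such small cases make handy sanity checks against closed formulas.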

  19. On image pre-processing for PIV of single- and two-phase flows over reflecting objects

    Energy Technology Data Exchange (ETDEWEB)

    Deen, Niels G.; Willems, Paul; Sint Annaland, Martin van; Kuipers, J.A.M.; Lammertink, Rob G.H.; Kemperman, Antoine J.B.; Wessling, Matthias; Meer, Walter G.J. van der [University of Twente, Faculty of Science and Technology, Institute of Mechanics, Processes and Control Twente (IMPACT), Enschede (Netherlands)

    2010-08-15

    A novel image pre-processing scheme for PIV of single- and two-phase flows over reflecting objects which does not require the use of additional hardware is discussed. The approach for single-phase flow consists of image normalization and intensity stretching followed by background subtraction. For two-phase flow, an additional masking step is added after the background subtraction. The effectiveness of the pre-processing scheme is shown for two examples: PIV of single-phase flow in spacer-filled channels and two-phase flow in these channels. The pre-processing scheme increased the displacement peak detectability significantly and produced high quality vector fields, without the use of additional hardware. (orig.)
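
    The three-step single-phase scheme (normalization, intensity stretching, background subtraction) can be sketched for grayscale frames stored as flat lists. The percentile choices, parameter names, and background model below are assumptions for illustration, not the authors' implementation:

```python
def _percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, int(round(pct / 100.0 * (len(s) - 1)))))
    return s[idx]

def preprocess(frames, low_pct=1.0, high_pct=99.0):
    """Pre-process a sequence of grayscale frames (flat pixel lists of equal
    length): (1) normalize each frame to unit mean intensity, (2) stretch
    intensities linearly so [low_pct, high_pct] maps onto [0, 1], and
    (3) subtract the per-pixel minimum over the sequence as a static
    background estimate."""
    # 1) normalization by mean frame intensity
    normed = []
    for f in frames:
        mean = sum(f) / len(f)
        normed.append([p / mean for p in f])
    # 2) percentile-based intensity stretching with clipping
    stretched = []
    for f in normed:
        lo, hi = _percentile(f, low_pct), _percentile(f, high_pct)
        span = (hi - lo) or 1.0
        stretched.append([min(1.0, max(0.0, (p - lo) / span)) for p in f])
    # 3) background subtraction (per-pixel minimum over all frames)
    background = [min(f[i] for f in stretched) for i in range(len(frames[0]))]
    return [[p - b for p, b in zip(f, background)] for f in stretched]
```

    For two-phase flow, the paper adds a masking step after the background subtraction; a mask would simply zero out pixels belonging to the second phase before correlation.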

  20. A simpler method of preprocessing MALDI-TOF MS data for differential biomarker analysis: stem cell and melanoma cancer studies

    Directory of Open Access Journals (Sweden)

    Tong Dong L

    2011-09-01

    Introduction: Raw spectral data from matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) MS profiling techniques usually contains complex information not readily providing biological insight into disease. The association of identified features within raw data to a known peptide is extremely difficult. Data preprocessing to remove uncertainty characteristics in the data is normally required before performing any further analysis. This study proposes an alternative yet simple solution to preprocess raw MALDI-TOF-MS data for identification of candidate marker ions. Two in-house MALDI-TOF-MS data sets from two different sample sources (melanoma serum and cord blood plasma) are used in our study. Method: Raw MS spectral profiles were preprocessed using the proposed approach to identify peak regions in the spectra. The preprocessed data was then analysed using bespoke machine learning algorithms for data reduction and ion selection. Using the selected ions, an ANN-based predictive model was constructed to examine the predictive power of these ions for classification. Results: Our model identified 10 candidate marker ions for both data sets. These ion panels achieved over 90% classification accuracy on blind validation data. Receiver operating characteristics analysis was performed, and the area under the curve for the melanoma and cord blood classifiers was 0.991 and 0.986, respectively. Conclusion: The results suggest that our data preprocessing technique removes unwanted characteristics of the raw data, while preserving the predictive components of the data. Ion identification analysis can be carried out using MALDI-TOF-MS data with the proposed data preprocessing technique coupled with bespoke algorithms for data reduction and ion selection.
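
    Peak-region identification of the kind the Method section describes can be sketched as a simple thresholding pass over a 1-D spectrum. This is a deliberately minimal stand-in, not the paper's algorithm; the default threshold is an assumption:

```python
def find_peak_regions(intensities, threshold=None):
    """Identify candidate peak regions in a 1-D spectrum: contiguous runs of
    points strictly above a threshold (default: mean + one standard
    deviation).  Returns (start, end) index pairs, inclusive."""
    n = len(intensities)
    if threshold is None:
        mean = sum(intensities) / n
        sd = (sum((x - mean) ** 2 for x in intensities) / n) ** 0.5
        threshold = mean + sd
    regions, start = [], None
    for i, y in enumerate(intensities):
        if y > threshold and start is None:
            start = i
        elif y <= threshold and start is not None:
            regions.append((start, i - 1))
            start = None
    if start is not None:
        regions.append((start, n - 1))
    return regions
```

    Each returned region would then be summarized (e.g., by its apex m/z and intensity) before the data-reduction and ion-selection stages the abstract mentions.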