WorldWideScience

Sample records for preprocessing confusion matrices

  1. Automated pre-processing and multivariate vibrational spectra analysis software for rapid results in clinical settings

    Science.gov (United States)

    Bhattacharjee, T.; Kumar, P.; Fillipe, L.

    2018-02-01

    Vibrational spectroscopy, especially FTIR and Raman, has shown enormous potential in disease diagnosis, especially in cancers. Their potential for detecting varied pathological conditions is regularly reported. However, to prove their applicability in clinics, large multi-center, multi-national studies need to be undertaken, and these will result in enormous amounts of data. A parallel effort to develop analytical methods, including user-friendly software that can quickly pre-process data and subject them to the required multivariate analysis, is warranted in order to obtain results in real time. This study reports a MATLAB based script that can automatically import data, preprocess spectra (interpolation, derivatives, normalization), and then carry out Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) of the first 10 PCs; all with a single click. The software has been verified on data obtained from cell lines, animal models, and in vivo patient datasets, and gives results comparable to Minitab 16 software. The software can import a variety of file formats: .asc, .txt, .xls, and many others. Options to ignore noisy data, plot all possible graphs with PCA factors 1 to 5, and save loading factors, confusion matrices and other parameters are also present. The software can provide results for a dataset of 300 spectra within 0.01 s. We believe that the software will be vital not only in clinical trials using vibrational spectroscopic data, but also for obtaining rapid results when these tools get translated into clinics.
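
    The pipeline described above (spectral preprocessing, PCA to 10 components, LDA, confusion matrices) can be illustrated outside MATLAB as well. Below is a minimal Python sketch of a comparable workflow using scikit-learn; the random placeholder spectra and parameter choices are illustrative assumptions, not the paper's implementation.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import confusion_matrix

        # placeholder data: rows are preprocessed spectra, y holds class labels
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 900))
        y = rng.integers(0, 2, size=300)

        pipeline = make_pipeline(
            StandardScaler(),                 # stand-in for the normalization step
            PCA(n_components=10),             # first 10 principal components
            LinearDiscriminantAnalysis(),     # LDA on the PC scores
        )
        predicted = cross_val_predict(pipeline, X, y, cv=5)
        print(confusion_matrix(y, predicted))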

  2. Confusing confusability

    DEFF Research Database (Denmark)

    Starrfelt, Randi; Lindegaard, Martin; Bundesen, Claus

    2015-01-01

    The effect of letter confusability on reading has received increasing attention over the last decade. Confusability scores for individual letters, derived from older psychophysical studies, have been used to calculate summed confusability scores for whole words, and effects of this variable...... on normal and alexic reading have been reported. On this basis, letter confusability is now increasingly controlled for in stimulus selection. In this commentary, we try to clarify what letter confusability scores represent and discuss several problems with the way this variable has been treated...... in neuropsychological research. We conclude that it is premature to control for this variable when selecting stimuli in studies of reading and alexia. Although letter confusability may play a role in (impaired) reading, it remains to be determined how this measure should be calculated, and what effect it may have...

  3. Chain of matrices, loop equations and topological recursion

    CERN Document Server

    Orantin, Nicolas

    2009-01-01

    Random matrices are used in fields as different as the study of multi-orthogonal polynomials or the enumeration of discrete surfaces. Both of them are based on the study of a matrix integral. However, this term can be confusing since the definition of a matrix integral in these two applications is not the same. These two definitions, perturbative and non-perturbative, are discussed in this chapter as well as their relation. The so-called loop equations satisfied by integrals over random matrices coupled in a chain are discussed, as well as their recursive solution in the perturbative case when the matrices are Hermitian.

  4. Evaluation of the confusion matrix method in the validation of an automated system for measuring feeding behaviour of cattle.

    Science.gov (United States)

    Ruuska, Salla; Hämäläinen, Wilhelmiina; Kajava, Sari; Mughal, Mikaela; Matilainen, Pekka; Mononen, Jaakko

    2018-03-01

    The aim of the present study was to empirically evaluate the confusion matrix method in device validation. We compared the confusion matrix method to linear regression and error indices in the validation of a device measuring feeding behaviour of dairy cattle. In addition, we studied how to extract additional information on classification errors with confusion probabilities. The data consisted of 12 h behaviour measurements from five dairy cows; feeding and other behaviour were detected simultaneously with a device and from video recordings. The resulting 216 000 pairs of classifications were used to construct confusion matrices and calculate performance measures. In addition, hourly durations of each behaviour were calculated and the accuracy of measurements was evaluated with linear regression and error indices. All three validation methods agreed when the behaviour was detected very accurately or inaccurately. Otherwise, in the intermediate cases, the confusion matrix method and error indices produced relatively concordant results, but the linear regression method often disagreed with them. Our study supports the use of confusion matrix analysis in validation since it is robust to any data distribution and type of relationship, it makes a stringent evaluation of validity, and it offers extra information on the type and sources of errors. Copyright © 2018 Elsevier B.V. All rights reserved.
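
    As a rough illustration of the confusion matrix approach to device validation, the following Python sketch pairs second-by-second behaviour labels from a device with labels from a video reference and derives common performance measures; the labels shown are hypothetical, not the study's data.

        import numpy as np

        def confusion_counts(device, video):
            # 2x2 confusion matrix for binary labels (1 = feeding, 0 = other behaviour)
            device, video = np.asarray(device), np.asarray(video)
            tp = int(np.sum((device == 1) & (video == 1)))
            fp = int(np.sum((device == 1) & (video == 0)))
            fn = int(np.sum((device == 0) & (video == 1)))
            tn = int(np.sum((device == 0) & (video == 0)))
            return tp, fp, fn, tn

        # hypothetical paired classifications (device vs. video reference)
        device = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
        video  = [1, 0, 0, 0, 1, 0, 1, 0, 0, 1]
        tp, fp, fn, tn = confusion_counts(device, video)
        sensitivity = tp / (tp + fn)      # feeding detected when feeding occurred
        specificity = tn / (tn + fp)      # other behaviour correctly rejected
        precision = tp / (tp + fp)
        print(sensitivity, specificity, precision)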

  5. A comparison of the effects of filtering and sensorineural hearing loss on patterns of consonant confusions.

    Science.gov (United States)

    Wang, M D; Reed, C M; Bilger, R C

    1978-03-01

    It has been found that listeners with sensorineural hearing loss who show similar patterns of consonant confusions also tend to have similar audiometric profiles. The present study determined whether normal listeners, presented with filtered speech, would produce consonant confusions similar to those previously reported for hearing-impaired listeners. Consonant confusion matrices were obtained from eight normal-hearing subjects for four sets of CV and VC nonsense syllables presented under six high-pass and six low-pass filtering conditions. Patterns of consonant confusion for each condition were described using phonological features in sequential information analysis. Severe low-pass filtering produced consonant confusions comparable to those of listeners with high-frequency hearing loss. Severe high-pass filtering gave results comparable to those of patients with flat or rising audiograms. Mild filtering resulted in confusion patterns comparable to those of listeners with essentially normal hearing. An explanation in terms of the spectrum and level of the speech and the configuration of the individual listener's audiogram is given.

  6. Predicting consonant recognition and confusions in normal-hearing listeners

    DEFF Research Database (Denmark)

    Zaar, Johannes; Dau, Torsten

    2017-01-01

    , Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102, 2892–2905]. The model was evaluated based on the extensive consonant perception data set provided by Zaar and Dau [(2015). J. Acoust. Soc. Am. 138, 1253–1267], which was obtained with normal-hearing listeners using 15 consonant-vowel combinations...... confusion groups. The large predictive power of the proposed model suggests that adaptive processes in the auditory preprocessing in combination with a cross-correlation based template-matching back end can account for some of the processes underlying consonant perception in normal-hearing listeners....... The proposed model may provide a valuable framework, e.g., for investigating the effects of hearing impairment and hearing-aid signal processing on phoneme recognition....

  7. Effective Feature Preprocessing for Time Series Forecasting

    DEFF Research Database (Denmark)

    Zhao, Junhua; Dong, Zhaoyang; Xu, Zhao

    2006-01-01

    Time series forecasting is an important area in data mining research. Feature preprocessing techniques have significant influence on forecasting accuracy, therefore are essential in a forecasting model. Although several feature preprocessing techniques have been applied in time series forecasting...... performance in time series forecasting. It is demonstrated in our experiment that, effective feature preprocessing can significantly enhance forecasting accuracy. This research can be a useful guidance for researchers on effectively selecting feature preprocessing techniques and integrating them with time...... series forecasting models....

  8. Facilitating Watermark Insertion by Preprocessing Media

    Directory of Open Access Journals (Sweden)

    Matt L. Miller

    2004-10-01

    Full Text Available There are several watermarking applications that require the deployment of a very large number of watermark embedders. These applications often have severe budgetary constraints that limit the computation resources that are available. Under these circumstances, only simple embedding algorithms can be deployed, which have limited performance. In order to improve performance, we propose preprocessing the original media. It is envisaged that this preprocessing occurs during content creation and has no budgetary or computational constraints. Preprocessing combined with simple embedding creates a watermarked Work, the performance of which exceeds that of simple embedding alone. However, this performance improvement is obtained without any increase in the computational complexity of the embedder. Rather, the additional computational burden is shifted to the preprocessing stage. A simple example of this procedure is described and experimental results confirm our assertions.

  9. Retinal Image Preprocessing: Background and Noise Segmentation

    Directory of Open Access Journals (Sweden)

    Usman Akram

    2012-09-01

    Full Text Available Retinal images are used for the automated screening and diagnosis of diabetic retinopathy. The retinal image quality must be improved for the detection of features and abnormalities, and for this purpose preprocessing of retinal images is vital. In this paper, we present a novel automated approach for preprocessing of colored retinal images. The proposed technique improves the quality of the input retinal image by separating the background and noisy areas from the overall image. It consists of coarse segmentation and fine segmentation. The standard retinal image databases Diaretdb0, Diaretdb1, DRIVE and STARE are used to test the validity of our preprocessing technique. The experimental results show the validity of the proposed preprocessing technique.

  10. "G.P.S Matrices" programme: A method to improve the mastery level of social science students in matrices operations

    Science.gov (United States)

    Lee, Ken Voon

    2013-04-01

    The purpose of this action research was to increase the mastery level of Form Five Social Science students in Tawau II National Secondary School in the operations of addition, subtraction and multiplication of matrices in Mathematics. A total of 30 students were involved. Preliminary findings through the analysis of pre-test results and a questionnaire identified the main problem: the students felt confused about applying the principles of matrix operations when performing these operations. Therefore, an action research was conducted using an intervention programme called "G.P.S Matrices" to overcome the problem. This programme was divided into three phases. The 'Gift of Matrices' phase aimed at forming matrix teaching aids. The second and third phases were 'Positioning the Elements of Matrices' and 'Strengthening the Concept of Matrices'. These two phases were aimed at increasing the students' understanding of and memory for the principles of matrix operations. The third phase was also aimed at creating an interesting learning environment. A comparison between the results of the pre-test and post-test showed a remarkable improvement in students' performance after implementing the programme. In addition, the analysis of interview findings also indicated positive feedback on the changes in students' attitude, particularly in the aspect of students' level of understanding. Moreover, the students' memory also improved following the use of the concrete matrix teaching aids created in phase one. Teachers also felt encouraged when a conducive learning environment was created through the students' presentation activity held in the third phase. Furthermore, students were voluntarily involved in these student-centred activities. In conclusion, the research findings showed an increase in the mastery level of students in these three matrix operations, and thus the objective of the research was achieved.

  11. The Effect of Preprocessing on Arabic Document Categorization

    Directory of Open Access Journals (Sweden)

    Abdullah Ayedh

    2016-04-01

    Full Text Available Preprocessing is one of the main components in a conventional document categorization (DC) framework. This paper aims to highlight the effect of preprocessing tasks on the efficiency of the Arabic DC system. In this study, three classification techniques are used, namely, naive Bayes (NB), k-nearest neighbor (KNN), and support vector machine (SVM). Experimental analysis on Arabic datasets reveals that preprocessing techniques have a significant impact on the classification accuracy, especially given the complicated morphological structure of the Arabic language. Choosing appropriate combinations of preprocessing tasks provides significant improvement in the accuracy of document categorization, depending on the feature size and classification technique. The findings of this study show that the SVM technique outperformed the KNN and NB techniques. The SVM technique achieved a micro-F1 value of 96.74% using the combination of normalization and stemming as preprocessing tasks.
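
    The evaluation setup (text preprocessing, an SVM classifier, and micro-F1 scoring) can be sketched in Python with scikit-learn. The tiny document list and the simple character-normalization function below are placeholders, not the datasets or the stemmer used in the study.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import f1_score

        def normalize_ar(text):
            # stand-in for the normalization step (unify some Arabic letter forms)
            return text.replace("أ", "ا").replace("إ", "ا").replace("آ", "ا")

        # placeholder corpus and labels
        docs = ["نص رياضي", "خبر اقتصادي", "مقال سياسي", "تقرير رياضي"] * 10
        labels = ["sport", "economy", "politics", "sport"] * 10

        clf = make_pipeline(TfidfVectorizer(preprocessor=normalize_ar), LinearSVC())
        predicted = cross_val_predict(clf, docs, labels, cv=5)
        print("micro-F1:", f1_score(labels, predicted, average="micro"))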

  12. Data preprocessing in data mining

    CERN Document Server

    García, Salvador; Herrera, Francisco

    2015-01-01

    Data Preprocessing for Data Mining addresses one of the most important issues within the well-known Knowledge Discovery from Data process. Data taken directly from the source will likely have inconsistencies and errors and, most importantly, will not be ready to be considered for a data mining process. Furthermore, the increasing amount of data in recent science, industry and business applications calls for more complex tools to analyze it. Thanks to data preprocessing, it is possible to convert the impossible into possible, adapting the data to fulfill the input demands of each data mining algorithm. Data preprocessing includes the data reduction techniques, which aim at reducing the complexity of the data, detecting or removing irrelevant and noisy elements from the data. This book is intended to review the tasks that fill the gap between the data acquisition from the source and the data mining process. A comprehensive look from a practical point of view, including basic concepts and surveying t...

  13. An Analysis of the Max-Min Texture Measure.

    Science.gov (United States)

    1982-01-01

    Excerpt from the report's list of tables: D1–D2 Confusion Matrices for Scene A (PANC, IR); D3–D4 Confusion Matrices for Scene B (PANC, IR); D5–D6 Confusion Matrices for Scene C (PANC, IR); D7–D8 Confusion Matrices for Scene E (PANC, IR); D9–D10 Confusion Matrices for Scene H (PANC, IR).

  14. Confusion

    Science.gov (United States)

    ... suddenly or there are other symptoms, such as: cold or clammy skin, dizziness or feeling faint, fast pulse, fever, headache, slow or rapid breathing, or uncontrolled shivering. Also call 911 if: confusion has come on ...

  15. Micro-Analyzer: automatic preprocessing of Affymetrix microarray data.

    Science.gov (United States)

    Guzzi, Pietro Hiram; Cannataro, Mario

    2013-08-01

    A current trend in genomics is the investigation of the cell mechanism using different technologies, in order to explain the relationship among genes, molecular processes and diseases. For instance, the combined use of gene-expression arrays and genomic arrays has been demonstrated as an effective instrument in clinical practice. Consequently, in a single experiment different kinds of microarrays may be used, resulting in the production of different types of binary data (images and textual raw data). The analysis of microarray data requires an initial preprocessing phase that makes raw data suitable for use on existing analysis platforms, such as the TIGR M4 (TM4) Suite. An additional challenge to be faced by emerging data analysis platforms is the ability to treat in a combined way those different microarray formats coupled with clinical data. In fact, the resulting integrated data may include both numerical and symbolic data (e.g. gene expression and SNPs regarding molecular data), as well as temporal data (e.g. the response to a drug, time to progression and survival rate) regarding clinical data. Raw data preprocessing is a crucial step in analysis but is often performed in a manual and error-prone way using different software tools. Thus novel, platform-independent, and possibly open source tools enabling the semi-automatic preprocessing and annotation of different microarray data are needed. The paper presents Micro-Analyzer (Microarray Analyzer), a cross-platform tool for the automatic normalization, summarization and annotation of Affymetrix gene expression and SNP binary data. It represents the evolution of the μ-CS tool, extending the preprocessing to SNP arrays that were not allowed in μ-CS. Micro-Analyzer is provided as a Java standalone tool and enables users to read, preprocess and analyse binary microarray data (gene expression and SNPs) by invoking the TM4 platform. It avoids: (i) the manual invocation of external tools (e.g. the Affymetrix Power

  16. Practical Secure Computation with Pre-Processing

    DEFF Research Database (Denmark)

    Zakarias, Rasmus Winther

    Secure Multiparty Computation has been divided between protocols best suited for binary circuits and protocols best suited for arithmetic circuits. With their MiniMac protocol in [DZ13], Damgård and Zakarias take an important step towards bridging these worlds with an arithmetic protocol tuned...... space for pre-processing material than computing the non-linear parts online (depends on the quality of the circuit of course). Surprisingly, even for our optimized AES-circuit this is not the case. We further improve the design of the pre-processing material and end up with only 10 megabytes of pre...... a protocol for small field arithmetic to do fast large integer multiplications. This is achieved by devising pre-processing material that allows the Toom-Cook multiplication algorithm to run between the parties with linear communication complexity. With this result computation on the CPU by the parties...

  17. Ensemble preprocessing of near-infrared (NIR) spectra for multivariate calibration

    International Nuclear Information System (INIS)

    Xu Lu; Zhou Yanping; Tang Lijuan; Wu Hailong; Jiang Jianhui; Shen Guoli; Yu Ruqin

    2008-01-01

    Preprocessing of raw near-infrared (NIR) spectral data is indispensable in multivariate calibration when the measured spectra are subject to significant noise, baselines and other undesirable factors. However, due to the lack of sufficient prior information and an incomplete knowledge of the raw data, NIR spectra preprocessing in multivariate calibration is still trial and error. How to select a proper method depends largely on both the nature of the data and the expertise and experience of the practitioners. This might limit the applications of multivariate calibration in many fields, where researchers are not very familiar with the characteristics of many preprocessing methods unique to chemometrics and have difficulty selecting the most suitable methods. Another problem is that many preprocessing methods, when used alone, might degrade the data in certain aspects or lose some useful information while improving certain qualities of the data. In order to tackle these problems, this paper proposes a new concept of data preprocessing, the ensemble preprocessing method, where partial least squares (PLS) models built on differently preprocessed data are combined by Monte Carlo cross validation (MCCV) stacked regression. Little or no prior information about the data and expertise are required. Moreover, fusion of the complementary information obtained by different preprocessing methods often leads to a more stable and accurate calibration model. The investigation of two real data sets has demonstrated the advantages of the proposed method
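
    The idea of combining PLS models built on differently preprocessed spectra can be sketched in Python as follows; this simplified version uses plain K-fold cross-validated predictions and an ordinary stacked linear regression rather than the paper's Monte Carlo cross validation, and the spectra are synthetic placeholders.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 200))                 # placeholder NIR spectra
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)

        def snv(X):                                    # standard normal variate
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        variants = {"raw": X, "snv": snv(X),
                    "sg1": savgol_filter(X, 11, 2, deriv=1)}   # first-derivative spectra

        # cross-validated predictions of each single-preprocessing PLS model
        Z = np.column_stack([
            cross_val_predict(PLSRegression(n_components=8), Xp, y, cv=10).ravel()
            for Xp in variants.values()
        ])
        stack = LinearRegression().fit(Z, y)           # stacked-regression combiner
        print(dict(zip(variants, stack.coef_.round(3))))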

  18. Inverse M-matrices and ultrametric matrices

    CERN Document Server

    Dellacherie, Claude; San Martin, Jaime

    2014-01-01

    The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.

  19. Evaluating the impact of image preprocessing on iris segmentation

    Directory of Open Access Journals (Sweden)

    José F. Valencia-Murillo

    2014-08-01

    Full Text Available Segmentation is one of the most important stages in iris recognition systems. In this paper, image preprocessing algorithms are applied in order to evaluate their impact on successful iris segmentation. The preprocessing algorithms are based on histogram adjustment, Gaussian filters and suppression of specular reflections in human eye images. The segmentation method introduced by Masek is applied to 199 images acquired under unconstrained conditions, belonging to the CASIA-irisV3 database, before and after applying the preprocessing algorithms. Then, the impact of the image preprocessing algorithms on the percentage of successful iris segmentation is evaluated by means of a visual inspection of the images, in order to determine whether the circumferences of the iris and pupil were detected correctly. An increase from 59% to 73% in the percentage of successful iris segmentation is obtained with an algorithm that combines elimination of specular reflections with a Gaussian filter having a 5x5 kernel. The results highlight the importance of a preprocessing stage as a previous step in order to improve performance during the edge detection and iris segmentation processes.
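
    A rough OpenCV sketch of the best-performing combination reported above (suppressing specular reflections, then applying a 5x5 Gaussian filter) is shown below; the threshold value, the inpainting step and the synthetic test image are assumptions for illustration, not the exact algorithms used in the paper.

        import numpy as np
        import cv2

        def preprocess_eye(bgr):
            # locate bright highlights and treat them as specular reflections
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            _, spec_mask = cv2.threshold(gray, 230, 255, cv2.THRESH_BINARY)
            # fill the reflections, then smooth with a 5x5 Gaussian kernel
            filled = cv2.inpaint(bgr, spec_mask, 3, cv2.INPAINT_TELEA)
            return cv2.GaussianBlur(filled, (5, 5), 0)

        # synthetic eye-like image with a fake specular spot
        img = np.random.default_rng(0).integers(0, 200, (280, 320, 3)).astype(np.uint8)
        img[100:110, 150:160] = 255
        print(preprocess_eye(img).shape)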

  20. Preprocessing Moist Lignocellulosic Biomass for Biorefinery Feedstocks

    Energy Technology Data Exchange (ETDEWEB)

    Neal Yancey; Christopher T. Wright; Craig Conner; J. Richard Hess

    2009-06-01

    Biomass preprocessing is one of the primary operations in the feedstock assembly system of a lignocellulosic biorefinery. Preprocessing is generally accomplished using industrial grinders to format biomass materials into a suitable biorefinery feedstock for conversion to ethanol and other bioproducts. Many factors affect machine efficiency and the physical characteristics of preprocessed biomass. For example, moisture content of the biomass as received from the point of production has a significant impact on overall system efficiency and can significantly affect the characteristics (particle size distribution, flowability, storability, etc.) of the size-reduced biomass. Many different grinder configurations are available on the market, each with advantages under specific conditions. Ultimately, the capacity and/or efficiency of the grinding process can be enhanced by selecting the grinder configuration that optimizes grinder performance based on moisture content and screen size. This paper discusses the relationships of biomass moisture with respect to preprocessing system performance and product physical characteristics and compares data obtained on corn stover, switchgrass, and wheat straw as model feedstocks during Vermeer HG 200 grinder testing. During the tests, grinder screen configuration and biomass moisture content were varied and tested to provide a better understanding of their relative impact on machine performance and the resulting feedstock physical characteristics and uniformity relative to each crop tested.

  1. Comparison of pre-processing methods for multiplex bead-based immunoassays.

    Science.gov (United States)

    Rausch, Tanja K; Schillert, Arne; Ziegler, Andreas; Lüking, Angelika; Zucht, Hans-Dieter; Schulz-Knappe, Peter

    2016-08-11

    High throughput protein expression studies can be performed using bead-based protein immunoassays, such as the Luminex® xMAP® technology. Technical variability is inherent to these experiments and may lead to systematic bias and reduced power. To reduce technical variability, data pre-processing is performed. However, no recommendations exist for the pre-processing of Luminex® xMAP® data. We compared 37 different data pre-processing combinations of transformation and normalization methods in 42 samples on 384 analytes obtained from a multiplex immunoassay based on the Luminex® xMAP® technology. We evaluated the performance of each pre-processing approach with 6 different performance criteria. Three of the performance criteria were graphical (plots); all plots were evaluated by 15 independent and blinded readers. Four different combinations of transformation and normalization methods performed well as pre-processing procedures for this bead-based protein immunoassay. The following combinations of transformation and normalization were suitable for pre-processing Luminex® xMAP® data in this study: weighted Box-Cox followed by quantile or robust spline normalization (rsn), asinh transformation followed by loess normalization, and Box-Cox followed by rsn.
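
    Two of the building blocks mentioned above, an asinh transformation and quantile normalization, can be written compactly in Python; the cofactor and the toy intensity matrix are illustrative assumptions, and ties are not handled in this minimal quantile-normalization sketch.

        import numpy as np

        def asinh_transform(mfi, cofactor=5.0):
            # variance-stabilizing transform of median fluorescence intensities
            return np.arcsinh(np.asarray(mfi, dtype=float) / cofactor)

        def quantile_normalize(X):
            # force every sample (row) to share the same intensity distribution
            ranks = X.argsort(axis=1).argsort(axis=1)
            mean_quantiles = np.sort(X, axis=1).mean(axis=0)
            return mean_quantiles[ranks]

        # toy matrix: 4 samples x 6 analytes
        mfi = np.array([[120, 340, 15,  990, 55, 210],
                        [100, 400, 20,  900, 60, 250],
                        [140, 310, 10, 1100, 50, 190],
                        [ 90, 380, 25,  950, 70, 230]])
        print(quantile_normalize(asinh_transform(mfi)).round(2))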

  2. Applying the concept of consumer confusion to healthcare: development and validation of a patient confusion model.

    Science.gov (United States)

    Gebele, Christoph; Tscheulin, Dieter K; Lindenmeier, Jörg; Drevs, Florian; Seemann, Ann-Kathrin

    2014-01-01

    As patient autonomy and consumer sovereignty increase, information provision is considered essential to decrease information asymmetries between healthcare service providers and patients. However, greater availability of third party information sources can have negative side effects. Patients can be confused by the nature, as well as the amount, of quality information when making choices among competing health care providers. Therefore, the present study explores how information may cause patient confusion and affect the behavioral intention to choose a health care provider. Based on a quota sample of German citizens (n = 198), the present study validates a model of patient confusion in the context of hospital choice. The study results reveal that perceived information overload, perceived similarity, and perceived ambiguity of health information impact the affective and cognitive components of patient confusion. Confused patients have a stronger inclination to hastily narrow down their set of possible decision alternatives. Finally, an empirical analysis reveals that the affective and cognitive components of patient confusion mediate perceived information overload, perceived similarity, and perceived ambiguity of information. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  3. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    Science.gov (United States)

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.

  4. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    Science.gov (United States)

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  5. Effects of preprocessing method on TVOC emission of car mat

    Science.gov (United States)

    Wang, Min; Jia, Li

    2013-02-01

    The effects of the mat preprocessing method on the total volatile organic compound (TVOC) emission of car mats are studied in this paper. An appropriate TVOC emission period for car mats is suggested. The emission factors for total volatile organic compounds from three kinds of new car mats are discussed. The car mats are preprocessed by washing, baking and ventilation. When car mats are preprocessed by washing, the TVOC emission for all samples tested is lower than with the other preprocessing methods. The TVOC emission is stable for a minimum of 4 days. The TVOC emitted from some samples may exceed 2500 μg/kg, but the TVOC emitted from washed polyamide (PA) and wool mats is less than 2500 μg/kg. The emission factors of total volatile organic compounds (TVOC) are experimentally investigated for the different preprocessing methods. The air temperature in the environmental chamber and the water temperature used for washing are important factors influencing the emission of car mats.

  6. Confused or not Confused?: Disentangling Brain Activity from EEG Data Using Bidirectional LSTM Recurrent Neural Networks.

    Science.gov (United States)

    Ni, Zhaoheng; Yuksel, Ahmet Cem; Ni, Xiuyan; Mandel, Michael I; Xie, Lei

    2017-08-01

    Brain fog, also known as confusion, is one of the main reasons for low performance in the learning process or in any kind of daily task that involves and requires thinking. Detecting confusion in a human's mind in real time is a challenging and important task that can be applied to online education, driver fatigue detection and so on. In this paper, we apply Bidirectional LSTM Recurrent Neural Networks to classify students' confusion while watching online course videos from EEG data. The results show that the Bidirectional LSTM model achieves state-of-the-art performance compared with other machine learning approaches, and shows strong robustness as evaluated by cross-validation. We can predict whether or not a student is confused with an accuracy of 73.3%. Furthermore, we find that the most important feature for detecting brain confusion is the gamma 1 wave of the EEG signal. Our results suggest that machine learning is a potentially powerful tool to model and understand brain activity.
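
    A minimal Keras sketch of a bidirectional LSTM classifier over EEG sequences is given below, assuming TensorFlow is available; the array shapes and the random placeholder data are assumptions, not the EEG dataset used in the paper.

        import numpy as np
        import tensorflow as tf

        # placeholder EEG clips: (samples, timesteps, features), label 1 = confused
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 100, 11)).astype("float32")
        y = rng.integers(0, 2, size=120)

        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(100, 11)),
            tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),  # forward + backward pass
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X, y, epochs=3, batch_size=16, validation_split=0.2, verbose=0)
        print(model.evaluate(X, y, verbose=0))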

  7. Preprocessing Algorithm for Deciphering Historical Inscriptions Using String Metric

    Directory of Open Access Journals (Sweden)

    Lorand Lehel Toth

    2016-07-01

    Full Text Available The article presents improvements in the preprocessing part of the deciphering method (in short, the preprocessing algorithm) for historical inscriptions of unknown origin. Glyphs used in historical inscriptions changed through time; therefore, various versions of the same script may contain different glyphs for each grapheme. The purpose of the preprocessing algorithm is to reduce the running time of the deciphering process by filtering out the less probable interpretations of the examined inscription. However, the first version of the preprocessing algorithm led to incorrect outcomes or no result in certain cases. Therefore, an improved version was developed to find the most similar words in the dictionary by applying the search conditions more accurately, while remaining computationally efficient. Moreover, a sophisticated similarity metric used to determine the possible meaning of the unknown inscription is introduced. The results of the evaluations are also detailed.

  8. Confusing the heterotic string

    Science.gov (United States)

    Benett, D.; Brene, N.; Mizrachi, Leah; Nielsen, H. B.

    1986-10-01

    A confusion mechanism is proposed as a global modification of the heterotic string model. It involves a confusion hypersurface across which the two E8's of the heterotic string are permuted. A remarkable numerical coincidence is found which prevents an inconsistency in the model. The low energy limit of this theory (after compactification) is typically invariant under one E8 only, thereby removing the shadow world from the original model.

  9. The 1996 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1996-01-01

    The codes are named 'the Pre-processing' codes, because they are designed to pre-process ENDF/B data, for later, further processing for use in applications. This is a modular set of computer codes, each of which reads and writes evaluated nuclear data in the ENDF/B format. Each code performs one or more independent operations on the data, as described below. These codes are designed to be computer independent, and are presently operational on every type of computer from large mainframe computer to small personal computers, such as IBM-PC and Power MAC. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  10. Confusing the heterotic string

    International Nuclear Information System (INIS)

    Benett, D.L.; Mizrachi, L.

    1986-01-01

    A confusion mechanism is proposed as a global modification of the heterotic string model. It involves a confusion hypersurface across which the two E8's of the heterotic string are permuted. A remarkable numerical coincidence is found which prevents an inconsistency in the model. The low energy limit of this theory (after compactification) is typically invariant under one E8 only, thereby removing the shadow world from the original model. (orig.)

  11. Confusing the heterotic string

    Energy Technology Data Exchange (ETDEWEB)

    Benett, D.L.; Brene, N.; Nielsen, H.B.; Mizrachi, L.

    1986-10-02

    A confusion mechanism is proposed as a global modification of the heterotic string model. It involves a confusion hypersurface across which the two E8's of the heterotic string are permuted. A remarkable numerical coincidence is found which prevents an inconsistency in the model. The low energy limit of this theory (after compactification) is typically invariant under one E8 only, thereby removing the shadow world from the original model.

  12. A survey of visual preprocessing and shape representation techniques

    Science.gov (United States)

    Olshausen, Bruno A.

    1988-01-01

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention).

  13. Psychoanalytic peregrinations. III: Confusion of tongues, psychoanalyst as translator.

    Science.gov (United States)

    Chessick, Richard D

    2002-01-01

    A variety of problems cause a confusion of tongues between the psychoanalyst and the patient. In this sense the psychoanalyst faces the same problems as the translator of a text from one language to another. Examples are given of confusion due to cultural differences, confusion due to translation differences among translators, confusion due to translator prejudice or ignorance, confusion due to ambiguous visual cues and images, and confusion due to an inherently ambiguous text. It is because of this unavoidable confusion that the humanistic sciences cannot in principle achieve the mathematical exactness of the natural sciences and should not be expected to do so or condemned because they do not.

  14. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on a certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent and that information, ideas, innovations etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.

  15. Compact Circuit Preprocesses Accelerometer Output

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1993-01-01

    Compact electronic circuit transfers dc power to, and preprocesses ac output of, accelerometer and associated preamplifier. Incorporated into accelerometer case during initial fabrication or retrofit onto commercial accelerometer. Made of commercial integrated circuits and other conventional components; made smaller by use of micrologic and surface-mount technology.

  16. Preprocessing of emotional visual information in the human piriform cortex.

    Science.gov (United States)

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  17. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction and face recognition, focusing on the related theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, on the recognition results obtained with different preprocessing methods. We choose the YCbCr color space for skin segmentation and integral projection for face location. We use erosion and dilation (opening and closing operations) and an illumination compensation method to preprocess the face images, and then apply a face recognition method based on kernel principal component analysis (KPCA); the experiments were carried out on a typical face database and implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel-based PCA method makes the extracted features represent the original image information better, thanks to its nonlinear feature extraction, and can thus obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results and hence different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the value of the power of the polynomial kernel function can affect the recognition result.
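
    The effect of the polynomial kernel's power on KPCA-based recognition can be probed with a short scikit-learn loop; the random placeholder "faces" below stand in for preprocessed face images, so the printed accuracies only demonstrate the mechanics, not the paper's results.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        # placeholder flattened, preprocessed grayscale faces and identity labels
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 32 * 32))
        y = rng.integers(0, 10, size=200)

        for degree in (2, 3, 4):       # the power of the polynomial kernel
            model = make_pipeline(
                KernelPCA(n_components=40, kernel="poly", degree=degree),
                KNeighborsClassifier(n_neighbors=1),
            )
            acc = cross_val_score(model, X, y, cv=5).mean()
            print(f"degree={degree}: accuracy={acc:.3f}")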

  18. An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Chenglong Yu

    2014-01-01

    Full Text Available As an advanced process detection technology, electrical impedance tomography (EIT) has been widely studied in industrial fields. However, EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and the lack of a general criterion to evaluate different preprocessing procedures. In this paper, an EIT data preprocessing method based on rooting all measured data is proposed and evaluated with two constructed indexes based on the rooted EIT measured data. By finding the optima of the two indexes, the proposed method can be applied to improve the EIT imaging spatial resolution. For a theoretical model, the optimal rooting values of the two indexes lie in [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analyzed in general terms. Preprocessing of the measured data is necessary and helpful for any imaging process, so the proposed method can be generally and widely used in imaging. Experimental results validate the two proposed indexes.

  19. Research on pre-processing of QR Code

    Science.gov (United States)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR codes can encode many kinds of information and have several advantages: large storage capacity, high reliability, high-speed reading from any direction, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR codes, this paper investigates pre-processing methods for QR codes (Quick Response Codes) and presents algorithms and results of image pre-processing for QR code recognition. The conventional approach is improved by modifying Sauvola's adaptive method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
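
    Adaptive binarization of a QR image against an uneven background can be sketched with scikit-image's Sauvola threshold; the window size, the k parameter and the synthetic gradient image are assumptions for illustration, not the paper's exact settings.

        import numpy as np
        from skimage.filters import threshold_sauvola

        def binarize_qr(gray, window_size=25, k=0.2):
            # a local Sauvola threshold handles uneven illumination better than a global one
            thresh = threshold_sauvola(gray, window_size=window_size, k=k)
            return (gray > thresh).astype(np.uint8) * 255

        # synthetic grayscale image with a background gradient plus noise
        rng = np.random.default_rng(0)
        gray = np.linspace(60, 180, 256)[None, :].repeat(256, axis=0)
        gray = gray + rng.normal(0, 5, gray.shape)
        print(np.unique(binarize_qr(gray)))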

  20. Optimization of miRNA-seq data preprocessing.

    Science.gov (United States)

    Tam, Shirley; Tsao, Ming-Sound; McPherson, John D

    2015-11-01

    The past two decades of microRNA (miRNA) research have solidified the role of these small non-coding RNAs as key regulators of many biological processes and promising biomarkers for disease. The concurrent development in high-throughput profiling technology has further advanced our understanding of the impact of their dysregulation on a global scale. Currently, next-generation sequencing is the platform of choice for the discovery and quantification of miRNAs. Despite this, there is no clear consensus on how the data should be preprocessed before conducting downstream analyses. Often overlooked, data preprocessing is an essential step in data analysis: the presence of unreliable features and noise can affect the conclusions drawn from downstream analyses. Using a spike-in dilution study, we evaluated the effects of several general-purpose aligners (BWA, Bowtie, Bowtie 2 and Novoalign) and normalization methods (counts-per-million, total count scaling, upper quartile scaling, Trimmed Mean of M-values, DESeq, linear regression, cyclic loess and quantile) with respect to the final miRNA count data distribution, variance, bias and accuracy of differential expression analysis. We make practical recommendations on the optimal preprocessing methods for the extraction and interpretation of miRNA count data from small RNA-sequencing experiments. © The Author 2015. Published by Oxford University Press.
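
    Two of the simpler normalization methods compared above, counts-per-million and upper-quartile scaling, can be expressed in a few lines of Python; the count matrix is a toy example and the rescaling convention for the upper-quartile factors is one common choice, not necessarily the exact one used in the paper.

        import numpy as np

        def counts_per_million(counts):
            # scale each library (column) so its counts sum to one million
            counts = np.asarray(counts, dtype=float)
            return counts / counts.sum(axis=0) * 1e6

        def upper_quartile_scale(counts):
            # scale each library by the 75th percentile of its nonzero counts
            counts = np.asarray(counts, dtype=float)
            uq = np.array([np.percentile(col[col > 0], 75) for col in counts.T])
            return counts / uq * uq.mean()

        # toy miRNA count matrix: rows = miRNAs, columns = libraries
        counts = np.array([[ 250,  90,  400],
                           [  10,   5,   20],
                           [3000, 900, 5000],
                           [   0,   2,    1]])
        print(counts_per_million(counts).round(1))
        print(upper_quartile_scale(counts).round(1))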

  1. Core Knowledge Confusions among University Students

    Science.gov (United States)

    Lindeman, Marjaana; Svedholm, Annika M.; Takada, Mikito; Lonnqvist, Jan-Erik; Verkasalo, Markku

    2011-01-01

    Previous studies have demonstrated that university students hold several paranormal beliefs and that paranormal beliefs can be best explained with core knowledge confusions. The aim of this study was to explore to what extent university students confuse the core ontological attributes of lifeless material objects (e.g. a house, a stone), living…

  2. Effect of packaging on physicochemical characteristics of irradiated pre-processed chicken

    International Nuclear Information System (INIS)

    Jiang Xiujie; Zhang Dongjie; Zhang Dequan; Li Shurong; Gao Meixu; Wang Zhidong

    2011-01-01

    To explore the effects of modified atmosphere packaging and antioxidants on the physicochemical characteristics of irradiated pre-processed chicken, antioxidants were first added to the pre-processed chicken, which was then packaged under ordinary (air), vacuum and gas packaging, respectively, and finally irradiated at a dose of 5 kGy. All samples were stored at 4 ℃. The pH, TBA, TVB-N and color deviation were evaluated after 0, 3, 7, 10, 14, 18 and 21 d of storage. The results showed that the pH value of vacuum-packaged pre-processed chicken with antioxidants increased with storage time, but not significantly among the different treatments. The TBA value also increased, but not significantly (P > 0.05), which indicated that vacuum packaging inhibited lipid oxidation. The TVB-N value increased with storage time; the TVB-N value of the vacuum-packaged samples reached 14.29 mg/100 g after 21 d of storage, which did not exceed the reference index for fresh meat. The a* value of the vacuum-packaged and oxygen-free-packaged pre-processed chicken increased significantly during storage (P > 0.05), and the chicken color remained bright red after 21 d of storage with vacuum packaging. It is concluded that vacuum packaging of irradiated pre-processed chicken is effective in preserving its physical and chemical properties during storage. (authors)

  3. Examination of Speed Contribution of Parallelization for Several Fingerprint Pre-Processing Algorithms

    Directory of Open Access Journals (Sweden)

    GORGUNOGLU, S.

    2014-05-01

    Full Text Available In the analysis of minutiae-based fingerprint systems, fingerprints need to be pre-processed. The pre-processing is carried out to enhance the quality of the fingerprint and to obtain more accurate minutiae points. Reducing the pre-processing time is important for identification and verification in real-time systems and especially for databases holding large amounts of fingerprint information. Parallel processing and parallel CPU computing can be considered as the distribution of processes over a multi-core processor. This is done by using parallel programming techniques. Reducing the execution time is the main objective in parallel processing. In this study, the pre-processing of a minutiae-based fingerprint system is implemented by parallel processing on multi-core computers using OpenMP and on a graphics processor using CUDA to improve execution time. The execution times and speedup ratios are compared with those of a single-core processor. The results show that by using parallel processing, execution time is substantially improved. The improvement ratios obtained for different pre-processing algorithms allowed us to make suggestions on the more suitable approaches for parallelization.
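
    The basic idea of distributing pre-processing work over CPU cores can be sketched in Python with the standard multiprocessing module (the study itself uses OpenMP and CUDA, not Python); the enhancement function below is a trivial stand-in for real fingerprint enhancement.

        from multiprocessing import Pool
        import numpy as np

        def enhance(fp):
            # stand-in for fingerprint pre-processing (normalization only)
            return (fp - fp.mean()) / (fp.std() + 1e-9)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            fingerprints = [rng.normal(size=(300, 300)) for _ in range(64)]
            with Pool() as pool:              # distribute images across CPU cores
                enhanced = pool.map(enhance, fingerprints)
            print(len(enhanced), enhanced[0].shape)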

  4. Normalization: A Preprocessing Stage

    OpenAIRE

    Patro, S. Gopal Krishna; Sahu, Kishore Kumar

    2015-01-01

    Normalization is a pre-processing stage for any type of problem. In particular, normalization plays an important role in fields such as soft computing and cloud computing, for manipulating data, e.g., scaling the range of data down or up before it is used in a further stage. There are many normalization techniques, namely Min-Max normalization, Z-score normalization and Decimal scaling normalization. By referring to these normalization techniques we are ...
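
    The three techniques named in the abstract translate directly into code; a minimal Python sketch follows, with an arbitrary sample vector.

        import numpy as np

        def min_max(x, new_min=0.0, new_max=1.0):
            x = np.asarray(x, dtype=float)
            return (x - x.min()) / (x.max() - x.min()) * (new_max - new_min) + new_min

        def z_score(x):
            x = np.asarray(x, dtype=float)
            return (x - x.mean()) / x.std()

        def decimal_scaling(x):
            x = np.asarray(x, dtype=float)
            j = np.floor(np.log10(np.abs(x).max())) + 1   # smallest j with max|x| / 10**j < 1
            return x / 10 ** j

        data = np.array([120.0, 55.0, 870.0, 300.0, 15.0])
        print(min_max(data), z_score(data), decimal_scaling(data), sep="\n")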

  5. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI.

    Directory of Open Access Journals (Sweden)

    Nathan W Churchill

    Full Text Available BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices using data-driven metrics of task prediction and spatial reproducibility. Compared to standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.
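
    The general pattern of scoring many candidate pipelines and keeping the best one can be sketched with scikit-learn; this toy version scores pipelines only by cross-validated prediction accuracy, whereas the framework above also optimizes spatial reproducibility, and the data are random placeholders.

        from itertools import product
        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 300))            # placeholder per-scan features
        y = rng.integers(0, 2, size=80)           # placeholder task labels

        # candidate pipelines: every combination of optional preprocessing steps
        scalers = {"none": "passthrough", "zscore": StandardScaler()}
        reducers = {"none": "passthrough", "pca20": PCA(n_components=20)}

        scores = {}
        for (s_name, s), (r_name, r) in product(scalers.items(), reducers.items()):
            pipe = Pipeline([("scale", s), ("reduce", r),
                             ("clf", LogisticRegression(max_iter=1000))])
            scores[(s_name, r_name)] = cross_val_score(pipe, X, y, cv=5).mean()

        best = max(scores, key=scores.get)
        print(best, round(scores[best], 3))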

  6. Reliable RANSAC Using a Novel Preprocessing Model

    Directory of Open Access Journals (Sweden)

    Xiaoyan Wang

    2013-01-01

    Full Text Available Geometric assumption and verification with RANSAC has become a crucial step in establishing correspondences between local features, owing to its wide applications in biomedical feature analysis and vision computing. However, conventional RANSAC is very time-consuming due to redundant sampling, especially when dealing with numerous matching pairs. This paper presents a novel preprocessing model to extract a reduced set of reliable correspondences from the initial matching dataset. Both geometric model generation and verification are carried out on this reduced set, which leads to considerable speedups. Afterwards, this paper proposes a reliable RANSAC framework using the preprocessing model, which was implemented and verified using Harris and SIFT features, respectively. Compared with traditional RANSAC, experimental results show that our method is more efficient.
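
    The following Python sketch shows the general pattern: a cheap preprocessing filter first shrinks the candidate set, and a minimal RANSAC loop then fits a model on the reduced set. The line model, the thresholds and the synthetic "correspondences" are illustrative assumptions rather than the paper's Harris/SIFT setup.

        import numpy as np

        def ransac_line(points, n_iters=200, tol=1.0, seed=0):
            # minimal RANSAC: fit y = a*x + b, keep the model with the most inliers
            rng = np.random.default_rng(seed)
            best_model, best_inliers = None, np.array([], dtype=int)
            for _ in range(n_iters):
                i, j = rng.choice(len(points), size=2, replace=False)
                (x1, y1), (x2, y2) = points[i], points[j]
                if x1 == x2:
                    continue
                a = (y2 - y1) / (x2 - x1)
                b = y1 - a * x1
                residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
                inliers = np.flatnonzero(residuals < tol)
                if len(inliers) > len(best_inliers):
                    best_model, best_inliers = (a, b), inliers
            return best_model, best_inliers

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 100, 300)
        points = np.column_stack([x, 2.0 * x + 5 + rng.normal(0, 0.5, 300)])
        points[::5] += rng.uniform(-50, 50, (60, 2))        # gross outliers

        # hypothetical cheap preprocessing check that discards unlikely correspondences
        keep = np.abs(points[:, 1] - 2.1 * points[:, 0] - 4) < 15
        model, inliers = ransac_line(points[keep])
        print(model, len(inliers), "of", int(keep.sum()))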

  7. Averaging operations on matrices

    Indian Academy of Sciences (India)

    2014-07-03

    Jul 3, 2014 ... Role of positive definite matrices: in diffusion tensor imaging, 3 × 3 pd matrices model water flow at each voxel of a brain scan; in elasticity, 6 × 6 pd matrices model stress tensors; in machine learning, n × n pd matrices occur as kernel matrices. Tanvi Jain, Averaging operations on matrices ...

  8. Fusion or confusion in obsessive compulsive disorder.

    Science.gov (United States)

    O'Connor, Kieron; Aardema, Frederick

    2003-08-01

    Inferential confusion occurs when a person mistakes an imagined possibility for a real probability and might account for some types of thought-action and other fusions reported in obsessive-compulsive disorder. Inferential confusion could account for the ego-dystonic nature of obsessions and their recurrent nature, since the person acts "as if" an imagined aversive inference is probable and tries unsuccessfully to modify this imaginary probability in reality. The clinical implications of the inferential confusion model focus primarily on the role of the imagination in obsessive-compulsive disorder rather than on cognitive beliefs.

  9. Effect of microaerobic fermentation in preprocessing fibrous lignocellulosic materials.

    Science.gov (United States)

    Alattar, Manar Arica; Green, Terrence R; Henry, Jordan; Gulca, Vitalie; Tizazu, Mikias; Bergstrom, Robby; Popa, Radu

    2012-06-01

    Amending soil with organic matter is common in agricultural and logging practices. Such amendments have benefits to soil fertility and crop yields. These benefits may be increased if material is preprocessed before introduction into soil. We analyzed the efficiency of microaerobic fermentation (MF), also referred to as Bokashi, in preprocessing fibrous lignocellulosic (FLC) organic materials using varying produce amendments and leachate treatments. Adding produce amendments increased leachate production and fermentation rates and decreased the biological oxygen demand of the leachate. Continuously draining leachate without returning it to the fermentors led to acidification and decreased concentrations of polysaccharides (PS) in leachates. PS fragmentation and the production of soluble metabolites and gases stabilized in fermentors in about 2-4 weeks. About 2 % of the carbon content was lost as CO(2). PS degradation rates, upon introduction of processed materials into soil, were similar to unfermented FLC. Our results indicate that MF is insufficient for adequate preprocessing of FLC material.

  10. Perceptual Confusions Among Consonants, Revisited: Cross-Spectral Integration of Phonetic-Feature Information and Consonant Recognition

    DEFF Research Database (Denmark)

    Christiansen, Thomas Ulrich; Greenberg, Steven

    2012-01-01

The perceptual basis of consonant recognition was experimentally investigated through a study of how information associated with phonetic features (Voicing, Manner, and Place of Articulation) combines across the acoustic-frequency spectrum. The speech signals, 11 Danish consonants embedded...... in Consonant + Vowel + Liquid syllables, were partitioned into 3/4-octave bands (“slits”) centered at 750 Hz, 1500 Hz, and 3000 Hz, and presented individually and in two- or three-slit combinations. The amount of information transmitted (IT) was calculated from consonant-confusion matrices for each feature...... the bands are essentially independent in terms of decoding this feature. Because consonant recognition and Place decoding are highly correlated (correlation coefficient r² = 0.99), these results imply that the auditory processes underlying consonant recognition are not strictly linear. This may account...
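The abstract computes information transmitted (IT) from consonant-confusion matrices. As a minimal illustration of that calculation, the sketch below estimates the mutual information between stimulus and response from a toy count matrix; the numbers are invented and the feature-wise grouping used in the study is not reproduced.

```python
import numpy as np

def information_transmitted(conf):
    """Mutual information I(stimulus; response) in bits from a count matrix."""
    p = conf / conf.sum()
    px = p.sum(axis=1, keepdims=True)     # stimulus marginal
    py = p.sum(axis=0, keepdims=True)     # response marginal
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Toy consonant-confusion matrix: rows = presented, columns = reported.
conf = np.array([[38,  2,  0],
                 [ 5, 30,  5],
                 [ 1,  4, 35]])
it_bits = information_transmitted(conf)
p_stim = conf.sum(axis=1) / conf.sum()
h_stim = -np.sum(p_stim * np.log2(p_stim))    # stimulus entropy
print(f"IT = {it_bits:.2f} bits, relative IT = {it_bits / h_stim:.2f}")
```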

  11. Performance of Pre-processing Schemes with Imperfect Channel State Information

    DEFF Research Database (Denmark)

    Christensen, Søren Skovgaard; Kyritsi, Persa; De Carvalho, Elisabeth

    2006-01-01

Pre-processing techniques have several benefits when the CSI is perfect. In this work we investigate three linear pre-processing filters, assuming imperfect CSI caused by noise degradation and channel temporal variation. Results indicate that the LMMSE filter achieves the lowest BER and the highest SINR when the CSI is perfect, whereas the simple matched filter may be a good choice when the CSI is imperfect. Additionally, the results give insight into the inherent trade-off between robustness against CSI imperfections and spatial focusing ability.
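As a rough illustration of the trade-off discussed above, the sketch below builds matched-filter and regularized (MMSE-style) transmit filters from a noisy channel estimate and compares the resulting SINR. The filter definitions and the CSI error model are generic textbook forms chosen for illustration, not necessarily those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr, sigma2 = 4, 4, 0.1
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Imperfect CSI: estimated channel = true channel + estimation error (assumed model).
err = 0.3 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)) / np.sqrt(2)
H_hat = H + err

W_mf = H_hat.conj().T                                                 # matched filter
W_mmse = H_hat.conj().T @ np.linalg.inv(H_hat @ H_hat.conj().T + sigma2 * np.eye(nr))

for name, W in [("MF", W_mf), ("MMSE", W_mmse)]:
    Weff = W / np.linalg.norm(W)              # normalize total transmit power
    G = H @ Weff                              # effective channel seen by receivers
    interference = np.abs(G - np.diag(np.diag(G))) ** 2
    sinr = np.abs(np.diag(G)) ** 2 / (interference.sum(axis=1) + sigma2)
    print(name, "mean SINR [dB]:", round(10 * np.log10(sinr.mean()), 2))
```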

  12. The 1989 ENDF pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.; McLaughlin, P.K.

    1989-12-01

    This document summarizes the 1989 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. The codes are available from the IAEA Nuclear Data Section, free of charge upon request. (author)

  13. (Con)fusing contours

    NARCIS (Netherlands)

    Lier, R.J. van; Wit, T.C.J. de; Koning, A.R.

    2005-01-01

    We have created patterns in which illusory Kanizsa squares are positioned on top of a background grid of bars. When the illusory contours and physical contours are misaligned, the resulting percept appears to be rather confusing (van Lier et al, 2004 Perception 33 Supplement, 77). Observers often

  14. New indicator for optimal preprocessing and wavelength selection of near-infrared spectra

    NARCIS (Netherlands)

    Skibsted, E. T. S.; Boelens, H. F. M.; Westerhuis, J. A.; Witte, D. T.; Smilde, A. K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing

  15. VanderLaan Circulant Type Matrices

    Directory of Open Access Journals (Sweden)

    Hongyan Pan

    2015-01-01

Full Text Available Circulant matrices have become satisfactory tools in control methods for modern complex systems. In the paper, VanderLaan circulant type matrices are presented, which include VanderLaan circulant, left circulant, and g-circulant matrices. The nonsingularity of these special matrices is established using the surprising properties of VanderLaan numbers. The exact determinants of VanderLaan circulant type matrices are given by structuring transformation matrices, determinants of well-known tridiagonal matrices, and tridiagonal-like matrices. The explicit inverses of these special matrices are obtained by structuring transformation matrices, inverses of known tridiagonal matrices, and quasi-tridiagonal matrices. Three kinds of norms and lower bounds for the spread of VanderLaan circulant and left circulant matrices are given separately. We also obtain the spectral norm of the VanderLaan g-circulant matrix.
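The determinant and inverse results above are specific to VanderLaan circulants, but the general mechanism they exploit is that circulant matrices are diagonalized by the discrete Fourier transform. A short sketch of that general fact (not the paper's formulas):

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([4.0, 1.0, 0.0, 1.0])      # first column of the circulant matrix
C = circulant(c)

# Eigenvalues of a circulant matrix are the DFT of its first column, so the
# determinant and inverse follow from FFTs instead of generic O(n^3) routines.
lam = np.fft.fft(c)
det_fft = np.prod(lam).real
inv_first_col = np.fft.ifft(1.0 / lam).real
C_inv = circulant(inv_first_col)

print(np.isclose(det_fft, np.linalg.det(C)))   # True
print(np.allclose(C @ C_inv, np.eye(4)))       # True
```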

  16. The confusion technique untangled: its theoretical rationale and preliminary classification.

    Science.gov (United States)

    Otani, A

    1989-01-01

    This article examines the historical development of Milton H. Erickson's theoretical approach to hypnosis using confusion. Review of the literature suggests that the Confusion Technique, in principle, consists of a two-stage "confusion-restructuring" process. The article also attempts to categorize several examples of confusion suggestions by seven linguistic characteristics: (1) antonyms, (2) homonyms, (3) synonyms, (4) elaboration, (5) interruption, (6) echoing, and (7) uncommon words. The Confusion Technique is an important yet little studied strategy developed by Erickson. More work is urged to investigate its nature and properties.

  17. An Intelligent Clustering Based Methodology for Confusable ...

    African Journals Online (AJOL)

Journal of the Nigerian Association of Mathematical Physics ... The system assigns severity levels to patients in all the clusters. ... The system compares favorably with diagnoses arrived at by experienced physicians and also provides patients' level of severity in each confusable disease and the degree of confusability of ...

  18. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    International Nuclear Information System (INIS)

    Palit, Mousumi; Tudu, Bipan; Bhattacharyya, Nabarun; Dutta, Ankur; Dutta, Pallab Kumar; Jana, Arun; Bandyopadhyay, Rajib; Chatterjee, Anutosh

    2010-01-01

In an electronic tongue, preprocessing of raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.

  19. Value of Distributed Preprocessing of Biomass Feedstocks to a Bioenergy Industry

    Energy Technology Data Exchange (ETDEWEB)

    Christopher T Wright

    2006-07-01

Biomass preprocessing is one of the primary operations in the feedstock assembly system and the front-end of a biorefinery. Its purpose is to chop, grind, or otherwise format the biomass into a suitable feedstock for conversion to ethanol and other bioproducts. Many variables such as equipment cost and efficiency, and feedstock moisture content, particle size, bulk density, compressibility, and flowability affect the location and implementation of this unit operation. Previous conceptual designs show this operation to be located at the front-end of the biorefinery. However, data are presented that show distributed preprocessing at the field-side or in a fixed preprocessing facility can provide significant cost benefits by producing a higher value feedstock with improved handling, transporting, and merchandising potential. In addition, data supporting the preferential deconstruction of feedstock materials due to their bio-composite structure identify the potential for significant improvements in equipment efficiencies and compositional quality upgrades. These data are collected from full-scale low and high capacity hammermill grinders with various screen sizes. Multiple feedstock varieties with a range of moisture values were used in the preprocessing tests. The comparative values of the different grinding configurations, feedstock varieties, and moisture levels are assessed through post-grinding analysis of the different particle fractions separated with a medium-scale forage particle separator and a Rototap separator. The results show that distributed preprocessing produces a material that has bulk flowable properties and fractionation benefits that can improve the ease of transporting, handling and conveying the material to the biorefinery and improve the biochemical and thermochemical conversion processes.

  20. Pre-processing for Triangulation of Probabilistic Networks

    NARCIS (Netherlands)

    Bodlaender, H.L.; Koster, A.M.C.A.; Eijkhof, F. van den; Gaag, L.C. van der

    2001-01-01

The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum

  1. Comparative performance evaluation of transform coding in image pre-processing

    Science.gov (United States)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

We are in the midst of a communication transformation that drives the development and dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Research interest in image processing techniques has grown with the demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers highlight techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. They investigate the performance of different image transform coding schemes used in pre-processing, comparing their effectiveness, necessary and sufficient conditions, properties and implementation complexity. Motivated by prior advancements in image processing techniques, the researchers compare various contemporary image pre-processing frameworks, namely Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform, on performance. The paper demonstrates the potential of the Integer Wavelet Transform as an efficient pre-processing scheme.
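Of the transform-coding schemes compared above, the singular value decomposition is the easiest to illustrate compactly: keeping only the largest singular values gives a low-rank approximation of the image. A minimal sketch on a synthetic image follows (illustrative only; the paper's test images and the integer wavelet transform are not reproduced here).

```python
import numpy as np

def svd_compress(img, k):
    """Keep the k largest singular values of a grayscale image block."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Synthetic 64x64 "image": a smooth gradient plus a bright square.
x = np.linspace(0, 1, 64)
img = np.outer(x, x)
img[20:40, 20:40] += 0.5

for k in (2, 8, 32):
    rel_err = np.linalg.norm(img - svd_compress(img, k)) / np.linalg.norm(img)
    print(f"rank {k:2d}: relative reconstruction error {rel_err:.3f}")
```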

  2. The Likelihood of Confusion in the United State Ninth Circuit and the doctrine of Confusable Marks in the Andean Tribunal

    Directory of Open Access Journals (Sweden)

    Francisco José Cabrera Perdomo

    2016-06-01

Full Text Available The article analyses the most important cases within the jurisdiction of California regarding trademark infringement and its likelihood-of-confusion standard. Finally, it compares the conclusions with the confusable-marks doctrine in the Andean Community’s recent cases addressing the issue.

  3. The confusion mechanism and the heterotic string

    International Nuclear Information System (INIS)

    Bennett, D.L.; Mizrachi, L.; Nielsen, H.B.; Brene, N.

    1987-01-01

The confusion mechanism introduced earlier in connection with the gauge glass model is here discussed in the context of field theories involving symmetry groups which have outer automorphisms. The heterotic string with an E8 x E8 symmetry may be influenced by confusion with the result that only one E8 group survives and the shadow world disappears. (orig.)

  4. The confusion mechanism and the heterotic string

    International Nuclear Information System (INIS)

    Bennett, D.L.; Nielsen, H.B.; Brene, N.; Mizrachi, L.

    1986-01-01

The confusion mechanism introduced earlier in connection with the gauge glass model is here discussed in the context of field theories involving symmetry groups which have outer automorphisms. The heterotic string with an E8 x E8 symmetry may be influenced by confusion with the result that only one E8 group survives and the shadow world disappears. (author)

  5. A New Indicator for Optimal Preprocessing and Wavelengths Selection of Near-Infrared Spectra

    NARCIS (Netherlands)

    Skibsted, E.; Boelens, H.F.M.; Westerhuis, J.A.; Witte, D.T.; Smilde, A.K.

    2004-01-01

    Preprocessing of near-infrared spectra to remove unwanted, i.e., non-related spectral variation and selection of informative wavelengths is considered to be a crucial step prior to the construction of a quantitative calibration model. The standard methodology when comparing various preprocessing

  6. Classification-based comparison of pre-processing methods for interpretation of mass spectrometry generated clinical datasets

    Directory of Open Access Journals (Sweden)

    Hoefsloot Huub CJ

    2009-05-01

    Full Text Available Abstract Background Mass spectrometry is increasingly being used to discover proteins or protein profiles associated with disease. Experimental design of mass-spectrometry studies has come under close scrutiny and the importance of strict protocols for sample collection is now understood. However, the question of how best to process the large quantities of data generated is still unanswered. Main challenges for the analysis are the choice of proper pre-processing and classification methods. While these two issues have been investigated in isolation, we propose to use the classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Results Two in-house generated clinical SELDI-TOF MS datasets are used in this study as an example of high throughput mass-spectrometry data. We perform a systematic comparison of two commonly used pre-processing methods as implemented in Ciphergen ProteinChip Software and in the Cromwell package. With respect to reproducibility, Ciphergen and Cromwell pre-processing are largely comparable. We find that the overlap between peaks detected by either Ciphergen ProteinChip Software or Cromwell is large. This is especially the case for the more stringent peak detection settings. Moreover, similarity of the estimated intensities between matched peaks is high. We evaluate the pre-processing methods using five different classification methods. Classification is done in a double cross-validation protocol using repeated random sampling to obtain an unbiased estimate of classification accuracy. No pre-processing method significantly outperforms the other for all peak detection settings evaluated. Conclusion We use classification of patient samples as a clinically relevant benchmark for the evaluation of pre-processing methods. Both pre-processing methods lead to similar classification results on an ovarian cancer and a Gaucher disease dataset. However, the settings for pre-processing
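The evaluation protocol described above (classification inside a double, i.e. nested, cross-validation with repeated random sampling) can be sketched generically with scikit-learn. The data below are synthetic stand-ins for pre-processed peak intensities, and the classifier and settings are illustrative rather than those used in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, ShuffleSplit, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Surrogate "peak intensity" data standing in for pre-processed MS profiles.
X, y = make_classification(n_samples=120, n_features=60, n_informative=8,
                           random_state=0)

# Inner loop tunes the classifier; the outer loop (repeated random sampling)
# gives an approximately unbiased accuracy estimate: double cross-validation.
inner = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                     param_grid={"svc__C": [0.1, 1, 10]}, cv=5)
outer = ShuffleSplit(n_splits=20, test_size=0.25, random_state=0)
scores = cross_val_score(inner, X, y, cv=outer)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```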

  7. Impact of data transformation and preprocessing in supervised ...

    African Journals Online (AJOL)

    Impact of data transformation and preprocessing in supervised learning ... Nowadays, the ideas of integrating machine learning techniques in power system has ... The proposed algorithm used Python-based split train and k-fold model ...

  8. Predator confusion is sufficient to evolve swarming behaviour.

    Science.gov (United States)

    Olson, Randal S; Hintze, Arend; Dyer, Fred C; Knoester, David B; Adami, Christoph

    2013-08-06

    Swarming behaviours in animals have been extensively studied owing to their implications for the evolution of cooperation, social cognition and predator-prey dynamics. An important goal of these studies is discerning which evolutionary pressures favour the formation of swarms. One hypothesis is that swarms arise because the presence of multiple moving prey in swarms causes confusion for attacking predators, but it remains unclear how important this selective force is. Using an evolutionary model of a predator-prey system, we show that predator confusion provides a sufficient selection pressure to evolve swarming behaviour in prey. Furthermore, we demonstrate that the evolutionary effect of predator confusion on prey could in turn exert pressure on the structure of the predator's visual field, favouring the frontally oriented, high-resolution visual systems commonly observed in predators that feed on swarming animals. Finally, we provide evidence that when prey evolve swarming in response to predator confusion, there is a change in the shape of the functional response curve describing the predator's consumption rate as prey density increases. Thus, we show that a relatively simple perceptual constraint--predator confusion--could have pervasive evolutionary effects on prey behaviour, predator sensory mechanisms and the ecological interactions between predators and prey.

  9. RTI: Court and Case Law--Confusion by Design

    Science.gov (United States)

    Daves, David P.; Walker, David W.

    2012-01-01

    Professional confusion, as well as case law confusion, exists concerning the fidelity and integrity of response to intervention (RTI) as a defensible procedure for identifying children as having a specific learning disability (SLD) under the Individuals with Disabilities Education Act (IDEA). Division is generated because of conflicting mandates…

  10. NOMENCLATURAL CONFUSION OF SOME SPECIES OF ANDROGRAPHIS WALL

    OpenAIRE

    Balu, S.; Alagesaboopathi, C.

    1995-01-01

Andrographis paniculata Nees, Andrographis alata Nees and Andrographis lineate Nees. (Acanthaceae) are important medicinal plants useful in the treatment of various human ailments. Nomenclatural confusion prevails with regard to these medicinal plants in Indian medical literature and vernacular nomenclature. This nomenclatural confusion has been clarified in the present paper.

  11. Thinning: A Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

Full Text Available In this paper we study the methodology employed for preprocessing archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for an Optical Character Recognition System for the Brahmi script. The image preprocessing techniques covered in this paper include the thinning method. We also analyze the results obtained by the pixel-level processing algorithms.
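As a minimal illustration of the thinning step mentioned above, the sketch below skeletonizes a toy binary glyph with scikit-image; the specific thinning algorithm applied to the Brahmi images is not given in the record, so a standard skeletonization routine stands in for it.

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary "stroke": a thick plus sign standing in for a scanned glyph.
img = np.zeros((40, 40), dtype=bool)
img[18:23, 5:35] = True
img[5:35, 18:23] = True

thin = skeletonize(img)   # reduce strokes to one-pixel-wide skeletons
print("foreground pixels before/after thinning:", img.sum(), thin.sum())
```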

  12. Infinite matrices and sequence spaces

    CERN Document Server

    Cooke, Richard G

    2014-01-01

    This clear and correct summation of basic results from a specialized field focuses on the behavior of infinite matrices in general, rather than on properties of special matrices. Three introductory chapters guide students to the manipulation of infinite matrices, covering definitions and preliminary ideas, reciprocals of infinite matrices, and linear equations involving infinite matrices.From the fourth chapter onward, the author treats the application of infinite matrices to the summability of divergent sequences and series from various points of view. Topics include consistency, mutual consi

  13. Complex Wedge-Shaped Matrices: A Generalization of Jacobi Matrices

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, Iveta; Plešinger, M.

    2015-01-01

    Roč. 487, 15 December (2015), s. 203-219 ISSN 0024-3795 R&D Projects: GA ČR GA13-06684S Keywords : eigenvalues * eigenvector * wedge-shaped matrices * generalized Jacobi matrices * band (or block) Krylov subspace methods Subject RIV: BA - General Mathematics Impact factor: 0.965, year: 2015

  14. Boosting reversible pushdown machines by preprocessing

    DEFF Research Database (Denmark)

    Axelsen, Holger Bock; Kutrib, Martin; Malcher, Andreas

    2016-01-01

    languages, whereas for reversible pushdown automata the accepted family of languages lies strictly in between the reversible deterministic context-free languages and the real-time deterministic context-free languages. Moreover, it is shown that the computational power of both types of machines...... is not changed by allowing the preprocessing sequential transducer to work irreversibly. Finally, we examine the closure properties of the family of languages accepted by such machines....

  15. On reflectionless equi-transmitting matrices

    Directory of Open Access Journals (Sweden)

    Pavel Kurasov

    2014-01-01

    Full Text Available Reflectionless equi-transmitting unitary matrices are studied in connection to matching conditions in quantum graphs. All possible such matrices of size 6 are described explicitly. It is shown that such matrices form 30 six-parameter families intersected along 12 five-parameter families closely connected to conference matrices.

  16. Pre-processing by data augmentation for improved ellipse fitting.

    Science.gov (United States)

    Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J

    2018-01-01

    Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement of ellipse fitting is then demonstrated empirically in real-world application of 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic error or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
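The paper's definition of data-point eccentricity is not given in the abstract, so the sketch below only illustrates the overall recipe: score each point, replicate the highest-scoring fraction, and refit. The score used here is an invented placeholder, and the fit is a plain algebraic conic fit, not the authors' method.

```python
import numpy as np

def fit_conic(x, y):
    # Algebraic conic fit: smallest right singular vector of the design matrix.
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]          # coefficients [a, b, c, d, e, f]

def augment(x, y, score, frac=0.25, copies=3):
    # Replicate the fraction of points with the highest score; the paper's
    # per-point "eccentricity" would supply the score in practice.
    keep = score >= np.quantile(score, 1 - frac)
    return (np.concatenate([x, np.tile(x[keep], copies)]),
            np.concatenate([y, np.tile(y[keep], copies)]))

rng = np.random.default_rng(0)
t = rng.uniform(0, np.pi / 2, 80)           # short arc of an eccentric ellipse
x = 5 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = 2 * np.sin(t) + rng.normal(0, 0.05, t.size)

score = np.abs(x) / 5                       # hypothetical proxy score
xa, ya = augment(x, y, score)
print("plain fit:    ", np.round(fit_conic(x, y), 3))
print("augmented fit:", np.round(fit_conic(xa, ya), 3))
```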

  17. A Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Wouters, Jan; Moonen, Marc

    2015-10-01

    Listening to music is still one of the more challenging aspects of using a cochlear implant (CI) for most users. Simple musical structures, a clear rhythm/beat, and lyrics that are easy to follow are among the top factors contributing to music appreciation for CI users. Modifying the audio mix of complex music potentially improves music enjoyment in CI users. A stereo music preprocessing scheme is described in which vocals, drums, and bass are emphasized based on the representation of the harmonic and the percussive components in the input spectrogram, combined with the spatial allocation of instruments in typical stereo recordings. The scheme is assessed with postlingually deafened CI subjects (N = 7) using pop/rock music excerpts with different complexity levels. The scheme is capable of modifying relative instrument level settings, with the aim of improving music appreciation in CI users, and allows individual preference adjustments. The assessment with CI subjects confirms the preference for more emphasis on vocals, drums, and bass as offered by the preprocessing scheme, especially for songs with higher complexity. The stereo music preprocessing scheme has the potential to improve music enjoyment in CI users by modifying the audio mix in widespread (stereo) music recordings. Since music enjoyment in CI users is generally poor, this scheme can assist the music listening experience of CI users as a training or rehabilitation tool.

  18. Uncertainty analysis in comparative NAA applied to geological and biological matrices

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Ticianelli, Regina B.; Lange, Camila N.; Favaro, Deborah I.T.; Figueiredo, Ana M.G.

    2015-01-01

    Comparative nuclear activation analysis is a multielemental primary analytical technique that may be used in a rather broad spectrum of matrices with minimal-to-none sample preprocessing. Although the total activation of a chemical element in a sample depends on a rather large set of parameters, when the sample is irradiated together with a well-known comparator, most of these parameters are crossed out and the concentration of that element can be determined simply by using the activities and masses of the comparator and the sample, the concentration of this chemical element in the sample, the half-life of the formed radionuclide and the time between counting the sample and the comparator. This simplification greatly reduces not only the calculations required, but also the uncertainty associated with the measurement; nevertheless, a cautious analysis must be carried out in order to make sure all relevant uncertainties are properly treated, so that the final result can be as representative of the measurement as possible. In this work, this analysis was performed for geological matrices, where concentrations of the interest nuclides are rather high, but so is the density and average atomic number of the sample, as well as for a biological matrix, in order to allow for a comparison. The results show that the largest part of the uncertainty comes from the activity measurements and from the concentration of the comparator, and that while the influence of time-related terms in the final uncertainty can be safely neglected, the uncertainty in the masses may be relevant under specific circumstances. (author)
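The comparator relation described above can be written down in a few lines. The sketch below implements the standard comparative form (the concentration follows from the decay-corrected ratio of specific activities times the comparator concentration) with purely illustrative numbers; the exact correction factors and the uncertainty budget of the paper are not reproduced.

```python
import numpy as np

def comparative_naa(conc_comp, act_sample, act_comp, m_sample, m_comp,
                    half_life_s, dt_s):
    """Element concentration in the sample via the comparator method.
    dt_s = time of sample count minus time of comparator count."""
    lam = np.log(2.0) / half_life_s
    decay = np.exp(lam * dt_s)          # corrects for decay between the two counts
    return conc_comp * (act_sample / act_comp) * (m_comp / m_sample) * decay

# Illustrative numbers only (not from the paper).
c = comparative_naa(conc_comp=100.0,                      # mg/kg in comparator
                    act_sample=5200.0, act_comp=4800.0,   # net count rates
                    m_sample=0.250, m_comp=0.200,         # masses in g
                    half_life_s=15.0 * 3600, dt_s=1800.0)
print(f"concentration of the element in the sample: about {c:.1f} mg/kg")
```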

  19. Uncertainty analysis in comparative NAA applied to geological and biological matrices

    Energy Technology Data Exchange (ETDEWEB)

    Zahn, Guilherme S.; Ticianelli, Regina B.; Lange, Camila N.; Favaro, Deborah I.T.; Figueiredo, Ana M.G., E-mail: gzahn@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2015-07-01

    Comparative nuclear activation analysis is a multielemental primary analytical technique that may be used in a rather broad spectrum of matrices with minimal-to-none sample preprocessing. Although the total activation of a chemical element in a sample depends on a rather large set of parameters, when the sample is irradiated together with a well-known comparator, most of these parameters are crossed out and the concentration of that element can be determined simply by using the activities and masses of the comparator and the sample, the concentration of this chemical element in the sample, the half-life of the formed radionuclide and the time between counting the sample and the comparator. This simplification greatly reduces not only the calculations required, but also the uncertainty associated with the measurement; nevertheless, a cautious analysis must be carried out in order to make sure all relevant uncertainties are properly treated, so that the final result can be as representative of the measurement as possible. In this work, this analysis was performed for geological matrices, where concentrations of the interest nuclides are rather high, but so is the density and average atomic number of the sample, as well as for a biological matrix, in order to allow for a comparison. The results show that the largest part of the uncertainty comes from the activity measurements and from the concentration of the comparator, and that while the influence of time-related terms in the final uncertainty can be safely neglected, the uncertainty in the masses may be relevant under specific circumstances. (author)

  20. The confusion effect when attacking simulated three-dimensional starling flocks.

    Science.gov (United States)

    Hogan, Benedict G; Hildenbrandt, Hanno; Scott-Samuel, Nicholas E; Cuthill, Innes C; Hemelrijk, Charlotte K

    2017-01-01

    The confusion effect describes the phenomenon of decreasing predator attack success with increasing prey group size. However, there is a paucity of research into the influence of this effect in coherent groups, such as flocks of European starlings ( Sturnus vulgaris ). Here, for the first time, we use a computer game style experiment to investigate the confusion effect in three dimensions. To date, computerized studies on the confusion effect have used two-dimensional simulations with simplistic prey movement and dynamics. Our experiment is the first investigation of the effects of flock size and density on the ability of a (human) predator to track and capture a target starling in a realistically simulated three-dimensional flock of starlings. In line with the predictions of the confusion effect, modelled starlings appear to be safer from predation in larger and denser flocks. This finding lends credence to previous suggestions that starling flocks have anti-predator benefits and, more generally, it suggests that active increases in density in animal groups in response to predation may increase the effectiveness of the confusion effect.

  1. The relationship between magical thinking, inferential confusion and obsessive-compulsive symptoms.

    Science.gov (United States)

    Goods, N A R; Rees, C S; Egan, S J; Kane, R T

    2014-01-01

    Inferential confusion is an under-researched faulty reasoning process in obsessive-compulsive disorder (OCD). Based on an overreliance on imagined possibilities, it shares similarities with the extensively researched construct of thought-action fusion (TAF). While TAF has been proposed as a specific subset of the broader construct of magical thinking, the relationship between inferential confusion and magical thinking is unexplored. The present study investigated this relationship, and hypothesised that magical thinking would partially mediate the relationship between inferential confusion and obsessive-compulsive symptoms. A non-clinical sample of 201 participants (M = 34.94, SD = 15.88) were recruited via convenience sampling. Regression analyses found the hypothesised mediating relationship was supported, as magical thinking did partially mediate the relationship between inferential confusion and OC symptoms. Interestingly, inferential confusion had the stronger relationship with OC symptoms in comparison to the other predictor variables. Results suggest that inferential confusion can both directly and indirectly (via magical thinking) impact on OC symptoms. Future studies with clinical samples should further investigate these constructs to determine whether similar patterns emerge, as this may eventually inform which cognitive errors to target in treatment of OCD.

  2. Fungible Correlation Matrices: A Method for Generating Nonsingular, Singular, and Improper Correlation Matrices for Monte Carlo Research.

    Science.gov (United States)

    Waller, Niels G

    2016-01-01

    For a fixed set of standardized regression coefficients and a fixed coefficient of determination (R-squared), an infinite number of predictor correlation matrices will satisfy the implied quadratic form. I call such matrices fungible correlation matrices. In this article, I describe an algorithm for generating positive definite (PD), positive semidefinite (PSD), or indefinite (ID) fungible correlation matrices that have a random or fixed smallest eigenvalue. The underlying equations of this algorithm are reviewed from both algebraic and geometric perspectives. Two simulation studies illustrate that fungible correlation matrices can be profitably used in Monte Carlo research. The first study uses PD fungible correlation matrices to compare penalized regression algorithms. The second study uses ID fungible correlation matrices to compare matrix-smoothing algorithms. R code for generating fungible correlation matrices is presented in the supplemental materials.

  3. Optimal preprocessing of serum and urine metabolomic data fusion for staging prostate cancer through design of experiment

    International Nuclear Information System (INIS)

    Zheng, Hong; Cai, Aimin; Zhou, Qi; Xu, Pengtao; Zhao, Liangcai; Li, Chen; Dong, Baijun; Gao, Hongchang

    2017-01-01

Accurate classification of cancer stages will achieve precision treatment for cancer. Metabolomics presents biological phenotypes at the metabolite level and holds a great potential for cancer classification. Since metabolomic data can be obtained from different samples or analytical techniques, data fusion has been applied to improve classification accuracy. Data preprocessing is an essential step during metabolomic data analysis. Therefore, we developed an innovative optimization method to select a proper data preprocessing strategy for metabolomic data fusion using a design of experiment approach for improving the classification of prostate cancer (PCa) stages. In this study, urine and serum samples were collected from participants at five phases of PCa and analyzed using a ¹H NMR-based metabolomic approach. Partial least squares-discriminant analysis (PLS-DA) was used as a classification model and its performance was assessed by goodness of fit (R²) and predictive ability (Q²). Results show that data preprocessing significantly affects classification performance and depends on data properties. Using the fused metabolomic data from urine and serum, the PLS-DA model with the optimal data preprocessing (R² = 0.729, Q² = 0.504, P < 0.0001) can effectively improve model performance and achieve a better classification result for PCa stages as compared with that without data preprocessing (R² = 0.139, Q² = 0.006, P = 0.450). Therefore, we propose that metabolomic data fusion integrated with an optimal data preprocessing strategy can significantly improve the classification of cancer stages for precision treatment. - Highlights: • NMR metabolomic analysis of body fluids can be used for staging prostate cancer. • Data preprocessing is an essential step for metabolomic analysis. • Data fusion improves information recovery for cancer classification. • Design of experiment achieves optimal preprocessing of metabolomic data fusion.

  4. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    Science.gov (United States)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

This recent work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters with chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometrics modelling. Hence, this work is dedicated to investigating the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and the single scaling process with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via the Principal Component Analysis (PCA) plot. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra datasets in absorbance mode in the range of 4000 cm-1 to 14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. Then, the data were separated into two sets, a training set and a test set, by using the Duplex method. The number in each class was kept equal to 2/3 of the class that has the minimum number of samples. The t-statistic was then employed as a variable selection method in order to select which variables are significant for the classification models. The evaluation of data pre-processing considered the modified silhouette width (mSW), PCA and the Percentage Correctly Classified (%CC). The results show that different data pre-processing strategies result in substantial differences in model performance. The effects of several data pre-processing methods, i.e. row scaling, column standardisation and the single scaling process with Standard Normal Variate, are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
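A generic version of the chain described above (SNV scaling, a Savitzky-Golay derivative, then PCA for exploratory analysis) can be sketched with SciPy and scikit-learn. The synthetic spectra, window settings and class structure below are assumptions for illustration only, not the paper's datasets or parameters.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

def snv(spectra):
    """Standard Normal Variate: center and scale each spectrum (row-wise)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Synthetic "FT-NIR" spectra: two oil classes with a shifted band plus baseline.
rng = np.random.default_rng(0)
wn = np.linspace(4000, 14000, 500)
def band(center):
    return np.exp(-((wn - center) / 300.0) ** 2)
X = np.vstack([band(7000) + rng.normal(0, 0.02, wn.size) + rng.uniform(0, 0.5)
               for _ in range(20)] +
              [band(7600) + rng.normal(0, 0.02, wn.size) + rng.uniform(0, 0.5)
               for _ in range(20)])

# SNV, then a first-derivative Savitzky-Golay filter, then a 2-component PCA.
Xp = savgol_filter(snv(X), window_length=15, polyorder=2, deriv=1, axis=1)
scores = PCA(n_components=2).fit_transform(Xp)
print("mean PC1 score per class:",
      scores[:20, 0].mean().round(2), scores[20:, 0].mean().round(2))
```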

  5. Optimal production scheduling for energy efficiency improvement in biofuel feedstock preprocessing considering work-in-process particle separation

    International Nuclear Information System (INIS)

    Li, Lin; Sun, Zeyi; Yao, Xufeng; Wang, Donghai

    2016-01-01

Biofuel is considered a promising alternative to traditional liquid transportation fuels. The large-scale substitution of biofuel can greatly enhance global energy security and mitigate greenhouse gas emissions. One major concern of the broad adoption of biofuel is the intensive energy consumption in biofuel manufacturing. This paper focuses on the energy efficiency improvement of biofuel feedstock preprocessing, a major process of cellulosic biofuel manufacturing. An improved scheme of the feedstock preprocessing considering work-in-process particle separation is introduced to reduce energy waste and improve energy efficiency. A scheduling model based on the improved scheme is also developed to identify an optimal production schedule that can minimize the energy consumption of the feedstock preprocessing under a production target constraint. A numerical case study is used to illustrate the effectiveness of the proposed method. The research outcome is expected to improve the energy efficiency and enhance the environmental sustainability of biomass feedstock preprocessing. - Highlights: • A novel method to schedule production in the biofuel feedstock preprocessing process. • A systems modeling approach is used. • Capable of optimizing preprocessing to reduce energy waste and improve energy efficiency. • A numerical case is used to illustrate the effectiveness of the method. • Energy consumption per unit production can be significantly reduced.

  6. Double stochastic matrices in quantum mechanics

    International Nuclear Information System (INIS)

    Louck, J.D.

    1997-01-01

The general set of doubly stochastic matrices of order n corresponding to ordinary nonrelativistic quantum mechanical transition probability matrices is given. Lande's discussion of the nonquantal origin of such matrices is noted. Several concrete examples are presented for elementary and composite angular momentum systems with the focus on the unitary symmetry associated with such systems in the spirit of the recent work of Bohr and Ulfbeck. Birkhoff's theorem on doubly stochastic matrices of order n is reformulated in a geometrical language suitable for application to the subset of quantum mechanical doubly stochastic matrices. Specifically, it is shown that the set of points on the unit sphere in cartesian n'-space is in surjective correspondence with the set of doubly stochastic matrices of order n. The question is raised, but not answered, as to what is the subset of points of this unit sphere that correspond to the quantum mechanical transition probability matrices, and what is the symmetry group of this subset of matrices.
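The quantum-mechanical doubly stochastic matrices mentioned above arise as squared moduli of unitary matrix entries. A quick numerical check of that construction follows, using a generic random unitary rather than one of the paper's angular-momentum examples.

```python
import numpy as np

# A unitary transition operator U yields the doubly stochastic matrix of
# quantum transition probabilities B_ij = |U_ij|^2.
rng = np.random.default_rng(0)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)                  # random unitary via QR
B = np.abs(U) ** 2

print("row sums:   ", B.sum(axis=1).round(12))   # all 1
print("column sums:", B.sum(axis=0).round(12))   # all 1
```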

  7. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    Science.gov (United States)

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

In compressed sensing, one takes n samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set χ, for four different sets χ, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
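One cell of such a phase-transition experiment can be reproduced in a few lines: draw a sensing matrix, generate a k-sparse vector, and solve the l1-minimization (basis pursuit) problem as a linear program. The sketch below uses a Gaussian matrix for simplicity rather than one of the deterministic ensembles listed above.

```python
import numpy as np
from scipy.optimize import linprog

# Can l1 minimization recover a k-sparse x0 from n < N measurements?
rng = np.random.default_rng(0)
n, N, k = 40, 100, 8
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# min ||x||_1 s.t. Ax = y, written as an LP with x = u - v, u, v >= 0.
res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("relative recovery error:",
      np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```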

  8. Matrices and linear transformations

    CERN Document Server

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  9. Reproducible cancer biomarker discovery in SELDI-TOF MS using different pre-processing algorithms.

    Directory of Open Access Journals (Sweden)

    Jinfeng Zou

    Full Text Available BACKGROUND: There has been much interest in differentiating diseased and normal samples using biomarkers derived from mass spectrometry (MS studies. However, biomarker identification for specific diseases has been hindered by irreproducibility. Specifically, a peak profile extracted from a dataset for biomarker identification depends on a data pre-processing algorithm. Until now, no widely accepted agreement has been reached. RESULTS: In this paper, we investigated the consistency of biomarker identification using differentially expressed (DE peaks from peak profiles produced by three widely used average spectrum-dependent pre-processing algorithms based on SELDI-TOF MS data for prostate and breast cancers. Our results revealed two important factors that affect the consistency of DE peak identification using different algorithms. One factor is that some DE peaks selected from one peak profile were not detected as peaks in other profiles, and the second factor is that the statistical power of identifying DE peaks in large peak profiles with many peaks may be low due to the large scale of the tests and small number of samples. Furthermore, we demonstrated that the DE peak detection power in large profiles could be improved by the stratified false discovery rate (FDR control approach and that the reproducibility of DE peak detection could thereby be increased. CONCLUSIONS: Comparing and evaluating pre-processing algorithms in terms of reproducibility can elucidate the relationship among different algorithms and also help in selecting a pre-processing algorithm. The DE peaks selected from small peak profiles with few peaks for a dataset tend to be reproducibly detected in large peak profiles, which suggests that a suitable pre-processing algorithm should be able to produce peaks sufficient for identifying useful and reproducible biomarkers.
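The stratified false discovery rate control mentioned above builds on standard FDR procedures applied within strata of peaks. As a minimal illustration of the unstratified building block, the sketch below applies Benjamini-Hochberg control to toy peak-wise p-values; the stratification scheme and the actual pre-processing pipelines are not reproduced.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of p-values declared significant at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = q * np.arange(1, p.size + 1) / p.size
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(p.size, dtype=bool)
    mask[order[:k]] = True
    return mask

# Toy peak-wise p-values: a few truly differential peaks among many nulls.
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(0, 0.001, 10), rng.uniform(0, 1, 490)])
print("peaks declared differentially expressed:", benjamini_hochberg(pvals).sum())
```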

  10. Lambda-matrices and vibrating systems

    CERN Document Server

    Lancaster, Peter; Stark, M; Kahane, J P

    1966-01-01

Lambda-Matrices and Vibrating Systems presents aspects and solutions to problems concerned with linear vibrating systems with a finite number of degrees of freedom and the theory of matrices. The book discusses some parts of the theory of matrices that will account for the solutions of the problems. The text starts with an outline of matrix theory, and some theorems are proved. The Jordan canonical form is also applied to understand the structure of square matrices. Classical theorems are discussed further by applying the Jordan canonical form, the Rayleigh quotient, and simple matrix pencils with late

  11. Manin matrices and Talalaev's formula

    International Nuclear Information System (INIS)

    Chervov, A; Falqui, G

    2008-01-01

In this paper we study properties of Lax and transfer matrices associated with quantum integrable systems. Our point of view stems from the fact that their elements satisfy special commutation properties, considered by Yu I Manin some 20 years ago at the beginning of quantum group theory. These are the commutation properties of matrix elements of linear homomorphisms between polynomial rings; more explicitly these read: (1) elements of the same column commute; (2) commutators of the cross terms are equal: [M_ij, M_kl] = [M_kj, M_il] (e.g. [M_11, M_22] = [M_21, M_12]). The main aim of this paper is twofold: on the one hand we observe and prove that such matrices (which we call Manin matrices in short) behave almost as well as matrices with commutative elements. Namely, the theorems of linear algebra (e.g., a natural definition of the determinant, the Cayley-Hamilton theorem, the Newton identities and so on and so forth) have a straightforward counterpart in the case of Manin matrices. On the other hand, we remark that such matrices are somewhat ubiquitous in the theory of quantum integrability. For instance, Manin matrices (and their q-analogs) include matrices satisfying the Yang-Baxter relation 'RTT=TTR' and the so-called Cartier-Foata matrices. Also, they enter Talalaev's remarkable formulae: det(∂_z − L_Gaudin(z)), det(1 − e^(−∂_z) T_Yangian(z)) for the 'quantum spectral curve', and appear in the separation of variables problem and Capelli identities. We show that theorems of linear algebra, after being established for such matrices, have various applications to quantum integrable systems and Lie algebras, e.g. in the construction of new generators in Z(U_crit(gl̂_n)) (and, in general, in the construction of quantum conservation laws), in the Knizhnik-Zamolodchikov equation, and in the problem of Wick ordering. We propose, in the appendix, a construction of quantum separated variables for the XXX-Heisenberg system.

  12. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-01-01

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.

  13. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
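The storage reduction mentioned in these two records comes from approximating admissible off-diagonal blocks by low-rank factors. A minimal sketch of that idea on a single well-separated kernel block follows; a truncated SVD stands in here for the adaptive cross approximation or similar compression an H-matrix library would use.

```python
import numpy as np

# Kernel block K_ij = 1 / (1 + |x_i - y_j|) for two well-separated clusters;
# such off-diagonal blocks are numerically low-rank, which is exactly what
# hierarchical matrices exploit block by block.
x = np.linspace(0.0, 1.0, 300)          # source cluster
y = np.linspace(5.0, 6.0, 300)          # well-separated target cluster
K = 1.0 / (1.0 + np.abs(x[:, None] - y[None, :]))

U, s, Vt = np.linalg.svd(K)
k = int(np.searchsorted(-s, -1e-8 * s[0]))     # numerical rank at tolerance 1e-8
K_low = (U[:, :k] * s[:k]) @ Vt[:k]

print("numerical rank:", k, "of", K.shape[0])
print("storage ratio:", (k * (K.shape[0] + K.shape[1])) / K.size)
print("relative error:", np.linalg.norm(K - K_low) / np.linalg.norm(K))
```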

  14. MERSENNE AND HADAMARD MATRICES CALCULATION BY SCARPIS METHOD

    Directory of Open Access Journals (Sweden)

    N. A. Balonin

    2014-05-01

Full Text Available Purpose. The paper deals with the problem of basic generalizations of Hadamard matrices associated with maximum determinant matrices, or matrices with orthogonal columns that are not optimal by determinant (weighing matrices, Mersenne and Euler matrices, etc.); calculation methods for the quasi-orthogonal local maximum determinant Mersenne matrices have not been studied sufficiently. The goal of this paper is to develop the theory of Mersenne and Hadamard matrices on the basis of research into the generalized Scarpis method. Methods. Extreme solutions are found in general by minimizing the maximum of the absolute values of the elements of the studied matrices, followed by their classification according to the number of levels and their values depending on the order. Less universal but more effective methods are based on structural invariants of quasi-orthogonal matrices (the Sylvester, Paley and Scarpis methods, etc.). Results. Generalizations of Hadamard and Belevitch matrices as a family of quasi-orthogonal matrices of odd orders are observed; they include, in particular, two-level Mersenne matrices. Definitions of section and layer on the set of generalized matrices are proposed. Calculation algorithms for matrices of adjacent layers and sections by matrices of lower orders are described. Approximation examples of the Belevitch matrix structures up to the 22nd critical order by the Mersenne matrix of the third order are given. A new formulation of the modified Scarpis method to approximate Hadamard matrices of high orders by lower order Mersenne matrices is proposed. The Williamson method is described by an example of approximating one-modular-level matrices by matrices with a small number of levels. Practical relevance. The efficiency of this development direction for band-pass filter creation is justified. Algorithms for Mersenne matrix design by the Scarpis method are used in developing software of the research program complex. Mersenne filters are based on the suboptimal by
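The Scarpis-based constructions discussed above are not reproduced here, but the classical Sylvester doubling construction gives a quick feel for how structured recursions generate Hadamard matrices of growing order:

```python
import numpy as np

def sylvester_hadamard(m):
    """Hadamard matrix of order 2**m by Sylvester's doubling construction."""
    H = np.array([[1]])
    for _ in range(m):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)                      # order 8
print(np.array_equal(H @ H.T, 8 * np.eye(8)))  # orthogonality check: True
print(round(abs(np.linalg.det(H))), 8 ** 4)    # attains the Hadamard bound n^(n/2)
```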

  15. Parallel pipeline algorithm of real time star map preprocessing

    Science.gov (United States)

    Wang, Hai-yong; Qin, Tian-mu; Liu, Jia-qi; Li, Zhi-feng; Li, Jian-hua

    2016-03-01

To improve the preprocessing speed of star maps and reduce the resource consumption of the embedded system of a star tracker, a parallel pipeline real-time preprocessing algorithm is presented. Two characteristics, the mean and the noise standard deviation of the background gray level of a star map, are first obtained dynamically by removing in advance the interference of the star images themselves with the background. A criterion on whether subsequent noise filtering is needed is established, and the extraction threshold value is then assigned according to the level of background noise, so that the centroiding accuracy is guaranteed. In the processing algorithm, as few as two lines of pixel data are buffered, and only 100 shift registers are used to record the connected-domain labels, by which the problems of resource waste and connected-domain overflow are solved. The simulation results show that the necessary data of the selected bright stars can be accessed within a delay as short as 10 us after the pipeline processing of a 496×496 star map at 50 Mb/s is finished, and the required memory and register resources total less than 80 kb. To verify the accuracy of the proposed algorithm, different levels of background noise are added to the processed ideal star map, and the statistical centroiding error is smaller than 1/23 pixel under the condition that the signal-to-noise ratio is greater than 1. The parallel pipeline algorithm for real-time star map preprocessing helps to increase the data output speed and the anti-dynamic performance of the star tracker.
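The pipeline itself is hardware-oriented, but its arithmetic (estimate the background mean and noise level, threshold relative to the noise, then centroid the connected bright regions) can be sketched offline. The constants and the synthetic star field below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (128, 128))            # background + read noise
rr, cc = np.mgrid[0:128, 0:128]
for (r, c, amp) in [(30, 40, 400.0), (90, 75, 250.0)]:
    img += amp * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / 4.0)

# Background statistics; the extraction threshold is set relative to the noise
# level, as in the abstract (the constants here are illustrative).
bg = np.median(img)
sigma = np.std(img[img < np.percentile(img, 90)])
mask = img > bg + 5 * sigma

labels, n = ndimage.label(mask)                      # connected bright regions
centroids = ndimage.center_of_mass(img - bg, labels, list(range(1, n + 1)))
print(n, "stars detected at", [tuple(np.round(c, 2)) for c in centroids])
```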

  16. Summary of ENDF/B pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1981-12-01

    This document contains the summary documentation for the ENDF/B pre-processing codes: LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc. For the latest published documentation on the methods used in these codes see UCRL-50400, Vol.17 parts A-E, Lawrence Livermore Laboratory (1979)

  17. Parallel finite elements with domain decomposition and its pre-processing

    International Nuclear Information System (INIS)

    Yoshida, A.; Yagawa, G.; Hamada, S.

    1993-01-01

This paper describes a parallel finite element analysis using a domain decomposition method, and the pre-processing for the parallel calculation. Computer simulations are about to replace experiments in various fields, and the scale of the models to be simulated tends to be extremely large. On the other hand, the computational environment has changed drastically in recent years. In particular, parallel processing on massively parallel computers or computer networks is considered a promising technique. In order to achieve high efficiency in such a parallel computation environment, large task granularity and a well-balanced workload distribution are key issues. It is also important to reduce the cost of pre-processing in such parallel FEM. From this point of view, the authors developed a domain decomposition FEM with an automatic and dynamic task-allocation mechanism and an automatic mesh generation/domain subdivision system for it. (author)

  18. Confusion noise from LISA capture sources

    International Nuclear Information System (INIS)

    Barack, Leor; Cutler, Curt

    2004-01-01

Captures of compact objects (COs) by massive black holes (MBHs) in galactic nuclei will be an important source for LISA, the proposed space-based gravitational wave (GW) detector. However, a large fraction of captures will not be individually resolvable - either because they are too distant, have unfavorable orientation, or have too many years to go before final plunge - and so will constitute a source of 'confusion noise', obscuring other types of sources. In this paper we estimate the shape and overall magnitude of the GW background energy spectrum generated by CO captures. This energy spectrum immediately translates to a spectral density S_h^capt(f) for the amplitude of capture-generated GWs registered by LISA. The overall magnitude of S_h^capt(f) is linear in the CO capture rates, which are rather uncertain; therefore we present results for a plausible range of rates. S_h^capt(f) includes the contributions from both resolvable and unresolvable captures, and thus represents an upper limit on the confusion noise level. We then estimate what fraction of S_h^capt(f) is due to unresolvable sources and hence constitutes confusion noise. We find that almost all of the contribution to S_h^capt(f) coming from white dwarf and neutron star captures, and at least ∼30% of the contribution from black hole captures, is from sources that cannot be individually resolved. Nevertheless, we show that the impact of capture confusion noise on the total LISA noise curve ranges from insignificant to modest, depending on the rates. Capture rates at the high end of estimated ranges would raise LISA's overall (effective) noise level [f S_h^eff(f)]^(1/2) by at most a factor ∼2 in the frequency range 1-10 mHz, where LISA is most sensitive. While this slightly elevated noise level would somewhat decrease LISA's sensitivity to other classes of sources, we argue that, overall, this would be a pleasant problem for LISA to have: It would also imply that detection rates for CO captures

  19. Investigating Confusion Between Perceptions of Relationship Education and Couples Therapy

    Directory of Open Access Journals (Sweden)

    Brandon K. Burr

    2017-03-01

Full Text Available Although relationship education (RE) and couples therapy (CT) have similar goals in helping build and sustain healthy couple and family relationships, there remains confusion between the focus and structure of the two services. Literature on the marketing of family programs indicates that the awareness level of the target audience should dictate marketing and recruitment messages. Lack of awareness regarding RE and confusion over the difference between RE and CT most likely affects the decision to attend. In order to inform RE recruitment and marketing approaches, this study investigated overall perceptions of RE, RE awareness, and confusion regarding the difference between RE and CT in a sample of 1,977 individuals. Differences in perceptions were also explored by relationship satisfaction and gender. Results showed a fairly high lack of awareness of RE and confusion between RE and CT. Results also showed that respondents in more satisfying relationships see RE less positively, and men see RE less positively than women. Implications for practitioners and researchers are presented.

  20. Dazzle camouflage, target tracking, and the confusion effect.

    Science.gov (United States)

    Hogan, Benedict G; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2016-01-01

    The influence of coloration on the ecology and evolution of moving animals in groups is poorly understood. Animals in groups benefit from the "confusion effect," where predator attack success is reduced with increasing group size or density. This is thought to be due to a sensory bottleneck: an increase in the difficulty of tracking one object among many. Motion dazzle camouflage has been hypothesized to disrupt accurate perception of the trajectory or speed of an object or animal. The current study investigates the suggestion that dazzle camouflage may enhance the confusion effect. Utilizing a computer game style experiment with human predators, we found that when moving in groups, targets with stripes parallel to the targets' direction of motion interact with the confusion effect to a greater degree, and are harder to track, than those with more conventional background matching patterns. The findings represent empirical evidence that some high-contrast patterns may benefit animals in groups. The results also highlight the possibility that orientation and turning may be more relevant in the mechanisms of dazzle camouflage than previously recognized.

  1. Intrinsic character of Stokes matrices

    Science.gov (United States)

    Gagnon, Jean-François; Rousseau, Christiane

    2017-02-01

    Two germs of linear analytic differential systems x^(k+1) Y' = A(x) Y with a non-resonant irregular singularity are analytically equivalent if and only if they have the same eigenvalues and equivalent collections of Stokes matrices. The Stokes matrices are the transition matrices between sectors on which the system is analytically equivalent to its formal normal form. Each sector contains exactly one separating ray for each pair of eigenvalues. A rotation in x allows us to assume that R+ lies in the intersection of two sectors. Reordering the coordinates of Y allows ordering the real parts of the eigenvalues, thus yielding triangular Stokes matrices. However, the choice of the rotation in x is not canonical. In this paper we establish how the collection of Stokes matrices depends on this rotation, and hence on a chosen order of the projection of the eigenvalues on a line through the origin.

  2. Hierarchical matrices algorithms and analysis

    CERN Document Server

    Hackbusch, Wolfgang

    2015-01-01

    This self-contained monograph presents matrix algorithms and their analysis. The new technique enables not only the solution of linear systems but also the approximation of matrix functions, e.g., the matrix exponential. Other applications include the solution of matrix equations, e.g., the Lyapunov or Riccati equation. The required mathematical background can be found in the appendix. The numerical treatment of fully populated large-scale matrices is usually rather costly. However, the technique of hierarchical matrices makes it possible to store matrices and to perform matrix operations approximately with almost linear cost and a controllable degree of approximation error. For important classes of matrices, the computational cost increases only logarithmically with the approximation error. The operations provided include the matrix inversion and LU decomposition. Since large-scale linear algebra problems are standard in scientific computing, the subject of hierarchical matrices is of interest to scientists ...
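
    The core idea behind the almost-linear storage cost can be illustrated with a toy example (assumed here, not taken from the monograph): an off-diagonal block of a matrix generated by a smooth kernel is replaced by a truncated SVD, so its storage drops from O(n^2) to O(nk) while the error stays controllable.

    ```python
    # Toy low-rank approximation of an off-diagonal block of a smooth kernel matrix.
    import numpy as np

    n = 400
    x = np.linspace(0.0, 1.0, 2 * n)
    K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))   # smooth kernel matrix

    block = K[:n, n:]                                    # off-diagonal block
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = int(np.sum(s > 1e-8 * s[0]))                     # rank for ~1e-8 accuracy
    approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

    err = np.linalg.norm(block - approx) / np.linalg.norm(block)
    print(f"rank {k} of {n}, relative error {err:.2e}")
    ```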

  3. Discrete pre-processing step effects in registration-based pipelines, a preliminary volumetric study on T1-weighted images.

    Science.gov (United States)

    Muncy, Nathan M; Hedges-Muncy, Ariana M; Kirwan, C Brock

    2017-01-01

    Pre-processing MRI scans prior to performing volumetric analyses is common practice in MRI studies. As pre-processing steps adjust the voxel intensities, the space in which the scan exists, and the amount of data in the scan, it is possible that the steps have an effect on the volumetric output. To date, studies have compared between and not within pipelines, and so the impact of each step is unknown. This study aims to quantify the effects of pre-processing steps on volumetric measures in T1-weighted scans within a single pipeline. It was our hypothesis that pre-processing steps would significantly impact ROI volume estimations. One hundred fifteen participants from the OASIS dataset were used, where each participant contributed three scans. All scans were then pre-processed using a step-wise pipeline. Bilateral hippocampus, putamen, and middle temporal gyrus volume estimations were assessed following each successive step, and all data were processed by the same pipeline 5 times. Repeated-measures analyses tested for main effects of pipeline step, scan-rescan (for MRI scanner consistency) and repeated pipeline runs (for algorithmic consistency). A main effect of pipeline step was detected, and interestingly an interaction between pipeline step and ROI exists. No effect of either scan-rescan or repeated pipeline run was detected. We then supply a correction for the noise in the data resulting from pre-processing.

  4. CONFUSION WITH TELEPHONE NUMBERS

    CERN Multimedia

    Telecom Service

    2002-01-01

    The area code is now required for all telephone calls within Switzerland. Unfortunately this is causing some confusion. CERN has received complaints that incoming calls intended for CERN mobile phones are being directed to private subscribers. This is caused by mistakenly dialing the WRONG code (e.g. 022) in front of the mobile number. In order to avoid these problems, please inform your correspondents that the correct numbers are: 079 201 XXXX from Switzerland; 0041 79 201 XXXX from other countries. Telecom Service

  5. Special matrices of mathematical physics stochastic, circulant and Bell matrices

    CERN Document Server

    Aldrovandi, R

    2001-01-01

    This book expounds three special kinds of matrices that are of physical interest, centering on physical examples. Stochastic matrices describe dynamical systems of many different types, involving (or not) phenomena like transience, dissipation, ergodicity, nonequilibrium, and hypersensitivity to initial conditions. The main characteristic is growth by agglomeration, as in glass formation. Circulants are the building blocks of elementary Fourier analysis and provide a natural gateway to quantum mechanics and noncommutative geometry. Bell polynomials offer closed expressions for many formulas co

  6. Right word making sense of the words that confuse

    CERN Document Server

    Morrison, Elizabeth

    2012-01-01

    'Affect' or 'effect'? 'Right', 'write' or 'rite'? English can certainly be a confusing language, whether you're a native speaker or learning it as a second language. 'The Right Word' is the essential reference to help people master its subtleties and avoid making mistakes. Divided into three sections, it first examines homophones - those tricky words that sound the same but are spelled differently - then looks at words that often confuse before providing a list of commonly misspelled words.

  7. Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    Science.gov (United States)

    Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan

    2018-01-01

    Although for most cochlear implant (CI) users good speech understanding is reached (at least in quiet environments), the perception and the appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants by using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of a take-home device, which preserved all the preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank test) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed which could not be explained by perceptual abilities. An effect of "Genre" was found, showing significantly smaller preferred deviation from the original balance for Golden Oldies compared to the other genres. The stereo music preprocessing scheme showed an improvement in music appraisal with complex music and
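
    A hedged sketch of the kind of analysis named in the abstract (Friedman test with post hoc Wilcoxon signed-rank tests); the data, genre names and effect sizes below are invented for illustration and are not the study's results:

    ```python
    import numpy as np
    from scipy.stats import friedmanchisquare, wilcoxon

    rng = np.random.default_rng(0)
    # 12 hypothetical participants; values = preferred deviation from the
    # original balance for three hypothetical genres
    golden_oldies = rng.normal(0.5, 0.2, 12)
    pop = rng.normal(1.0, 0.3, 12)
    rock = rng.normal(1.1, 0.3, 12)

    stat, p = friedmanchisquare(golden_oldies, pop, rock)
    print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

    if p < 0.05:                               # post hoc pairwise comparisons
        for name, other in [("pop", pop), ("rock", rock)]:
            w, pw = wilcoxon(golden_oldies, other)
            print(f"Golden Oldies vs {name}: W = {w:.1f}, p = {pw:.3f}")
    ```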

  8. CONFUSION WITH TELEPHONE NUMBERS

    CERN Multimedia

    Telecom Service

    2002-01-01

    The area code is now required for all telephone calls within Switzerland. Unfortunately this is causing some confusion. CERN has received complaints that incoming calls intended for CERN mobile phones are being directed to private subscribers. This is caused by mistakenly dialing the WRONG code (e.g. 022) in front of the mobile number. In order to avoid these problems, please inform your correspondents that the correct numbers are: 079 201 XXXX from Switzerland; 0041 79 201 XXXX from other countries. Telecom Service  

  9. Summary of ENDF/B Pre-Processing Codes June 1983

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1983-06-01

    This is the summary documentation for the 1983 version of the ENDF/B Pre-Processing Codes LINEAR, RECENT, SIGMA1, GROUPIE, EVALPLOT, MERGER, DICTION, COMPLOT, CONVERT. This summary documentation is merely a copy of the comment cards that appear at the beginning of each programme; these comment cards always reflect the latest status of input options, etc

  10. The use of source memory to identify one's own episodic confusion errors.

    Science.gov (United States)

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  11. Frequency filtering decompositions for unsymmetric matrices and matrices with strongly varying coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, C.

    1996-12-31

    In 1992, Wittum introduced the frequency filtering decompositions (FFD), which yield a fast method for the iterative solution of large systems of linear equations. Based on this method, the tangential frequency filtering decompositions (TFFD) have been developed. The TFFD allow the robust and efficient treatment of matrices with strongly varying coefficients. The existence and the convergence of the TFFD can be shown for symmetric and positive definite matrices. For a large class of matrices, it is possible to prove that the convergence rate of the TFFD and of the FFD is independent of the number of unknowns. For both methods, schemes for the construction of frequency filtering decompositions for unsymmetric matrices have been developed. Since, in contrast to Wittum's FFD, the TFFD needs only one test vector, an adaptive test vector can be used. The TFFD with respect to the adaptive test vector can be combined with other iterative methods, e.g. multi-grid methods, in order to improve the robustness of these methods. The frequency filtering decompositions have been successfully applied to the problem of the decontamination of a heterogeneous porous medium by flushing.

  12. Inside Out: Detecting Learners' Confusion to Improve Interactive Digital Learning Environments

    Science.gov (United States)

    Arguel, Amaël; Lockyer, Lori; Lipp, Ottmar V.; Lodge, Jason M.; Kennedy, Gregor

    2017-01-01

    Confusion is an emotion that is likely to occur while learning complex information. This emotion can be beneficial to learners in that it can foster engagement, leading to deeper understanding. However, if learners fail to resolve confusion, its effect can be detrimental to learning. Such detrimental learning experiences are particularly…

  13. Source Memory in Korsakoff Syndrome: Disentangling the Mechanisms of Temporal Confusion.

    Science.gov (United States)

    Brion, Mélanie; de Timary, Philippe; Pitel, Anne-Lise; Maurage, Pierre

    2017-03-01

    Korsakoff syndrome (KS), most frequently resulting from alcohol dependence (ALC), is characterized by severe anterograde amnesia. It has been suggested that these deficits may extend to other memory components, and notably source memory deficits involved in the disorientation and temporal confusion frequently observed in KS. However, the extent of this source memory impairment in KS and its usefulness for the differential diagnosis between ALC and KS remain unexplored. Nineteen patients with KS were compared with 19 alcohol-dependent individuals and 19 controls in a source memory test exploring temporal context confusions ("continuous recognition task"). Episodic memory and psychopathological comorbidities were controlled for. While no source memory deficit was observed in ALC, KS was associated with a significant presence of temporal context confusion, even when the influence of comorbidities was taken into account. This source memory impairment did not appear to be related to performances on episodic memory or executive functions. Patients with KS displayed source memory deficits, as indexed by temporal context confusions. The absence of a relationship with episodic memory performances seems to indicate that source memory impairment is not a mere by-product of amnesia. As ALC was associated with preserved source memory, the presence of temporal context confusion may serve as a complementary tool for the differential diagnosis between ALC and KS. Copyright © 2017 by the Research Society on Alcoholism.

  14. Introduction to matrices and vectors

    CERN Document Server

    Schwartz, Jacob T

    2001-01-01

    In this concise undergraduate text, the first three chapters present the basics of matrices - in later chapters the author shows how to use vectors and matrices to solve systems of linear equations. 1961 edition.

  15. Exploring the Effect of Student Confusion in Massive Open Online Courses

    Science.gov (United States)

    Yang, Diyi; Kraut, Robert E.; Rose, Carolyn P.

    2016-01-01

    Although thousands of students enroll in Massive Open Online Courses (MOOCs) for learning and self-improvement, many get confused, harming learning and increasing dropout rates. In this paper, we quantify these effects in two large MOOCs. We first describe how we automatically estimate students' confusion by looking at their clicking behavior on…

  16. Poisson pre-processing of nonstationary photonic signals: Signals with equality between mean and variance.

    Science.gov (United States)

    Poplová, Michaela; Sovka, Pavel; Cifra, Michal

    2017-01-01

    Photonic signals are broadly exploited in communication and sensing and they typically exhibit Poisson-like statistics. In a common scenario where the intensity of the photonic signals is low and one needs to remove a nonstationary trend of the signals for any further analysis, one faces an obstacle: due to the dependence between the mean and variance typical for a Poisson-like process, information about the trend remains in the variance even after the trend has been subtracted, possibly yielding artifactual results in further analyses. Commonly available detrending or normalizing methods cannot cope with this issue. To alleviate this issue we developed a suitable pre-processing method for the signals that originate from a Poisson-like process. In this paper, a Poisson pre-processing method for nonstationary time series with Poisson distribution is developed and tested on computer-generated model data and experimental data of chemiluminescence from human neutrophils and mung seeds. The presented method transforms a nonstationary Poisson signal into a stationary signal with a Poisson distribution while preserving the type of photocount distribution and phase-space structure of the signal. The importance of the suggested pre-processing method is shown in Fano factor and Hurst exponent analysis of both computer-generated model signals and experimental photonic signals. It is demonstrated that our pre-processing method is superior to standard detrending-based methods whenever further signal analysis is sensitive to variance of the signal.
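
    The issue the method addresses can be seen in a few lines of synthetic data (an illustration only, not the paper's pre-processing transform): the Fano factor of windowed photocounts stays near 1 for a stationary Poisson signal but is inflated when a slow trend modulates the rate, because the trend leaks into the variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    stationary = rng.poisson(lam=5.0, size=n)
    trend = 5.0 + 3.0 * np.sin(np.linspace(0, 4 * np.pi, n))   # slow trend
    nonstationary = rng.poisson(lam=trend)

    def fano(x, window=500):
        counts = x[: len(x) // window * window].reshape(-1, window).sum(axis=1)
        return counts.var() / counts.mean()

    print("Fano factor, stationary signal:", round(fano(stationary), 2))
    print("Fano factor, signal with trend:", round(fano(nonstationary), 2))
    ```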

  17. Pre-processing data using wavelet transform and PCA based on ...

    Indian Academy of Sciences (India)

    Abazar Solgi

    2017-07-14

    Jul 14, 2017 ... Pre-processing data using wavelet transform and PCA based on support vector regression and gene expression programming for river flow simulation. Abazar Solgi1,*, Amir Pourhaghi1, Ramin Bahmani2 and Heidar Zarei3. 1. Department of Water Resources Engineering, Shahid Chamran University of ...

  18. Scientific data products and the data pre-processing subsystem of the Chang'e-3 mission

    International Nuclear Information System (INIS)

    Tan Xu; Liu Jian-Jun; Li Chun-Lai; Feng Jian-Qing; Ren Xin; Wang Fen-Fei; Yan Wei; Zuo Wei; Wang Xiao-Qian; Zhang Zhou-Bin

    2014-01-01

    The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. Eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) integrated in-situ analysis of minerals and chemical compositions; (3) integrated exploration of the structure of the lunar interior; (4) exploration of the lunar-terrestrial space environment and the lunar surface environment, and acquisition of Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payload in orbit, and managing the data products and their applications. The Data Pre-processing Subsystem (DPS) is a part of GRAS. The task of DPS is the pre-processing of raw data from the eight instruments that are part of CE-3, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth angle, probe zenith angle, solar azimuth angle and solar zenith angle, and so on, and quality checks. These processes produce Level 0, Level 1 and Level 2 data. The computing platform of this subsystem is comprised of a high-performance computing cluster, including a real-time subsystem used for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity and data products that are used for scientific studies

  19. Pathological rate matrices: from primates to pathogens

    Directory of Open Access Journals (Sweden)

    Knight Rob

    2008-12-01

    Full Text Available Abstract Background Continuous-time Markov models allow flexible, parametrically succinct descriptions of sequence divergence. Non-reversible forms of these models are more biologically realistic but are challenging to develop. The instantaneous rate matrices defined for these models are typically transformed into substitution probability matrices using a matrix exponentiation algorithm that employs eigendecomposition, but this algorithm has characteristic vulnerabilities that lead to significant errors when a rate matrix possesses certain 'pathological' properties. Here we tested whether pathological rate matrices exist in nature, and consider the suitability of different algorithms to their computation. Results We used concatenated protein coding gene alignments from microbial genomes, primate genomes and independent intron alignments from primate genomes. The Taylor series expansion and eigendecomposition matrix exponentiation algorithms were compared to the less widely employed, but more robust, Padé with scaling and squaring algorithm for nucleotide, dinucleotide, codon and trinucleotide rate matrices. Pathological dinucleotide and trinucleotide matrices were evident in the microbial data set, affecting the eigendecomposition and Taylor algorithms respectively. Even using a conservative estimate of matrix error (occurrence of an invalid probability), both Taylor and eigendecomposition algorithms exhibited substantial error rates: ~100% of all exonic trinucleotide matrices were pathological to the Taylor algorithm while ~10% of codon positions 1 and 2 dinucleotide matrices and intronic trinucleotide matrices, and ~30% of codon matrices were pathological to eigendecomposition. The majority of Taylor algorithm errors derived from occurrence of multiple unobserved states. A small number of negative probabilities were detected from the Padé algorithm on trinucleotide matrices that were attributable to machine precision. Although the Padé
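
    A toy comparison of two of the matrix exponentiation routes contrasted in the abstract (the rate matrix below is made up; the Padé scaling-and-squaring implementation used here is SciPy's expm, not the authors' code):

    ```python
    import numpy as np
    from scipy.linalg import expm

    # A simple 4-state (nucleotide-like) rate matrix; rows sum to zero.
    Q = np.array([[-0.9, 0.3, 0.3, 0.3],
                  [ 0.2, -0.6, 0.2, 0.2],
                  [ 0.1, 0.1, -0.3, 0.1],
                  [ 0.4, 0.4, 0.4, -1.2]])
    t = 0.5

    P_pade = expm(Q * t)                        # Pade with scaling and squaring

    w, V = np.linalg.eig(Q * t)                 # eigendecomposition route
    P_eig = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

    print("max |P_pade - P_eig| =", np.abs(P_pade - P_eig).max())
    print("row sums (should be 1):", P_pade.sum(axis=1))
    ```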

  20. Gravitational-wave confusion background from cosmological compact binaries: Implications for future terrestrial detectors

    International Nuclear Information System (INIS)

    Regimbau, T.; Hughes, Scott A.

    2009-01-01

    Increasing the sensitivity of a gravitational-wave (GW) detector improves our ability to measure the characteristics of detected sources. It also increases the number of weak signals that contribute to the data. Because GW detectors have nearly all-sky sensitivity, they can be subject to a confusion limit: Many sources which cannot be distinguished may be measured simultaneously, defining a stochastic noise floor to the sensitivity. For GW detectors operating at present and for their planned upgrades, the projected event rate is sufficiently low that we are far from the confusion-limited regime. However, some detectors currently under discussion may have large enough reach to binary inspiral that they enter the confusion-limited regime. In this paper, we examine the binary inspiral confusion limit for terrestrial detectors. We consider a broad range of inspiral rates in the literature, several planned advanced gravitational-wave detectors, and the highly advanced 'Einstein telescope' design. Though most advanced detectors will not be impacted by this limit, the Einstein telescope with a very low-frequency 'seismic wall' may be subject to confusion noise. At a minimum, careful data analysis will be required to separate signals which will appear confused. This result should be borne in mind when designing highly advanced future instruments.

  1. Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system

    Science.gov (United States)

    Sharma, Sanjib; Siddique, Ridwan; Reed, Seann; Ahnert, Peter; Mendoza, Pablo; Mejia, Alfonso

    2018-03-01

    The relative roles of statistical weather preprocessing and streamflow postprocessing in hydrological ensemble forecasting at short- to medium-range forecast lead times (day 1-7) are investigated. For this purpose, a regional hydrologic ensemble prediction system (RHEPS) is developed and implemented. The RHEPS is comprised of the following components: (i) hydrometeorological observations (multisensor precipitation estimates, gridded surface temperature, and gauged streamflow); (ii) weather ensemble forecasts (precipitation and near-surface temperature) from the National Centers for Environmental Prediction 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2); (iii) NOAA's Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM); (iv) heteroscedastic censored logistic regression (HCLR) as the statistical preprocessor; (v) two statistical postprocessors, an autoregressive model with a single exogenous variable (ARX(1,1)) and quantile regression (QR); and (vi) a comprehensive verification strategy. To implement the RHEPS, 1 to 7 days weather forecasts from the GEFSRv2 are used to force HL-RDHM and generate raw ensemble streamflow forecasts. Forecasting experiments are conducted in four nested basins in the US Middle Atlantic region, ranging in size from 381 to 12 362 km2. Results show that the HCLR preprocessed ensemble precipitation forecasts have greater skill than the raw forecasts. These improvements are more noticeable in the warm season at the longer lead times (> 3 days). Both postprocessors, ARX(1,1) and QR, show gains in skill relative to the raw ensemble streamflow forecasts, particularly in the cool season, but QR outperforms ARX(1,1). The scenarios that implement preprocessing and postprocessing separately tend to perform similarly, although the postprocessing-alone scenario is often more effective. The scenario involving both preprocessing and postprocessing consistently outperforms the other scenarios. In some cases
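
    As a minimal sketch of the streamflow postprocessing idea (quantile regression of observations on raw forecasts), with synthetic data and generic statsmodels tools rather than the RHEPS implementation:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    raw_forecast = rng.gamma(shape=2.0, scale=50.0, size=500)        # m^3/s
    observed = 0.8 * raw_forecast + rng.normal(0.0, 10 + 0.2 * raw_forecast)

    X = sm.add_constant(pd.DataFrame({"raw": raw_forecast}))
    for q in (0.1, 0.5, 0.9):                    # forecast quantiles of interest
        fit = sm.QuantReg(observed, X).fit(q=q)
        print(f"q={q:.1f}: intercept={fit.params['const']:.1f}, "
              f"slope={fit.params['raw']:.2f}")
    ```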

  2. Preprocessing for Optimization of Probabilistic-Logic Models for Sequence Analysis

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    2009-01-01

    and approximation are needed. The first steps are taken towards a methodology for optimizing such models by approximations using auxiliary models for preprocessing or splitting them into submodels. Evaluation of such approximating models is challenging as authoritative test data may be sparse. On the other hand...

  3. Spectra of sparse random matrices

    International Nuclear Information System (INIS)

    Kuehn, Reimer

    2008-01-01

    We compute the spectral density for ensembles of sparse symmetric random matrices using replica. Our formulation of the replica-symmetric ansatz shares the symmetries of that suggested in a seminal paper by Rodgers and Bray (symmetry with respect to permutation of replica and rotation symmetry in the space of replica), but uses a different representation in terms of superpositions of Gaussians. It gives rise to a pair of integral equations which can be solved by a stochastic population-dynamics algorithm. Remarkably our representation allows us to identify pure-point contributions to the spectral density related to the existence of normalizable eigenstates. Our approach is not restricted to matrices defined on graphs with Poissonian degree distribution. Matrices defined on regular random graphs or on scale-free graphs, are easily handled. We also look at matrices with row constraints such as discrete graph Laplacians. Our approach naturally allows us to unfold the total density of states into contributions coming from vertices of different local coordinations and an example of such an unfolding is presented. Our results are well corroborated by numerical diagonalization studies of large finite random matrices
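
    A plain numerical-diagonalization check of the kind mentioned at the end of the abstract (parameters are arbitrary): estimate the spectral density of a sparse symmetric random matrix with Poissonian degree distribution by direct diagonalization.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, c = 2000, 4.0                                     # size and mean degree
    mask = np.triu(rng.random((N, N)) < c / N, k=1)      # Erdos-Renyi edges
    J = np.where(mask, rng.normal(0.0, 1.0, (N, N)), 0.0)
    J = J + J.T                                          # sparse symmetric matrix

    eigvals = np.linalg.eigvalsh(J)
    density, edges = np.histogram(eigvals, bins=80, density=True)
    print(f"support roughly [{eigvals.min():.2f}, {eigvals.max():.2f}]")
    ```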

  4. The construction of next-generation matrices for compartmental epidemic models.

    Science.gov (United States)

    Diekmann, O; Heesterbeek, J A P; Roberts, M G

    2010-06-06

    The basic reproduction number R0 is arguably the most important quantity in infectious disease epidemiology. The next-generation matrix (NGM) is the natural basis for the definition and calculation of R0 where finitely many different categories of individuals are recognized. We clear up confusion that has been around in the literature concerning the construction of this matrix, specifically for the most frequently used so-called compartmental models. We present a detailed easy recipe for the construction of the NGM from basic ingredients derived directly from the specifications of the model. We show that two related matrices exist which we define to be the NGM with large domain and the NGM with small domain. The three matrices together reflect the range of possibilities encountered in the literature for the characterization of R0. We show how they are connected and how their construction follows from the basic model ingredients, and establish that they have the same non-zero eigenvalues, the largest of which is the basic reproduction number R0. Although we present formal recipes based on linear algebra, we encourage the construction of the NGM by way of direct epidemiological reasoning, using the clear interpretation of the elements of the NGM and of the model ingredients. We present a selection of examples as a practical guide to our methods. In the appendix we present an elementary but complete proof that R0 defined as the dominant eigenvalue of the NGM for compartmental systems and the Malthusian parameter r, the real-time exponential growth rate in the early phase of an outbreak, are connected by the properties that R0 > 1 if and only if r > 0, and R0 = 1 if and only if r = 0.
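
    A worked toy example of the recipe (an assumed SEIR-type model, not one from the paper): with F holding the rates at which new infections appear and V the transitions between infected compartments, the NGM with small domain is K = F V^(-1) and R0 is its dominant eigenvalue.

    ```python
    import numpy as np

    beta, sigma, gamma = 0.6, 0.2, 0.25      # hypothetical rates
    # Infected compartments: E (exposed), I (infectious)
    F = np.array([[0.0, beta],               # new infections enter E via I
                  [0.0, 0.0]])
    V = np.array([[sigma, 0.0],              # E -> I progression
                  [-sigma, gamma]])          # I recovery

    K = F @ np.linalg.inv(V)                 # next-generation matrix
    R0 = max(abs(np.linalg.eigvals(K)))
    print("R0 =", round(R0, 3))              # equals beta/gamma for this model
    ```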

  5. Terminological confusions and problems at the interface between the crystal field Hamiltonians and the zero-field splitting Hamiltonians—Survey of the CF=ZFS confusion in recent literature

    Energy Technology Data Exchange (ETDEWEB)

    Rudowicz, Czesław, E-mail: crudowicz@zut.edu.pl [Institute of Physics, West Pomeranian University of Technology, Al. Piastów 17, 70-310 Szczecin (Poland); Karbowiak, Mirosław [Faculty of Chemistry, University of Wrocław, ul. F. Joliot-Curie 14, 50-383 Wrocław (Poland)

    2014-10-15

    The single transition ions in various crystals or molecules as well as the exchange coupled systems (ECS) of transition ions, especially the single molecule magnets (SMM) or molecular nanomagnets (MNM), have been extensively studied in recent decades using electron magnetic resonance (EMR), optical spectroscopy, and magnetic measurements. Interpretation of magnetic and spectroscopic properties of transition ions is based on two physically distinct types of Hamiltonians: the physical crystal field (CF), or equivalently ligand field (LF), Hamiltonians and the effective spin Hamiltonians (SH), which include the zero-field splitting (ZFS) Hamiltonians. Survey of recent literature has revealed a number of terminological confusions and specific problems occurring at the interface between these Hamiltonians (denoted CF (LF)↔SH (ZFS)). Elucidation of sloppy or incorrect usage of crucial notions, especially those describing or parameterizing crystal fields and zero field splittings, is a very challenging task that requires several reviews. Here we focus on the prevailing confusion between the CF (LF) and SH (ZFS) quantities, denoted as the CF=ZFS confusion, which consists in referring to the parameters (or Hamiltonians), which are the true ZFS (or SH) quantities, as purportedly the CF (LF) quantities. The inverse ZFS=CF confusion, which pertains to the cases of labeling the true CF (LF) quantities as purportedly the ZFS quantities, is considered in a follow-up paper. The two reviews prepare grounds for a systematization of nomenclature aimed at bringing order to the zoo of different Hamiltonians. Specific cases of the CF=ZFS confusion identified in the recent textbooks, review articles, and SMM (MNM)- and EMR-related papers are surveyed and the pertinent misconceptions are outlined. The consequences of the terminological confusions go far beyond simple semantic issues or misleading keyword classifications of papers in journals and scientific databases. Serious

  6. Chemiluminescence in cryogenic matrices

    Science.gov (United States)

    Lotnik, S. V.; Kazakov, Valeri P.

    1989-04-01

    The literature data on chemiluminescence (CL) in cryogenic matrices have been classified and correlated for the first time. The role of studies on phosphorescence and CL at low temperatures in the development of cryochemistry is shown. The features of low-temperature CL in matrices of nitrogen and inert gases (fine structure of spectra, matrix effects) and the data on the mobility and reactivity of atoms and radicals at very low temperatures are examined. The trends in the development of studies on CL in cryogenic matrices, such as the search for systems involving polyatomic molecules and extending the forms of CL reactions, are followed. The reactions of active nitrogen with hydrocarbons that are accompanied by light emission and CL in the oxidation of carbenes at T >= 77 K are examined. The bibliography includes 112 references.

  7. Thresholding: A Pixel-Level Image Processing Methodology Preprocessing Technique for an OCR System for the Brahmi Script

    Directory of Open Access Journals (Sweden)

    H. K. Anasuya Devi

    2006-12-01

    Full Text Available In this paper we study the methodology employed for preprocessing the archaeological images. We present the various algorithms used in the low-level processing stage of image analysis for Optical Character Recognition System for Brahmi Script. The image preprocessing technique covered in this paper is thresholding. We also try to analyze the results obtained by the pixel-level processing algorithms.
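
    A generic illustration of the thresholding step (a plain NumPy Otsu threshold; the paper's own algorithms and parameter choices are not reproduced here):

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Return the grey level that maximises between-class variance."""
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        omega = np.cumsum(p)                      # class probability
        mu = np.cumsum(p * np.arange(256))        # cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
        return int(np.nanargmax(sigma_b2))

    rng = np.random.default_rng(3)
    image = rng.integers(0, 256, size=(64, 64))   # stand-in for a scanned glyph
    t = otsu_threshold(image)
    binary = (image > t).astype(np.uint8)         # binarised input for OCR
    print("Otsu threshold:", t)
    ```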

  8. Comparison of planar images and SPECT with Bayesian preprocessing for the demonstration of facial anatomy and craniomandibular disorders

    International Nuclear Information System (INIS)

    Kircos, L.T.; Ortendahl, D.A.; Hattner, R.S.; Faulkner, D.; Taylor, R.L.

    1984-01-01

    Craniomandibular disorders involving the facial anatomy may be difficult to demonstrate in planar images. Although bone scanning is generally more sensitive than radiography, facial bone anatomy is complex and focal areas of increased or decreased radiotracer may become obscured by overlapping structures in planar images. Thus SPECT appears ideally suited to examination of the facial skeleton. A series of patients with craniomandibular disorders of unknown origin were imaged using 20 mCi Tc-99m MDP. Planar and SPECT (Siemens 7500 ZLC Orbiter) images were obtained four hours after injection. The SPECT images were reconstructed with a filtered back-projection algorithm. In order to improve image contrast and resolution in SPECT images, the rotation views were pre-processed with a Bayesian deblurring algorithm which has previously been shown to offer improved contrast and resolution in planar images. SPECT images using the pre-processed rotation views were obtained and compared to the SPECT images without pre-processing and the planar images. TMJ arthropathy involving either the glenoid fossa or the mandibular condyle, orthopedic changes involving the mandible or maxilla, localized dental pathosis, as well as changes in structures peripheral to the facial skeleton were identified. Bayesian pre-processed SPECT depicted the facial skeleton more clearly as well as providing a more obvious demonstration of the bony changes associated with craniomandibular disorders than either planar images or SPECT without pre-processing

  9. Skew-adjacency matrices of graphs

    NARCIS (Netherlands)

    Cavers, M.; Cioaba, S.M.; Fallat, S.; Gregory, D.A.; Haemers, W.H.; Kirkland, S.J.; McDonald, J.J.; Tsatsomeros, M.

    2012-01-01

    The spectra of the skew-adjacency matrices of a graph are considered as a possible way to distinguish adjacency cospectral graphs. This leads to the following topics: graphs whose skew-adjacency matrices are all cospectral; relations between the matchings polynomial of a graph and the characteristic

  10. The invariant theory of matrices

    CERN Document Server

    Concini, Corrado De

    2017-01-01

    This book gives a unified, complete, and self-contained exposition of the main algebraic theorems of invariant theory for matrices in a characteristic free approach. More precisely, it contains the description of polynomial functions in several variables on the set of m × m matrices with coefficients in an infinite field or even the ring of integers, invariant under simultaneous conjugation. Following Hermann Weyl's classical approach, the ring of invariants is described by formulating and proving the first fundamental theorem that describes a set of generators in the ring of invariants, and the second fundamental theorem that describes relations between these generators. The authors study both the case of matrices over a field of characteristic 0 and the case of matrices over a field of positive characteristic. While the case of characteristic 0 can be treated following a classical approach, the case of positive characteristic (developed by Donkin and Zubkov) is much harder. A presentation of this case...

  11. Linguistic Preprocessing and Tagging for Problem Report Trend Analysis

    Science.gov (United States)

    Beil, Robert J.; Malin, Jane T.

    2012-01-01

    Mr. Robert Beil, Systems Engineer at Kennedy Space Center (KSC), requested the NASA Engineering and Safety Center (NESC) develop a prototype tool suite that combines complementary software technology used at Johnson Space Center (JSC) and KSC for problem report preprocessing and semantic tag extraction, to improve input to data mining and trend analysis. This document contains the outcome of the assessment and the Findings, Observations and NESC Recommendations.

  12. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    Science.gov (United States)

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.

  13. Supervised pre-processing approaches in multiple class variables classification for fish recruitment forecasting

    KAUST Repository

    Fernandes, José Antonio

    2013-02-01

    A multi-species approach to fisheries management requires taking into account the interactions between species in order to improve recruitment forecasting of the fish species. Recent advances in Bayesian networks direct the learning of models with several interrelated variables to be forecasted simultaneously. These models are known as multi-dimensional Bayesian network classifiers (MDBNs). Pre-processing steps are critical for the posterior learning of the model in these kinds of domains. Therefore, in the present study, a set of 'state-of-the-art' uni-dimensional pre-processing methods, within the categories of missing data imputation, feature discretization and feature subset selection, are adapted to be used with MDBNs. A framework that includes the proposed multi-dimensional supervised pre-processing methods, coupled with a MDBN classifier, is tested with synthetic datasets and the real domain of fish recruitment forecasting. The correct forecasting of three fish species (anchovy, sardine and hake) simultaneously is doubled (from 17.3% to 29.5%) using the multi-dimensional approach in comparison to mono-species models. The probability assessments also show high improvement, reducing the average error (estimated by means of the Brier score) from 0.35 to 0.27. Finally, these differences are superior to the forecasting of species by pairs. © 2012 Elsevier Ltd.

  14. Exact Inverse Matrices of Fermat and Mersenne Circulant Matrix

    Directory of Open Access Journals (Sweden)

    Yanpeng Zheng

    2015-01-01

    Full Text Available The well known circulant matrices are applied to solve networked systems. In this paper, circulant and left circulant matrices with the Fermat and Mersenne numbers are considered. The nonsingularity of these special matrices is discussed. Meanwhile, the exact determinants and inverse matrices of these special matrices are presented.
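
    A toy numerical check (not the paper's closed-form results): build circulant matrices whose first columns are Fermat and Mersenne numbers and test nonsingularity via the determinant.

    ```python
    import numpy as np
    from scipy.linalg import circulant

    n = 5
    fermat = [2 ** (2 ** k) + 1 for k in range(n)]     # 3, 5, 17, 257, 65537
    mersenne = [2 ** k - 1 for k in range(1, n + 1)]   # 1, 3, 7, 15, 31

    for name, first_col in (("Fermat", fermat), ("Mersenne", mersenne)):
        C = circulant(first_col)
        det = np.linalg.det(C)
        print(f"{name} circulant: det = {det:.4g}, nonsingular = {abs(det) > 1e-9}")
    ```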

  15. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

    This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the format ENDF-4, ENDF-5, or ENDF-6. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group average, mixtures of cross-sections, etc; to produce graphical plottings and data comparisons. The codes are designed to operate on virtually any type of computer including PC's. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  16. Informal caregivers and detection of delirium in postacute care: a correlational study of the confusion assessment method (CAM), confusion assessment method-family assessment method (CAM-FAM) and DSM-IV criteria.

    Science.gov (United States)

    Flanagan, Nina M; Spencer, Gale

    2016-09-01

    Delirium is a common, serious and potentially life-threatening syndrome affecting older adults. This syndrome continues to be under-recognised and under-treated by healthcare professionals across all care settings. Older adults who develop delirium have poorer outcomes, higher mortality and higher care costs. The purposes of this study were to correlate the confusion assessment method-family assessment method and confusion assessment method in the detection of delirium in postacute care, to correlate the confusion assessment method-family assessment method and diagnostic and statistical manual of mental disorders text revision criteria in detection of delirium in postacute care, to determine the prevalence of delirium in postacute care elders and to describe the relationship of level of cognitive impairment and delirium in the postacute care setting. Implications for practice: Delirium is disturbing for patients and caregivers. Frequently, family members want to provide information about their loved one. The use of the CAM-FAM and CAM can give a more definitive determination of baseline status. Frequent observations using both instruments may lead to better recognition of delirium and implementation of interventions to prevent lasting sequelae. Descriptive studies determined the strengths of relationship between the confusion assessment method, confusion assessment method-family assessment method, Mini-Cog and diagnostic and statistical manual of mental disorders text revision criteria in detection of delirium in the postacute care setting. Prevalence of delirium in this study was 35%. The confusion assessment method-family assessment method highly correlates with the confusion assessment method and diagnostic and statistical manual of mental disorders text revision criteria for detecting delirium in older adults in the postacute care setting. Persons with cognitive impairment are more likely to develop delirium. Family members recognise symptoms of delirium when

  17. Enhancing Understanding of Transformation Matrices

    Science.gov (United States)

    Dick, Jonathan; Childrey, Maria

    2012-01-01

    With the Common Core State Standards' emphasis on transformations, teachers need a variety of approaches to increase student understanding. Teaching matrix transformations by focusing on row vectors gives students tools to create matrices to perform transformations. This empowerment opens many doors: Students are able to create the matrices for…
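
    A small classroom-style sketch of the row-vector viewpoint (an illustration written for this summary, not taken from the article): build a 2x2 rotation matrix and apply it to row vectors as v' = vR.

    ```python
    import numpy as np

    theta = np.deg2rad(90)
    R = np.array([[ np.cos(theta), np.sin(theta)],    # rotation for row vectors
                  [-np.sin(theta), np.cos(theta)]])

    triangle = np.array([[0.0, 0.0],                  # each row is a vertex
                         [1.0, 0.0],
                         [0.0, 2.0]])

    rotated = triangle @ R                            # row vectors times matrix
    print(np.round(rotated, 3))
    ```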

  18. Input data preprocessing method for exchange rate forecasting via neural network

    Directory of Open Access Journals (Sweden)

    Antić Dragan S.

    2014-01-01

    Full Text Available The aim of this paper is to present a method for neural network input parameter selection and preprocessing. The purpose of this network is to forecast foreign exchange rates using artificial intelligence. Two data sets are formed for two different economic systems. Each system is represented by six categories with 70 economic parameters which are used in the analysis. Reduction of these parameters within each category was performed by using the principal component analysis method. Component interdependencies are established and relations between them are formed. The newly formed relations were used to create the input vectors of a neural network. The multilayer feed-forward neural network is formed and trained using batch training. Finally, simulation results are presented and it is concluded that the input data preparation method is an effective way of preprocessing neural network data. [Project of the Ministry of Science of the Republic of Serbia, No. TR 35005, No. III 43007 and No. III 44006]
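
    A hedged sketch of the preprocessing-then-forecasting idea (synthetic indicators and generic scikit-learn tools; the paper's category structure and component relations are not reproduced): reduce a block of economic parameters with PCA and feed the components to a feed-forward network.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n_samples, n_indicators = 300, 70                 # 70 economic parameters
    X = rng.normal(size=(n_samples, n_indicators))
    rate = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_samples)

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=10),                          # dimensionality reduction
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    )
    model.fit(X[:250], rate[:250])
    print("test R^2:", round(model.score(X[250:], rate[250:]), 3))
    ```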

  19. CudaPre3D: An Alternative Preprocessing Algorithm for Accelerating 3D Convex Hull Computation on the GPU

    Directory of Open Access Journals (Sweden)

    MEI, G.

    2015-05-01

    Full Text Available In the calculation of convex hulls for point sets, a preprocessing procedure that filters the input points by discarding non-extreme points is commonly used to improve the computational efficiency. We previously proposed a quite straightforward preprocessing approach for accelerating 2D convex hull computation on the GPU. In this paper, we extend that algorithm to 3D cases. The basic ideas behind these two preprocessing algorithms are similar: first, several groups of extreme points are found according to the original set of input points and several rotated versions of the input set; then, a convex polyhedron is created using the found extreme points; and finally those interior points located inside the formed convex polyhedron are discarded. Experimental results show that the proposed preprocessing algorithm achieves speedups of about 4x on average, and 5x to 6x in the best cases, over the cases where it is not used. In addition, more than 95 percent of the input points can be discarded in most experimental tests.
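
    A CPU-side sketch of the filtering idea (not the CUDA implementation): pick extreme points along a handful of directions, build a small inner polyhedron from them, discard the points strictly inside it, and only then compute the full hull.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(5)
    points = rng.normal(size=(100_000, 3))

    # Extreme points along +/- axis and diagonal directions.
    dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                     [1, 1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, -1]], float)
    dirs = np.vstack([dirs, -dirs])
    extreme_idx = np.unique((points @ dirs.T).argmax(axis=0))

    inner_hull = ConvexHull(points[extreme_idx])
    A, b = inner_hull.equations[:, :3], inner_hull.equations[:, 3]
    inside = np.all(points @ A.T + b < -1e-9, axis=1)   # strictly interior points

    survivors = points[~inside]
    print(f"discarded {inside.mean():.1%} of the points before the final hull")
    final_hull = ConvexHull(survivors)
    ```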

  20. Group inverses of M-matrices and their applications

    CERN Document Server

    Kirkland, Stephen J

    2013-01-01

    Group inverses for singular M-matrices are useful tools not only in matrix analysis, but also in the analysis of stochastic processes, graph theory, electrical networks, and demographic models. Group Inverses of M-Matrices and Their Applications highlights the importance and utility of the group inverses of M-matrices in several application areas. After introducing sample problems associated with Leslie matrices and stochastic matrices, the authors develop the basic algebraic and spectral properties of the group inverse of a general matrix. They then derive formulas for derivatives of matrix f

  1. Status of pre-processing of waste electrical and electronic equipment in Germany and its influence on the recovery of gold.

    Science.gov (United States)

    Chancerel, Perrine; Bolland, Til; Rotter, Vera Susanne

    2011-03-01

    Waste electrical and electronic equipment (WEEE) contains gold in low but from an environmental and economic point of view relevant concentration. After collection, WEEE is pre-processed in order to generate appropriate material fractions that are sent to the subsequent end-processing stages (recovery, reuse or disposal). The goal of this research is to quantify the overall recovery rates of pre-processing technologies used in Germany for the reference year 2007. To achieve this goal, facilities operating in Germany were listed and classified according to the technology they apply. Information on their processing capacity was gathered by evaluating statistical databases. Based on a literature review of experimental results for gold recovery rates of different pre-processing technologies, the German overall recovery rate of gold at the pre-processing level was quantified depending on the characteristics of the treated WEEE. The results reveal that - depending on the equipment groups - pre-processing recovery rates of gold of 29 to 61% are achieved in Germany. Some practical recommendations to reduce the losses during pre-processing could be formulated. Defining mass-based recovery targets in the legislation does not set incentives to recover trace elements. Instead, the priorities for recycling could be defined based on other parameters like the environmental impacts of the materials. The implementation of measures to reduce the gold losses would also improve the recovery of several other non-ferrous metals like tin, nickel, and palladium.

  2. De-confusing the THOG problem: the Pythagorean solution.

    Science.gov (United States)

    Griggs, R A; Koenig, C S; Alea, N L

    2001-08-01

    Sources of facilitation for Needham and Amado's (1995) Pythagoras version of Wason's THOG problem were systematically examined in three experiments with 174 participants. Although both the narrative structure and figural notation used in the Pythagoras problem independently led to significant facilitation (40-50% correct), pairing hypothesis generation with either factor or pairing the two factors together was found to be necessary to obtain substantial facilitation (> 50% correct). Needham and Amado's original finding for the complete Pythagoras problem was also replicated. These results are discussed in terms of the "confusion theory" explanation for performance on the standard THOG problem. The possible role of labelling as a de-confusing factor in other versions of the THOG problem and the implications of the present findings for human reasoning are also considered.

  3. Confusion between Odds and Probability, a Pandemic?

    Science.gov (United States)

    Fulton, Lawrence V.; Mendez, Francis A.; Bastian, Nathaniel D.; Musal, R. Muzaffer

    2012-01-01

    This manuscript discusses the common confusion between the terms probability and odds. To emphasize the importance and responsibility of being meticulous in the dissemination of information and knowledge, this manuscript reveals five cases of sources of inaccurate statistical language imbedded in the dissemination of information to the general…
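
    A small worked example of the distinction (illustrative numbers only): probability p and odds p/(1-p) are different quantities and diverge quickly as p grows.

    ```python
    def odds_from_probability(p):
        return p / (1.0 - p)

    def probability_from_odds(o):
        return o / (1.0 + o)

    for p in (0.10, 0.50, 0.75, 0.90):
        o = odds_from_probability(p)
        print(f"p = {p:.2f} -> odds = {o:.2f} -> back to p = {probability_from_odds(o):.2f}")
    ```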

  4. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    Science.gov (United States)

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters and finally classifies them, based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering; segmentation, based on a Gaussian Mixture Model (GMM), to separate the person from the background; masking, to reduce the dimensions of the images; and binarization, to reduce the size of each image. An analysis of the classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.
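
    A rough sketch of the final classification stage only (synthetic "acoustic images" and a generic scikit-learn linear SVM; the system's spatial filtering and GMM segmentation are not reproduced): binarise each image, flatten it, and train the classifier.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(6)
    n, h, w = 400, 16, 16
    images = rng.normal(size=(n, h, w))
    labels = rng.integers(0, 2, size=n)
    images[labels == 1, 4:12, 4:12] += 1.0          # toy person-dependent pattern

    binary = (images > images.mean(axis=(1, 2), keepdims=True)).astype(float)
    X = binary.reshape(n, -1)                       # flattened binary images

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = LinearSVC(dual=False).fit(X_tr, y_tr)
    print("test accuracy:", round(clf.score(X_te, y_te), 3))
    ```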

  5. Inference for High-dimensional Differential Correlation Matrices.

    Science.gov (United States)

    Cai, T Tony; Zhang, Anru

    2016-01-01

    Motivated by differential co-expression analysis in genomics, we consider in this paper estimation and testing of high-dimensional differential correlation matrices. An adaptive thresholding procedure is introduced and theoretical guarantees are given. Minimax rate of convergence is established and the proposed estimator is shown to be adaptively rate-optimal over collections of paired correlation matrices with approximately sparse differences. Simulation results show that the procedure significantly outperforms two other natural methods that are based on separate estimation of the individual correlation matrices. The procedure is also illustrated through an analysis of a breast cancer dataset, which provides evidence at the gene co-expression level that several genes, of which a subset has been previously verified, are associated with the breast cancer. Hypothesis testing on the differential correlation matrices is also considered. A test, which is particularly well suited for testing against sparse alternatives, is introduced. In addition, other related problems, including estimation of a single sparse correlation matrix, estimation of the differential covariance matrices, and estimation of the differential cross-correlation matrices, are also discussed.
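
    A conceptual sketch only (a simple hard threshold on the difference of two sample correlation matrices; the paper's adaptive, entry-dependent thresholds are not implemented here):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    p, n = 50, 200
    X1 = rng.normal(size=(n, p))
    X2 = rng.normal(size=(n, p))
    X2[:, 1] += 0.8 * X2[:, 0]            # one differentially correlated pair

    D = np.corrcoef(X2, rowvar=False) - np.corrcoef(X1, rowvar=False)
    tau = 2.0 * np.sqrt(np.log(p) / n)    # a generic, non-adaptive threshold
    D_hat = np.where(np.abs(D) > tau, D, 0.0)

    i, j = np.unravel_index(np.abs(D_hat).argmax(), D_hat.shape)
    print(f"largest surviving difference at ({i}, {j}): {D_hat[i, j]:.2f}")
    ```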

  6. A simpler method of preprocessing MALDI-TOF MS data for differential biomarker analysis: stem cell and melanoma cancer studies

    Directory of Open Access Journals (Sweden)

    Tong Dong L

    2011-09-01

    Full Text Available Abstract Introduction: Raw spectral data from matrix-assisted laser desorption/ionisation time-of-flight (MALDI-TOF) with MS profiling techniques usually contains complex information not readily providing biological insight into disease. The association of identified features within raw data to a known peptide is extremely difficult. Data preprocessing to remove uncertainty characteristics in the data is normally required before performing any further analysis. This study proposes an alternative yet simple solution to preprocess raw MALDI-TOF-MS data for identification of candidate marker ions. Two in-house MALDI-TOF-MS data sets from two different sample sources (melanoma serum and cord blood plasma) are used in our study. Method: Raw MS spectral profiles were preprocessed using the proposed approach to identify peak regions in the spectra. The preprocessed data was then analysed using bespoke machine learning algorithms for data reduction and ion selection. Using the selected ions, an ANN-based predictive model was constructed to examine the predictive power of these ions for classification. Results: Our model identified 10 candidate marker ions for both data sets. These ion panels achieved over 90% classification accuracy on blind validation data. Receiver operating characteristics analysis was performed and the area under the curve for melanoma and cord blood classifiers was 0.991 and 0.986, respectively. Conclusion: The results suggest that our data preprocessing technique removes unwanted characteristics of the raw data, while preserving the predictive components of the data. Ion identification analysis can be carried out using MALDI-TOF-MS data with the proposed data preprocessing technique coupled with bespoke algorithms for data reduction and ion selection.
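
    A simplified stand-in for the peak-region identification step (a synthetic spectrum and SciPy's generic peak finder; the authors' preprocessing and bespoke selection algorithms are not reproduced):

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(8)
    mz = np.linspace(1000, 10000, 9000)
    spectrum = rng.normal(0.0, 0.5, mz.size) + 2.0          # noisy baseline
    for centre, height in [(2500, 30), (4100, 18), (7600, 25)]:
        spectrum += height * np.exp(-0.5 * ((mz - centre) / 8.0) ** 2)

    peaks, props = find_peaks(spectrum, height=5.0, distance=50)
    print("candidate ion m/z values:", np.round(mz[peaks], 1))
    ```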

  7. Effect of pre-processing on the physico-chemical properties of ...

    African Journals Online (AJOL)

    The findings indicated that the pre-processing treatments produced significant differences (p < 0.05) in protein (1.50 ± 0.18g/100g) and carbohydrate (1.09 ± 0.94g/100g) composition of the baking soda blanched milk sample. The viscosity of the baking soda blanched milk (18.91 ± 3.38cps) was significantly higher than that ...

  8. Orthogonal feature selection method. [For preprocessing of mass spectral data]

    Energy Technology Data Exchange (ETDEWEB)

    Kowalski, B R [Univ. of Washington, Seattle; Bender, C F

    1976-01-01

    A new method of preprocessing spectral data for extraction of molecular structural information is described. This SELECT method generates orthogonal features that are important for classification purposes and that also retain their identity to the original measurements. A brief introduction to chemical pattern recognition is presented. A brief description of the method and an application to mass spectral data analysis follow. (BLM)

  9. Phenomenological mass matrices with a democratic warp

    International Nuclear Information System (INIS)

    Kleppe, A.

    2018-01-01

    Taking into account all available data on the mass sector, we obtain unitary rotation matrices that diagonalize the quark matrices by using a specific parametrization of the Cabibbo-Kobayashi-Maskawa mixing matrix. In this way, we find mass matrices for the up- and down-quark sectors of a specific, symmetric form, with traces of a democratic texture.

  10. Data pre-processing: a case study in predicting student's retention in ...

    African Journals Online (AJOL)

    dataset with features that are ready for the data mining task. The study also proposed a process model and suggestions that can be applied to support more comprehensible tools for the educational domain, which is the end user. Subsequently, the data pre-processing becomes more efficient for predicting student's retention in ...

  11. Diagonalization of the mass matrices

    International Nuclear Information System (INIS)

    Rhee, S.S.

    1984-01-01

    It is possible to make 20 types of 3x3 mass matrices which are Hermitian. We have obtained unitary matrices which could diagonalize each mass matrix. Since the three elements of the mass matrix can be expressed in terms of the three eigenvalues, m_i, we can also express the unitary matrix in terms of m_i. (Author)
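
    The statement that a Hermitian mass matrix is diagonalized by a unitary matrix built from its eigenvectors can be checked numerically; the sketch below uses a random Hermitian matrix and standard NumPy routines, and illustrates only the linear algebra, not the paper's parametrization.

```python
# Numerical check: a Hermitian 3x3 mass matrix M is diagonalized by a unitary U,
# i.e. U^dagger M U = diag(m_1, m_2, m_3).
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
M = (A + A.conj().T) / 2                      # generic Hermitian mass matrix

m, U = np.linalg.eigh(M)                      # eigenvalues m_i, unitary U (columns = eigenvectors)
D = U.conj().T @ M @ U                        # should equal diag(m_1, m_2, m_3)

print("eigenvalues m_i:", np.round(m, 4))
print("off-diagonal residue:", np.max(np.abs(D - np.diag(m))))
print("unitarity residue:", np.max(np.abs(U.conj().T @ U - np.eye(3))))
```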

  12. Parameter diagnostics of phases and phase transition learning by neural networks

    Science.gov (United States)

    Suchsland, Philippe; Wessel, Stefan

    2018-05-01

    We present an analysis of neural network-based machine learning schemes for phases and phase transitions in theoretical condensed matter research, focusing on neural networks with a single hidden layer. Such shallow neural networks were previously found to be efficient in classifying phases and locating phase transitions of various basic model systems. In order to rationalize the emergence of the classification process and for identifying any underlying physical quantities, it is feasible to examine the weight matrices and the convolutional filter kernels that result from the learning process of such shallow networks. Furthermore, we demonstrate how the learning-by-confusing scheme can be used, in combination with a simple threshold-value classification method, to diagnose the learning parameters of neural networks. In particular, we study the classification process of both fully-connected and convolutional neural networks for the two-dimensional Ising model with extended domain wall configurations included in the low-temperature regime. Moreover, we consider the two-dimensional XY model and contrast the performance of the learning-by-confusing scheme and convolutional neural networks trained on bare spin configurations to the case of preprocessed samples with respect to vortex configurations. We discuss these findings in relation to similar recent investigations and possible further applications.

  13. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI.

    Science.gov (United States)

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg) . Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency.
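
    The two graph measures discussed above are available in off-the-shelf libraries; the sketch below builds a binary graph from a thresholded correlation matrix of surrogate ROI time series and reports global and local efficiency with networkx. The threshold and the data are illustrative assumptions, not part of the study's pipeline.

```python
# Global and local efficiency of a thresholded functional-connectivity graph.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n_rois, n_vols = 30, 150
ts = rng.normal(size=(n_vols, n_rois))            # surrogate ROI time series
corr = np.corrcoef(ts, rowvar=False)

adjacency = (np.abs(corr) > 0.2).astype(int)      # binarize at an illustrative threshold
np.fill_diagonal(adjacency, 0)
G = nx.from_numpy_array(adjacency)

print("global efficiency:", round(nx.global_efficiency(G), 3))
print("local efficiency :", round(nx.local_efficiency(G), 3))
```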

  14. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    Directory of Open Access Journals (Sweden)

    Fatma Gargouri

    2018-02-01

    Full Text Available Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step) had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency.

  15. The Influence of Preprocessing Steps on Graph Theory Measures Derived from Resting State fMRI

    Science.gov (United States)

    Gargouri, Fatma; Kallel, Fathi; Delphine, Sebastien; Ben Hamida, Ahmed; Lehéricy, Stéphane; Valabregue, Romain

    2018-01-01

    Resting state functional MRI (rs-fMRI) is an imaging technique that allows the spontaneous activity of the brain to be measured. Measures of functional connectivity highly depend on the quality of the BOLD signal data processing. In this study, our aim was to study the influence of preprocessing steps and their order of application on small-world topology and their efficiency in resting state fMRI data analysis using graph theory. We applied the most standard preprocessing steps: slice-timing, realign, smoothing, filtering, and the tCompCor method. In particular, we were interested in how preprocessing can retain the small-world economic properties and how to maximize the local and global efficiency of a network while minimizing the cost. Tests that we conducted in 54 healthy subjects showed that the choice and ordering of preprocessing steps impacted the graph measures. We found that the csr (where we applied realignment, smoothing, and tCompCor as a final step) and the scr (where we applied realignment, tCompCor and smoothing as a final step) strategies had the highest mean values of global efficiency (eg). Furthermore, we found that the fscr strategy (where we applied realignment, tCompCor, smoothing, and filtering as a final step), had the highest mean local efficiency (el) values. These results confirm that the graph theory measures of functional connectivity depend on the ordering of the processing steps, with the best results being obtained using smoothing and tCompCor as the final steps for global efficiency with additional filtering for local efficiency. PMID:29497372

  16. Annual modulation of the galactic binary confusion noise background and LISA data analysis

    International Nuclear Information System (INIS)

    Seto, Naoki

    2004-01-01

    We study the anisotropies of the galactic confusion noise background and its effects on LISA data analysis. LISA has two data streams of gravitational wave signals relevant for the low frequency regime. Because of the anisotropies of the background, the matrix for their confusion noises has off-diagonal components and depends strongly on the orientation of the detector plane. We find that the sky-averaged confusion noise level √(S(f)) could change by a factor of 2 in 3 months and would be minimum when the orbital position of LISA is around either the spring or autumn equinox

  17. Is It Kingdom or Domains? Confusion & Solutions

    Science.gov (United States)

    Blackwell, Will H.

    2004-01-01

    A confusion regarding the number of kingdoms that should be recognized and the inclusion of domains in the traditional kingdom-based classification found in the higher levels of classification of organisms is presented. Hence, it is important to keep in mind future modifications that may occur in the classification systems and to recognize…

  18. A Study on Mode Confusions in Adaptive Cruise Control Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Dae Ryong; Yang, Ji Hyun; Lee, Sang Hun [Kookmin University, Seoul (Korea, Republic of)

    2015-05-15

    Recent development in science and technology has enabled vehicles to be equipped with advanced autonomous functions. ADAS (Advanced Driver Assistance Systems) are examples of such advanced autonomous systems added. Advanced systems have several operational modes and it has been observed that drivers could be unaware of the mode they are in during vehicle operation, which can be a contributing factor of traffic accidents. In this study, possible mode confusions in a simulated environment when vehicles are equipped with an adaptive cruise control system were investigated. The mental model of the system was designed and verified using the formal analysis method. Then, the user interface was designed on the basis of those of the current cruise control systems. A set of human-in-loop experiments was conducted to observe possible mode confusions and redesign the user interface to reduce them. In conclusion, the clarity and transparency of the user interface was proved to be as important as the correctness and compactness of the mental model when reducing mode confusions.

  19. A Study on Mode Confusions in Adaptive Cruise Control Systems

    International Nuclear Information System (INIS)

    Ahn, Dae Ryong; Yang, Ji Hyun; Lee, Sang Hun

    2015-01-01

    Recent development in science and technology has enabled vehicles to be equipped with advanced autonomous functions. ADAS (Advanced Driver Assistance Systems) are examples of such advanced autonomous systems added. Advanced systems have several operational modes and it has been observed that drivers could be unaware of the mode they are in during vehicle operation, which can be a contributing factor of traffic accidents. In this study, possible mode confusions in a simulated environment when vehicles are equipped with an adaptive cruise control system were investigated. The mental model of the system was designed and verified using the formal analysis method. Then, the user interface was designed on the basis of those of the current cruise control systems. A set of human-in-loop experiments was conducted to observe possible mode confusions and redesign the user interface to reduce them. In conclusion, the clarity and transparency of the user interface was proved to be as important as the correctness and compactness of the mental model when reducing mode confusions

  20. The construction of factorized S-matrices

    International Nuclear Information System (INIS)

    Chudnovsky, D.V.

    1981-01-01

    We study the relationships between factorized S-matrices given as representations of the Zamolodchikov algebra and exactly solvable models constructed using the Baxter method. Several new examples of symmetric and non-symmetric factorized S-matrices are proposed. (orig.)

  1. Learning and Generalisation in Neural Networks with Local Preprocessing

    OpenAIRE

    Kutsia, Merab

    2007-01-01

    We study learning and generalisation ability of a specific two-layer feed-forward neural network and compare its properties to that of a simple perceptron. The input patterns are mapped nonlinearly onto a hidden layer, much larger than the input layer, and this mapping is either fixed or may result from an unsupervised learning process. Such preprocessing of initially uncorrelated random patterns results in the correlated patterns in the hidden layer. The hidden-to-output mapping of the net...

  2. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    Energy Technology Data Exchange (ETDEWEB)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-06-01

    Forty-one patients (25 male, 16 female) were studied by Radionuclide Cardangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. Both preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior by the consensus of two independent observers in 65 of 66 studies (98%) using Fourier filtered data. The LVEF differed between the two processes by greater than .05 in 17 of 66 studies (26%) including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smooth preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technologic advance in the development of the RNCA study.
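
    The contrast between the two preprocessing options can be illustrated on a synthetic time-activity curve: a nine-point moving-average smooth versus a low-pass Fourier filter that keeps only the first few harmonics. This is a schematic sketch with assumed signal and noise levels, not a reconstruction of the clinical processing chain.

```python
# Nine-point smoothing versus low-pass Fourier filtering of a noisy cycle.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 64, endpoint=False)             # one cardiac cycle sampled in 64 frames
clean = 1.0 + 0.4 * np.cos(2 * np.pi * t)              # idealised volume curve
noisy = clean + 0.05 * rng.normal(size=t.size)

nine_point = np.convolve(noisy, np.ones(9) / 9, mode="same")   # nine-point smooth

spectrum = np.fft.rfft(noisy)
spectrum[5:] = 0.0                                     # keep DC plus the first four harmonics
fourier = np.fft.irfft(spectrum, n=noisy.size)

for name, filtered in (("nine-point smooth", nine_point), ("Fourier filter", fourier)):
    rms = float(np.sqrt(np.mean((filtered - clean) ** 2)))
    print(f"{name}: RMS error vs. noise-free curve = {rms:.4f}")
```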

  3. A clinical evaluation of the RNCA study using Fourier filtering as a preprocessing method

    International Nuclear Information System (INIS)

    Robeson, W.; Alcan, K.E.; Graham, M.C.; Palestro, C.; Oliver, F.H.; Benua, R.S.

    1984-01-01

    Forty-one patients (25 male, 16 female) were studied by Radionuclide Cardangiography (RNCA) in our institution. There were 42 rest studies and 24 stress studies (66 studies total). Sixteen patients were normal, 15 had ASHD, seven had a cardiomyopathy, and three had left-sided valvular regurgitation. Each study was preprocessed using both the standard nine-point smoothing method and Fourier filtering. Amplitude and phase images were also generated. Both preprocessing methods were compared with respect to image quality, border definition, reliability and reproducibility of the LVEF, and cine wall motion interpretation. Image quality and border definition were judged superior by the consensus of two independent observers in 65 of 66 studies (98%) using Fourier filtered data. The LVEF differed between the two processes by greater than .05 in 17 of 66 studies (26%) including five studies in which the LVEF could not be determined using nine-point smoothed data. LV wall motion was normal by both techniques in all control patients by cine analysis. However, cine wall motion analysis using Fourier filtered data demonstrated additional abnormalities in 17 of 25 studies (68%) in the ASHD group, including three uninterpretable studies using nine-point smoothed data. In the cardiomyopathy/valvular heart disease group, ten of 18 studies (56%) had additional wall motion abnormalities using Fourier filtered data (including four uninterpretable studies using nine-point smoothed data). We conclude that Fourier filtering is superior to the nine-point smooth preprocessing method now in general use in terms of image quality, border definition, generation of an LVEF, and cine wall motion analysis. The advent of the array processor makes routine preprocessing by Fourier filtering a feasible technologic advance in the development of the RNCA study

  4. Evaluation of the robustness of the preprocessing technique improving reversible compressibility of CT images: Tested on various CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Chang Ho; Kim, Bohyoung; Gu, Bon Seung; Lee, Jong Min [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of); Kim, Kil Joong [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Department of Radiation Applied Life Science, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul 110-799 (Korea, Republic of); Lee, Kyoung Ho [Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707, South Korea and Institute of Radiation Medicine, Seoul National University Medical Research Center, and Clinical Research Institute, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 110-744 (Korea, Republic of); Kim, Tae Ki [Medical Information Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 300 Gumi-ro, Bundang-gu, Seongnam-si, Gyeonggi-do 463-707 (Korea, Republic of)

    2013-10-15

    Purpose: To modify the preprocessing technique, which was previously proposed, improving compressibility of computed tomography (CT) images to cover the diversity of three dimensional configurations of different body parts and to evaluate the robustness of the technique in terms of segmentation correctness and increase in reversible compression ratio (CR) for various CT examinations. Methods: This study had institutional review board approval with waiver of informed patient consent. A preprocessing technique was previously proposed to improve the compressibility of CT images by replacing pixel values outside the body region with a constant value resulting in maximizing data redundancy. Since the technique was developed aiming at only chest CT images, the authors modified the segmentation method to cover the diversity of three dimensional configurations of different body parts. The modified version was evaluated as follows. In randomly selected 368 CT examinations (352 787 images), each image was preprocessed by using the modified preprocessing technique. Radiologists visually confirmed whether the segmented region covers the body region or not. The images with and without the preprocessing were reversibly compressed using Joint Photographic Experts Group (JPEG), JPEG2000 two-dimensional (2D), and JPEG2000 three-dimensional (3D) compressions. The percentage increase in CR per examination (CR_I) was measured. Results: The rate of correct segmentation was 100.0% (95% CI: 99.9%, 100.0%) for all the examinations. The medians of CR_I were 26.1% (95% CI: 24.9%, 27.1%), 40.2% (38.5%, 41.1%), and 34.5% (32.7%, 36.2%) in JPEG, JPEG2000 2D, and JPEG2000 3D, respectively. Conclusions: In various CT examinations, the modified preprocessing technique can increase the CR by 25% or more without concern about degradation of diagnostic information.
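
    The core idea (replace every pixel outside the segmented body region with a constant so that lossless compressors see more redundancy) can be demonstrated on a synthetic slice; the threshold, morphology, and use of zlib below are illustrative assumptions rather than the study's JPEG/JPEG2000 setup.

```python
# Segment the "body", set everything outside it to a constant, compare compressed sizes.
import numpy as np
from scipy import ndimage
import zlib

rng = np.random.default_rng(6)
yy, xx = np.mgrid[0:256, 0:256]
body = ((yy - 128) ** 2 / 90.0 ** 2 + (xx - 128) ** 2 / 110.0 ** 2) <= 1.0   # elliptical "body"
slice_hu = np.where(body,
                    40 + 10 * rng.normal(size=body.shape),       # soft tissue
                    -1000 + 30 * rng.normal(size=body.shape))    # noisy air outside the patient
slice_hu = slice_hu.astype(np.int16)

mask = slice_hu > -400                                           # air/tissue threshold (assumed)
mask = ndimage.binary_fill_holes(ndimage.binary_opening(mask, iterations=2))
preprocessed = np.where(mask, slice_hu, np.int16(-1000))         # constant value outside the body

for name, img in (("original", slice_hu), ("preprocessed", preprocessed)):
    print(f"{name}: {len(zlib.compress(img.tobytes(), level=9))} compressed bytes")
```

    With the noise outside the body replaced by a constant, the preprocessed slice compresses markedly better, which is the effect the CR_I figures above quantify.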

  5. Matrices in Engineering Problems

    CERN Document Server

    Tobias, Marvin

    2011-01-01

    This book is intended as an undergraduate text introducing matrix methods as they relate to engineering problems. It begins with the fundamentals of mathematics of matrices and determinants. Matrix inversion is discussed, with an introduction of the well known reduction methods. Equation sets are viewed as vector transformations, and the conditions of their solvability are explored. Orthogonal matrices are introduced with examples showing application to many problems requiring three dimensional thinking. The angular velocity matrix is shown to emerge from the differentiation of the 3-D orthogo

  6. Development and integration of block operations for data invariant automation of digital preprocessing and analysis of biological and biomedical Raman spectra.

    Science.gov (United States)

    Schulze, H Georg; Turner, Robin F B

    2015-06-01

    High-throughput information extraction from large numbers of Raman spectra is becoming an increasingly taxing problem due to the proliferation of new applications enabled using advances in instrumentation. Fortunately, in many of these applications, the entire process can be automated, yielding reproducibly good results with significant time and cost savings. Information extraction consists of two stages, preprocessing and analysis. We focus here on the preprocessing stage, which typically involves several steps, such as calibration, background subtraction, baseline flattening, artifact removal, smoothing, and so on, before the resulting spectra can be further analyzed. Because the results of some of these steps can affect the performance of subsequent ones, attention must be given to the sequencing of steps, the compatibility of these sequences, and the propensity of each step to generate spectral distortions. We outline here important considerations to effect full automation of Raman spectral preprocessing: what is considered full automation; putative general principles to effect full automation; the proper sequencing of processing and analysis steps; conflicts and circularities arising from sequencing; and the need for, and approaches to, preprocessing quality control. These considerations are discussed and illustrated with biological and biomedical examples reflecting both successful and faulty preprocessing.

  7. A Brief Historical Introduction to Matrices and Their Applications

    Science.gov (United States)

    Debnath, L.

    2014-01-01

    This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…

  8. Confusion in the Periodic Table of the Elements.

    Science.gov (United States)

    Fernelius, W. C.; Powell, W. H.

    1982-01-01

    Discusses long (expanded), short (condensed), and pyramidal periodic table formats and documents events leading to a periodic table in which subgroups (families) are designated with the letters A and B, suggesting that this format is confusing for those consulting the table. (JN)

  9. Hypercyclic Abelian Semigroups of Matrices on Cn

    International Nuclear Information System (INIS)

    Ayadi, Adlene; Marzougui, Habib

    2010-07-01

    We give a complete characterization of the existence of a dense orbit for any abelian semigroup of matrices on C^n. For finitely generated semigroups, this characterization is explicit and is used to determine the minimal number of matrices in normal form over C which form a hypercyclic abelian semigroup on C^n. In particular, we show that no abelian semigroup generated by n matrices on C^n can be hypercyclic. (author)

  10. Generalized Perron--Frobenius Theorem for Nonsquare Matrices

    OpenAIRE

    Avin, Chen; Borokhovich, Michael; Haddad, Yoram; Kantor, Erez; Lotker, Zvi; Parter, Merav; Peleg, David

    2013-01-01

    The celebrated Perron--Frobenius (PF) theorem is stated for irreducible nonnegative square matrices, and provides a simple characterization of their eigenvectors and eigenvalues. The importance of this theorem stems from the fact that eigenvalue problems on such matrices arise in many fields of science and engineering, including dynamical systems theory, economics, statistics and optimization. However, many real-life scenarios give rise to nonsquare matrices. A natural question is whether the...

  11. Characteristics of Patients Who Report Confusion After Reading Their Primary Care Clinic Notes Online.

    Science.gov (United States)

    Root, Joseph; Oster, Natalia V; Jackson, Sara L; Mejilla, Roanne; Walker, Jan; Elmore, Joann G

    2016-01-01

    Patient access to online electronic medical records (EMRs) is increasing and may offer benefits to patients. However, the inherent complexity of medicine may cause confusion. We elucidate characteristics and health behaviors of patients who report confusion after reading their doctors' notes online. We analyzed data from 4,528 patients in Boston, MA, central Pennsylvania, and Seattle, WA, who were granted online access to their primary care doctors' clinic notes and who viewed at least one note during the 1-year intervention. Three percent of patients reported confusion after reading their visit notes. These patients were more likely to be at least 70 years of age (p …), … education (p …), … reading visit notes (relative risk [RR] 4.83; confidence interval [CI] 3.17, 7.36) compared to patients who were not confused. In adjusted analyses, they were less likely to report feeling more in control of their health (RR 0.42; CI 0.25, 0.71), remembering their care plan (RR 0.26; CI 0.17, 0.42), and understanding their medical conditions (RR 0.32; CI 0.19, 0.54) as a result of reading their doctors' notes compared to patients who were not confused. Patients who were confused by reading their doctors' notes were less likely to report benefits in health behaviors. Understanding this small subset of patients is a critical step in reducing gaps in provider-patient communication and in efforts to tailor educational approaches for patients.

  12. Contour extraction of echocardiographic images based on pre-processing

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana [Department of Multimedia, Faculty of Computer Science and Information Technology, Department of Computer and Communication Systems Engineering, Faculty of Engineering University Putra Malaysia 43400 Serdang, Selangor (Malaysia); Zamrin, D M [Department of Surgery, Faculty of Medicine, National University of Malaysia, 56000 Cheras, Kuala Lumpur (Malaysia); Saripan, M Iqbal

    2011-02-15

    In this work we present a technique to extract the heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection to reduce heavy noise and get better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after implementing these techniques we can get legible detection of heart boundaries and valve movement by traditional edge detection methods.
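
    A hedged sketch of such a chain (median denoising, morphological clean-up, adaptive contrast enhancement, then a traditional edge detector) on a synthetic noisy image, using SciPy and scikit-image; the filter sizes and the test image are assumptions, not the authors' settings.

```python
# Denoise -> morphological opening -> contrast enhancement -> Canny edges.
import numpy as np
from scipy import ndimage
from skimage import morphology, exposure, feature

rng = np.random.default_rng(7)
rr, cc = np.mgrid[0:128, 0:128]
img = np.zeros((128, 128))
img[((rr - 64) ** 2 + (cc - 64) ** 2) < 30 ** 2] = 0.6            # bright circular "chamber"
noisy = np.clip(img + 0.25 * rng.normal(size=img.shape), 0.0, 1.0)

denoised = ndimage.median_filter(noisy, size=3)                   # heavy-noise reduction
opened = morphology.opening(denoised, morphology.disk(2))         # morphological clean-up
contrasted = exposure.equalize_adapthist(opened)                  # enhance low contrast
edges = feature.canny(contrasted, sigma=2.0)                      # traditional edge detection

print("edge pixels found:", int(edges.sum()))
```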

  13. Contour extraction of echocardiographic images based on pre-processing

    International Nuclear Information System (INIS)

    Hussein, Zinah Rajab; Rahmat, Rahmita Wirza; Abdullah, Lili Nurliyana; Zamrin, D M; Saripan, M Iqbal

    2011-01-01

    In this work we present a technique to extract the heart contours from noisy echocardiograph images. Our technique is based on improving the image before applying contour detection to reduce heavy noise and get better image quality. To do this, we combine several pre-processing techniques (filtering, morphological operations, and contrast adjustment) to avoid unclear edges and enhance the low contrast of echocardiograph images; after implementing these techniques we can get legible detection of heart boundaries and valve movement by traditional edge detection methods.

  14. Formal matrices

    CERN Document Server

    Krylov, Piotr

    2017-01-01

    This monograph is a comprehensive account of formal matrices, examining homological properties of modules over formal matrix rings and summarising the interplay between Morita contexts and K theory. While various special types of formal matrix rings have been studied for a long time from several points of view and appear in various textbooks, for instance to examine equivalences of module categories and to illustrate rings with one-sided non-symmetric properties, this particular class of rings has, so far, not been treated systematically. Exploring formal matrix rings of order 2 and introducing the notion of the determinant of a formal matrix over a commutative ring, this monograph further covers the Grothendieck and Whitehead groups of rings. Graduate students and researchers interested in ring theory, module theory and operator algebras will find this book particularly valuable. Containing numerous examples, Formal Matrices is a largely self-contained and accessible introduction to the topic, assuming a sol...

  15. ESTIMATION OF FUNCTIONALS OF SPARSE COVARIANCE MATRICES.

    Science.gov (United States)

    Fan, Jianqing; Rigollet, Philippe; Wang, Weichen

    High-dimensional statistical tests often ignore correlations to gain simplicity and stability, leading to null distributions that depend on functionals of correlation matrices such as their Frobenius norm and other ℓ_r norms. Motivated by the computation of critical values of such tests, we investigate the difficulty of estimating functionals of sparse correlation matrices. Specifically, we show that simple plug-in procedures based on thresholded estimators of correlation matrices are sparsity-adaptive and minimax optimal over a large class of correlation matrices. Akin to previous results on functional estimation, the minimax rates exhibit an elbow phenomenon. Our results are further illustrated in simulated data as well as an empirical study of data arising in financial econometrics.

  16. Parallel preprocessing in a nuclear data acquisition system

    International Nuclear Information System (INIS)

    Pichot, G.; Auriol, E.; Lemarchand, G.; Millaud, J.

    1977-01-01

    The appearance of microprocessors and large memory chips has somewhat modified the spectrum of tools usable by the data acquisition system designer. This is particularly true in the nuclear research field, where the data flow has been continuously growing as a consequence of the increasing capabilities of new detectors. This paper deals with the insertion, between a data acquisition system and a computer, of a preprocessing structure based on microprocessors and large-capacity high-speed memories. The results show a significant improvement in several aspects of the operation of the system, with returns paying back the investment in 18 months

  17. TargetSearch--a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data.

    Science.gov (United States)

    Cuadros-Inostroza, Alvaro; Caldana, Camila; Redestig, Henning; Kusano, Miyako; Lisec, Jan; Peña-Cortés, Hugo; Willmitzer, Lothar; Hannah, Matthew A

    2009-12-16

    Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.

  18. Preprocessing of 18F-DMFP-PET Data Based on Hidden Markov Random Fields and the Gaussian Distribution

    Directory of Open Access Journals (Sweden)

    Fermín Segovia

    2017-10-01

    Full Text Available 18F-DMFP-PET is an emerging neuroimaging modality used to diagnose Parkinson's disease (PD) that allows us to examine postsynaptic dopamine D2/3 receptors. Like other neuroimaging modalities used for PD diagnosis, most of the total intensity of 18F-DMFP-PET images is concentrated in the striatum. However, other regions can also be useful for diagnostic purposes. An appropriate delimitation of the regions of interest contained in 18F-DMFP-PET data is crucial to improve the automatic diagnosis of PD. In this manuscript we propose a novel methodology to preprocess 18F-DMFP-PET data that improves the accuracy of computer aided diagnosis systems for PD. First, the data were segmented using an algorithm based on Hidden Markov Random Field. As a result, each neuroimage was divided into 4 maps according to the intensity and the neighborhood of the voxels. The maps were then individually normalized so that the shape of their histograms could be modeled by a Gaussian distribution with equal parameters for all the neuroimages. This approach was evaluated using a dataset with neuroimaging data from 87 parkinsonian patients. After these preprocessing steps, a Support Vector Machine classifier was used to separate idiopathic and non-idiopathic PD. Data preprocessed by the proposed method provided higher accuracy results than the ones preprocessed with previous approaches.

  19. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects on radar images for various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimension. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.

  20. THE ALGORITHM AND PROGRAM OF M-MATRICES SEARCH AND STUDY

    Directory of Open Access Journals (Sweden)

    Y. N. Balonin

    2013-05-01

    Full Text Available The algorithm and software for the search and study of orthogonal basis matrices – minimax matrices (M-matrices) – are considered. The algorithm scheme is shown, comments on calculation blocks are given, and the interface of the MMatrix software system developed with participation of the authors is explained. The results of the universal algorithm are presented as Hadamard matrices, Belevitch matrices (C-matrices, conference matrices) and matrices of even and odd orders complementary and closely related to those by their properties, in particular the matrix of the 22nd order for which there is no C-matrix. Examples of portraits for alternative matrices of the 255th and the 257th orders are given, corresponding to the sequences of Mersenne and Fermat numbers. A new way to obtain Hadamard matrices is explained, different from the previously known procedures based on iterative processes and calculations of Lagrange symbols, with theoretical and practical meaning.
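
    Independently of the search algorithm described above, the defining property of a Hadamard matrix, H Hᵀ = nI, is easy to verify for the classical Sylvester construction shipped with SciPy; the check below is only a sanity illustration of that property, not the authors' procedure.

```python
# Verify the Hadamard orthogonality property for Sylvester-type matrices.
import numpy as np
from scipy.linalg import hadamard

for n in (2, 4, 8, 16):
    H = hadamard(n)                               # entries are +1 / -1
    assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))
    print(f"order {n}: H H^T = {n} I holds")
```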

  1. The Legal Dimension of RTI--Confusion Confirmed: A Response to Walker and Daves

    Science.gov (United States)

    Zirkel, Perry A.

    2012-01-01

    In this issue of "Learning Disability Quarterly" (LDQ), Professors Daves and Walker reply to my earlier LDQ article on confusion in the cases and commentary about the legal dimension of RTI. In this brief rejoinder, I show that their reply confirms rather than resolves the confusion in their original commentary in 2010. This persistent…

  2. Annual Percentage Rate and Annual Effective Rate: Resolving Confusion in Intermediate Accounting Textbooks

    Science.gov (United States)

    Vicknair, David; Wright, Jeffrey

    2015-01-01

    Evidence of confusion in intermediate accounting textbooks regarding the annual percentage rate (APR) and annual effective rate (AER) is presented. The APR and AER are briefly discussed in the context of a note payable and correct formulas for computing each are provided. Representative examples of the types of confusion that we found are presented…
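
    For reference, the standard textbook relation between the two rates is AER = (1 + APR/m)^m - 1, where m is the number of compounding periods per year; the snippet below evaluates it for an assumed 12% APR compounded monthly.

```python
# Convert a nominal annual percentage rate (APR) to the annual effective rate (AER).
def aer_from_apr(apr: float, m: int) -> float:
    """AER = (1 + APR/m)**m - 1 for m compounding periods per year."""
    return (1.0 + apr / m) ** m - 1.0

apr = 0.12                       # 12% nominal, compounded monthly (illustrative numbers)
print(f"AER for 12% APR, monthly compounding: {aer_from_apr(apr, 12):.4%}")   # about 12.68%
```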

  3. Protein from preprocessed waste activated sludge as a nutritional supplement in chicken feed.

    Science.gov (United States)

    Chirwa, Evans M N; Lebitso, Moses T

    2014-01-01

    Five groups of broiler chickens were raised on feed containing varying substitutions of single cell protein from preprocessed waste activated sludge (pWAS) in varying compositions of 0:100, 25:75, 50:50, 75:25, and 100:0 pWAS: fishmeal by mass. Forty chickens per batch were evaluated for growth rate, mortality rate, and feed conversion efficiency (ηє). The initial mass gain rate, mortality rate, initial and operational cost analyses showed that protein from pWAS could successfully replace the commercial feed supplements with a significant cost saving without adversely affecting the health of the birds. The chickens raised on preprocessed WAS weighed 19% more than those raised on fishmeal protein supplement over a 45 day test period. Growing chickens on pWAS translated into a 46% cost saving due to the fast growth rate and minimal death losses before maturity.

  4. The modified Gauss diagonalization of polynomial matrices

    International Nuclear Information System (INIS)

    Saeed, K.

    1982-10-01

    The Gauss algorithm for diagonalization of constant matrices is modified for application to polynomial matrices. Due to this modification the diagonal elements become pure polynomials rather than rational functions. (author)

  5. Quantum Hilbert matrices and orthogonal polynomials

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Berg, Christian

    2009-01-01

    Using the notion of quantum integers associated with a complex number q≠0 , we define the quantum Hilbert matrix and various extensions. They are Hankel matrices corresponding to certain little q -Jacobi polynomials when |q|<1 , and for the special value they are closely related to Hankel matrice...

  6. Discrete canonical transforms that are Hadamard matrices

    International Nuclear Information System (INIS)

    Healy, John J; Wolf, Kurt Bernardo

    2011-01-01

    The group Sp(2,R) of symplectic linear canonical transformations has an integral kernel which has quadratic and linear phases, and which is realized by the geometric paraxial optical model. The discrete counterpart of this model is a finite Hamiltonian system that acts on N-point signals through N x N matrices whose elements also have a constant absolute value, although they do not form a representation of that group. Those matrices that are also unitary are Hadamard matrices. We investigate the manifolds of these N x N matrices under the Sp(2,R) equivalence imposed by the model, and find them to be on two-sided cosets. By means of an algorithm we determine representatives that lead to collections of mutually unbiased bases.

  7. Psychometric properties of the Flemish translation of the NEECHAM Confusion Scale

    Directory of Open Access Journals (Sweden)

    Abraham Ivo L

    2005-03-01

    Full Text Available Abstract Background Determination of a patient's cognitive status by use of a valid and reliable screening instrument is of major importance as early recognition and accurate diagnosis of delirium is necessary for effective management. This study determined the reliability, validity and diagnostic value of the Flemish translation of the NEECHAM Confusion Scale. Methods A sample of 54 elderly hip fracture patients with a mean age of 80.9 years (SD = 7.85) were included. To test the psychometric properties of the NEECHAM Confusion Scale, performance on the NEECHAM was compared to the Confusion Assessment Method (CAM) and the Mini-Mental State Examination (MMSE), by using aggregated data based on 5 data collection measurement points (repeated measures). The CAM and MMSE served as gold standards. Results The alpha coefficient for the total NEECHAM score was high (0.88). Principal components analysis yielded a two-component solution accounting for 70.8% of the total variance. High correlations were found between the total NEECHAM scores and total MMSE (0.75) and total CAM severity scores (-0.73), respectively. Diagnostic values using the CAM algorithm as gold standard showed 76.9% sensitivity, 64.6% specificity, 13.5% positive and 97.5% negative predictive values, respectively. Conclusion This validation of the Flemish version of the NEECHAM Confusion Scale adds to previous evidence suggesting that this scale holds promise as a valuable screening instrument for delirium in clinical practice. Further validation studies in diverse clinical populations, however, are needed.

  8. Abel-grassmann's groupoids of modulo matrices

    International Nuclear Information System (INIS)

    Javaid, Q.; Awan, M.D.; Naqvi, S.H.A.

    2016-01-01

    The binary operation of usual addition is associative for all matrices over R. However, binary operations of addition on matrices over Z_n giving the nonassociative structures of AG-groupoids and AG-groups are defined and investigated here. It is shown that both these structures exist for every integer n ≥ 3. Various properties of these structures are explored, such as: (i) Every AG-groupoid of matrices over Z_n is a transitively commutative AG-groupoid and is a cancellative AG-groupoid if n is prime. (ii) Every AG-groupoid of matrices over Z_n of Type-II is a T^3-AG-groupoid. (iii) An AG-groupoid of matrices over Z_n, G_nAG(t,u), is an AG-band if t+u=1 (mod n). (author)

  9. Confusion in practice: on nuclear safety responsibility subject of our nation

    International Nuclear Information System (INIS)

    Wang Jia

    2014-01-01

    The nuclear safety responsibility subject seems an unquestionable issue, but when I took part in the CNNC research team on 'nuclear law legislation', I found that there is confusion in the understanding and application of this concept. The paper focuses on the content of nuclear safety responsibility, using legal and practical methods to draw out the differences from related and frequently confused concepts, and on this basis analyzes the situation of the nuclear safety responsibility subject of our nation. In conclusion, I give suggestions on who shall be the nuclear safety responsibility subject. (author)

  10. Free probability and random matrices

    CERN Document Server

    Mingo, James A

    2017-01-01

    This volume opens the world of free probability to a wide variety of readers. From its roots in the theory of operator algebras, free probability has intertwined with non-crossing partitions, random matrices, applications in wireless communications, representation theory of large groups, quantum groups, the invariant subspace problem, large deviations, subfactors, and beyond. This book puts a special emphasis on the relation of free probability to random matrices, but also touches upon the operator algebraic, combinatorial, and analytic aspects of the theory. The book serves as a combination textbook/research monograph, with self-contained chapters, exercises scattered throughout the text, and coverage of important ongoing progress of the theory. It will appeal to graduate students and all mathematicians interested in random matrices and free probability from the point of view of operator algebras, combinatorics, analytic functions, or applications in engineering and statistical physics.

  11. Number of generations related to coupling constants by confusion

    International Nuclear Information System (INIS)

    Bennett, D.L.; Nielsen, H.B.

    1987-01-01

    In the context of random dynamics, the mechanism of confusion is used to obtain a relation between the number of generations and standard model coupling constants. Preliminary results predict the existence of four generations. (orig.)

  12. Quantum matrices in two dimensions

    International Nuclear Information System (INIS)

    Ewen, H.; Ogievetsky, O.; Wess, J.

    1991-01-01

    Quantum matrices in two dimensions, admitting left and right quantum spaces, are classified: they fall into two families, the 2-parametric family GL_{p,q}(2) and a 1-parametric family GL_{αJ}(2). Phenomena previously found for GL_{p,q}(2) hold in this general situation: (a) powers of quantum matrices are again quantum and (b) entries of the logarithm of a two-dimensional quantum matrix form a Lie algebra. (orig.)

  13. [Study of near infrared spectral preprocessing and wavelength selection methods for endometrial cancer tissue].

    Science.gov (United States)

    Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong

    2010-04-01

    Near infrared spectroscopy was applied to measure tissue slices of endometrial tissues and collect their spectra. A total of 154 spectra were obtained from 154 samples. The numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra are composed of many variables and contain interference such as instrument errors and physical effects including particle size and light scatter. In order to reduce these influences, the original spectra should be treated with different spectral preprocessing methods to compress variables and extract useful information, so spectral preprocessing and wavelength selection play an important role in the near infrared spectroscopy technique. In the present paper the raw spectra were processed using various preprocessing methods including first derivative, multiplicative scatter correction, the Savitzky-Golay first derivative algorithm, standard normal variate, smoothing, and moving-window median. Standard deviation was used to select the optimal spectral region of 4 000-6 000 cm⁻¹. Then principal component analysis was used for classification. Principal component analysis results showed that the three types of samples could be discriminated completely and the accuracy almost reached 100%. This study demonstrated that near infrared spectroscopy combined with chemometric methods could be a fast, efficient, and novel means to diagnose cancer. The proposed methods would be a promising and significant diagnostic technique for early stage cancer.
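
    Two of the preprocessing steps named above (standard normal variate and a Savitzky-Golay first derivative) followed by principal component analysis can be sketched on synthetic spectra; the wavenumber grid, noise model, and window settings below are assumptions, not the study's parameters.

```python
# SNV + Savitzky-Golay first derivative + PCA on synthetic NIR-like spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
wavenumbers = np.linspace(4000, 6000, 400)
n_samples = 60
band = np.exp(-0.5 * ((wavenumbers - 5000) / 200.0) ** 2)             # one absorption band
spectra = (1.0 + 0.3 * rng.random((n_samples, 1))) * band             # multiplicative scatter
spectra += 0.02 * rng.normal(size=(n_samples, wavenumbers.size))      # measurement noise

# Standard normal variate: centre and scale each spectrum individually.
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)
# Savitzky-Golay first derivative along the wavenumber axis.
deriv = savgol_filter(snv, window_length=15, polyorder=2, deriv=1, axis=1)

scores = PCA(n_components=3).fit_transform(deriv)
print("first three PC scores of sample 0:", np.round(scores[0], 3))
```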

  14. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    Science.gov (United States)

    2009-01-01

    Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data. PMID:20015393

  15. TargetSearch - a Bioconductor package for the efficient preprocessing of GC-MS metabolite profiling data

    Directory of Open Access Journals (Sweden)

    Lisec Jan

    2009-12-01

    Full Text Available Abstract Background Metabolite profiling, the simultaneous quantification of multiple metabolites in an experiment, is becoming increasingly popular, particularly with the rise of systems-level biology. The workhorse in this field is gas-chromatography hyphenated with mass spectrometry (GC-MS). The high-throughput of this technology coupled with a demand for large experiments has led to data pre-processing, i.e. the quantification of metabolites across samples, becoming a major bottleneck. Existing software has several limitations, including restricted maximum sample size, systematic errors and low flexibility. However, the biggest limitation is that the resulting data usually require extensive hand-curation, which is subjective and can typically take several days to weeks. Results We introduce the TargetSearch package, an open source tool which is a flexible and accurate method for pre-processing even very large numbers of GC-MS samples within hours. We developed a novel strategy to iteratively correct and update retention time indices for searching and identifying metabolites. The package is written in the R programming language with computationally intensive functions written in C for speed and performance. The package includes a graphical user interface to allow easy use by those unfamiliar with R. Conclusions TargetSearch allows fast and accurate data pre-processing for GC-MS experiments and overcomes the sample number limitations and manual curation requirements of existing software. We validate our method by carrying out an analysis against both a set of known chemical standard mixtures and of a biological experiment. In addition we demonstrate its capabilities and speed by comparing it with other GC-MS pre-processing tools. We believe this package will greatly ease current bottlenecks and facilitate the analysis of metabolic profiling data.

  16. Synchronous correlation matrices and Connes’ embedding conjecture

    Energy Technology Data Exchange (ETDEWEB)

    Dykema, Kenneth J., E-mail: kdykema@math.tamu.edu [Department of Mathematics, Texas A& M University, College Station, Texas 77843-3368 (United States); Paulsen, Vern, E-mail: vern@math.uh.edu [Department of Mathematics, University of Houston, Houston, Texas 77204 (United States)

    2016-01-15

    In the work of Paulsen et al. [J. Funct. Anal. (in press); preprint arXiv:1407.6918], the concept of synchronous quantum correlation matrices was introduced and these were shown to correspond to traces on certain C*-algebras. In particular, synchronous correlation matrices arose in their study of various versions of quantum chromatic numbers of graphs and other quantum versions of graph theoretic parameters. In this paper, we develop these ideas further, focusing on the relations between synchronous correlation matrices and microstates. We prove that Connes’ embedding conjecture is equivalent to the equality of two families of synchronous quantum correlation matrices. We prove that if Connes’ embedding conjecture has a positive answer, then the tracial rank and projective rank are equal for every graph. We then apply these results to more general non-local games.

  17. The recursive combination filter approach of pre-processing for the estimation of standard deviation of RR series.

    Science.gov (United States)

    Mishra, Alok; Swati, D

    2015-09-01

    Variation in the interval between the R-R peaks of the electrocardiogram represents the modulation of the cardiac oscillations by the autonomic nervous system. This variation is contaminated by anomalous signals called ectopic beats, artefacts or noise, which mask the true behaviour of heart rate variability. In this paper, we have proposed a combination filter of a recursive impulse rejection filter and a recursive 20% filter, with recursive application and preference of replacement over removal of abnormal beats, to improve the pre-processing of the inter-beat intervals. We have tested this novel recursive combinational method with median replacement to estimate the standard deviation of normal to normal (SDNN) beat intervals of congestive heart failure (CHF) and normal sinus rhythm subjects. This work discusses in detail the improvement in pre-processing over single use of the impulse rejection filter and removal of abnormal beats for heart rate variability, for the estimation of SDNN and Poincaré plot descriptors (SD1, SD2, and SD1/SD2). We have found the 22 ms value of SDNN and the 36 ms value of the SD2 descriptor of the Poincaré plot to be clinical indicators for discriminating normal cases from CHF cases. The pre-processing is also useful in the calculation of the Lyapunov exponent, a nonlinear index, as Lyapunov exponents calculated after the proposed pre-processing are modified in a way that they start following the notion of less complex behaviour of diseased states.
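
    A simplified sketch of the quantities involved: abnormal beats are replaced using a crude median-based rule (a stand-in for the recursive combination filter, which is more elaborate), after which SDNN and the Poincaré descriptors SD1 and SD2 are computed with their standard formulas; the RR series is synthetic.

```python
# Replace ectopic-like beats, then compute SDNN and Poincare descriptors SD1/SD2.
import numpy as np

rng = np.random.default_rng(9)
rr = 800 + 40 * rng.normal(size=500)            # synthetic RR intervals in ms
rr[[50, 200, 350]] = (400, 1500, 420)           # inject ectopic-like beats

med = np.median(rr)
is_abnormal = np.abs(rr - med) > 0.2 * med      # crude 20%-of-median rule (illustrative)
clean = rr.copy()
clean[is_abnormal] = med                         # replacement rather than removal

sdnn = clean.std(ddof=1)
diff = np.diff(clean)
sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)
sd2 = np.sqrt(2.0 * np.var(clean, ddof=1) - np.var(diff, ddof=1) / 2.0)
print(f"SDNN={sdnn:.1f} ms, SD1={sd1:.1f} ms, SD2={sd2:.1f} ms, SD1/SD2={sd1 / sd2:.2f}")
```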

  18. Realm of Matrices

    Indian Academy of Sciences (India)

    IAS Admin

    harmonic analysis and complex analysis, in ... gebra describes not only the study of linear transformations and ... special case of the Jordan canonical form of matrices. ... Richard Bronson, Schaum's Outline Series Theory And Problems Of.

  19. Virial expansion for almost diagonal random matrices

    Science.gov (United States)

    Yevtushenko, Oleg; Kravtsov, Vladimir E.

    2003-08-01

    Energy level statistics of Hermitian random matrices Ĥ with Gaussian independent random entries H_{i≥j} is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_ii|²⟩ ~ 1 and ⟨|H_i…

  20. The Lost Lamb: A Literature Review on the Confusion of College Students in China

    Science.gov (United States)

    Dong, Jianmei; Han, Fubin

    2010-01-01

    With the development of mass higher education in China, confusion--a contradictory state between college students' awareness of employment, learning, morality, and their own behavior and societal requirements--is proving a ubiquitous problem among college students. This confusion has garnered much social attention. In this paper, the origins of…

  1. A Real-Time Embedded System for Stereo Vision Preprocessing Using an FPGA

    DEFF Research Database (Denmark)

    Kjær-Nielsen, Anders; Jensen, Lars Baunegaard With; Sørensen, Anders Stengaard

    2008-01-01

    In this paper a low level vision processing node for use in existing IEEE 1394 camera setups is presented. The processing node is a small embedded system, that utilizes an FPGA to perform stereo vision preprocessing at rates limited by the bandwidth of IEEE 1394a (400Mbit). The system is used...

  2. Proper comparison among methods using a confusion matrix

    CSIR Research Space (South Africa)

    Salmon

    2015-07-01

    Full Text Available IGARSS 2015, Milan, Italy, 26-31 July 2015. Proper comparison among methods using a confusion matrix. 1,2 B.P. Salmon, 2,3 W. Kleynhans, 2,3 C.P. Schwegmann and 1 J.C. Olivier. 1 School of Engineering and ICT, University of Tasmania, Australia; 2...

  3. The Role of Response Confusion in Proactive Interference

    Science.gov (United States)

    Dillon, Richard F.; Thomas, Heather

    1975-01-01

    In two experiments using the Brown-Peterson memory paradigm, instructions to guess had small effects on recall, but sizeable effects on the incidence of prior list intrusion. However, results indicate that proactive interference is primarily the result of inability to generate correct items, rather than confusion between present and previous items.…

  4. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2012-12-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on the combinations of several image processing operations, namely edge detection, low pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
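    A minimal sketch of the best-performing configuration reported in this record (desaturation only, followed by nearest-neighbour classification). The image containers, labels and the simple channel-averaging desaturation are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def desaturate(rgb):
    """Convert an RGB image of shape (H, W, 3) to grayscale by channel averaging."""
    return rgb.mean(axis=2)

def preprocess(images):
    """Desaturate and flatten each image into a feature vector."""
    return np.stack([desaturate(im).ravel() for im in images])

def nearest_neighbor_predict(train_X, train_y, test_X):
    """1-NN classification with Euclidean distance."""
    preds = []
    for x in test_X:
        d = np.linalg.norm(train_X - x, axis=1)
        preds.append(train_y[np.argmin(d)])
    return np.array(preds)

# Hypothetical usage: `train_imgs`/`test_imgs` are lists of RGB arrays of equal size,
# `train_labels`/`test_labels` hold one of six gesture classes per image.
# X_train = preprocess(train_imgs); X_test = preprocess(test_imgs)
# y_pred = nearest_neighbor_predict(X_train, train_labels, X_test)
# accuracy = (y_pred == test_labels).mean()
```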

  5. Chequered surfaces and complex matrices

    International Nuclear Information System (INIS)

    Morris, T.R.; Southampton Univ.

    1991-01-01

    We investigate a large-N matrix model involving general complex matrices. It can be reinterpreted as a model of two hermitian matrices with specific couplings, and as a model of positive definite hermitian matrices. Large-N perturbation theory generates dynamical triangulations in which the triangles can be chequered (i.e. coloured so that neighbours are opposite colours). On a sphere there is a simple relation between such triangulations and those generated by the single hermitian matrix model. For the torus (and a quartic potential) we solve the counting problem for the number of triangulations that cannot be quechered. The critical physics of chequered triangulations is the same as that of the hermitian matrix model. We show this explicitly by solving non-perturbatively pure two-dimensional ''chequered'' gravity. The interpretative framework given here applies to a number of other generalisations of the hermitian matrix model. (orig.)

  6. A Conversation on Data Mining Strategies in LC-MS Untargeted Metabolomics: Pre-Processing and Pre-Treatment Steps

    Directory of Open Access Journals (Sweden)

    Fidele Tugizimana

    2016-11-01

    Full Text Available Untargeted metabolomic studies generate information-rich, high-dimensional, and complex datasets that remain challenging to handle and fully exploit. Despite the remarkable progress in the development of tools and algorithms, the “exhaustive” extraction of information from these metabolomic datasets is still a non-trivial undertaking. A conversation on data mining strategies for a maximal information extraction from metabolomic data is needed. Using a liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, scaling and data transformation on the statistical models generated, and feature selection, thereafter. Data obtained in positive mode generated from a LC-MS-based untargeted metabolomic study (sorghum plants responding dynamically to infection by a fungal pathogen) were used. Raw data were pre-processed with MarkerLynxTM software (Waters Corporation, Manchester, UK). Here, two parameters were varied: the intensity threshold (50–100 counts) and the mass tolerance (0.005–0.01 Da). After the pre-processing, the datasets were imported into SIMCA (Umetrics, Umea, Sweden) for more data cleaning and statistical modeling. In addition, different scaling (unit variance, Pareto, etc.) and data transformation (log and power) methods were explored. The results showed that the pre-processing parameters (or algorithms) influence the output dataset with regard to the number of defined features. Furthermore, the study demonstrates that the pre-treatment of data prior to statistical modeling affects the subspace approximation outcome: e.g., the amount of variation in X-data that the model can explain and predict. The pre-processing and pre-treatment steps subsequently influence the number of statistically significant extracted/selected features (variables). Thus, as informed by the results, to maximize the value of untargeted metabolomic data
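    The pre-treatment step discussed in this record (scaling and transformation applied after peak picking and before statistical modelling) can be sketched as follows. The log offset and the random matrix standing in for a peak table are assumptions made only for the example:

```python
import numpy as np

def log_transform(X, offset=1.0):
    """Log transform to reduce heteroscedasticity; the offset guards against zeros."""
    return np.log10(X + offset)

def unit_variance_scale(X):
    """Centre each feature and divide by its standard deviation (UV scaling)."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

def pareto_scale(X):
    """Centre each feature and divide by the square root of its standard deviation."""
    return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

# X is a (samples x features) peak-intensity matrix produced by the pre-processing step.
X = np.random.lognormal(mean=2.0, sigma=1.0, size=(20, 100))
X_pretreated = pareto_scale(log_transform(X))
print(X_pretreated.shape, X_pretreated.std(axis=0).mean())
```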

  7. Acquiring and preprocessing leaf images for automated plant identification: understanding the tradeoff between effort and information gain

    Directory of Open Access Journals (Sweden)

    Michael Rzanny

    2017-11-01

    Full Text Available Abstract Background Automated species identification is a long term research subject. Contrary to flowers and fruits, leaves are available throughout most of the year. Offering margin and texture to characterize a species, they are the most studied organ for automated identification. Substantially matured machine learning techniques generate the need for more training data (aka leaf images. Researchers as well as enthusiasts miss guidance on how to acquire suitable training images in an efficient way. Methods In this paper, we systematically study nine image types and three preprocessing strategies. Image types vary in terms of in-situ image recording conditions: perspective, illumination, and background, while the preprocessing strategies compare non-preprocessed, cropped, and segmented images to each other. Per image type-preprocessing combination, we also quantify the manual effort required for their implementation. We extract image features using a convolutional neural network, classify species using the resulting feature vectors and discuss classification accuracy in relation to the required effort per combination. Results The most effective, non-destructive way to record herbaceous leaves is to take an image of the leaf’s top side. We yield the highest classification accuracy using destructive back light images, i.e., holding the plucked leaf against the sky for image acquisition. Cropping the image to the leaf’s boundary substantially improves accuracy, while precise segmentation yields similar accuracy at a substantially higher effort. The permanent use or disuse of a flash light has negligible effects. Imaging the typically stronger textured backside of a leaf does not result in higher accuracy, but notably increases the acquisition cost. Conclusions In conclusion, the way in which leaf images are acquired and preprocessed does have a substantial effect on the accuracy of the classifier trained on them. For the first time, this

  8. Intrinsic Density Matrices of the Nuclear Shell Model

    International Nuclear Information System (INIS)

    Deveikis, A.; Kamuntavichius, G.

    1996-01-01

    A new method for calculation of shell model intrinsic density matrices, defined as two-particle density matrices integrated over the centre-of-mass position vector of two last particles and complemented with isospin variables, has been developed. The intrinsic density matrices obtained are completely antisymmetric, translation-invariant, and do not employ a group-theoretical classification of antisymmetric states. They are used for exact realistic density matrix expansion within the framework of the reduced Hamiltonian method. The procedures based on precise arithmetic for calculation of the intrinsic density matrices that involve no numerical diagonalization or orthogonalization have been developed and implemented in the computer code. (author). 11 refs., 2 tabs

  9. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction.

    Science.gov (United States)

    Wilson, Scott; Bowyer, Andrea; Harrap, Stephen B

    2015-01-01

    The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information.
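    One common form of median hybrid filter (the median of the left-window mean, the centre sample, and the right-window mean) conveys the idea; the window length, sampling rate and synthetic blood-pressure trace below are assumptions, not the authors' exact MHF design:

```python
import numpy as np

def median_hybrid_filter(x, half_window=5):
    """FIR median hybrid filter: at each point, output the median of
    (mean of left window, current sample, mean of right window).
    This suppresses impulsive spikes while preserving slow trends."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    for i in range(half_window, len(x) - half_window):
        left = x[i - half_window:i].mean()
        right = x[i + 1:i + 1 + half_window].mean()
        y[i] = np.median([left, x[i], right])
    return y

# Synthetic continuous SBP trace (mmHg) with Gaussian noise and impulsive artefacts
t = np.arange(0, 3600, 0.5)                       # one hour sampled at 2 Hz
sbp = 130 - 10 * t / t[-1] + np.random.normal(0, 3, t.size)
sbp[::700] += np.random.choice([-40, 40], size=sbp[::700].size)   # artefacts
sbp_clean = median_hybrid_filter(sbp, half_window=10)
```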

  10. Pre-Processing and Modeling Tools for Bigdata

    Directory of Open Access Journals (Sweden)

    Hashem Hadi

    2016-09-01

    Full Text Available Modeling tools and operators help the user/developer to identify the processing field at the top of the sequence and to send into the computing module only the data related to the requested result. The remaining data is not relevant and would only slow down the processing. The biggest challenge nowadays is to get high-quality processing results with reduced computing time and costs. To do so, we must review the processing sequence by adding several modeling tools. Existing processing models do not take this aspect into consideration and focus on raw calculation performance, which increases computing time and costs. In this paper we provide a study of the main modeling tools for BigData and a new model based on pre-processing.

  11. Visualization of Confusion Matrix for Non-Expert Users (Poster)

    NARCIS (Netherlands)

    E.M.A.L. Beauxis-Aussalet (Emmanuelle); L. Hardman (Lynda)

    2014-01-01

    Machine Learning techniques can automatically extract information from a variety of multimedia sources, e.g., image, text, sound, video. But they produce imperfect results since the multimedia content can be misinterpreted. Machine Learning errors are commonly measured using confusion matrices…

  12. Random matrices and random difference equations

    International Nuclear Information System (INIS)

    Uppuluri, V.R.R.

    1975-01-01

    Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood--bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated from the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models

  13. Quantum Entanglement and Reduced Density Matrices

    Science.gov (United States)

    Purwanto, Agus; Sukamto, Heru; Yuwana, Lila

    2018-05-01

    We investigate entanglement and separability criteria of a multipartite (n-partite) state by examining the ranks of its reduced density matrices. Firstly, we construct the general formula to determine the criterion. The rank of the original density matrix always equals one, while the reduced matrices have various ranks. Next, the separability and entanglement criterion of a multipartite state is determined by calculating the ranks of its reduced density matrices. In this article we diversify multipartite state criteria into completely entangled states, completely separable states, and compound states, i.e. sub-entangled and sub-entangled-separable states. Furthermore, we also shorten the calculation proposed by previous research to determine the separability of a multipartite state and expand the methods to be able to distinguish multipartite states based on the criteria above.

  14. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    Science.gov (United States)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods: the Decision Tree C4.5 and Rough Set Theory on the stock market data. The Decision Tree C4.5 is reviewed with the Rough Set Theory. An enhanced window application is developed to facilitate the pre-processing filtering by introducing the feature (attribute) transformations, which allows users to input formulas and create new attributes. Also, the application produces three varieties of data set with delaying, averaging, and summation. The results prove the improvement of pre-processing by applying feature (attribute) transformations on Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on the clarity, automation, accuracy, dimensionality, raw data, and speed, which is supported by the rules sets generated by both algorithms on three different sets of data.

  15. Malware analysis using visualized image matrices.

    Science.gov (United States)

    Han, KyoungSoo; Kang, BooJoong; Im, Eul Gyu

    2014-01-01

    This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.
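    A much-simplified sketch of the idea of mapping opcode sequences to coloured image matrices and comparing them. The 3-gram hashing scheme, image size, cosine similarity and the toy opcode lists are illustrative assumptions rather than the authors' method:

```python
import hashlib
import numpy as np

def opcode_image(opcodes, size=64):
    """Map consecutive opcode 3-grams to (x, y) pixel coordinates and RGB values."""
    img = np.zeros((size, size, 3), dtype=np.float64)
    for i in range(len(opcodes) - 2):
        gram = ",".join(opcodes[i:i + 3]).encode()
        h = hashlib.md5(gram).digest()
        x, y = h[0] % size, h[1] % size
        img[y, x] += np.array(list(h[2:5]), dtype=np.float64)   # accumulate an RGB value
    m = img.max()
    return img / m if m > 0 else img

def image_similarity(a, b):
    """Cosine similarity between flattened image matrices."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical opcode sequences extracted from two samples of the same family
sample1 = ["push", "mov", "call", "add", "mov", "jmp"] * 50
sample2 = ["push", "mov", "call", "sub", "mov", "jmp"] * 50
print(image_similarity(opcode_image(sample1), opcode_image(sample2)))
```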

  16. Malware Analysis Using Visualized Image Matrices

    Directory of Open Access Journals (Sweden)

    KyoungSoo Han

    2014-01-01

    Full Text Available This paper proposes a novel malware visual analysis method that contains not only a visualization method to convert binary files into images, but also a similarity calculation method between these images. The proposed method generates RGB-colored pixels on image matrices using the opcode sequences extracted from malware samples and calculates the similarities for the image matrices. Particularly, our proposed methods are available for packed malware samples by applying them to the execution traces extracted through dynamic analysis. When the images are generated, we can reduce the overheads by extracting the opcode sequences only from the blocks that include the instructions related to staple behaviors such as functions and application programming interface (API) calls. In addition, we propose a technique that generates a representative image for each malware family in order to reduce the number of comparisons for the classification of unknown samples and the colored pixel information in the image matrices is used to calculate the similarities between the images. Our experimental results show that the image matrices of malware can effectively be used to classify malware families both statically and dynamically with accuracy of 0.9896 and 0.9732, respectively.

  17. Production and characterization of cornstarch/cellulose acetate/silver sulfadiazine extrudate matrices

    Energy Technology Data Exchange (ETDEWEB)

    Zepon, Karine Modolon [CIMJECT, Departamento de Engenharia Mecânica, Universidade Federal de Santa Catarina, 88040-900 Florianópolis, SC (Brazil); TECFARMA, Universidade do Sul de Santa Catarina, 88704-900 Tubarão, SC (Brazil); Petronilho, Fabricia [FICEXP, Universidade do Sul de Santa Catarina, 88704-900 Tubarão, SC (Brazil); Soldi, Valdir [POLIMAT, Universidade Federal de Santa Catarina, 88040-900 Florianópolis, SC (Brazil); Salmoria, Gean Vitor [CIMJECT, Departamento de Engenharia Mecânica, Universidade Federal de Santa Catarina, 88040-900 Florianópolis, SC (Brazil); Kanis, Luiz Alberto, E-mail: luiz.kanis@unisul.br [TECFARMA, Universidade do Sul de Santa Catarina, 88704-900 Tubarão, SC (Brazil)

    2014-11-01

    The production and evaluation of cornstarch/cellulose acetate/silver sulfadiazine extrudate matrices are reported herein. The matrices were melt extruded under nine different conditions, altering the temperature and the screw speed values. The surface morphology of the matrices was examined by scanning electron microscopy. The micrographs revealed the presence of non-melted silver sulfadiazine microparticles in the matrices extruded at lower temperature and screw speed values. The thermal properties were evaluated and the results for both the biopolymer and the drug indicated no thermal degradation during the melt extrusion process. The differential scanning analysis of the extrudate matrices showed a shift to lower temperatures for the silver sulfadiazine melting point compared with the non-extruded drug. The starch/cellulose acetate matrices containing silver sulfadiazine demonstrated significant inhibition of the growth of Pseudomonas aeruginosa and Staphylococcus aureus. In vivo inflammatory response tests showed that the extrudate matrices, with or without silver sulfadiazine, did not trigger chronic inflammatory processes. - Highlights: • Melt-extruded bio-based matrices containing silver sulfadiazine were produced. • The silver sulfadiazine is stable during melt extrusion. • The extrudate matrices showed bacterial growth inhibition. • The matrices obtained have potential for the development of wound-healing membranes.

  18. Polynomial sequences generated by infinite Hessenberg matrices

    Directory of Open Access Journals (Sweden)

    Verde-Star Luis

    2017-01-01

    Full Text Available We show that an infinite lower Hessenberg matrix generates polynomial sequences that correspond to the rows of infinite lower triangular invertible matrices. Orthogonal polynomial sequences are obtained when the Hessenberg matrix is tridiagonal. We study properties of the polynomial sequences and their corresponding matrices which are related to recurrence relations, companion matrices, matrix similarity, construction algorithms, and generating functions. When the Hessenberg matrix is also Toeplitz the polynomial sequences turn out to be of interpolatory type and we obtain additional results. For example, we show that every nonderogative finite square matrix is similar to a unique Toeplitz-Hessenberg matrix.

  19. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Science.gov (United States)

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean centre data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing give models with higher accuracy than the one obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain SVM model with significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
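    A toy version of the joint optimisation idea, i.e. searching over a pre-processing choice and the SVM hyperparameters with a small genetic loop. The candidate pre-processing list, GA settings and fitness definition are simplified assumptions and do not reproduce GENOPT-SVM; scikit-learn is assumed to be available:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def snv(X):  # standard normal variate, row-wise
    return (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)

# Candidate spectral pre-processing steps (simplified stand-ins, not the paper's list)
PREPROCESSORS = [
    lambda X: X,                      # none
    lambda X: X - X.mean(0),          # mean centring
    snv,
    lambda X: np.diff(X, axis=1),     # first derivative
]

def fitness(genome, X, y):
    """Cross-validated accuracy of an SVM for one (pre-processing, C, gamma) genome."""
    prep_idx, log_c, log_g = genome
    Xp = PREPROCESSORS[int(prep_idx) % len(PREPROCESSORS)](X)
    clf = SVC(C=2.0 ** log_c, gamma=2.0 ** log_g, kernel="rbf")
    return cross_val_score(clf, Xp, y, cv=5).mean()

def genetic_search(X, y, pop_size=20, generations=15, seed=0):
    """Toy GA over (pre-processing choice, log2 C, log2 gamma)."""
    rng = np.random.default_rng(seed)
    pop = np.column_stack([
        rng.integers(0, len(PREPROCESSORS), pop_size),
        rng.uniform(-5, 15, pop_size),    # log2 C
        rng.uniform(-15, 3, pop_size),    # log2 gamma
    ])
    for _ in range(generations):
        scores = np.array([fitness(g, X, y) for g in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        children = parents.copy()
        children[:, 1:] += rng.normal(0, 0.5, children[:, 1:].shape)        # mutation
        children[:, 0] = rng.integers(0, len(PREPROCESSORS), len(children))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(g, X, y) for g in pop])
    return pop[scores.argmax()], scores.max()

# Usage (X: spectra as rows, y: Ligurian / non-Ligurian labels):
# best_genome, best_cv_accuracy = genetic_search(X, y)
```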

  20. S-matrices and integrability

    International Nuclear Information System (INIS)

    Bombardelli, Diego

    2016-01-01

    In these notes we review the S-matrix theory in (1+1)-dimensional integrable models, focusing mainly on the relativistic case. Once the main definitions and physical properties are introduced, we discuss the factorization of scattering processes due to integrability. We then focus on the analytic properties of the two-particle scattering amplitude and illustrate the derivation of the S-matrices for all the possible bound states using the so-called bootstrap principle. General algebraic structures underlying the S-matrix theory and its relation with the form factors axioms are briefly mentioned. Finally, we discuss the S-matrices of sine-Gordon and SU (2), SU (3) chiral Gross–Neveu models. (topical review)

  1. Matrices Aléatoires Tri-diagonales et Par Blocs.

    OpenAIRE

    MEKKI, Slimane

    2014-01-01

    In this thesis, the study focuses on the density of a random matrix and the eigenvalue densities of a matrix for the three ensembles G.O.E., G.U.E. and G.S.E. We then make explicit the formulas for the eigenvalue densities of tri-diagonal matrices in the HERMITE and LAGUERRE cases. Simulations of the normalization constants for the densities of random matrices or of their eigenvalues are presented.

  2. A NOVEL TECHNIQUE TO IMPROVE PHOTOMETRY IN CONFUSED IMAGES USING GRAPHS AND BAYESIAN PRIORS

    International Nuclear Information System (INIS)

    Safarzadeh, Mohammadtaher; Ferguson, Henry C.; Lu, Yu; Inami, Hanae; Somerville, Rachel S.

    2015-01-01

    We present a new technique for overcoming confusion noise in deep far-infrared Herschel space telescope images making use of prior information from shorter λ < 2 μm wavelengths. For the deepest images obtained by Herschel, the flux limit due to source confusion is about a factor of three brighter than the flux limit due to instrumental noise and (smooth) sky background. We have investigated the possibility of de-confusing simulated Herschel PACS 160 μm images by using strong Bayesian priors on the positions and weak priors on the flux of sources. We find the blended sources and group them together and simultaneously fit their fluxes. We derive the posterior probability distribution function of fluxes subject to these priors through Monte Carlo Markov Chain (MCMC) sampling by fitting the image. Assuming we can predict the FIR flux of sources based on the ultraviolet-optical part of their SEDs to within an order of magnitude, the simulations show that we can obtain reliable fluxes and uncertainties at least a factor of three fainter than the confusion noise limit of 3σ c = 2.7 mJy in our simulated PACS-160 image. This technique could in principle be used to mitigate the effects of source confusion in any situation where one has prior information of positions and plausible fluxes of blended sources. For Herschel, application of this technique will improve our ability to constrain the dust content in normal galaxies at high redshift
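    A toy one-dimensional analogue of the approach: two blended sources at known (prior) positions are de-blended by Metropolis sampling of their fluxes under weak log-normal flux priors. The PSF, noise level and prior width are assumptions for illustration and are unrelated to the actual PACS simulations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1-D "image": two blended sources observed with a Gaussian PSF
x = np.linspace(0, 30, 300)
psf = lambda x0: np.exp(-0.5 * ((x - x0) / 3.0) ** 2)
positions = [12.0, 16.0]                 # known from a shorter-wavelength catalogue
true_fluxes = [5.0, 2.0]
data = sum(f * psf(p) for f, p in zip(true_fluxes, positions))
data += rng.normal(0, 0.3, x.size)       # instrumental + confusion noise (simplified)

def log_posterior(fluxes, sigma=0.3, prior_scale=1.0):
    """Gaussian likelihood plus weak log-normal priors on the fluxes
    (standing in for SED-based flux predictions good to roughly one dex)."""
    fluxes = np.asarray(fluxes)
    if np.any(fluxes <= 0):
        return -np.inf
    model = sum(f * psf(p) for f, p in zip(fluxes, positions))
    loglike = -0.5 * np.sum((data - model) ** 2) / sigma ** 2
    logprior = -0.5 * np.sum((np.log10(fluxes) - np.log10(true_fluxes)) ** 2) / prior_scale ** 2
    return loglike + logprior

# Metropolis sampler over the two fluxes
chain, current = [], np.array([1.0, 1.0])
lp = log_posterior(current)
for _ in range(20000):
    proposal = current + rng.normal(0, 0.1, 2)
    lp_new = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_new - lp:
        current, lp = proposal, lp_new
    chain.append(current.copy())
chain = np.array(chain)[5000:]           # discard burn-in
print("posterior flux estimates:", chain.mean(axis=0), "+/-", chain.std(axis=0))
```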

  3. Cloudy Solar Software - Enhanced Capabilities for Finding, Pre-processing, and Visualizing Solar Data

    Science.gov (United States)

    Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.

    2010-05-01

    In our project "Extending the Virtual Solar Observatory (VSO)” we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.

  4. Evaluation of Microarray Preprocessing Algorithms Based on Concordance with RT-PCR in Clinical Samples

    DEFF Research Database (Denmark)

    Hansen, Kasper Lage; Szallasi, Zoltan Imre; Eklund, Aron Charles

    2009-01-01

    evaluated consistency using the Pearson correlation between measurements obtained on the two platforms. Also, we introduce the log-ratio discrepancy as a more relevant measure of discordance between gene expression platforms. Of nine preprocessing algorithms tested, PLIER+16 produced expression values...

  5. Hierarchical quark mass matrices

    International Nuclear Information System (INIS)

    Rasin, A.

    1998-02-01

    I define a set of conditions that the most general hierarchical Yukawa mass matrices have to satisfy so that the leading rotations in the diagonalization matrix are a pair of (2,3) and (1,2) rotations. In addition to Fritzsch structures, examples of such hierarchical structures include also matrices with (1,3) elements of the same order or even much larger than the (1,2) elements. Such matrices can be obtained in the framework of a flavor theory. To leading order, the values of the angle in the (2,3) plane (s_23) and the angle in the (1,2) plane (s_12) do not depend on the order in which they are taken when diagonalizing. We find that any of the Cabibbo-Kobayashi-Maskawa matrix parametrizations that consist of at least one (1,2) and one (2,3) rotation may be suitable. In the particular case when the s_13 diagonalization angles are sufficiently small compared to the product s_12 s_23, two special CKM parametrizations emerge: the R_12 R_23 R_12 parametrization follows with s_23 taken before the s_12 rotation, and vice versa for the R_23 R_12 R_23 parametrization. (author)

  6. Laminin active peptide/agarose matrices as multifunctional biomaterials for tissue engineering.

    Science.gov (United States)

    Yamada, Yuji; Hozumi, Kentaro; Aso, Akihiro; Hotta, Atsushi; Toma, Kazunori; Katagiri, Fumihiko; Kikkawa, Yamato; Nomizu, Motoyoshi

    2012-06-01

    Cell adhesive peptides derived from extracellular matrix components are potential candidates to afford bio-adhesiveness to cell culture scaffolds for tissue engineering. Previously, we covalently conjugated bioactive laminin peptides to polysaccharides, such as chitosan and alginate, and demonstrated their advantages as biomaterials. Here, we prepared functional polysaccharide matrices by mixing laminin active peptides and agarose gel. Several laminin peptide/agarose matrices showed cell attachment activity. In particular, peptide AG73 (RKRLQVQLSIRT)/agarose matrices promoted strong cell attachment and the cell behavior depended on the stiffness of agarose matrices. Fibroblasts formed spheroid structures on the soft AG73/agarose matrices while the cells formed a monolayer with elongated morphologies on the stiff matrices. On the stiff AG73/agarose matrices, neuronal cells extended neuritic processes and endothelial cells formed capillary-like networks. In addition, salivary gland cells formed acini-like structures on the soft matrices. These results suggest that the peptide/agarose matrices are useful for both two- and three-dimensional cell culture systems as a multifunctional biomaterial for tissue engineering. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Applying Enhancement Filters in the Pre-processing of Images of Lymphoma

    International Nuclear Information System (INIS)

    Silva, Sérgio Henrique; Do Nascimento, Marcelo Zanchetta; Neves, Leandro Alves; Batista, Valério Ramos

    2015-01-01

    Lymphoma is a type of cancer that affects the immune system, and is classified as Hodgkin or non-Hodgkin. It is one of the ten types of cancer that are the most common on earth. Among all malignant neoplasms diagnosed in the world, lymphoma ranges from three to four percent of them. Our work presents a study of some filters devoted to enhancing images of lymphoma at the pre-processing step. Here the enhancement is useful for removing noise from the digital images. We have analysed the noise caused by different sources like room vibration, scraps and defocusing, and in the following classes of lymphoma: follicular, mantle cell and B-cell chronic lymphocytic leukemia. The filters Gaussian, Median and Mean-Shift were applied to different colour models (RGB, Lab and HSV). Afterwards, we performed a quantitative analysis of the images by means of the Structural Similarity Index. This was done in order to evaluate the similarity between the images. In all cases we have obtained a certainty of at least 75%, which rises to 99% if one considers only HSV. Namely, we have concluded that HSV is an important choice of colour model at pre-processing histological images of lymphoma, because in this case the resulting image will get the best enhancement
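    A sketch of the filter-then-compare step using SciPy filters and the structural similarity index from scikit-image. The Mean-Shift filter is omitted for brevity, a recent scikit-image with `channel_axis` support is assumed, and the HSV conversion in the usage note is only a suggestion:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from skimage.metrics import structural_similarity as ssim

def enhance_and_compare(image, sigma=1.0, median_size=3):
    """Apply Gaussian and median filters channel-wise and report the SSIM
    of each filtered result against the original image."""
    results = {}
    filters = {
        "gaussian": lambda c: gaussian_filter(c, sigma=sigma),
        "median": lambda c: median_filter(c, size=median_size),
    }
    for name, filt in filters.items():
        filtered = np.stack([filt(image[..., ch]) for ch in range(image.shape[-1])], axis=-1)
        results[name] = ssim(image, filtered, channel_axis=-1,
                             data_range=image.max() - image.min())
    return results

# Hypothetical usage with an HSV-converted histology image (H, W, 3), float in [0, 1]:
# from skimage.color import rgb2hsv
# img_hsv = rgb2hsv(img_rgb)
# print(enhance_and_compare(img_hsv))
```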

  8. A Technical Review on Biomass Processing: Densification, Preprocessing, Modeling and Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Jaya Shankar Tumuluru; Christopher T. Wright

    2010-06-01

    It is now a well-acclaimed fact that burning fossil fuels and deforestation are major contributors to climate change. Biomass from plants can serve as an alternative renewable and carbon-neutral raw material for the production of bioenergy. Low densities of 40–60 kg/m3 for lignocellulosic and 200–400 kg/m3 for woody biomass limits their application for energy purposes. Prior to use in energy applications these materials need to be densified. The densified biomass can have bulk densities over 10 times the raw material helping to significantly reduce technical limitations associated with storage, loading and transportation. Pelleting, briquetting, or extrusion processing are commonly used methods for densification. The aim of the present research is to develop a comprehensive review of biomass processing that includes densification, preprocessing, modeling and optimization. The specific objective include carrying out a technical review on (a) mechanisms of particle bonding during densification; (b) methods of densification including extrusion, briquetting, pelleting, and agglomeration; (c) effects of process and feedstock variables and biomass biochemical composition on the densification (d) effects of preprocessing such as grinding, preheating, steam explosion, and torrefaction on biomass quality and binding characteristics; (e) models for understanding the compression characteristics; and (f) procedures for response surface modeling and optimization.

  9. Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units

    KAUST Repository

    Boukaram, W.

    2015-03-25

    Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe distribution of dust particles in the atmosphere, concentration of mineral resources in the earth's crust or uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data sparse matrices called hierarchical matrices (H-matrices) where various sub-blocks of the matrix are approximated by low rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work and we will present work done on the matrix vector operation on the GPU using the KSPARSE library.

  10. DECISION LEVEL FUSION OF ORTHOPHOTO AND LIDAR DATA USING CONFUSION MATRIX INFORMATION FOR LAND COVER CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    S. Daneshtalab

    2017-09-01

    Full Text Available Automatic extraction of urban objects from airborne remote sensing data is essential to process and efficiently interpret the vast amount of airborne imagery and Lidar data available today. The aim of this study is to propose a new approach for the integration of high-resolution aerial imagery and Lidar data to improve the accuracy of classification in complex urban areas. In the proposed method, first, the classification of each dataset is performed separately using the Support Vector Machine algorithm. In this case, the extracted Normalized Digital Surface Model (nDSM) and pulse intensity are used in the classification of the LiDAR data, and three spectral visible bands (Red, Green, Blue) are considered as the feature vector for the orthoimage classification. Moreover, combining the extracted features of the image and Lidar data, another classification is also performed using all the features. The outputs of these classifications are integrated in a decision level fusion system according to their confusion matrices to find the final classification result. The proposed method was evaluated using an urban area of Zeebruges, Belgium. The obtained results represented several advantages of image fusion with respect to using a single dataset. With the capabilities of the proposed decision level fusion method, most of the object extraction difficulties and uncertainty were decreased, and the overall accuracy and the kappa values were improved by 7% and 10%, respectively.
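    This record does not spell out the fusion rule, so the sketch below shows one plausible confusion-matrix-weighted voting scheme: each classifier's vote for its predicted class is weighted by that class's reliability estimated from the classifier's own confusion matrix. All variable names in the usage note are hypothetical:

```python
import numpy as np

def class_reliability(conf):
    """Per-class reliability from a confusion matrix (rows = reference, cols = predicted),
    taken here as user's accuracy: correct predictions / total predictions per class."""
    conf = np.asarray(conf, dtype=float)
    return np.diag(conf) / np.clip(conf.sum(axis=0), 1e-12, None)

def fuse_decisions(labels_per_source, conf_matrices, n_classes):
    """Weighted voting: each source votes for its predicted class (integer labels)
    with a weight equal to that class's reliability for that source."""
    labels_per_source = [np.asarray(l) for l in labels_per_source]
    votes = np.zeros((labels_per_source[0].size, n_classes))
    for labels, conf in zip(labels_per_source, conf_matrices):
        w = class_reliability(conf)
        votes[np.arange(labels.size), labels] += w[labels]
    return votes.argmax(axis=1)

# Hypothetical usage with three SVM outputs (LiDAR-only, image-only, stacked features),
# each with a confusion matrix estimated on validation data:
# fused = fuse_decisions([lidar_labels, image_labels, stacked_labels],
#                        [conf_lidar, conf_image, conf_stacked], n_classes=5)
```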

  11. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  12. The Antitriangular Factorization of Saddle Point Matrices

    KAUST Repository

    Pestana, J.

    2014-01-01

    Mastronardi and Van Dooren [SIAM J. Matrix Anal. Appl., 34 (2013), pp. 173-196] recently introduced the block antitriangular ("Batman") decomposition for symmetric indefinite matrices. Here we show the simplification of this factorization for saddle point matrices and demonstrate how it represents the common nullspace method. We show that rank-1 updates to the saddle point matrix can be easily incorporated into the factorization and give bounds on the eigenvalues of matrices important in saddle point theory. We show the relation of this factorization to constraint preconditioning and how it transforms but preserves the structure of block diagonal and block triangular preconditioners. © 2014 Society for Industrial and Applied Mathematics.

  13. A girl with headache, confusion and green urine.

    Science.gov (United States)

    Hufschmidt, Andreas; Krisch, Alexandra; Peschen, I

    2009-07-01

    The case of a 17-year-old girl with a history of headache, blurred vision, confusion, ataxia and syncope is presented. On admission, she had already recovered except for a slurring of speech. Her urine was found to be green. Screening for illegal drugs was negative, but gas chromatography with subsequent mass spectroscopy (GC-MS) revealed an extremely high concentration of flupirtine.

  14. Rigid Body Attitude Control Based on a Manifold Representation of Direction Cosine Matrices

    International Nuclear Information System (INIS)

    Nakath, David; Clemens, Joachim; Rachuy, Carsten

    2017-01-01

    Autonomous systems typically actively observe certain aspects of their surroundings, which makes them dependent on a suitable controller. However, building an attitude controller for three degrees of freedom is a challenging task, mainly due to singularities in the different parametrizations of the three dimensional rotation group SO(3). Thus, we propose an attitude controller based on a manifold representation of direction cosine matrices: In state space, the attitude is globally and uniquely represented as a direction cosine matrix R ∈ SO(3). However, differences in the state space, i.e., the attitude errors, are exposed to the controller in the vector space ℝ³. This is achieved by an operator, which integrates the matrix logarithm mapping from SO(3) to so(3) and the map from so(3) to ℝ³. Based on this representation, we derive a proportional and derivative feedback controller, whose output has an upper bound to prevent actuator saturation. Additionally, the feedback is preprocessed by a particle filter to account for measurement and state transition noise. We evaluate our approach in a simulator in three different spacecraft maneuver scenarios: (i) stabilizing, (ii) rest-to-rest, and (iii) nadir-pointing. The controller exhibits stable behavior from initial attitudes near and far from the setpoint. Furthermore, it is able to stabilize a spacecraft and can be used for nadir-pointing maneuvers. (paper)
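    A minimal sketch of the central idea: map the attitude error from SO(3) to ℝ³ through the matrix logarithm and feed it to a saturated PD law. Gains, the torque bound and the example rotation are illustrative assumptions; the particle-filter preprocessing of the feedback is omitted:

```python
import numpy as np

def so3_log(R):
    """Matrix logarithm of R in SO(3) mapped to a rotation vector in R^3.
    (The singularity at a rotation angle of pi is not handled in this sketch.)"""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def attitude_pd_torque(R, R_des, omega, kp=2.0, kd=1.0, tau_max=0.1):
    """PD feedback on the R^3 attitude error with a saturation bound
    to avoid actuator saturation (gains and bound are illustrative)."""
    e = so3_log(R_des.T @ R)          # attitude error expressed in the body frame
    tau = -kp * e - kd * omega
    norm = np.linalg.norm(tau)
    if norm > tau_max:
        tau *= tau_max / norm
    return tau

# Example: 30-degree error about the z-axis, small residual body rate
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
print(attitude_pd_torque(R, np.eye(3), omega=np.array([0.0, 0.0, 0.01])))
```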

  15. Graphs and matrices

    CERN Document Server

    Bapat, Ravindra B

    2014-01-01

    This new edition illustrates the power of linear algebra in the study of graphs. The emphasis on matrix techniques is greater than in other texts on algebraic graph theory. Important matrices associated with graphs (for example, incidence, adjacency and Laplacian matrices) are treated in detail. Presenting a useful overview of selected topics in algebraic graph theory, early chapters of the text focus on regular graphs, algebraic connectivity, the distance matrix of a tree, and its generalized version for arbitrary graphs, known as the resistance matrix. Coverage of later topics include Laplacian eigenvalues of threshold graphs, the positive definite completion problem and matrix games based on a graph. Such an extensive coverage of the subject area provides a welcome prompt for further exploration. The inclusion of exercises enables practical learning throughout the book. In the new edition, a new chapter is added on the line graph of a tree, while some results in Chapter 6 on Perron-Frobenius theory are reo...

  16. Bayesian Nonparametric Clustering for Positive Definite Matrices.

    Science.gov (United States)

    Cherian, Anoop; Morellas, Vassilios; Papanikolopoulos, Nikolaos

    2016-05-01

    Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, expectation maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to the Euclidean geometry, but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms.

  17. Preprocessing of A-scan GPR data based on energy features

    Science.gov (United States)

    Dogan, Mesut; Turhan-Sayan, Gonul

    2016-05-01

    There is an increasing demand for noninvasive real-time detection and classification of buried objects in various civil and military applications. The problem of detection and annihilation of landmines is particularly important due to strong safety concerns. The requirement for a fast real-time decision process is as important as the requirements for high detection rates and low false alarm rates. In this paper, we introduce and demonstrate a computationally simple, time-efficient, energy-based preprocessing approach that can be used in ground penetrating radar (GPR) applications to eliminate reflections from the air-ground boundary and to locate the buried objects, simultaneously, in one easy step. The instantaneous power signals, the total energy values and the cumulative energy curves are extracted from the A-scan GPR data. The cumulative energy curves, in particular, are shown to be useful to detect the presence and location of buried objects in a fast and simple way while preserving the spectral content of the original A-scan data for further steps of physics-based target classification. The proposed method is demonstrated using the GPR data collected at the facilities of IPA Defense, Ankara at outdoor test lanes. Cylindrically shaped plastic containers were buried in fine-medium sand to simulate buried landmines. These plastic containers were half-filled by ammonium nitrate including metal pins. Results of this pilot study are demonstrated to be highly promising to motivate further research for the use of energy-based preprocessing features in the landmine detection problem.
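    The energy-based features above are straightforward to compute; the sketch below derives the instantaneous power, total energy and cumulative energy curve of an A-scan and applies a crude jump test to the cumulative curve. The thresholds and the synthetic A-scan are assumptions, not values from the study:

```python
import numpy as np

def cumulative_energy(ascan):
    """Instantaneous power, total energy and normalized cumulative energy of an A-scan."""
    power = np.asarray(ascan, dtype=float) ** 2
    energy = power.sum()
    cumulative = np.cumsum(power) / (energy if energy > 0 else 1.0)
    return power, energy, cumulative

def detect_target(ascan, surface_fraction=0.5, jump=0.1):
    """Very rough detector: the strong early rise in cumulative energy is attributed to
    the air-ground reflection; a later jump above `jump` suggests a buried scatterer."""
    _, _, cum = cumulative_energy(ascan)
    surface_idx = np.argmax(cum > surface_fraction)
    late = np.diff(cum[surface_idx + 20:])         # skip samples right after the surface
    if late.size and late.max() > jump / 20.0:     # per-sample slope threshold
        return surface_idx + 20 + int(np.argmax(late))
    return None

# Synthetic A-scan: surface reflection near sample 60, weaker target echo near sample 220
t = np.arange(512)
ascan = (np.exp(-((t - 60) / 6.0) ** 2) * np.sin(0.8 * t)
         + 0.35 * np.exp(-((t - 220) / 8.0) ** 2) * np.sin(0.8 * t)
         + 0.02 * np.random.randn(t.size))
print(detect_target(ascan))
```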

  18. On Skew Circulant Type Matrices Involving Any Continuous Fibonacci Numbers

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    inverse matrices of them by constructing the transformation matrices. Furthermore, the maximum column sum matrix norm, the spectral norm, the Euclidean (or Frobenius) norm, and the maximum row sum matrix norm, as well as bounds for the spread of these matrices, are given, respectively.

  19. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise.

    Science.gov (United States)

    Gifford, René H; Revit, Lawrence J

    2010-01-01

    Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam preprocessing (Cochlear Corporation) or the T-Mic accessory option (Advanced Bionics). In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition

  20. Immanant Conversion on Symmetric Matrices

    Directory of Open Access Journals (Sweden)

    Purificação Coelho M.

    2014-01-01

    Full Text Available Let Σ_n(C) denote the space of all n × n symmetric matrices over the complex field C. The main objective of this paper is to prove that the maps Φ : Σ_n(C) -> Σ_n(C) satisfying, for any fixed irreducible characters χ, χ', the condition d_χ(A + αB) = d_χ'(Φ(A) + αΦ(B)) for all matrices A, B ∈ Σ_n(C) and all scalars α ∈ C are automatically linear and bijective. As a corollary of the above result we characterize all such maps Φ acting on Σ_n(C).

  1. EARLINET Single Calculus Chain - technical - Part 1: Pre-processing of raw lidar data

    Science.gov (United States)

    D'Amico, Giuseppe; Amodeo, Aldo; Mattis, Ina; Freudenthaler, Volker; Pappalardo, Gelsomina

    2016-02-01

    In this paper we describe an automatic tool for the pre-processing of aerosol lidar data called ELPP (EARLINET Lidar Pre-Processor). It is one of two calculus modules of the EARLINET Single Calculus Chain (SCC), the automatic tool for the analysis of EARLINET data. ELPP is an open source module that executes instrumental corrections and data handling of the raw lidar signals, making the lidar data ready to be processed by the optical retrieval algorithms. According to the specific lidar configuration, ELPP automatically performs dead-time correction, atmospheric and electronic background subtraction, gluing of lidar signals, and trigger-delay correction. Moreover, the signal-to-noise ratio of the pre-processed signals can be improved by means of configurable time integration of the raw signals and/or spatial smoothing. ELPP delivers the statistical uncertainties of the final products by means of error propagation or Monte Carlo simulations. During the development of ELPP, particular attention has been paid to make the tool flexible enough to handle all lidar configurations currently used within the EARLINET community. Moreover, it has been designed in a modular way to allow an easy extension to lidar configurations not yet implemented. The primary goal of ELPP is to enable the application of quality-assured procedures in the lidar data analysis starting from the raw lidar data. This provides the added value of full traceability of each delivered lidar product. Several tests have been performed to check the proper functioning of ELPP. The whole SCC has been tested with the same synthetic data sets, which were used for the EARLINET algorithm inter-comparison exercise. ELPP has been successfully employed for the automatic near-real-time pre-processing of the raw lidar data measured during several EARLINET inter-comparison campaigns as well as during intense field campaigns.
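    A sketch of the kind of instrumental corrections ELPP chains together for a photon-counting channel (dead-time correction, background subtraction, range correction). It is not the ELPP code; the non-paralyzable dead-time model, the dead-time value and the synthetic profile are assumptions:

```python
import numpy as np

def deadtime_correct(count_rate_mhz, tau_ns=3.7):
    """Non-paralyzable dead-time correction of photon-counting rates (MHz)."""
    tau_us = tau_ns * 1e-3
    return count_rate_mhz / (1.0 - count_rate_mhz * tau_us)

def background_subtract(profile, n_tail=500):
    """Subtract atmospheric + electronic background estimated from the far-range tail."""
    return profile - profile[-n_tail:].mean()

def preprocess_channel(raw_profile, range_m, tau_ns=3.7):
    """Dead-time correction, background subtraction and range correction (P * r^2)."""
    p = deadtime_correct(np.asarray(raw_profile, dtype=float), tau_ns)
    p = background_subtract(p)
    return p * np.asarray(range_m, dtype=float) ** 2

# Synthetic photon-counting profile: decaying signal plus a constant background
r = np.arange(200.0, 15000.0, 7.5)                     # 7.5 m range bins
signal = 2.0 * np.exp(-r / 2000.0) / (r / 1000.0) ** 2
raw = signal + 0.05 + np.random.normal(0, 0.01, r.size)
rcs = preprocess_channel(raw, r)                       # range-corrected signal
```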

  2. The 'golden' matrices and a new kind of cryptography

    International Nuclear Information System (INIS)

    Stakhov, A.P.

    2007-01-01

    We consider a new class of square matrices called the 'golden' matrices. They are a generalization of the classical Fibonacci Q-matrix for continuous domain. The 'golden' matrices can be used for creation of a new kind of cryptography called the 'golden' cryptography. The method is very fast and simple for technical realization and can be used for cryptographic protection of digital signals (telecommunication and measurement systems)
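    The construction generalizes the classical Fibonacci Q-matrix, so a sketch with the integer Q-matrix already shows the encryption/decryption mechanics; the continuous-domain 'golden' matrices of the paper are not reproduced here, and the block encoding of the message is an assumption:

```python
import numpy as np

Q = np.array([[1, 1], [1, 0]], dtype=np.int64)   # classical Fibonacci Q-matrix

def q_power(p):
    """Q^p = [[F(p+1), F(p)], [F(p), F(p-1)]], computed by repeated multiplication."""
    M = np.eye(2, dtype=np.int64)
    for _ in range(p):
        M = M @ Q
    return M

def encrypt(block, p):
    """Ciphertext E = M @ Q^p for a 2x2 message block M."""
    return np.array(block, dtype=np.int64) @ q_power(p)

def decrypt(cipher, p):
    """Recover M = E @ (Q^p)^-1 exactly: the adjugate divided by det(Q^p) = (-1)^p."""
    Qp = q_power(p)
    det = Qp[0, 0] * Qp[1, 1] - Qp[0, 1] * Qp[1, 0]      # +1 or -1
    adj = np.array([[Qp[1, 1], -Qp[0, 1]], [-Qp[1, 0], Qp[0, 0]]], dtype=np.int64)
    return (np.array(cipher, dtype=np.int64) @ adj) * det  # dividing by +/-1 == multiplying

message = [[72, 101], [108, 112]]      # e.g. four character codes per block
cipher = encrypt(message, p=8)
assert (decrypt(cipher, p=8) == np.array(message)).all()
# det(E) = det(M) * (-1)**p, which the 'golden' scheme exploits as a built-in check value.
```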

  3. Advanced incomplete factorization algorithms for Stiltijes matrices

    Energy Technology Data Exchange (ETDEWEB)

    Il'in, V.P. [Siberian Division RAS, Novosibirsk (Russian Federation)

    1996-12-31

    The modern numerical methods for solving the linear algebraic systems Au = f with high order sparse matrices A, which arise in grid approximations of multidimensional boundary value problems, are based mainly on accelerated iterative processes with easily invertible preconditioning matrices presented in the form of approximate (incomplete) factorization of the original matrix A. We consider some recent algorithmic approaches, theoretical foundations, experimental data and open questions for incomplete factorization of Stiltijes matrices which are "the best" ones in the sense that they have the most advanced results. Special attention is given to solving elliptic differential equations with strongly variable coefficients, singularly perturbed diffusion-convection equations and parabolic equations.

  4. On the Eigenvalues and Eigenvectors of Block Triangular Preconditioned Block Matrices

    KAUST Repository

    Pestana, Jennifer

    2014-01-01

    Block lower triangular matrices and block upper triangular matrices are popular preconditioners for 2×2 block matrices. In this note we show that a block lower triangular preconditioner gives the same spectrum as a block upper triangular preconditioner and that the eigenvectors of the two preconditioned matrices are related. © 2014 Society for Industrial and Applied Mathematics.

  5. Data pre-processing for web log mining: Case study of commercial bank website usage analysis

    Directory of Open Access Journals (Sweden)

    Jozef Kapusta

    2013-01-01

    Full Text Available We use data cleaning, integration, reduction and data conversion methods at the pre-processing level of data analysis. Data processing techniques improve the overall quality of the patterns mined. The paper describes the use of standard pre-processing methods for preparing data of a commercial bank website in the form of a log file obtained from the web server. Data cleaning, as the simplest step of data pre-processing, is non-trivial as the analysed content is highly specific. We had to deal with the problem of frequent changes of the content and even frequent changes of the structure. Regular changes in the structure make use of the sitemap impossible. We present approaches for dealing with this problem. We were able to create the sitemap dynamically based solely on the content of the log file. In this case study, we also examined just one part of the website rather than performing the standard analysis of the entire website, as we did not have access to all log files for security reasons. As a result, the traditional practices had to be adapted for this special case. Analysing just a small fraction of the website resulted in short session times for regular visitors. We were not able to use the recommended methods to determine the optimal value of the session time. Therefore, in this paper we propose new methods based on outlier identification for raising the accuracy of the session length.
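    A minimal sketch of the cleaning and sessionization steps described in this record, assuming an Apache common-log format and the usual 30-minute inactivity heuristic (the paper replaces that heuristic with an outlier-based estimate); the file name and field layout are assumptions:

```python
import re
from datetime import datetime, timedelta
from collections import defaultdict

LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<url>\S+)[^"]*" (?P<status>\d+)')

def parse_common_log(lines):
    """Parse Apache common-log lines; drop obvious non-page requests (data cleaning)."""
    skip = re.compile(r'\.(css|js|png|gif|jpg|ico)(\?|$)', re.I)
    for line in lines:
        m = LOG_LINE.match(line)
        if not m or m['status'].startswith(('4', '5')) or skip.search(m['url']):
            continue
        ts = datetime.strptime(m['ts'].split()[0], '%d/%b/%Y:%H:%M:%S')
        yield m['ip'], ts, m['url']

def sessionize(records, timeout_minutes=30):
    """Group requests into sessions per IP using an inactivity timeout."""
    sessions, last_seen, current = [], {}, defaultdict(list)
    for ip, ts, url in sorted(records, key=lambda r: (r[0], r[1])):
        if ip in last_seen and ts - last_seen[ip] > timedelta(minutes=timeout_minutes):
            sessions.append((ip, current.pop(ip)))
        current[ip].append((ts, url))
        last_seen[ip] = ts
    sessions.extend(current.items())
    return sessions

# Hypothetical usage with a raw access log:
# with open("access.log") as f:
#     sessions = sessionize(parse_common_log(f))
# print(len(sessions), "sessions reconstructed")
```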

  6. On Investigating GMRES Convergence using Unitary Matrices

    Czech Academy of Sciences Publication Activity Database

    Duintjer Tebbens, Jurjen; Meurant, G.; Sadok, H.; Strakoš, Z.

    2014-01-01

    Roč. 450, 1 June (2014), s. 83-107 ISSN 0024-3795 Grant - others:GA AV ČR(CZ) M100301201; GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : GMRES convergence * unitary matrices * unitary spectra * normal matrices * Krylov residual subspace * Schur parameters Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014

  7. CONVERGENCE OF POWERS OF CONTROLLABLE INTUITIONISTIC FUZZY MATRICES

    OpenAIRE

    Riyaz Ahmad Padder; P. Murugadas

    2016-01-01

    Convergences of powers of controllable intuitionistic fuzzy matrices have been stud¬ied. It is shown that they oscillate with period equal to 2, in general. Some equalities and sequences of inequalities about powers of controllable intuitionistic fuzzy matrices have been obtained.

  8. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method for the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly; especially for low-S/N image pairs, the effect is more remarkable.

  9. Loop diagrams without γ matrices

    International Nuclear Information System (INIS)

    McKeon, D.G.C.; Rebhan, A.

    1993-01-01

    By using a quantum-mechanical path integral to compute matrix elements of the form ⟨x|exp(-iHt)|y⟩, radiative corrections in quantum-field theory can be evaluated without encountering loop-momentum integrals. In this paper we demonstrate how Dirac γ matrices that occur in the proper-time "Hamiltonian" H lead to the introduction of a quantum-mechanical path integral corresponding to a superparticle analogous to one proposed recently by Fradkin and Gitman. Direct evaluation of this path integral circumvents many of the usual algebraic manipulations of γ matrices in the computation of quantum-field-theoretical Green's functions involving fermions.

  10. Productive confusions: learning from simulations of pandemic virus outbreaks in Second Life

    Science.gov (United States)

    Cárdenas, Micha; Greci, Laura S.; Hurst, Samantha; Garman, Karen; Hoffman, Helene; Huang, Ricky; Gates, Michael; Kho, Kristen; Mehrmand, Elle; Porteous, Todd; Calvitti, Alan; Higginbotham, Erin; Agha, Zia

    2011-03-01

    Users of immersive virtual reality environments have reported a wide variety of side and after effects including the confusion of characteristics of the real and virtual worlds. Perhaps this side effect of confusing the virtual and real can be turned around to explore the possibilities for immersion with minimal technological support in virtual world group training simulations. This paper will describe observations from my time working as an artist/researcher with the UCSD School of Medicine (SoM) and Veterans Administration San Diego Healthcare System (VASDHS) to develop trainings for nurses, doctors and Hospital Incident Command staff that simulate pandemic virus outbreaks. By examining moments of slippage between realities, both into and out of the virtual environment, moments of the confusion of boundaries between real and virtual, we can better understand methods for creating immersion. I will use the mixing of realities as a transversal line of inquiry, borrowing from virtual reality studies, game studies, and anthropological studies to better understand the mechanisms of immersion in virtual worlds. Focusing on drills conducted in Second Life, I will examine moments of training to learn the software interface, moments within the drill and interviews after the drill.

  11. Classification en référence à une matrice stochastique

    OpenAIRE

    Verdun , Stéphane; Cariou , Véronique; Qannari , El Mostafa

    2009-01-01

    International audience; Given a data table X describing a set of n objects and a stochastic matrix S, which can be regarded as the transition matrix of a Markov chain, we propose a partitioning method that consists of applying the matrix S to X iteratively until convergence. The clusters forming the partition are determined from the stationary states of the stochastic matrix. This stochastic matrix can be derived from a ...

  12. Oromo Oral Pun (Miliqqee): Confusion with Oromo Idiom (Jechama ...

    African Journals Online (AJOL)

    The study was of a qualitative and quantitative nature, and the data were analysed by describing the existing qualities of the puns on a theoretical basis. The tools used were content analysis, a questionnaire and interviews. The result shows that idiomatic meanings have been used as puns, which was a confusion of puns with ...

  13. Capture Matrices Handbook

    Science.gov (United States)

    2014-04-01

    materials, the affinity ligand would need identification, as well as chemistries that graft the affinity ligand onto the surface of magnetic… As shown in Figure 2.3-1a, the spectra exhibit similar baselines and the spectral peaks line up. Under these circumstances, the spectral

  14. Binary Positive Semidefinite Matrices and Associated Integer Polytopes

    DEFF Research Database (Denmark)

    Letchford, Adam N.; Sørensen, Michael Malmros

    2012-01-01

    We consider the positive semidefinite (psd) matrices with binary entries, along with the corresponding integer polytopes. We begin by establishing some basic properties of these matrices and polytopes. Then, we show that several families of integer polytopes in the literature-the cut, boolean qua...

  15. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    Science.gov (United States)

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  16. Theoretical Properties for Neural Networks with Weight Matrices of Low Displacement Rank

    OpenAIRE

    Zhao, Liang; Liao, Siyu; Wang, Yanzhi; Li, Zhe; Tang, Jian; Pan, Victor; Yuan, Bo

    2017-01-01

    Recently low displacement rank (LDR) matrices, or so-called structured matrices, have been proposed to compress large-scale neural networks. Empirical results have shown that neural networks with weight matrices of LDR matrices, referred as LDR neural networks, can achieve significant reduction in space and computational complexity while retaining high accuracy. We formally study LDR matrices in deep learning. First, we prove the universal approximation property of LDR neural networks with a ...

  17. Impact of functional MRI data preprocessing pipeline on default-mode network detectability in patients with disorders of consciousness

    Directory of Open Access Journals (Sweden)

    Adrian eAndronache

    2013-08-01

    Full Text Available An emerging application of resting-state functional MRI is the study of patients with disorders of consciousness (DoC), where integrity of default-mode network (DMN) activity is associated with the clinical level of preservation of consciousness. Due to the inherent inability to follow verbal instructions, arousal induced by scanning noise and postural pain, these patients tend to exhibit substantial levels of movement. This results in spurious, non-neural fluctuations of the blood-oxygen level-dependent (BOLD) signal, which impair the evaluation of residual functional connectivity. Here, the effect of data preprocessing choices on the detectability of the DMN was systematically evaluated in a representative cohort of 30 clinically and etiologically heterogeneous DoC patients and 33 healthy controls. Starting from a standard preprocessing pipeline, additional steps were gradually inserted, namely band-pass filtering, removal of co-variance with the movement vectors, removal of co-variance with the global brain parenchyma signal, rejection of realignment outlier volumes and ventricle masking. Both independent-component analysis (ICA) and seed-based analysis (SBA) were performed, and DMN detectability was assessed quantitatively as well as visually. The results of the present study strongly show that the detection of DMN activity in the sub-optimal fMRI series acquired on DoC patients is contingent on the use of adequate filtering steps. ICA and SBA are differently affected but give convergent findings for high-grade preprocessing. We propose that future studies in this area should adopt the described preprocessing procedures as a minimum standard to reduce the probability of wrongly inferring that DMN activity is absent.
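
    One of the preprocessing steps evaluated above, band-pass filtering of the BOLD time series, can be sketched generically as follows; the cut-off frequencies and repetition time are illustrative assumptions, not the values used in the study:

        # Minimal sketch: band-pass filtering a BOLD time series with a Butterworth filter.
        # Assumptions: TR = 2.0 s (sampling rate 0.5 Hz), pass band 0.01-0.1 Hz.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def bandpass_bold(ts, tr=2.0, low=0.01, high=0.1, order=2):
            """Zero-phase band-pass filter for a 1-D BOLD time series."""
            fs = 1.0 / tr                                        # sampling frequency in Hz
            b, a = butter(order, [low, high], btype='band', fs=fs)
            return filtfilt(b, a, ts)                            # forward-backward filtering

        # Example: filter a synthetic 300-volume time series.
        ts = np.random.randn(300)
        filtered = bandpass_bold(ts)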

  18. Dream-reality confusion in Borderline Personality Disorder: A theoretical analysis

    Directory of Open Access Journals (Sweden)

    Dagna eSkrzypińska

    2015-09-01

    Full Text Available This paper presents an analysis of dream-reality confusion (DRC) in relation to the characteristics of borderline personality disorder (BPD), based on research findings and theoretical considerations. It is hypothesized that people with BPD are more likely to experience DRC compared to people in the non-clinical population. Several variables related to this hypothesis were identified through a theoretical analysis of the scientific literature. Sleep disturbances: problems with sleep are found in 15-95.5% of people with BPD (Hafizi, 2013), and unstable sleep and wake cycles, which occur in BPD (Fleischer et al., 2012), are linked to DRC. Dissociation: nearly two-thirds of people with BPD experience dissociative symptoms (Korzekwa and Pain, 2009), and dissociative symptoms are correlated with fantasy proneness; both dissociative symptoms and fantasy proneness are related to DRC (Giesbrecht and Merckelbach, 2006). Negative dream content: people with BPD have nightmares more often than other people (Semiz et al., 2008); dreams that are more likely to be confused with reality tend to be more realistic and unpleasant, and are reflected in waking behavior (Rassin et al., 2001). Cognitive disturbances: many BPD patients experience various cognitive disturbances, including problems with reality testing (Fiqueierdo, 2006; Mosquera et al., 2011), which can foster DRC. Thin boundaries: people with thin boundaries are more prone to DRC than people with thick boundaries, and people with BPD tend to have thin boundaries (Hartmann, 2011). The theoretical analysis on the basis of these findings suggests that people who suffer from BPD may be more susceptible to confusing dream content with actual waking events.

  19. The Modern Origin of Matrices and Their Applications

    Science.gov (United States)

    Debnath, L.

    2014-01-01

    This paper deals with the modern development of matrices, linear transformations, quadratic forms and their applications to geometry and mechanics, eigenvalues, eigenvectors and characteristic equations with applications. Included are the representations of real and complex numbers, and quaternions by matrices, and isomorphism in order to show…

  20. Flux Jacobian Matrices For Equilibrium Real Gases

    Science.gov (United States)

    Vinokur, Marcel

    1990-01-01

    Improved formulation includes generalized Roe average and extension to three dimensions. Flux Jacobian matrices derived for use in numerical solutions of conservation-law differential equations of inviscid flows of ideal gases extended to real gases. Real-gas formulation of these matrices retains simplifying assumptions of thermodynamic and chemical equilibrium, but adds effects of vibrational excitation, dissociation, and ionization of gas molecules via general equation of state.

  1. Delusional Confusion of Dreaming and Reality in Narcolepsy

    Science.gov (United States)

    Wamsley, Erin; Donjacour, Claire E.H.M.; Scammell, Thomas E.; Lammers, Gert Jan; Stickgold, Robert

    2014-01-01

    Study Objectives: We investigated a generally unappreciated feature of the sleep disorder narcolepsy, in which patients mistake the memory of a dream for a real experience and form sustained delusions about significant events. Design: We interviewed patients with narcolepsy and healthy controls to establish the prevalence of this complaint and identify its predictors. Setting: Academic medical centers in Boston, Massachusetts and Leiden, The Netherlands. Participants: Patients (n = 46) with a diagnosis of narcolepsy with cataplexy, and age-matched healthy controls (n = 41). Interventions: N/A. Measurements and Results: “Dream delusions” were surprisingly common in narcolepsy and were often striking in their severity. As opposed to fleeting hypnagogic and hypnopompic hallucinations of the sleep/wake transition, dream delusions were false memories induced by the experience of a vivid dream, which led to false beliefs that could persist for days or weeks. Conclusions: The delusional confusion of dreamed events with reality is a prominent feature of narcolepsy, and suggests the possibility of source memory deficits in this disorder that have not yet been fully characterized. Citation: Wamsley E; Donjacour CE; Scammell TE; Lammers GJ; Stickgold R. Delusional confusion of dreaming and reality in narcolepsy. SLEEP 2014;37(2):419-422. PMID:24501437

  2. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris

    2017-06-26

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.
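
    For reference, the geometric structure exploited here is commonly taken to be the affine-invariant Riemannian metric on the manifold of Hermitian positive definite matrices; the distance formula below is a standard choice supplied for orientation and is an assumption on our part, not a quotation from the paper:

        \[
        d(A, B) = \bigl\| \log\!\bigl(A^{-1/2} B A^{-1/2}\bigr) \bigr\|_{F}
                = \Bigl(\sum_{i=1}^{n} \log^{2} \lambda_i\bigl(A^{-1}B\bigr)\Bigr)^{1/2},
        \]

    where the λ_i(A^{-1}B) are the eigenvalues of A^{-1}B and ‖·‖_F is the Frobenius norm. Depth functions built on such a distance rank a sample of covariance or spectral matrices from most central to most outlying, which is what enables the rank-based tests described in the record.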

  3. Data depth and rank-based tests for covariance and spectral density matrices

    KAUST Repository

    Chau, Joris; Ombao, Hernando; Sachs, Rainer von

    2017-01-01

    In multivariate time series analysis, objects of primary interest to study cross-dependences in the time series are the autocovariance or spectral density matrices. Non-degenerate covariance and spectral density matrices are necessarily Hermitian and positive definite, and our primary goal is to develop new methods to analyze samples of such matrices. The main contribution of this paper is the generalization of the concept of statistical data depth for collections of covariance or spectral density matrices by exploiting the geometric properties of the space of Hermitian positive definite matrices as a Riemannian manifold. This allows one to naturally characterize most central or outlying matrices, but also provides a practical framework for rank-based hypothesis testing in the context of samples of covariance or spectral density matrices. First, the desired properties of a data depth function acting on the space of Hermitian positive definite matrices are presented. Second, we propose two computationally efficient pointwise and integrated data depth functions that satisfy each of these requirements. Several applications of the developed methodology are illustrated by the analysis of collections of spectral matrices in multivariate brain signal time series datasets.

  4. Almost commuting self-adjoint matrices: The real and self-dual cases

    Science.gov (United States)

    Loring, Terry A.; Sørensen, Adam P. W.

    2016-08-01

    We show that a pair of almost commuting self-adjoint, symmetric matrices is close to a pair of commuting self-adjoint, symmetric matrices (in a uniform way). Moreover, we prove that the same holds with self-dual in place of symmetric and also for paths of self-adjoint matrices. Since a symmetric, self-adjoint matrix is real, we get a real version of Huaxin Lin’s famous theorem on almost commuting matrices. Similarly, the self-dual case gives a version for matrices over the quaternions. To prove these results, we develop a theory of semiprojectivity for real C*-algebras and also examine various definitions of low-rank for real C*-algebras.

  5. Too Many Choices Confuse Patients With Dementia

    Directory of Open Access Journals (Sweden)

    R. C. Hamdy MD

    2017-07-01

    Full Text Available Choices are often difficult for patients with Alzheimer dementia to make. They often become acutely confused when faced with too many options, because they are not able to retain in their working memory enough information about the various individual choices available. In this case study, we describe how an essentially simple, benign task (choosing a dress to wear) can rapidly escalate and result in a catastrophic outcome. We examine what went wrong in the patient/caregiver interaction and how that potentially catastrophic situation could have been avoided or defused.

  6. Preprocessing with Photoshop Software on Microscopic Images of A549 Cells in Epithelial-Mesenchymal Transition.

    Science.gov (United States)

    Ren, Zhou-Xin; Yu, Hai-Bin; Shen, Jun-Ling; Li, Ya; Li, Jian-Sheng

    2015-06-01

    To establish a preprocessing method for cell morphometry in microscopic images of A549 cells in epithelial-mesenchymal transition (EMT), Adobe Photoshop CS2 (Adobe Systems, Inc.) was used for preprocessing the images. First, all images were processed for size uniformity and high distinguishability between the cell and background areas. Then, a blank image of the same size carrying a grid was created, with the grid cross points marked in a distinct color, and this blank image was merged into each processed image. In the merged images, the cells containing one or more cross points were chosen, their areas were enclosed and filled with a distinct color, and all remaining areas were changed to a single uniform hue. Three observers quantified the roundness of cells in images with the image preprocessing (IPP) method or without it (Controls). In addition, one observer measured the roundness three times with each of the two methods. The results of IPPs and Controls were compared for repeatability and reproducibility. Compared with the Control method, among the three observers the IPP method resulted in a higher number and a higher percentage of identically chosen cells per image. The relative average deviation values of roundness, whether for the three observers or the single observer, were significantly higher in Controls than in IPPs. With Photoshop, a chosen cell from an image was more objective, regular, and accurate, increasing the reproducibility and repeatability of morphometry of A549 cells in epithelial-mesenchymal transition.

  7. Forecasting Covariance Matrices: A Mixed Frequency Approach

    DEFF Research Database (Denmark)

    Halbleib, Roxana; Voev, Valeri

    This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance

  8. Propositional matrices as alternative representation of truth values ...

    African Journals Online (AJOL)

    The paper considered the subject of representation of truth values in symbolic logic. An alternative representation was given based on the row and column properties of matrices, with the operations involving the logical connectives subjected to the laws of the algebra of propositions. Matrices of various propositions detailing ...

  9. Predator confusion is sufficient to evolve swarming behaviour

    OpenAIRE

    Olson, Randal S.; Hintze, Arend; Dyer, Fred C.; Knoester, David B.; Adami, Christoph

    2013-01-01

    Swarming behaviours in animals have been extensively studied owing to their implications for the evolution of cooperation, social cognition and predator–prey dynamics. An important goal of these studies is discerning which evolutionary pressures favour the formation of swarms. One hypothesis is that swarms arise because the presence of multiple moving prey in swarms causes confusion for attacking predators, but it remains unclear how important this selective force is. Using an evolutionary mo...

  10. Predator confusion is sufficient to evolve swarming behavior

    OpenAIRE

    Olson, Randal S.; Hintze, Arend; Dyer, Fred C.; Knoester, David B.; Adami, Christoph

    2012-01-01

    Swarming behaviors in animals have been extensively studied due to their implications for the evolution of cooperation, social cognition, and predator-prey dynamics. An important goal of these studies is discerning which evolutionary pressures favor the formation of swarms. One hypothesis is that swarms arise because the presence of multiple moving prey in swarms causes confusion for attacking predators, but it remains unclear how important this selective force is. Using an evolutionary model...

  11. Wishart and anti-Wishart random matrices

    International Nuclear Information System (INIS)

    Janik, Romuald A; Nowak, Maciej A

    2003-01-01

    We provide a compact exact representation for the distribution of the matrix elements of the Wishart-type random matrices A†A, for any finite number of rows and columns of A, without any large-N approximations. In particular, we treat the case when the Wishart-type random matrix contains redundant, non-random information, which is a new result. This representation is of interest for a procedure for reconstructing the redundant information hidden in Wishart matrices, with potential applications to numerous models based on biological, social and artificial intelligence networks.

  12. The Theos/ComRes survey into public perception of Darwinism in the UK: a recipe for confusion.

    Science.gov (United States)

    Baker, Sylvia

    2012-04-01

    A survey of the general public in the UK, conducted in 2008, suggested that more than half of the British population are unconvinced by Darwinism. That survey, conducted by the polling company ComRes on behalf of the theological think-tank Theos, reported its full findings in March 2009 and found them to be "complex and confused." This paper argues that the confusion identified may have been partly engendered by the way in which the survey questionnaire was constructed and that the survey itself, not simply its respondents, was confused. A source of the confusion, it is argued, could be found, first, in the definitions used for the four positions of young earth creationism, theistic evolution, atheistic evolution and intelligent design. Second, a failure to define the key terms "evolution" and "science," used in some of the survey questions, resulted in responses that were difficult to interpret.

  13. Information geometry of density matrices and state estimation

    International Nuclear Information System (INIS)

    Brody, Dorje C

    2011-01-01

    Given a pure state vector |x) and a density matrix ρ̂, the function p(x|ρ̂) = (x|ρ̂|x)/(x|x) defines a probability density on the space of pure states parameterised by density matrices. The associated Fisher-Rao information measure is used to define a unitary-invariant Riemannian metric on the space of density matrices. An alternative derivation of the metric, based on square-root density matrices and trace norms, is provided. This is applied to the problem of quantum-state estimation. In the simplest case of unitary parameter estimation, new higher-order corrections to the uncertainty relations, applicable to general mixed states, are derived. (fast track communication)

  14. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    Science.gov (United States)

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of an overreliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocess functions and various
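
    The Savitzky-Golay pretreatments compared in this record (smoothing and first/second derivatives over symmetric windows of quadratic polynomials) can be sketched generically as follows; the window width is an illustrative assumption, and this is not the authors' code:

        # Minimal sketch: Savitzky-Golay smoothing and derivatives of an NIR spectrum.
        import numpy as np
        from scipy.signal import savgol_filter

        spectrum = np.random.rand(700)   # placeholder for one NIR reflection spectrum

        smoothed   = savgol_filter(spectrum, window_length=11, polyorder=2)            # smoothing
        first_der  = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=1)   # 1st derivative
        second_der = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=2)   # 2nd derivative

    Each pretreated version of the spectra would then feed a separate PLS calibration, which is what the graphical method in the record compares across window sizes and derivative orders.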

  15. Fuzzy cluster means algorithm for the diagnosis of confusable disease

    African Journals Online (AJOL)

    ... end platform while Microsoft Access was used as the database application. The system gives a measure of each disease within a set of confusable diseases. The proposed system had a classification accuracy of 60%. Keywords: Artificial Intelligence, expert system, Fuzzy cluster-means Algorithm, physician, Diagnosis ...

  16. Supercritical fluid extraction behaviour of polymer matrices

    International Nuclear Information System (INIS)

    Sujatha, K.; Kumar, R.; Sivaraman, N.; Srinivasan, T.G.; Vasudeva Rao, P.R.

    2007-01-01

    Organic compounds present in polymeric matrices such as neoprene, surgical gloves and PVC were co-extracted during the removal of uranium using supercritical fluid extraction (SFE) technique. Hence SFE studies of these matrices were carried out to establish the extracted species using HPLC, IR and mass spectrometry techniques. The initial study indicated that uranium present in the extract could be purified from the co-extracted organic species. (author)

  17. Automated cleaning and pre-processing of immunoglobulin gene sequences from high-throughput sequencing

    Directory of Open Access Journals (Sweden)

    Miri eMichaeli

    2012-12-01

    Full Text Available High throughput sequencing (HTS) yields tens of thousands to millions of sequences that require a large amount of pre-processing work to clean various artifacts. Such cleaning cannot be performed manually. Existing programs are not suitable for immunoglobulin (Ig) genes, which are variable and often highly mutated. This paper describes Ig-HTS-Cleaner (Ig High Throughput Sequencing Cleaner), a program containing a simple cleaning procedure that successfully deals with pre-processing of Ig sequences derived from HTS, and Ig-Indel-Identifier (Ig Insertion - Deletion Identifier), a program for identifying legitimate and artifact insertions and/or deletions (indels). Our programs were designed for analyzing Ig gene sequences obtained by 454 sequencing, but they are applicable to all types of sequences and sequencing platforms. Ig-HTS-Cleaner and Ig-Indel-Identifier have been implemented in Java and saved as executable JAR files, supported on Linux and MS Windows. No special requirements are needed in order to run the programs, except for correctly constructing the input files as explained in the text. The programs' performance has been tested and validated on real and simulated data sets.

  18. The effects of pre-processing strategies in sentiment analysis of online movie reviews

    Science.gov (United States)

    Zin, Harnani Mat; Mustapha, Norwati; Murad, Masrah Azrifah Azmi; Sharef, Nurfadhlina Mohd

    2017-10-01

    With the ever-increasing number of internet applications and social networking sites, people nowadays can easily express their feelings towards any products and services. These online reviews act as an important source for further analysis and improved decision making. These reviews are mostly unstructured by nature and thus need processing, such as sentiment analysis and classification, to provide meaningful information for future uses. In text analysis tasks, the appropriate selection of words/features will have a huge impact on the effectiveness of the classifier. Thus, this paper explores the effect of the pre-processing strategies in the sentiment analysis of online movie reviews. In this paper, a supervised machine learning method was used to classify the reviews. The support vector machine (SVM) with linear and non-linear kernels has been considered as the classifier for the classification of the reviews. The performance of the classifier is critically examined based on the results of precision, recall, f-measure, and accuracy. Two different feature representations were used: term frequency and term frequency-inverse document frequency. Results show that the pre-processing strategies have a significant impact on the classification process.
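
    A generic version of the pipeline this record describes (text pre-processing, term-weighting features, linear-kernel SVM) can be sketched with scikit-learn as follows; the stop-word choice and the tiny in-line dataset are assumptions for illustration, not the study's data or code:

        # Minimal sketch: TF-IDF features + linear SVM for movie-review sentiment classification.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.pipeline import make_pipeline

        reviews = ["a wonderful, moving film", "dull plot and wooden acting",
                   "brilliant performances throughout", "a complete waste of time"]
        labels  = [1, 0, 1, 0]   # 1 = positive, 0 = negative

        model = make_pipeline(
            TfidfVectorizer(lowercase=True, stop_words="english"),  # pre-processing + TF-IDF weighting
            LinearSVC(),                                            # linear-kernel SVM classifier
        )
        model.fit(reviews, labels)
        print(model.predict(["wooden acting but a moving plot"]))

    Swapping TfidfVectorizer for CountVectorizer gives a plain term-frequency (count) representation, the second feature scheme the record compares against TF-IDF.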

  19. Characterizing the continuously acquired cardiovascular time series during hemodialysis, using median hybrid filter preprocessing noise reduction

    Directory of Open Access Journals (Sweden)

    Wilson S

    2015-01-01

    Full Text Available The clinical characterization of cardiovascular dynamics during hemodialysis (HD) has important pathophysiological implications in terms of diagnostic, cardiovascular risk assessment, and treatment efficacy perspectives. Currently the diagnosis of significant intradialytic systolic blood pressure (SBP) changes among HD patients is imprecise and opportunistic, reliant upon the presence of hypotensive symptoms in conjunction with coincident but isolated noninvasive brachial cuff blood pressure (NIBP) readings. Considering hemodynamic variables as a time series makes a continuous recording approach more desirable than intermittent measures; however, in the clinical environment, the data signal is susceptible to corruption due to both impulsive and Gaussian-type noise. Signal preprocessing is an attractive solution to this problem. Prospectively collected continuous noninvasive SBP data over the short-break intradialytic period in ten patients was preprocessed using a novel median hybrid filter (MHF) algorithm and compared with 50 time-coincident pairs of intradialytic NIBP measures from routine HD practice. The median hybrid preprocessing technique for continuously acquired cardiovascular data yielded a dynamic regression without significant noise and artifact, suitable for high-level profiling of time-dependent SBP behavior. Signal accuracy is highly comparable with standard NIBP measurement, with the added clinical benefit of dynamic real-time hemodynamic information. Keywords: continuous monitoring, blood pressure
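
    A common form of median hybrid filter (not necessarily the exact variant used in this study) combines linear averages of the left and right neighbourhoods with the current sample and takes their median, which suppresses impulsive spikes while following genuine trends; a minimal sketch with an assumed half-window of 5 samples:

        # Minimal sketch of an FIR median hybrid filter for a noisy SBP time series.
        import numpy as np

        def median_hybrid_filter(x, half_window=5):
            """Replace each sample by median(mean of left window, sample, mean of right window)."""
            x = np.asarray(x, dtype=float)
            y = x.copy()
            for i in range(half_window, len(x) - half_window):
                left = x[i - half_window:i].mean()               # linear sub-filter on the left
                right = x[i + 1:i + 1 + half_window].mean()      # linear sub-filter on the right
                y[i] = np.median([left, x[i], right])
            return y

        # Example: a slow trend plus Gaussian noise and a few impulsive artifacts.
        sbp = 120 + 10 * np.linspace(0, 1, 500) + np.random.randn(500)
        sbp[[50, 200, 350]] += 40                                # impulsive spikes
        cleaned = median_hybrid_filter(sbp)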

  20. Safe and sensible preprocessing and baseline correction of pupil-size data.

    Science.gov (United States)

    Mathôt, Sebastiaan; Fabius, Jasper; Van Heusden, Elle; Van der Stigchel, Stefan

    2018-02-01

    Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
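
    The recommended subtractive correction is straightforward to apply per trial; a minimal sketch follows, in which the 200 ms baseline window, the sampling rate, and the rejection threshold are assumptions for illustration, not values prescribed by the paper:

        # Minimal sketch: subtractive baseline correction of one trial's pupil trace.
        import numpy as np

        def baseline_correct(trial, fs=1000, baseline_ms=200, min_baseline=1500):
            """Subtract the mean pupil size of the pre-stimulus window; flag implausible baselines."""
            n_baseline = int(fs * baseline_ms / 1000)
            baseline = np.nanmean(trial[:n_baseline])
            if baseline < min_baseline:      # arbitrary tracker units; threshold is an assumption
                return None                  # reject trial with an unrealistically small baseline
            return trial - baseline          # subtractive, not divisive, correction

    Returning None for implausibly small baselines corresponds to recommendation (5) above, and using subtraction rather than division follows recommendation (2).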

  1. RELATIONAL HEALING OF EARLY AFFECT-CONFUSION - PART 3 OF A CASE STUDY TRILOGY

    Directory of Open Access Journals (Sweden)

    Richard G. Erskine

    2015-06-01

    Full Text Available Part 3 of a case study trilogy on early affect-confusion describes the use of therapeutic dialogue, relational presence and supportive age regression in the psychotherapy of a client who lived on a “borderline” of early affect confusion. The concepts and methods of an in-depth, integrative and relational psychotherapy include a sensitivity to the client’s physiological and emotional expressions of implicit and sub-symbolic memories, therapeutic inference, an awareness of the client’s relational-needs, the effective use of a developmental image, as well as the identification of an introjected other and the use of therapeutic interposition.

  2. Fabrication of Aligned Carbon Nanotube/Polycaprolactone/Gelatin Nanofibrous Matrices for Schwann Cell Immobilization

    Directory of Open Access Journals (Sweden)

    Shiao-Wen Tsai

    2014-01-01

    Full Text Available In this study, we utilized a mandrel rotating collector consisting of two parallel, electrically conductive pieces of tape to fabricate aligned electrospun polycaprolactone/gelatin (PG) and carbon nanotube/polycaprolactone/gelatin (PGC) nanofibrous matrices. Furthermore, we examined the biological performance of the PGC nanofibrous and film matrices using an in vitro culture of RT4-D6P2T rat Schwann cells. Using cell adhesion tests, we found that carbon nanotubes inhibited Schwann cell attachment on PGC nanofibrous and film matrices. However, the proliferation rates of Schwann cells were higher when they were immobilized on PGC nanofibrous matrices compared to PGC film matrices. Using western blot analysis, we found that NRG1 and P0 protein expression levels were higher for cells immobilized on PGC nanofibrous matrices compared to PG nanofibrous matrices. However, carbon nanotubes inhibited NRG1 and P0 protein expression in cells immobilized on PGC film matrices. Moreover, the NRG1 and P0 protein expression levels were higher for cells immobilized on PGC nanofibrous matrices compared to PGC film matrices. We found that the matrix topography and composition influenced Schwann cell behavior.

  3. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications

    Science.gov (United States)

    Zhu, Zhe

    2017-08-01

    The free and open access to all archived Landsat images in 2008 has completely changed the way Landsat data are used. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series, including frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories, including thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial, were analyzed. Moreover, some of the widely used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing these applications into two categories: change target and change agent detection.

  4. On image pre-processing for PIV of single- and two-phase flows over reflecting objects

    NARCIS (Netherlands)

    Deen, N.G.; Willems, P.; van Sint Annaland, M.; Kuipers, J.A.M.; Lammertink, Rob G.H.; Kemperman, Antonius J.B.; Wessling, Matthias; van der Meer, Walterus Gijsbertus Joseph

    2010-01-01

    A novel image pre-processing scheme for PIV of single- and two-phase flows over reflecting objects which does not require the use of additional hardware is discussed. The approach for single-phase flow consists of image normalization and intensity stretching followed by background subtraction. For

  5. The Evaluation of Preprocessing Choices in Single-Subject BOLD fMRI Using NPAIRS Performance Metrics

    DEFF Research Database (Denmark)

    Stephen, LaConte; Rottenberg, David; Strother, Stephen

    2003-01-01

    to obtain cross-validation-based model performance estimates of prediction accuracy and global reproducibility for various degrees of model complexity. We rely on the concept of an analysis chain meta-model in which all parameters of the preprocessing steps along with the final statistical model are treated...

  6. RTI Confusion in the Case Law and the Legal Commentary

    Science.gov (United States)

    Zirkel, Perry A.

    2011-01-01

    This article expresses the position that the current legal commentary and cases do not sufficiently differentiate response to intervention (RTI) from the various forms of general education interventions that preceded it, thus compounding confusion in professional practice as to legally defensible procedures for identifying children as having a…

  7. Hyperspectral imaging in medicine: image pre-processing problems and solutions in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2015-11-01

    The paper presents problems and solutions related to hyperspectral image pre-processing. New methods of preliminary image analysis are proposed. The paper shows problems occurring in Matlab when trying to analyse this type of image. Moreover, new methods are discussed which provide source code in Matlab that can be used in practice without any licensing restrictions. The proposed application and a sample result of hyperspectral image analysis are presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Nano-Fiber Reinforced Enhancements in Composite Polymer Matrices

    Science.gov (United States)

    Chamis, Christos C.

    2009-01-01

    Nano-fibers are used to reinforce polymer matrices to enhance the matrix-dependent properties that are subsequently used in conventional structural composites. A quasi-isotropic configuration is used in arranging like nano-fibers through the thickness to ascertain equiaxial enhanced matrix behavior. Nano-fiber volume ratios of 0.01, 0.03, and 0.05 are used to obtain the enhanced matrix strength properties. These enhanced nano-fiber matrices are used with conventional fiber volume ratios of 0.3 and 0.5 to obtain the composite properties. Results show that nano-fiber enhanced matrices with a nano-fiber volume ratio higher than 0.3 degrade the composite properties.

  9. Conversation on data mining strategies in LC-MS untargeted metabolomics: pre-processing and pre-treatment steps

    CSIR Research Space (South Africa)

    Tugizimana, F

    2016-11-01

    Full Text Available -MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, scaling and data transformation on the statistical models generated, and feature selection, thereafter. Data obtained in positive mode...

  10. A Formal Methods Approach to the Analysis of Mode Confusion

    Science.gov (United States)

    Butler, Ricky W.; Miller, Steven P.; Potts, James N.; Carreno, Victor A.

    2004-01-01

    The goal of the new NASA Aviation Safety Program (AvSP) is to reduce the civil aviation fatal accident rate by 80% in ten years and 90% in twenty years. This program is being driven by the accident data with a focus on the most recent history. Pilot error is the most commonly cited cause for fatal accidents (up to 70%) and obviously must be given major consideration in this program. While the greatest source of pilot error is the loss of situation awareness, mode confusion is increasingly becoming a major contributor as well. The January 30, 1995 issue of Aviation Week lists 184 incidents and accidents involving mode awareness, including the Bangalore A320 crash 2/14/90, the Strasbourg A320 crash 1/20/92, the Mulhouse-Habsheim A320 crash 6/26/88, and the Toulouse A330 crash 6/30/94. These incidents and accidents reveal that pilots sometimes become confused about what the cockpit automation is doing. Consequently, human factors research is an obvious investment area. However, even a cursory look at the accident data reveals that the mode confusion problem is much deeper than just training deficiencies and a lack of human-oriented design. This is readily acknowledged by human factors experts. It seems that further progress in human factors must come through a deeper scrutiny of the internals of the automation. It is in this arena that formal methods can contribute. Formal methods refers to the use of techniques from logic and discrete mathematics in the specification, design, and verification of computer systems, both hardware and software. The fundamental goal of formal methods is to capture requirements, designs and implementations in a mathematically based model that can be analyzed in a rigorous manner. Research in formal methods is aimed at automating this analysis as much as possible. By capturing the internal behavior of a flight deck in a rigorous and detailed formal model, the dark corners of a design can be analyzed. This paper will explore how formal

  11. LOBBYING OPPORTUNITIES, CONFUSIONS AND MISREPRESENTATIONS IN THE EUROPEAN UNION

    Directory of Open Access Journals (Sweden)

    Andreea Vass

    2008-06-01

    Full Text Available Lobby activities are often likened to the misuse of authority and bad practices. Such parallels generate problems that easily spiral down into crises and conflicts, and the symbiosis of politics and business turns into an ambiguous platform. Why should we look into the core of the suspicions regarding the intertwining and overlapping interests of the political and business communities? The answer: because in Romania the public interest is often defined in a private or personal framework, whereas private interests are defined in markedly public terms. This confusion sets us clearly apart from effective Israeli, American, British, Czech, Polish or Hungarian lobbyists. The same confusion has a damaging effect: we are unable to efficiently handle institutional relations and public-private relations, be they national or international, that is, European. To what extent is the politics-business relationship deemed appropriate in the US and the EU? Which are its constraints, prerequisites and possible sanctions? These are the questions, and the accompanying dilemmas, that we clarify in this paper. We conclude with proposals on how Romanian private interests can be promoted efficiently within the European institutions.

  12. Preprocessing in a Tiered Sensor Network for Habitat Monitoring

    Directory of Open Access Journals (Sweden)

    Hanbiao Wang

    2003-03-01

    Full Text Available We investigate task decomposition and collaboration in a two-tiered sensor network for habitat monitoring. The system recognizes and localizes a specified type of birdcall. The system has a few powerful macronodes in the first tier, and many less powerful micronodes in the second tier. Each macronode combines data collected by multiple micronodes for target classification and localization. We describe two types of lightweight preprocessing which significantly reduce data transmission from micronodes to macronodes. Micronodes classify events according to their zero-crossing rates and discard irrelevant events. Data about events of interest are reduced and compressed before being transmitted to macronodes for target localization. Preliminary experiments illustrate the effectiveness of event filtering and data reduction at micronodes.
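
    The lightweight filter run on the micronodes is essentially a zero-crossing-rate test; a minimal sketch of that idea follows, with the threshold band chosen purely for illustration:

        # Minimal sketch: discard audio frames whose zero-crossing rate is outside the band
        # expected for the target birdcall, so only candidate events reach the macronode.
        import numpy as np

        def zero_crossing_rate(frame):
            """Fraction of consecutive sample pairs whose sign changes."""
            signs = np.sign(frame)
            return np.mean(signs[:-1] != signs[1:])

        def is_candidate_event(frame, zcr_low=0.05, zcr_high=0.35):
            return zcr_low <= zero_crossing_rate(frame) <= zcr_high

        frame = np.random.randn(512)   # placeholder for one audio frame
        if is_candidate_event(frame):
            pass  # compress and forward the frame to the macronode for classification/localization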

  13. Meet and Join Matrices in the Poset of Exponential Divisors

    Indian Academy of Sciences (India)

    ... exponential divisor (GCED) and the least common exponential multiple (LCEM) do not always exist. In this paper we embed this poset in a lattice. As an application we study the GCED and LCEM matrices, analogues of GCD and LCM matrices, which are both special cases of meet and join matrices on lattices.

  14. The Reign of Confusion: ABC and the "Crisis in Iran."

    Science.gov (United States)

    Palmerton, Patricia R.

    A study examined reports broadcast by ABC News between November 8, 1979 and December 7, 1979 in its series entitled "Crisis in Iran: America Held Hostage." Transcripts of approximately 50% of actual broadcasts were subjected to rhetorical critical analysis, from which the finding emerged that confusion was the predominant characteristic…

  15. THE EFFECT OF DECOMPOSITION METHOD AS DATA PREPROCESSING ON NEURAL NETWORKS MODEL FOR FORECASTING TREND AND SEASONAL TIME SERIES

    Directory of Open Access Journals (Sweden)

    Subanar Subanar

    2006-01-01

    Full Text Available Recently, one of the central topics for the neural networks (NN) community is the issue of data preprocessing on the use of NN. In this paper, we investigate this topic, particularly the effect of the Decomposition method as data preprocessing and the use of NN for effectively modeling time series with both trend and seasonal patterns. The limited empirical studies on seasonal time series forecasting with neural networks show conflicting results: some find that neural networks are able to model seasonality directly and that prior deseasonalization is not necessary, while others conclude just the opposite. In this research, we study the effectiveness of data preprocessing, including detrending and deseasonalization by applying the Decomposition method, on NN modeling and forecasting performance. We use two kinds of data, simulated and real. Simulated data are examined with multiplicative trend and seasonality patterns. The results are compared to those obtained from the classical time series model. Our results show that a combination of detrending and deseasonalization by applying the Decomposition method is an effective data preprocessing step for the use of NN in forecasting trend and seasonal time series.
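
    The detrending/deseasonalization step discussed above can be illustrated with a classical decomposition; this generic sketch uses statsmodels and an assumed synthetic monthly series, and is not the authors' experimental setup:

        # Minimal sketch: classical multiplicative decomposition as NN preprocessing.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.seasonal import seasonal_decompose

        # Synthetic monthly series with multiplicative trend and seasonality.
        idx = pd.date_range("2000-01", periods=120, freq="MS")
        trend = np.linspace(10, 30, 120)
        season = 1 + 0.3 * np.sin(2 * np.pi * np.arange(120) / 12)
        y = pd.Series(trend * season * (1 + 0.02 * np.random.randn(120)), index=idx)

        result = seasonal_decompose(y, model="multiplicative", period=12)
        deseasonalized = y / result.seasonal   # this series would be fed to the network
        # After forecasting the deseasonalized/detrended series, the removed components
        # are multiplied back in to recover forecasts on the original scale.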

  16. lop-DWI: A Novel Scheme for Pre-Processing of Diffusion-Weighted Images in the Gradient Direction Domain.

    Science.gov (United States)

    Sepehrband, Farshid; Choupan, Jeiran; Caruyer, Emmanuel; Kurniawan, Nyoman D; Gal, Yaniv; Tieng, Quang M; McMahon, Katie L; Vegh, Viktor; Reutens, David C; Yang, Zhengyi

    2014-01-01

    We describe and evaluate a pre-processing method based on a periodic spiral sampling of diffusion-gradient directions for high angular resolution diffusion magnetic resonance imaging. Our pre-processing method incorporates prior knowledge about the acquired diffusion-weighted signal, facilitating noise reduction. Periodic spiral sampling of gradient direction encodings results in an acquired signal in each voxel that is pseudo-periodic with characteristics that allow separation of low-frequency signal from high frequency noise. Consequently, it enhances local reconstruction of the orientation distribution function used to define fiber tracks in the brain. Denoising with periodic spiral sampling was tested using synthetic data and in vivo human brain images. The level of improvement in signal-to-noise ratio and in the accuracy of local reconstruction of fiber tracks was significantly improved using our method.

  17. Vowel identity between note labels confuses pitch identification in non-absolute pitch possessors.

    Directory of Open Access Journals (Sweden)

    Alfredo Brancucci

    Full Text Available The simplest and likeliest assumption concerning the cognitive bases of absolute pitch (AP) is that at its origin there is a particularly skilled function which matches the height of the perceived pitch to the verbal label of the musical tone. Since there is no difference in sound-frequency resolution between AP and non-AP (NAP) musicians, the hypothesis of the present study is that the failure of NAP musicians in pitch identification lies mainly in an inability to retrieve the correct verbal label to be assigned to the perceived musical note. The primary hypothesis is that, when asked to identify tones, NAP musicians confuse the verbal labels to be attached to the stimulus on the basis of their phonetic content. Data from two AP tests are reported, in which subjects had to respond in the presence or in the absence of visually presented verbal note labels (fixed Do solmization). Results show that NAP musicians more frequently confuse notes having a similar vowel in the note label. They tend to confuse, e.g., a 261 Hz tone (Do) more often with Sol than, e.g., with La. As a second goal, we wondered whether this effect is lateralized, i.e., whether one hemisphere is more responsible than the other for the confusion of notes with similar labels. This question was addressed by observing pitch identification during dichotic listening. Results showed that there is a right-hemispheric disadvantage, in NAP but not AP musicians, in the retrieval of the verbal label to be assigned to the perceived pitch. The present results indicate that absolute pitch has strong verbal bases, at least from a cognitive point of view.

  18. A review of blood sample handling and pre-processing for metabolomics studies.

    Science.gov (United States)

    Hernandes, Vinicius Veri; Barbas, Coral; Dudzik, Danuta

    2017-09-01

    Metabolomics has been found to be applicable to a wide range of clinical studies, bringing a new era for improving clinical diagnostics, early disease detection, therapy prediction and treatment efficiency monitoring. A major challenge in metabolomics, particularly untargeted studies, is the extremely diverse and complex nature of biological specimens. Despite great advances in the field, there still exist fundamental needs for considering pre-analytical variability that can introduce bias into the subsequent analytical process, decrease the reliability of the results and, moreover, confound final research outcomes. Many researchers are mainly focused on the instrumental aspects of the biomarker discovery process, and sample-related variables sometimes seem to be overlooked. To bridge the gap, critical information and standardized protocols regarding experimental design and sample handling and pre-processing are highly desired. Characterization of the range of variation among sample collection methods is necessary to prevent misinterpretation of results and to ensure that observed differences are not due to an experimental bias caused by inconsistencies in sample processing. Herein, a systematic discussion of pre-analytical variables affecting metabolomics studies based on blood-derived samples is performed. Furthermore, we provide a set of recommendations concerning experimental design, collection, pre-processing procedures and storage conditions as a practical review that can guide and serve for the standardization of protocols and the reduction of undesirable variation. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Joint Estimation of Multiple Precision Matrices with Common Structures.

    Science.gov (United States)

    Lee, Wonyul; Liu, Yufeng

    Estimation of inverse covariance matrices, known as precision matrices, is important in various areas of statistical analysis. In this article, we consider estimation of multiple precision matrices sharing some common structures. In this setting, estimating each precision matrix separately can be suboptimal as it ignores potential common structures. This article proposes a new approach to parameterize each precision matrix as a sum of common and unique components and estimate multiple precision matrices in a constrained l1-minimization framework. We establish both estimation and selection consistency of the proposed estimator in the high dimensional setting. The proposed estimator achieves a faster convergence rate for the common structure in certain cases. Our numerical examples demonstrate that our new estimator can perform better than several existing methods in terms of the entropy loss and Frobenius loss. An application to a glioblastoma cancer data set reveals some interesting gene networks across multiple cancer subtypes.
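
    For orientation, the separate-estimation baseline that such a joint approach improves upon can be sketched with scikit-learn's graphical lasso, fitting one l1-penalized precision matrix per group independently; the data shapes and regularization value are assumptions, and the paper's joint common-plus-unique estimator is not implemented here:

        # Minimal sketch: separate l1-penalized precision-matrix estimates, one per group.
        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)
        groups = [rng.standard_normal((200, 20)) for _ in range(3)]   # 3 groups, 20 variables each

        precisions = []
        for X in groups:
            model = GraphicalLasso(alpha=0.1).fit(X)   # sparse inverse-covariance estimate
            precisions.append(model.precision_)

    Joint methods like the one in this record instead decompose each precision matrix into a shared component plus a group-specific component and penalize both, borrowing strength across groups.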

  20. Asymptotics of eigenvalues and eigenvectors of Toeplitz matrices

    Science.gov (United States)

    Böttcher, A.; Bogoya, J. M.; Grudsky, S. M.; Maximenko, E. A.

    2017-11-01

    Analysis of the asymptotic behaviour of the spectral characteristics of Toeplitz matrices as the dimension of the matrix tends to infinity has a history of over 100 years. For instance, quite a number of versions of Szegő's theorem on the asymptotic behaviour of eigenvalues and of the so-called strong Szegő theorem on the asymptotic behaviour of the determinants of Toeplitz matrices are known. Starting in the 1950s, the asymptotics of the maximum and minimum eigenvalues were actively investigated. However, investigation of the individual asymptotics of all the eigenvalues and eigenvectors of Toeplitz matrices started only quite recently: the first papers on this subject were published in 2009-2010. A survey of this new field is presented here. Bibliography: 55 titles.
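
    A small numerical check of the flavour of these results (not taken from the survey itself) is easy to run: for a banded symmetric Toeplitz matrix, the individual eigenvalues are well approximated by samples of the generating symbol on a uniform grid. The sketch below assumes NumPy/SciPy and uses the tridiagonal symbol f(θ) = 2 − 2 cos θ, for which the approximation happens to be exact.

```python
import numpy as np
from scipy.linalg import toeplitz

# tridiagonal Toeplitz matrix generated by the symbol f(theta) = 2 - 2*cos(theta)
n = 200
first_column = np.zeros(n)
first_column[0], first_column[1] = 2.0, -1.0
T = toeplitz(first_column)

eigenvalues = np.sort(np.linalg.eigvalsh(T))

# individual eigenvalue asymptotics: lambda_j ~ f(theta_j) on a uniform grid
theta = np.pi * np.arange(1, n + 1) / (n + 1)
symbol_samples = np.sort(2.0 - 2.0 * np.cos(theta))
print(np.max(np.abs(eigenvalues - symbol_samples)))    # essentially zero for this symbol
```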

  1. Computational Testing for Automated Preprocessing 2: Practical Demonstration of a System for Scientific Data-Processing Workflow Management for High-Volume EEG.

    Science.gov (United States)

    Cowley, Benjamin U; Korpela, Jussi

    2018-01-01

    Existing tools for the preprocessing of EEG data provide a large choice of methods to suitably prepare and analyse a given dataset. Yet it remains a challenge for the average user to integrate methods for batch processing of the increasingly large datasets of modern research, and compare methods to choose an optimal approach across the many possible parameter configurations. Additionally, many tools still require a high degree of manual decision making for, e.g., the classification of artifacts in channels, epochs or segments. This introduces extra subjectivity, is slow, and is not reproducible. Batching and well-designed automation can help to regularize EEG preprocessing, and thus reduce human effort, subjectivity, and consequent error. The Computational Testing for Automated Preprocessing (CTAP) toolbox facilitates: (i) batch processing that is easy for experts and novices alike; (ii) testing and comparison of preprocessing methods. Here we demonstrate the application of CTAP to high-resolution EEG data in three modes of use. First, a linear processing pipeline with mostly default parameters illustrates ease-of-use for naive users. Second, a branching pipeline illustrates CTAP's support for comparison of competing methods. Third, a pipeline with built-in parameter-sweeping illustrates CTAP's capability to support data-driven method parameterization. CTAP extends the existing functions and data structure from the well-known EEGLAB toolbox, based on Matlab, and produces extensive quality control outputs. CTAP is available under MIT open-source licence from https://github.com/bwrc/ctap.

  2. Possible confusion between primary hypersomnia and adult attention-deficit/hyperactivity disorder.

    NARCIS (Netherlands)

    Oosterloo, M.; Lammers, G.; Overeem, S.; Noord, I. de; Kooij, J.J.S.

    2006-01-01

    We explored the possibility of diagnostic confusion between hypersomnias of central origin (narcolepsy and idiopathic hypersomnia, IH) and the adult form of attention-deficit/hyperactivity disorder (ADHD). We included 67 patients with narcolepsy, 7 with IH and 61 with ADHD. All patients completed

  3. On the norms of r-circulant matrices with generalized Fibonacci numbers

    Directory of Open Access Journals (Sweden)

    Amara Chandoul

    2017-01-01

    Full Text Available In this paper, we obtain a generalization of [6, 8]. Firstly, we consider the so-called r-circulant matrices with generalized Fibonacci numbers and then find lower and upper bounds for the Euclidean and spectral norms of these matrices. Afterwards, we present some bounds for the spectral norms of the Hadamard and Kronecker products of these matrices.
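
    For readers who want to experiment with such bounds numerically, the sketch below builds an r-circulant matrix from a few Fibonacci numbers and computes its Frobenius (Euclidean) and spectral norms with NumPy; the entries, size and value of r are arbitrary choices for the example, not those of the paper.

```python
import numpy as np

def r_circulant(first_row, r):
    """r-circulant matrix: each row is the previous one shifted right by one,
    with the wrapped-around entries multiplied by r."""
    n = len(first_row)
    C = np.empty((n, n), dtype=float)
    C[0] = first_row
    for i in range(1, n):
        C[i] = np.roll(C[i - 1], 1)
        C[i, 0] *= r
    return C

fib = np.array([1, 1, 2, 3, 5, 8, 13, 21], dtype=float)   # first Fibonacci numbers
A = r_circulant(fib, r=2.0)

frobenius_norm = np.linalg.norm(A, 'fro')   # the "Euclidean norm" in this literature
spectral_norm = np.linalg.norm(A, 2)        # largest singular value
print(spectral_norm, frobenius_norm, spectral_norm <= frobenius_norm)
```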

  4. Effects of Preprocessing on Multi-Direction Properties of Aluminum Alloy Cold-Spray Deposits

    Science.gov (United States)

    Rokni, M. R.; Nardi, A. T.; Champagne, V. K.; Nutt, S. R.

    2018-05-01

    The effects of powder preprocessing (degassing at 400 °C for 6 h) on the microstructure and mechanical properties of 5056 aluminum deposits produced by high-pressure cold spray were investigated. To investigate the directionality of the mechanical properties, microtensile coupons were excised from different directions of the deposit, i.e., longitudinal, short transverse, long transverse, and diagonal, and then tested. The results were compared to the properties of wrought 5056 and of the coating deposited with as-received 5056 Al powder, and correlated with the observed microstructures. Preprocessing softened the particles and eliminated the pores within them, resulting in more extensive and uniform deformation upon impact with the substrate and with underlying deposited material. Microstructural characterization and finite element simulation indicated that upon particle impact, the peripheral regions experienced more extensive deformation and higher temperatures than the central contact zone. This led to more recrystallization and stronger bonding at peripheral regions relative to the contact zone area and yielded superior properties in the longitudinal direction compared with the short transverse direction. Fractography revealed that crack propagation takes place along the particle-particle interfaces in the transverse directions (caused by insufficient bonding and recrystallization), whereas fracture through the deposited particles is dominant in the longitudinal direction.

  5. Influence of Averaging Preprocessing on Image Analysis with a Markov Random Field Model

    Science.gov (United States)

    Sakamoto, Hirotaka; Nakanishi-Ohno, Yoshinori; Okada, Masato

    2018-02-01

    This paper describes our investigations into the influence of averaging preprocessing on the performance of image analysis. Averaging preprocessing involves a trade-off: image averaging is often undertaken to reduce noise while the number of image data available for image analysis is decreased. We formulated a process of generating image data by using a Markov random field (MRF) model to achieve image analysis tasks such as image restoration and hyper-parameter estimation by a Bayesian approach. According to the notions of Bayesian inference, posterior distributions were analyzed to evaluate the influence of averaging. There are three main results. First, we found that the performance of image restoration with a predetermined value for hyper-parameters is invariant regardless of whether averaging is conducted. We then found that the performance of hyper-parameter estimation deteriorates due to averaging. Our analysis of the negative logarithm of the posterior probability, which is called the free energy based on an analogy with statistical mechanics, indicated that the confidence of hyper-parameter estimation remains higher without averaging. Finally, we found that when the hyper-parameters are estimated from the data, the performance of image restoration worsens as averaging is undertaken. We conclude that averaging adversely influences the performance of image analysis through hyper-parameter estimation.
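
    The paper's analysis concerns a Bayesian treatment of an MRF model with hyper-parameter estimation via the free energy; as a much simpler, self-contained illustration of MRF-based image restoration with a fixed smoothness hyper-parameter, the sketch below computes the MAP estimate of a Gaussian MRF (quadratic smoothness prior) on a toy image. All sizes and parameter values are arbitrary.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import spsolve

def grid_laplacian(n):
    """Sparse Laplacian of an n-by-n pixel grid (the quadratic smoothness prior)."""
    one_d = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n))
    return kron(identity(n), one_d) + kron(one_d, identity(n))

rng = np.random.default_rng(7)
n = 32
truth = np.zeros((n, n)); truth[8:24, 8:24] = 1.0          # toy piecewise-constant image
noisy = truth + rng.normal(scale=0.4, size=(n, n))

lam = 2.0                                                  # fixed smoothness hyper-parameter
A = (identity(n * n) + lam * grid_laplacian(n)).tocsc()
restored = spsolve(A, noisy.ravel()).reshape(n, n)         # Gaussian-MRF MAP estimate

print("MSE noisy:   ", np.mean((noisy - truth) ** 2))
print("MSE restored:", np.mean((restored - truth) ** 2))
```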

  6. Hypersymmetric functions and Pochhammers of 2×2 nonautonomous matrices

    Directory of Open Access Journals (Sweden)

    A. F. Antippa

    2004-01-01

    Full Text Available We introduce the hypersymmetric functions of 2×2 nonautonomous matrices and show that they are related, by simple expressions, to the Pochhammers (factorial polynomials) of these matrices. The hypersymmetric functions are generalizations of the associated elementary symmetric functions, and for a specific class of 2×2 matrices, having a high degree of symmetry, they reduce to these latter functions. This class of matrices includes rotations, Lorentz boosts, and discrete time generators for the harmonic oscillators. The hypersymmetric functions are defined over four sets of independent indeterminates using a triplet of interrelated binary partitions. We work out the algebra of this triplet of partitions and then make use of the results in order to simplify the expressions for the hypersymmetric functions for a special class of matrices. In addition to their obvious applications in matrix theory, in coupled difference equations, and in the theory of symmetric functions, the results obtained here also have useful applications in problems involving successive rotations, successive Lorentz transformations, discrete harmonic oscillators, and linear two-state systems.

  7. An algorithmic characterization of P-matricity

    OpenAIRE

    Ben Gharbia , Ibtihel; Gilbert , Jean Charles

    2013-01-01

    It is shown that a matrix M is a P-matrix if and only if, whatever the vector q, the Newton-min algorithm does not cycle between two points when it is used to solve the linear complementarity problem 0 ≤ x ⊥ (Mx+q) ≥ 0.
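
    To make the statement concrete, here is a minimal sketch of a Newton-min-type active-set iteration for the linear complementarity problem, written as min(x, Mx + q) = 0; it is an illustrative reading of the algorithm, not the authors' code, and the test matrix is just a small P-matrix chosen for the example.

```python
import numpy as np

def newton_min(M, q, max_iter=50):
    """Sketch of a Newton-min / active-set iteration for 0 <= x ⊥ (Mx + q) >= 0."""
    n = len(q)
    x = np.zeros(n)
    for _ in range(max_iter):
        w = M @ x + q
        active = w <= x                  # components where we enforce (Mx + q)_i = 0
        x_new = np.zeros(n)
        if active.any():
            x_new[active] = np.linalg.solve(M[np.ix_(active, active)], -q[active])
        if np.allclose(x_new, x):
            return x_new
        x = x_new
    return x

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # a small P-matrix (toy example)
q = np.array([-1.0, -1.0])
x = newton_min(M, q)
print(x, M @ x + q)                      # both vectors nonnegative and complementary
```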

  8. Textural Analysis of Fatigue Crack Surfaces: Image Pre-processing

    Directory of Open Access Journals (Sweden)

    H. Lauschmann

    2000-01-01

    Full Text Available For fatigue crack history reconstitution, new methods of quantitative microfractography are being developed based on image processing and textural analysis. SEM magnifications between micro- and macrofractography are used. Two image pre-processing operations were suggested and proven suitable for preparing the crack surface images for analytical treatment: 1. Normalization is used to transform the image to a stationary form. Compared to the generally used equalization, it conserves the shape of the brightness distribution and preserves the character of the texture. 2. Binarization is used to transform the grayscale image into a system of thick fibres. An objective criterion for the threshold brightness value was found, namely the value resulting in the maximum number of objects. Both methods were successfully applied together with the subsequent textural analysis.
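
    The "maximum number of objects" criterion for the binarization threshold is easy to prototype; the sketch below, which assumes SciPy and uses a smoothed random field as a stand-in for an SEM crack-surface image, scans candidate thresholds and keeps the one producing the most connected components.

```python
import numpy as np
from scipy import ndimage

def binarize_max_objects(gray, n_thresholds=64):
    """Choose the threshold that maximizes the number of connected objects."""
    best_threshold, best_count = gray.min(), -1
    for t in np.linspace(gray.min(), gray.max(), n_thresholds):
        _, count = ndimage.label(gray > t)
        if count > best_count:
            best_threshold, best_count = t, count
    return gray > best_threshold, best_threshold

rng = np.random.default_rng(1)
image = ndimage.gaussian_filter(rng.normal(size=(128, 128)), sigma=2)  # stand-in texture
binary, threshold = binarize_max_objects(image)
print(threshold, binary.mean())
```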

  9. Partitioning sparse rectangular matrices for parallel processing

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, T.G.

    1998-05-01

    The authors are interested in partitioning sparse rectangular matrices for parallel processing. The partitioning problem has been well-studied in the square symmetric case, but the rectangular problem has received very little attention. They will formalize the rectangular matrix partitioning problem and discuss several methods for solving it. They will extend the spectral partitioning method for symmetric matrices to the rectangular case and compare this method to three new methods -- the alternating partitioning method and two hybrid methods. The hybrid methods will be shown to be best.
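
    The report compares spectral, alternating and hybrid partitioning methods; as a rough illustration of the spectral idea only (with the scaling and refinement steps omitted), the sketch below splits the rows and columns of a random sparse rectangular matrix by the signs of the singular vectors associated with the second-largest singular value.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(60, 40, density=0.08, random_state=0, format='csr')

u, s, vt = svds(A, k=2)
order = np.argsort(s)[::-1]                 # make the singular-value ordering explicit
u2 = u[:, order[1]]                         # left vector for the 2nd-largest value
v2 = vt[order[1], :]                        # right vector for the 2nd-largest value

row_part = (u2 >= 0).astype(int)            # crude 2-way row partition
col_part = (v2 >= 0).astype(int)            # crude 2-way column partition
print(np.bincount(row_part), np.bincount(col_part))
```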

  10. PRACTICAL RECOMMENDATIONS OF DATA PREPROCESSING AND GEOSPATIAL MEASURES FOR OPTIMIZING THE NEUROLOGICAL AND OTHER PEDIATRIC EMERGENCIES MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Ionela MANIU

    2017-08-01

    Full Text Available Time management, optimal and timely determination of emergency severity, and optimal use of the available human and material resources are crucial areas of emergency services. The analysis and preprocessing of real data from emergency services can be considered a starting point for achieving these optimizations. The benefit of this step is that it exposes more useful structure to the data modelling algorithms, which consequently reduces overfitting and improves accuracy. This paper aims to offer practical recommendations for data preprocessing measures, including feature selection and discretization of numeric attributes regarding age, duration of the case, season, period, week period (workday, weekend) and geospatial location of neurological and other pediatric emergencies. An analytical, retrospective study was conducted on a sample consisting of 933 pediatric cases from UPU-SMURD Sibiu, covering the period 01.01.2014 – 27.02.2017.
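
    The kind of feature derivation and discretization recommended here is straightforward with pandas; the sketch below is a generic illustration on invented records (column names, bin edges and labels are assumptions, not the study's actual coding scheme).

```python
import pandas as pd

# hypothetical emergency-case records; column names and values are illustrative only
cases = pd.DataFrame({
    "age_years": [0.5, 3, 7, 12, 16],
    "timestamp": pd.to_datetime([
        "2014-01-05 02:10", "2015-07-19 14:30", "2016-03-02 20:45",
        "2016-11-11 08:05", "2017-02-20 23:55",
    ]),
})

# discretize age and derive calendar features (season, week period, period of day)
cases["age_group"] = pd.cut(cases["age_years"], bins=[0, 1, 5, 12, 18],
                            labels=["infant", "toddler", "child", "adolescent"])
cases["season"] = cases["timestamp"].dt.month % 12 // 3      # 0=winter ... 3=autumn
cases["week_period"] = cases["timestamp"].dt.dayofweek.map(
    lambda d: "weekend" if d >= 5 else "workday")
cases["period_of_day"] = pd.cut(cases["timestamp"].dt.hour, bins=[-1, 7, 15, 23],
                                labels=["night", "day", "evening"])
print(cases)
```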

  11. Conceptions about the mind-body problem and their relations to afterlife beliefs, paranormal beliefs, religiosity, and ontological confusions.

    Science.gov (United States)

    Riekki, Tapani; Lindeman, Marjaana; Lipsanen, Jari

    2013-01-01

    We examined lay people's conceptions about the relationship between mind and body and their correlates. In Study 1, a web survey (N = 850) of reflective dualistic, emergentistic, and monistic perceptions of the mind-body relationship, afterlife beliefs (i.e., common sense dualism), religiosity, paranormal beliefs, and ontological confusions about physical, biological, and psychological phenomena was conducted. In Study 2 (N = 73), we examined implicit ontological confusions and their relations to afterlife beliefs, paranormal beliefs, and religiosity. Correlation and regression analyses showed that reflective dualism, afterlife beliefs, paranormal beliefs, and religiosity were strongly and positively related and that reflective dualism and afterlife beliefs mediated the relationship between ontological confusions and religious and paranormal beliefs. The results elucidate the contention that dualism is a manifestation of universal cognitive processes related to intuitions about physical, biological, and psychological phenomena by showing that especially individuals who confuse the distinctive attributes of these phenomena tend to set the mind apart from the body.

  12. Conceptions about the mind-body problem and their relations to afterlife beliefs, paranormal beliefs, religiosity, and ontological confusions

    Science.gov (United States)

    Riekki, Tapani; Lindeman, Marjaana; Lipsanen, Jari

    2013-01-01

    We examined lay people’s conceptions about the relationship between mind and body and their correlates. In Study 1, a web survey (N = 850) of reflective dualistic, emergentistic, and monistic perceptions of the mind-body relationship, afterlife beliefs (i.e., common sense dualism), religiosity, paranormal beliefs, and ontological confusions about physical, biological, and psychological phenomena was conducted. In Study 2 (N = 73), we examined implicit ontological confusions and their relations to afterlife beliefs, paranormal beliefs, and religiosity. Correlation and regression analyses showed that reflective dualism, afterlife beliefs, paranormal beliefs, and religiosity were strongly and positively related and that reflective dualism and afterlife beliefs mediated the relationship between ontological confusions and religious and paranormal beliefs. The results elucidate the contention that dualism is a manifestation of universal cognitive processes related to intuitions about physical, biological, and psychological phenomena by showing that especially individuals who confuse the distinctive attributes of these phenomena tend to set the mind apart from the body. PMID:25247011

  13. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, the focus is on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, and methods for data preprocessing are covered.

  14. Web Log Pre-processing and Analysis for Generation of Learning Profiles in Adaptive E-learning

    Directory of Open Access Journals (Sweden)

    Radhika M. Pai

    2016-03-01

    Full Text Available Adaptive E-learning Systems (AESs) enhance the efficiency of online courses in education by providing personalized content and user interfaces that change according to the learner's requirements and usage patterns. This paper presents an approach to generate a learning profile of each learner, which helps to identify learning styles and provide an Adaptive User Interface that includes adaptive learning components and learning material. The proposed method analyzes the captured web usage data to identify the learning profile of the learners. The learning profiles are identified by an algorithmic approach that is based on the frequency of accessing the materials and the time spent on the various learning components on the portal. The captured log data is pre-processed and converted into a standard XML format to generate learner sequence data corresponding to the different sessions and time spent. The learning style model adopted in this approach is the Felder-Silverman Learning Style Model (FSLSM). This paper also presents the analysis of learners' activities, preprocessed XML files and generated sequences.

  15. Web Log Pre-processing and Analysis for Generation of Learning Profiles in Adaptive E-learning

    Directory of Open Access Journals (Sweden)

    Radhika M. Pai

    2016-04-01

    Full Text Available Adaptive E-learning Systems (AESs) enhance the efficiency of online courses in education by providing personalized content and user interfaces that change according to the learner's requirements and usage patterns. This paper presents an approach to generate a learning profile of each learner, which helps to identify learning styles and provide an Adaptive User Interface that includes adaptive learning components and learning material. The proposed method analyzes the captured web usage data to identify the learning profile of the learners. The learning profiles are identified by an algorithmic approach that is based on the frequency of accessing the materials and the time spent on the various learning components on the portal. The captured log data is pre-processed and converted into a standard XML format to generate learner sequence data corresponding to the different sessions and time spent. The learning style model adopted in this approach is the Felder-Silverman Learning Style Model (FSLSM). This paper also presents the analysis of learners' activities, preprocessed XML files and generated sequences.

  16. Brand confusion in South African Rugby – Super 12 brands vs ...

    African Journals Online (AJOL)

    Brand confusion in South African Rugby – Super 12 brands vs Currie-Cup brands? ... Through the application of marketing principles and practice, sport marketers should anticipate, manage ... 12 rugby brands and the apparent lack of differentiation from the traditional Currie Cup brands. ...

  17. Derivation of Color Confusion Lines for Pseudo-Dichromat Observers from Color Discrimination Thresholds

    Directory of Open Access Journals (Sweden)

    Kahiro Matsudaira

    2011-05-01

    Full Text Available The objective is to develop a method of defining color confusion lines in the display RGB color space through color discrimination tasks. In the experiment, reference and test square patches were presented side by side on a CRT display. The subject's task was to set the test color so that the color difference from the reference was just noticeable to him/her. In a single trial, the test color was only adjustable along one of 26 directions around the reference. Thus 26 colors with a just noticeable difference (JND) were obtained, making up a tube-like or ellipsoidal shape around each reference. With color-anomalous subjects, the major axes of these shapes should be parallel to color confusion lines that have a common orientation vector corresponding to one of the cone excitation axes L, M, or S. In our method, the orientation vector was determined by minimizing the sum of the squares of the distances from the JND colors to each confusion line. To assess the performance of the method, the orientation vectors obtained by pseudo-dichromats (color-normal observers with a dichromat simulator) were compared to those theoretically calculated from the color vision model used in the simulator.
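
    The least-squares orientation described here has a closed form: the direction minimizing the summed squared distances from the JND colors to a line through the reference is the leading right singular vector of the reference-subtracted JND cloud. The sketch below checks this on synthetic data (the reference color, noise levels and the "true" direction are invented for the example).

```python
import numpy as np

rng = np.random.default_rng(2)
reference = np.array([120.0, 90.0, 60.0])                  # hypothetical reference RGB
true_direction = np.array([0.8, 0.55, 0.24])
true_direction /= np.linalg.norm(true_direction)

# 26 synthetic JND colors scattered along the (unknown) confusion line
jnd = (reference
       + np.outer(rng.normal(scale=8.0, size=26), true_direction)
       + rng.normal(scale=0.5, size=(26, 3)))

differences = jnd - reference                              # vectors from the reference
_, _, vt = np.linalg.svd(differences, full_matrices=False)
orientation = vt[0]                                        # fitted line orientation
print(orientation, abs(orientation @ true_direction))      # the dot product is close to 1
```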

  18. Digital soil mapping: strategy for data pre-processing

    Directory of Open Access Journals (Sweden)

    Alexandre ten Caten

    2012-08-01

    Full Text Available The region of greatest variability on soil maps is along the edge of their polygons, causing disagreement among pedologists about the appropriate description of soil classes at these locations. The objective of this work was to propose a strategy for data pre-processing applied to digital soil mapping (DSM). Soil polygons on a training map were shrunk by 100 and 160 m. This strategy prevented the use of covariates located near the edge of the soil classes for the Decision Tree (DT) models. Three DT models derived from eight predictive covariates, related to relief and organism factors sampled on the original polygons of a soil map and on polygons shrunk by 100 and 160 m, were used to predict soil classes. The DT model derived from observations 160 m away from the edge of the polygons on the original map is less complex and has a better predictive performance.
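
    The polygon-shrinking step can be reproduced with a negative buffer in GeoPandas; the sketch below assumes a projected training map in metres and an invented file name, and simply discards polygons that collapse after the inward buffer.

```python
import geopandas as gpd

# hypothetical soil-class training map with a projected CRS in metres
soil_map = gpd.read_file("soil_training_map.shp")          # assumed file name

for shrink in (100, 160):
    shrunk = soil_map.copy()
    shrunk["geometry"] = shrunk.geometry.buffer(-shrink)   # inward buffer of `shrink` metres
    shrunk = shrunk[~shrunk.geometry.is_empty]             # drop collapsed polygons
    shrunk.to_file(f"soil_training_shrunk_{shrink}m.shp")
```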

  19. Bayesian Optimization for Neuroimaging Pre-processing in Brain Age Classification and Prediction

    Directory of Open Access Journals (Sweden)

    Jenessa Lancaster

    2018-02-01

    Full Text Available Neuroimaging-based age prediction using machine learning is proposed as a biomarker of brain aging, relating to cognitive performance, health outcomes and progression of neurodegenerative disease. However, even leading age-prediction algorithms contain measurement error, motivating efforts to improve experimental pipelines. T1-weighted MRI is commonly used for age prediction, and the pre-processing of these scans involves normalization to a common template and resampling to a common voxel size, followed by spatial smoothing. Resampling parameters are often selected arbitrarily. Here, we sought to improve brain-age prediction accuracy by optimizing resampling parameters using Bayesian optimization. Using data on N = 2003 healthy individuals (aged 16–90 years) we trained support vector machines to (i) distinguish between young (<22 years) and old (>50 years) brains (classification) and (ii) predict chronological age (regression). We also evaluated the generalisability of the age-regression model to an independent dataset (CamCAN, N = 648, aged 18–88 years). Bayesian optimization was used to identify the optimal voxel size and smoothing kernel size for each task. This procedure adaptively samples the parameter space to evaluate accuracy across a range of possible parameters, using independent sub-samples to iteratively assess different parameter combinations to arrive at optimal values. When distinguishing between young and old brains a classification accuracy of 88.1% was achieved (optimal voxel size = 11.5 mm³, smoothing kernel = 2.3 mm). For predicting chronological age, a mean absolute error (MAE) of 5.08 years was achieved (optimal voxel size = 3.73 mm³, smoothing kernel = 3.68 mm). This was compared to performance using default values of 1.5 mm³ and 4 mm respectively, resulting in MAE = 5.48 years, though this 7.3% improvement was not statistically significant. When assessing generalisability, best performance was achieved when applying the entire Bayesian
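
    The optimization loop itself is compact with an off-the-shelf Bayesian optimizer such as scikit-optimize; the sketch below substitutes a cheap synthetic error surface for the real cross-validated MAE (which would require re-resampling and re-smoothing the images at every call), so the parameter ranges and the objective are assumptions made only for illustration.

```python
from skopt import gp_minimize

def surrogate_mae(params):
    """Toy stand-in for the cross-validated mean absolute error of the age model,
    with an arbitrary optimum near voxel_size = 4 mm and smoothing = 4 mm."""
    voxel_size, smoothing = params
    return 5.0 + 0.1 * (voxel_size - 4.0) ** 2 + 0.05 * (smoothing - 4.0) ** 2

result = gp_minimize(surrogate_mae,
                     dimensions=[(1.0, 12.0), (0.0, 8.0)],   # assumed mm ranges
                     n_calls=20, random_state=0)
print(result.x, result.fun)    # parameters and objective value found by the optimizer
```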

  20. Topological expansion of the chain of matrices

    International Nuclear Information System (INIS)

    Eynard, B.; Ferrer, A. Prats

    2009-01-01

    We solve the loop equations to all orders in 1/N², for the chain of matrices matrix model (with possibly an external field coupled to the last matrix of the chain). We show that the topological expansion of the free energy is, as for the 1- and 2-matrix models, given by the symplectic invariants of [19]. As a consequence, we find the double scaling limit explicitly, and we discuss modular properties and large N asymptotics. We also briefly discuss the limit of an infinite chain of matrices (matrix quantum mechanics).

  1. Newton's iteration for inversion of Cauchy-like and other structured matrices

    Energy Technology Data Exchange (ETDEWEB)

    Pan, V.Y. [Lehman College, Bronx, NY (United States); Zheng, Ailong; Huang, Xiaohan; Dias, O. [CUNY, New York, NY (United States)

    1996-12-31

    We specify some initial assumptions that guarantee rapid refinement of a rough initial approximation to the inverse of a Cauchy-like matrix, by means of our new modification of Newton's iteration, where the input, output, and all the auxiliary matrices are represented with their short generators defined by the associated scaling operators. The computations are performed fast since they are confined to operations with short generators of the given and computed matrices. Because of the known correlations among various structured matrices, the algorithm is immediately extended to rapid refinement of rough initial approximations to the inverses of Vandermonde-like, Chebyshev-Vandermonde-like and Toeplitz-like matrices, where again, the computations are confined to operations with short generators of the involved matrices.
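
    Dropping the structured (short-generator) representation that is the point of the paper, the underlying Newton-Schulz iteration for matrix inversion is a few lines of NumPy; the sketch below applies the plain dense iteration, with the classical scaled-transpose initialization, to a small Cauchy test matrix chosen for illustration.

```python
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    """Plain (unstructured) Newton-Schulz iteration X_{k+1} = X_k (2I - A X_k)."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # classical start
    for _ in range(max_iter):
        X = X @ (2.0 * I - A @ X)
        if np.linalg.norm(I - A @ X, np.inf) < tol:
            break
    return X

# Cauchy matrix C_ij = 1 / (s_i - t_j) as a small structured test case
s = np.arange(1, 9, dtype=float)
t = s - 0.5
C = 1.0 / (s[:, None] - t[None, :])
print(np.linalg.norm(newton_inverse(C) @ C - np.eye(8)))   # residual near machine precision
```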

  2. Agricultural matrices affect ground ant assemblage composition inside forest fragments.

    Directory of Open Access Journals (Sweden)

    Diego Santana Assis

    Full Text Available The establishment of agricultural matrices generally involves deforestation, which leads to fragmentation of the remaining forest. This fragmentation can affect forest dynamics both positively and negatively. Since most animal species are affected, certain groups can be used to measure the impact of such fragmentation. This study aimed to measure the impacts of agricultural crops (matrices) on ant communities of adjacent lower montane Atlantic rainforest fragments. We sampled nine forest fragments at locations surrounded by different agricultural matrices, namely: coffee (3 replicates); sugarcane (3); and pasture (3). At each site we installed pitfall traps along a 500 m transect from the interior of the matrix to the interior of the fragment (20 pitfall traps ~25 m apart). Each transect was partitioned into four categories: interior of the matrix; edge of the matrix; edge of the fragment; and interior of the fragment. For each sample site, we measured ant species richness and ant community composition within each transect category. Ant richness and composition differed between fragments and matrices. Each sample location had a specific composition of ants, probably because of the influence of the nature and management of the agricultural matrices. Species composition in the coffee matrix had the highest similarity to its corresponding fragment. The variability in species composition within forest fragments surrounded by pasture was greatest when compared with forest fragments surrounded by sugarcane or, to a lesser extent, coffee. Functional guild composition differed between locations, but the most representative guild was 'generalist' both in the agricultural matrices and forest fragments. Our results are important for understanding how agricultural matrices act on ant communities, and also, how these isolated forest fragments could act as an island of biodiversity in an 'ocean of crops'.

  3. Agricultural matrices affect ground ant assemblage composition inside forest fragments.

    Science.gov (United States)

    Assis, Diego Santana; Dos Santos, Iracenir Andrade; Ramos, Flavio Nunes; Barrios-Rojas, Katty Elena; Majer, Jonathan David; Vilela, Evaldo Ferreira

    2018-01-01

    The establishment of agricultural matrices generally involves deforestation, which leads to fragmentation of the remaining forest. This fragmentation can affect forest dynamics both positively and negatively. Since most animal species are affected, certain groups can be used to measure the impact of such fragmentation. This study aimed to measure the impacts of agricultural crops (matrices) on ant communities of adjacent lower montane Atlantic rainforest fragments. We sampled nine forest fragments at locations surrounded by different agricultural matrices, namely: coffee (3 replicates); sugarcane (3); and pasture (3). At each site we installed pitfall traps along a 500 m transect from the interior of the matrix to the interior of the fragment (20 pitfall traps ~25 m apart). Each transect was partitioned into four categories: interior of the matrix; edge of the matrix; edge of the fragment; and interior of the fragment. For each sample site, we measured ant species richness and ant community composition within each transect category. Ant richness and composition differed between fragments and matrices. Each sample location had a specific composition of ants, probably because of the influence of the nature and management of the agricultural matrices. Species composition in the coffee matrix had the highest similarity to its corresponding fragment. The variability in species composition within forest fragments surrounded by pasture was greatest when compared with forest fragments surrounded by sugarcane or, to a lesser extent, coffee. Functional guild composition differed between locations, but the most representative guild was 'generalist' both in the agricultural matrices and forest fragments. Our results are important for understanding how agricultural matrices act on ant communities, and also, how these isolated forest fragments could act as an island of biodiversity in an 'ocean of crops'.

  4. Identity Confusion and Materialism Mediate the Relationship Between Excessive Social Network Site Usage and Online Compulsive Buying.

    Science.gov (United States)

    Sharif, Saeed Pahlevan; Khanekharab, Jasmine

    2017-08-01

    This study investigates the mediating role of identity confusion and materialism in the relationship between social networking site (SNS) excessive usage and online compulsive buying among young adults. A total of 501 SNS users aged 17 to 23 years (M = 19.68, SD = 1.65) completed an online survey questionnaire. A serial multiple mediator model was developed and hypotheses were tested using structural equation modeling. The results showed that excessive young adult SNS users had a higher tendency toward compulsive buying online. This was partly because they experienced higher identity confusion and developed higher levels of materialism. Targeted psychological interventions seeking to gradually increase identity clarity to buffer the detrimental effects of SNS usage and identity confusion in young adults are suggested.

  5. MALDI matrices for low molecular weight compounds: an endless story?

    Science.gov (United States)

    Calvano, Cosima Damiana; Monopoli, Antonio; Cataldi, Tommaso R I; Palmisano, Francesco

    2018-04-23

    Since its introduction in the 1980s, matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS) has gained a prominent role in the analysis of high molecular weight biomolecules such as proteins, peptides, oligonucleotides, and polysaccharides. Its application to low molecular weight compounds has long remained challenging due to the spectral interferences produced by conventional organic matrices in the low m/z window. To overcome this problem, specific sample preparations such as analyte/matrix derivatization, addition of dopants, or sophisticated deposition techniques especially useful for imaging experiments have been proposed. Alternative approaches based on second-generation (rationally designed) organic matrices, ionic liquids, and inorganic matrices, including metallic nanoparticles, have been the object of intense and continuous research efforts. Definite evidence is now provided that MALDI MS represents a powerful and invaluable analytical tool also for small molecules, including their quantification, thus opening new, exciting applications in metabolomics and imaging mass spectrometry. This review is intended to offer a concise critical overview of the most recent achievements concerning MALDI matrices capable of specifically addressing the challenging issue of small-molecule analysis. Graphical abstract An ideal Book of matrices for MALDI MS of small molecules.

  6. Hierarchical Matrices Method and Its Application in Electromagnetic Integral Equations

    Directory of Open Access Journals (Sweden)

    Han Guo

    2012-01-01

    Full Text Available The hierarchical (H-) matrices method is a general mathematical framework providing a highly compact representation and efficient numerical arithmetic. When applied in integral-equation (IE)-based computational electromagnetics, H-matrices can be regarded as a fast algorithm; therefore, both the CPU time and the memory requirement are reduced significantly. Their kernel-independent feature also makes them suitable for any kind of integral equation. To solve an H-matrix system, Krylov iteration methods can be employed with appropriate preconditioners, and direct solvers based on the hierarchical structure of H-matrices are also available along with high efficiency and accuracy, which is a unique advantage compared to other fast algorithms. In this paper, a novel sparse approximate inverse (SAI) preconditioner in multilevel fashion is proposed to accelerate the convergence rate of Krylov iterations for solving H-matrix systems in electromagnetic applications, and a group of parallel fast direct solvers are developed for dealing with multiple right-hand-side cases. Finally, numerical experiments are given to demonstrate the advantages of the proposed multilevel preconditioner compared to conventional “single level” preconditioners and the practicability of the fast direct solvers for arbitrarily complex structures.
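
    The compression that H-matrices exploit is easy to see on a single admissible block: the interaction between two well-separated point clusters is numerically low rank. The sketch below is a generic illustration of this property only, not of the paper's multilevel SAI preconditioner or direct solvers; it simply truncates the SVD of such a kernel block.

```python
import numpy as np

# kernel block K_ij = 1 / |x_i - y_j| between two well-separated point clusters;
# such "admissible" off-diagonal blocks are the ones H-matrices store in low rank
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)
K = 1.0 / np.abs(x[:, None] - y[None, :])

U, s, Vt = np.linalg.svd(K, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))                 # numerical rank at tolerance 1e-8
K_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
print(rank, np.linalg.norm(K - K_low) / np.linalg.norm(K))
```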

  7. The recurrence sequences via Sylvester matrices

    Science.gov (United States)

    Karaduman, Erdal; Deveci, Ömür

    2017-07-01

    In this work, we define the Pell-Jacobsthal-Sylvester sequence and the Jacobsthal-Pell-Sylvester sequence by using the Sylvester matrices obtained from the characteristic polynomials of the Pell and Jacobsthal sequences, and then we study the sequences defined modulo m. Also, we obtain the cyclic groups and the semigroups from the generating matrices of these sequences when read modulo m, and then we derive the relationships among the orders of the cyclic groups and the periods of the sequences. Furthermore, we redefine the Pell-Jacobsthal-Sylvester sequence and the Jacobsthal-Pell-Sylvester sequence by means of the elements of the groups, and then we examine them in the finite groups.

  8. Quantitative mass spectrometry of unconventional human biological matrices

    Science.gov (United States)

    Dutkiewicz, Ewelina P.; Urban, Pawel L.

    2016-10-01

    The development of sensitive and versatile mass spectrometric methodology has fuelled interest in the analysis of metabolites and drugs in unconventional biological specimens. Here, we discuss the analysis of eight human matrices (hair, nail, breath, saliva, tears, meibum, nasal mucus and skin excretions, including sweat) by mass spectrometry (MS). The use of such specimens brings a number of advantages, the most important being non-invasive sampling, the limited risk of adulteration and the ability to obtain information that complements blood and urine tests. The most often studied matrices are hair, breath and saliva. This review primarily focuses on endogenous (e.g. potential biomarkers, hormones) and exogenous (e.g. drugs, environmental contaminants) small molecules. The majority of analytical methods used chromatographic separation prior to MS; however, such a hyphenated methodology greatly limits analytical throughput. On the other hand, the mass spectrometric methods that exclude chromatographic separation are fast but suffer from matrix interferences. To enable development of quantitative assays for unconventional matrices, it is desirable to standardize the protocols for the analysis of each specimen and create appropriate certified reference materials. Overcoming these challenges will make analysis of unconventional human biological matrices more common in a clinical setting. This article is part of the themed issue 'Quantitative mass spectrometry'.

  9. A Conceptual Cost Benefit Analysis of Tailings Matrices Use in Construction Applications

    Directory of Open Access Journals (Sweden)

    Mahmood Ali A.

    2016-01-01

    Full Text Available As part of a comprehensive research program, new tailings matrices are formulated from combinations of tailings and binder materials. The research program encompasses experimental and numerical analysis of the tailings matrices to investigate the feasibility of using them as construction materials in cold climates. This paper discusses a conceptual cost benefit analysis for the use of these new materials. It is shown here that the financial benefits of using the proposed new tailings matrices, in terms of environmental sustainability, are much higher when compared to normal sand matrices.

  10. Flexible Bayesian Dynamic Modeling of Covariance and Correlation Matrices

    KAUST Repository

    Lan, Shiwei

    2017-11-08

    Modeling covariance (and correlation) matrices is a challenging problem due to the large dimensionality and the positive-definiteness constraint. In this paper, we propose a novel Bayesian framework based on decomposing the covariance matrix into variance and correlation matrices. The highlight is that the correlations are represented as products of vectors on unit spheres. We propose a variety of distributions on spheres (e.g. the squared-Dirichlet distribution) to induce flexible prior distributions for covariance matrices that go beyond the commonly used inverse-Wishart prior. To handle the intractability of the resulting posterior, we introduce the adaptive Δ-Spherical Hamiltonian Monte Carlo. We also extend our structured framework to dynamic cases and introduce unit-vector Gaussian process priors for modeling the evolution of correlation among multiple time series. Using an example of the Normal-Inverse-Wishart problem, a simulated periodic process, and an analysis of local field potential data (collected from the hippocampus of rats performing a complex sequence memory task), we demonstrate the validity and effectiveness of our proposed framework for (dynamic) modeling of covariance and correlation matrices.
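
    The core reparameterization (correlations as inner products of unit vectors, combined with free-standing variances) is easy to illustrate outside the Bayesian machinery; the sketch below only shows that the construction automatically yields a valid covariance matrix, and does not reproduce the article's priors or the Δ-Spherical HMC sampler.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 5

# one unit vector per variable; their Gram matrix is a valid correlation matrix
U = rng.normal(size=(d, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
correlation = U @ U.T                        # unit diagonal, positive semi-definite

std = np.exp(rng.normal(size=d))             # arbitrary positive standard deviations
covariance = np.outer(std, std) * correlation

print(np.allclose(np.diag(correlation), 1.0),
      np.all(np.linalg.eigvalsh(covariance) >= -1e-12))
```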

  11. Impact of Economic Hardship and Financial Threat on Suicide Ideation and Confusion.

    Science.gov (United States)

    Fiksenbaum, Lisa; Marjanovic, Zdravko; Greenglass, Esther; Garcia-Santos, Francisco

    2017-07-04

    The present study tested the extent to which perceived economic hardship is associated with psychological distress (suicide ideation and confusion) after controlling for personal characteristics. It also explored whether perceived financial threat (i.e., fearful anxious-uncertainty about the stability and security of one's personal financial situation) mediates the relationship between economic hardship and psychological distress outcomes. The theoretical model was tested in a sample of Canadian students (n = 211) and was validated in a community sample of employed Portuguese adults (n = 161). In both samples, the fit of the model was good. Parameter estimates indicated that greater experience of economic hardship increased with financial threat, which in turn increased with levels of suicide ideation and confusion. We discuss the practical implications of these results, such as for programs aimed at alleviating the burden of financial hardship, in our concluding remarks.

  12. An introduction to the theory of canonical matrices

    CERN Document Server

    Turnbull, H W

    2004-01-01

    Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory's principal features. Topics include elementary transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations. 1952 edition. Index. Appendix. Historical notes. Bibliographies. 275 problems.

  13. Complementary Set Matrices Satisfying a Column Correlation Constraint

    OpenAIRE

    Wu, Di; Spasojevic, Predrag

    2006-01-01

    Motivated by the problem of reducing the peak to average power ratio (PAPR) of transmitted signals, we consider a design of complementary set matrices whose column sequences satisfy a correlation constraint. The design algorithm recursively builds a collection of $2^{t+1}$ mutually orthogonal (MO) complementary set matrices starting from a companion pair of sequences. We relate correlation properties of column sequences to that of the companion pair and illustrate how to select an appropriate...
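
    The defining property of complementary sequence pairs, the building blocks of such complementary set matrices, is that their aperiodic autocorrelations sum to a delta function; the sketch below checks this for a classical length-8 Golay pair (a textbook example obtained by concatenation, not one of the paper's constructions).

```python
import numpy as np

def aperiodic_autocorrelation(seq):
    seq = np.asarray(seq, dtype=float)
    n = len(seq)
    return np.array([np.dot(seq[:n - u], seq[u:]) for u in range(n)])

# a classical Golay complementary pair of length 8
a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

total = aperiodic_autocorrelation(a) + aperiodic_autocorrelation(b)
print(total)    # 2n = 16 at zero shift and 0 at every other shift
```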

  14. Hamiltonian structure of isospectral deformation equation and semi-classical approximation to factorized S-matrices

    International Nuclear Information System (INIS)

    Chudnovsky, D.V.; Chudnovsky, G.V.

    1980-01-01

    We consider semi-classical approximation to factorized S-matrices. We show that this new class of matrices, called s-matrices, defines Hamiltonian structures for isospectral deformation equations. Concrete examples of factorized s-matrices are constructed and they are used to define Hamiltonian structure for general two-dimensional isospectral deformation systems. (orig.)

  15. Origins Space Telescope: Breaking the Confusion Limit

    Science.gov (United States)

    Wright, Edward L.; Origins Space Telescope Science and Technology Definition Team

    2018-01-01

    The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, one of the four science and technology definition studies of NASA Headquarters for the 2020 Astronomy and Astrophysics Decadal survey. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. OST will have a background-limited sensitivity for a background 27,000 times lower than the Herschel background caused by thermal emission from Herschel's warm telescope. For continuum observations the confusion limit in a diffraction-limited survey can be reached in very short integration times at longer far-infrared wavelengths. But the confusion limit can be pierced for both the nearest and the farthest objects to be observed by OST. For the outer Solar System, the targets' motion across the sky will provide a clear signature in surveys repeated after an interval of days to months. This will provide a size-frequency distribution of TNOs that is not biased toward high-albedo objects. For the distant Universe, the first galaxies and the first metals will provide a third dimension of spectral information that can be measured with a long-slit, medium-resolution spectrograph. This will allow 3D mapping to measure source densities as a function of redshift. The continuum shape associated with sources at different redshifts can be derived from correlation analyses of these 3D maps. Fairly large sky areas can be scanned by moving the spacecraft at a constant angular rate perpendicular to the orientation of the long slit of the spectrograph, avoiding the high overhead of step-and-stare surveying with a large space observatory. We welcome you to contact the Science and Technology Definition Team (STDT) with your science needs and ideas by emailing us at ost_info@lists.ipac.caltech.edu

  16. Self-orthogonal codes from some bush-type Hadamard matrices ...

    African Journals Online (AJOL)

    By means of a construction method outlined by Harada and Tonchev, we determine some non-binary self-orthogonal codes obtained from the row span of orbit matrices of Bush-type Hadamard matrices that admit a fixed-point-free and fixed-block-free automorphism of prime order. We show that the code [20; 15; 4]5 obtained ...

  17. Mirror-Image Confusions: Implications for Representation and Processing of Object Orientation

    Science.gov (United States)

    Gregory, Emma; McCloskey, Michael

    2010-01-01

    Perceiving the orientation of objects is important for interacting with the world, yet little is known about the mental representation or processing of object orientation information. The tendency of humans and other species to confuse mirror images provides a potential clue. However, the appropriate characterization of this phenomenon is not…

  18. Dyscalculia, Dysgraphia, and Left-Right Confusion from a Left Posterior Peri-Insular Infarct

    Directory of Open Access Journals (Sweden)

    S. Bhattacharyya

    2014-01-01

    Full Text Available The Gerstmann syndrome of dyscalculia, dysgraphia, left-right confusion, and finger agnosia is generally attributed to lesions near the angular gyrus of the dominant hemisphere. A 68-year-old right-handed woman presented with sudden difficulty completing a Sudoku grid and was found to have dyscalculia, dysgraphia, and left-right confusion. Magnetic resonance imaging (MRI) showed a focus of abnormal reduced diffusivity in the left posterior insula and temporoparietal operculum consistent with acute infarct. Gerstmann syndrome from an insular or peri-insular lesion has not been described in the literature previously. Pathological and functional imaging studies show connections between the left posterior insular region and inferior parietal lobe. We postulate that the insula and operculum lesion disrupted key functional networks resulting in a pseudoparietal presentation.

  19. Dyscalculia, dysgraphia, and left-right confusion from a left posterior peri-insular infarct.

    Science.gov (United States)

    Bhattacharyya, S; Cai, X; Klein, J P

    2014-01-01

    The Gerstmann syndrome of dyscalculia, dysgraphia, left-right confusion, and finger agnosia is generally attributed to lesions near the angular gyrus of the dominant hemisphere. A 68-year-old right-handed woman presented with sudden difficulty completing a Sudoku grid and was found to have dyscalculia, dysgraphia, and left-right confusion. Magnetic resonance imaging (MRI) showed a focus of abnormal reduced diffusivity in the left posterior insula and temporoparietal operculum consistent with acute infarct. Gerstmann syndrome from an insular or peri-insular lesion has not been described in the literature previously. Pathological and functional imaging studies show connections between left posterior insular region and inferior parietal lobe. We postulate that the insula and operculum lesion disrupted key functional networks resulting in a pseudoparietal presentation.

  20. Some thoughts on positive definiteness in the consideration of nuclear data covariance matrices

    Energy Technology Data Exchange (ETDEWEB)

    Geraldo, L.P.; Smith, D.L.

    1988-01-01

    Some basic mathematical features of covariance matrices are reviewed, particularly as they relate to the property of positive definiteness. Physical implications of positive definiteness are also discussed. Consideration is given to an examination of the origins of non-positive definite matrices, to procedures which encourage the generation of positive definite matrices and to the testing of covariance matrices for positive definiteness. Attention is also given to certain problems associated with the construction of covariance matrices using information which is obtained from evaluated data files recorded in the ENDF format. Examples are provided to illustrate key points pertaining to each of the topic areas covered.

  1. Some thoughts on positive definiteness in the consideration of nuclear data covariance matrices

    International Nuclear Information System (INIS)

    Geraldo, L.P.; Smith, D.L.

    1988-01-01

    Some basic mathematical features of covariance matrices are reviewed, particularly as they relate to the property of positive definiteness. Physical implications of positive definiteness are also discussed. Consideration is given to an examination of the origins of non-positive definite matrices, to procedures which encourage the generation of positive definite matrices and to the testing of covariance matrices for positive definiteness. Attention is also given to certain problems associated with the construction of covariance matrices using information which is obtained from evaluated data files recorded in the ENDF format. Examples are provided to illustrate key points pertaining to each of the topic areas covered.
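
    A practical way to test a covariance matrix for positive definiteness is to attempt a Cholesky factorization, which fails exactly when the (symmetrized) matrix is not positive definite; the sketch below is a generic check, not a tool from the ENDF processing chain.

```python
import numpy as np

def is_positive_definite(cov):
    """Cholesky-based positive-definiteness test for a covariance matrix."""
    sym = 0.5 * (cov + cov.T)            # guard against tiny numerical asymmetries
    try:
        np.linalg.cholesky(sym)
        return True
    except np.linalg.LinAlgError:
        return False

good = np.array([[4.0, 1.2], [1.2, 1.0]])
bad = np.array([[1.0, 2.0], [2.0, 1.0]])   # implies a correlation of 2: not a covariance
print(is_positive_definite(good), is_positive_definite(bad))
```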

  2. 3-D image pre-processing algorithms for improved automated tracing of neuronal arbors.

    Science.gov (United States)

    Narayanaswamy, Arunachalam; Wang, Yu; Roysam, Badrinath

    2011-09-01

    The accuracy and reliability of automated neurite tracing systems is ultimately limited by image quality as reflected in the signal-to-noise ratio, contrast, and image variability. This paper describes a novel combination of image processing methods that operate on images of neurites captured by confocal and widefield microscopy, and produce synthetic images that are better suited to automated tracing. The algorithms are based on the curvelet transform (for denoising curvilinear structures and local orientation estimation), perceptual grouping by scalar voting (for elimination of non-tubular structures and improvement of neurite continuity while preserving branch points), adaptive focus detection, and depth estimation (for handling widefield images without deconvolution). The proposed methods are fast, and capable of handling large images. Their ability to handle images of unlimited size derives from automated tiling of large images along the lateral dimension, and processing of 3-D images one optical slice at a time. Their speed derives in part from the fact that the core computations are formulated in terms of the Fast Fourier Transform (FFT), and in part from parallel computation on multi-core computers. The methods are simple to apply to new images since they require very few adjustable parameters, all of which are intuitive. Examples of pre-processing DIADEM Challenge images are used to illustrate improved automated tracing resulting from our pre-processing methods.

  3. Statistical Downscaling Output GCM Modeling with Continuum Regression and Pre-Processing PCA Approach

    Directory of Open Access Journals (Sweden)

    Sutikno Sutikno

    2010-08-01

    Full Text Available One of the climate models used to predict climatic conditions is the Global Circulation Model (GCM). GCM is a computer-based model that consists of different equations. It uses numerical and deterministic equations which follow the rules of physics. GCM is a main tool for predicting climate and weather, and it also serves as a primary information source for reviewing the effects of climate change. The Statistical Downscaling (SD) technique is used to bridge the large-scale GCM with the small scale (the study area). GCM data are spatial and temporal data in which spatial correlation between data at different grid points within a single domain is likely to occur. Multicollinearity problems create the need for pre-processing of the predictor data X. Continuum Regression (CR) combined with Principal Component Analysis (PCA) pre-processing is an alternative for SD modelling. CR is a method developed by Stone and Brooks (1990). This method is a generalization of the Ordinary Least Squares (OLS), Principal Component Regression (PCR) and Partial Least Squares (PLS) methods, used to overcome multicollinearity problems. Data processing for the stations in Ambon, Pontianak, Losarang, Indramayu and Yuntinyuat shows that, for the 8x8 and 12x12 domains, the CR method produces better RMSEP and predictive R² values than PCR and PLS.
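
    Continuum regression is not available in common Python libraries, but one end of its continuum, principal component regression, is enough to illustrate how the PCA pre-processing tames the multicollinearity of gridded GCM predictors; the toy data and component count below are assumptions made only for the example.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_months, n_grid = 240, 64                    # toy 8x8 GCM domain observed monthly
X = rng.normal(size=(n_months, n_grid))
X += X[:, :1]                                 # shared signal -> strong multicollinearity
y = X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=n_months)   # toy station rainfall

# principal component regression: PCA pre-processing followed by ordinary least squares
pcr = make_pipeline(PCA(n_components=5), LinearRegression())
print(cross_val_score(pcr, X, y, scoring="r2", cv=5).mean())
```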

  4. CSS Preprocessing: Tools and Automation Techniques

    Directory of Open Access Journals (Sweden)

    Ricardo Queirós

    2018-01-01

    Full Text Available Cascading Style Sheets (CSS) is a W3C specification for a style sheet language used for describing the presentation of a document written in a markup language, more precisely, for styling Web documents. However, in the last few years, the landscape for CSS development has changed dramatically with the appearance of several languages and tools aiming to help developers build clean, modular and performance-aware CSS. These new approaches give developers mechanisms to preprocess CSS rules through the use of programming constructs, defined as CSS preprocessors, with the ultimate goal of bringing those missing constructs to the CSS realm and fostering structured programming of stylesheets. At the same time, a new set of tools appeared, defined as postprocessors, for extension and automation purposes, covering a broad set of features ranging from identifying unused and duplicate code to applying vendor prefixes. With all these tools and techniques in hand, developers need to provide a consistent workflow to foster CSS modular coding. This paper aims to present an introductory survey on CSS processors. The survey gathers information on a specific set of processors, categorizes them and compares their features with regard to a set of predefined criteria such as maturity, coverage and performance. Finally, we propose a basic set of best practices in order to set up a simple and pragmatic styling code workflow.

  5. Preprocessing Raw Data in Clinical Medicine for a Data Mining Purpose

    Directory of Open Access Journals (Sweden)

    Peterková Andrea

    2016-12-01

    Full Text Available Dealing with data from the field of medicine is nowadays a very topical and difficult task. On a global scale, a large amount of medical data is produced on an everyday basis. For the purpose of our research, we understand medical data as data about patients, such as results from laboratory analyses, results from screening examinations (CT, ECHO) and clinical parameters. This data is usually in a raw format, difficult to understand, non-standard and not suitable for further processing or analysis. This paper aims to describe a possible method of preparing and preprocessing such raw medical data into a form to which further analysis algorithms can be applied.

  6. Associative Yang-Baxter equation for quantum (semi-)dynamical R-matrices

    International Nuclear Information System (INIS)

    Sechin, Ivan; Zotov, Andrei

    2016-01-01

    In this paper we propose versions of the associative Yang-Baxter equation and higher order R-matrix identities which can be applied to quantum dynamical R-matrices. As is known quantum non-dynamical R-matrices of Baxter-Belavin type satisfy this equation. Together with unitarity condition and skew-symmetry it provides the quantum Yang-Baxter equation and a set of identities useful for different applications in integrable systems. The dynamical R-matrices satisfy the Gervais-Neveu-Felder (or dynamical Yang-Baxter) equation. Relation between the dynamical and non-dynamical cases is described by the IRF (interaction-round-a-face)-Vertex transformation. An alternative approach to quantum (semi-)dynamical R-matrices and related quantum algebras was suggested by Arutyunov, Chekhov, and Frolov (ACF) in their study of the quantum Ruijsenaars-Schneider model. The purpose of this paper is twofold. First, we prove that the ACF elliptic R-matrix satisfies the associative Yang-Baxter equation with shifted spectral parameters. Second, we directly prove a simple relation of the IRF-Vertex type between the Baxter-Belavin and the ACF elliptic R-matrices predicted previously by Avan and Rollet. It provides the higher order R-matrix identities and an explanation of the obtained equations through those for non-dynamical R-matrices. As a by-product we also get an interpretation of the intertwining transformation as matrix extension of scalar theta function likewise R-matrix is interpreted as matrix extension of the Kronecker function. Relations to the Gervais-Neveu-Felder equation and identities for the Felder’s elliptic R-matrix are also discussed.

  7. Asymmetric correlation matrices: an analysis of financial data

    Science.gov (United States)

    Livan, G.; Rebecchi, L.

    2012-06-01

    We analyse the spectral properties of correlation matrices between distinct statistical systems. Such matrices are intrinsically non-symmetric, and lend themselves to extend the spectral analyses usually performed on standard Pearson correlation matrices to the realm of complex eigenvalues. We employ some recent random matrix theory results on the average eigenvalue density of this type of matrix to distinguish between noise and non-trivial correlation structures, and we focus on financial data as a case study. Namely, we employ daily prices of stocks belonging to the American and British stock exchanges, and look for the emergence of correlations between two such markets in the eigenvalue spectrum of their non-symmetric correlation matrix. We find several non trivial results when considering time-lagged correlations over short lags, and we corroborate our findings by additionally studying the asymmetric correlation matrix of the principal components of our datasets.
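
    A pure-noise benchmark of the kind the authors compare against is simple to simulate: the non-symmetric correlation matrix between two independent sets of standardized returns has a genuinely complex eigenvalue spectrum whose radius can be contrasted with the spectrum obtained from real market data. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
T, N = 1000, 50                          # trading days and stocks per market (toy sizes)

# standardized daily returns of two independent synthetic "markets"
X = rng.normal(size=(T, N)); X = (X - X.mean(0)) / X.std(0)
Y = rng.normal(size=(T, N)); Y = (Y - Y.mean(0)) / Y.std(0)

C = X.T @ Y / T                          # non-symmetric cross-correlation matrix
eigenvalues = np.linalg.eigvals(C)       # complex spectrum
print(np.max(np.abs(eigenvalues)))       # noise benchmark for the spectral radius
```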

  8. No Eigenvalues Outside the Limiting Support of Generally Correlated Gaussian Matrices

    KAUST Repository

    Kammoun, Abla; Alouini, Mohamed-Slim

    2016-01-01

    This paper investigates the behaviour of the spectrum of generally correlated Gaussian random matrices whose columns are zero-mean independent vectors but have different correlations, under the specific regime where the number of their columns and that of their rows grow at infinity with the same pace. Following the approach proposed in [1], we prove that under some mild conditions, there is no eigenvalue outside the limiting support of generally correlated Gaussian matrices. As an outcome of this result, we establish that the smallest singular value of these matrices is almost surely greater than zero. From a practical perspective, this control of the smallest singular value is paramount to applications from statistical signal processing and wireless communication, in which this kind of matrices naturally arise.
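
    The statement about the smallest singular value can be illustrated numerically. The sketch below draws a Gaussian matrix whose column groups have different (arbitrary, illustrative) correlation structures and checks that its smallest singular value stays bounded away from zero; it is a toy experiment, not the paper's proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 300, 600                        # rows and columns grow at the same pace (ratio 1:2)
idx = np.arange(n)

# Columns fall into a few groups, each with its own AR(1)-type correlation
# (an arbitrary, illustrative choice of "different correlations per column").
blocks = []
for rho in (0.1, 0.4, 0.7, 0.9):
    R = rho ** np.abs(idx[:, None] - idx[None, :])   # n x n correlation matrix
    L = np.linalg.cholesky(R)
    blocks.append(L @ rng.standard_normal((n, N // 4)))
Y = np.hstack(blocks) / np.sqrt(N)

s = np.linalg.svd(Y, compute_uv=False)
print("smallest singular value:", s.min())           # bounded away from zero
```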

  9. No Eigenvalues Outside the Limiting Support of Generally Correlated Gaussian Matrices

    KAUST Repository

    Kammoun, Abla

    2016-05-04

    This paper investigates the behaviour of the spectrum of generally correlated Gaussian random matrices whose columns are zero-mean independent vectors but have different correlations, under the specific regime where the number of their columns and that of their rows grow at infinity with the same pace. Following the approach proposed in [1], we prove that under some mild conditions, there is no eigenvalue outside the limiting support of generally correlated Gaussian matrices. As an outcome of this result, we establish that the smallest singular value of these matrices is almost surely greater than zero. From a practical perspective, this control of the smallest singular value is paramount to applications from statistical signal processing and wireless communication, in which this kind of matrices naturally arise.

  10. Adhesion and metabolic activity of human corneal cells on PCL based nanofiber matrices

    Energy Technology Data Exchange (ETDEWEB)

    Stafiej, Piotr; Küng, Florian [Department of Ophthalmology, Universität Erlangen-Nürnberg, Schwabachanlage 6, 91054 Erlangen (Germany); Institute of Polymer Materials, Universität Erlangen-Nürnberg, Martensstraße 7, 91054 Erlangen (Germany); Thieme, Daniel; Czugala, Marta; Kruse, Friedrich E. [Department of Ophthalmology, Universität Erlangen-Nürnberg, Schwabachanlage 6, 91054 Erlangen (Germany); Schubert, Dirk W. [Institute of Polymer Materials, Universität Erlangen-Nürnberg, Martensstraße 7, 91054 Erlangen (Germany); Fuchsluger, Thomas A., E-mail: thomas.fuchsluger@uk-erlangen.de [Department of Ophthalmology, Universität Erlangen-Nürnberg, Schwabachanlage 6, 91054 Erlangen (Germany)

    2017-02-01

    In this work, polycaprolactone (PCL) was used as a basic polymer for electrospinning of random and aligned nanofiber matrices. Our aim was to develop a biocompatible substrate for ophthalmological application to improve wound closure in defects of the cornea as a replacement for human amniotic membrane. We investigated whether blending the hydrophobic PCL with poly (glycerol sebacate) (PGS) or chitosan (CHI) improves the biocompatibility of the matrices for cell expansion. Human corneal epithelial cells (HCEp) and human corneal keratocytes (HCK) were used for in vitro biocompatibility studies. After optimization of the electrospinning parameters for all blends, scanning electron microscopy (SEM), attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), and water contact angle were used to characterize the different matrices. Fluorescence staining of the F-actin cytoskeleton of the cells was performed to analyze the adherence of the cells to the different matrices. Metabolic activity of the cells was measured by cell counting kit-8 (CCK-8) for 20 days to compare the biocompatibility of the materials. Our results show the feasibility of producing uniform nanofiber matrices with and without orientation for the blends used. All materials support adherence and proliferation of human corneal cell lines with oriented growth on aligned matrices. Although the hydrophobicity of the materials was lowered by blending PCL, the expected increase in biocompatibility and proliferation could not be measured. All tested matrices supported the expansion of human corneal cells, confirming their potential as substrates for biomedical applications. - Highlights: • PCL was blended with chitosan and poly(glycerol sebacate) for electrospinning. • Biocompatibility was proven with two human corneal cell lines. • Both cell lines adhered and proliferated on random and aligned nanofiber matrices. • Cytoskeletal orientation is shown on aligned nanofiber matrices.

  11. First results in the application of silicon photomultiplier matrices to small animal PET

    Energy Technology Data Exchange (ETDEWEB)

    Llosa, G. [University of Pisa, Department of Physics, Pisa (Italy)], E-mail: gabriela.llosa@pi.infn.it; Belcari, N.; Bisogni, M.G. [University of Pisa, Department of Physics, Pisa (Italy); INFN Pisa (Italy); Collazuol, G. [University of Pisa, Department of Physics, Pisa (Italy); Scuola Normale Superiore, Pisa (Italy); Marcatili, S. [University of Pisa, Department of Physics, Pisa (Italy); INFN Pisa (Italy); Boscardin, M.; Melchiorri, M.; Tarolli, A.; Piemonte, C.; Zorzi, N. [FBK irst, Trento (Italy); Barrillon, P.; Bondil-Blin, S.; Chaumat, V.; La Taille, C. de; Dinu, N.; Puill, V.; Vagnucci, J-F. [Laboratoire de l' Accelerateur Lineaire, IN2P3-CNRS, Orsay (France); Del Guerra, A. [University of Pisa, Department of Physics, Pisa (Italy); INFN Pisa (Italy)

    2009-10-21

    A very high resolution small animal PET scanner that employs matrices of silicon photomultipliers as photodetectors is under development at the University of Pisa and INFN Pisa. The first SiPM matrices, composed of 16 (4 x 4) 1 mm x 1 mm pixel elements on a common substrate, have been produced at FBK-irst and are being evaluated for this application. The MAROC2 ASIC developed at LAL-Orsay has been employed for the readout of the SiPM matrices. The devices have been tested with pixelated and continuous LYSO crystals. The results show the good performance of the matrices and have led to the fabrication of matrices with 64 SiPM elements.

  12. First results in the application of silicon photomultiplier matrices to small animal PET

    International Nuclear Information System (INIS)

    Llosa, G.; Belcari, N.; Bisogni, M.G.; Collazuol, G.; Marcatili, S.; Boscardin, M.; Melchiorri, M.; Tarolli, A.; Piemonte, C.; Zorzi, N.; Barrillon, P.; Bondil-Blin, S.; Chaumat, V.; La Taille, C. de; Dinu, N.; Puill, V.; Vagnucci, J-F.; Del Guerra, A.

    2009-01-01

    A very high resolution small animal PET scanner that employs matrices of silicon photomultipliers as photodetectors is under development at the University of Pisa and INFN Pisa. The first SiPM matrices, composed of 16 (4 x 4) 1 mm x 1 mm pixel elements on a common substrate, have been produced at FBK-irst and are being evaluated for this application. The MAROC2 ASIC developed at LAL-Orsay has been employed for the readout of the SiPM matrices. The devices have been tested with pixelated and continuous LYSO crystals. The results show the good performance of the matrices and have led to the fabrication of matrices with 64 SiPM elements.

  13. More about unphysical zeroes in quark mass matrices

    Energy Technology Data Exchange (ETDEWEB)

    Emmanuel-Costa, David, E-mail: david.costa@tecnico.ulisboa.pt [Departamento de Física and Centro de Física Teórica de Partículas - CFTP, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais, 1049-001 Lisboa (Portugal); González Felipe, Ricardo, E-mail: ricardo.felipe@tecnico.ulisboa.pt [Departamento de Física and Centro de Física Teórica de Partículas - CFTP, Instituto Superior Técnico, Universidade de Lisboa, Avenida Rovisco Pais, 1049-001 Lisboa (Portugal); ISEL - Instituto Superior de Engenharia de Lisboa, Instituto Politécnico de Lisboa, Rua Conselheiro Emídio Navarro, 1959-007 Lisboa (Portugal)

    2017-01-10

    We look for all weak bases that lead to texture zeroes in the quark mass matrices and contain a minimal number of parameters in the framework of the standard model. Since there are ten physical observables, namely, six nonvanishing quark masses, three mixing angles and one CP phase, the maximum number of texture zeroes in both quark sectors is altogether nine. The nine zero entries can only be distributed between the up- and down-quark sectors in matrix pairs with six and three texture zeroes or five and four texture zeroes. In the weak basis where a quark mass matrix is nonsingular and has six zeroes in one sector, we find that there are 54 matrices with three zeroes in the other sector, obtainable through right-handed weak basis transformations. It is also found that all pairs composed of a nonsingular matrix with five zeroes and a nonsingular and nondecoupled matrix with four zeroes simply correspond to a weak basis choice. Without any further assumptions, none of these pairs of up- and down-quark mass matrices has physical content. It is shown that all non-weak-basis pairs of quark mass matrices that contain nine zeroes are not compatible with current experimental data. The particular case of the so-called nearest-neighbour-interaction pattern is also discussed.
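
    For readers unfamiliar with the nearest-neighbour-interaction pattern mentioned at the end of the abstract, one commonly quoted form of that texture is sketched below in LaTeX; the symbols are generic entries and the exact convention varies between papers.

```latex
% One common convention for the nearest-neighbour-interaction (NNI) texture
% (conventions differ between papers); each matrix carries four texture zeros:
M_{\mathrm{NNI}} =
\begin{pmatrix}
 0  & a  & 0 \\
 a' & 0  & b \\
 0  & b' & c
\end{pmatrix}
```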

  14. EARNED INCOME CREDIT: Opportunities To Make Recertification Program Less Confusing and More Consistent

    National Research Council Canada - National Science Library

    2002-01-01

    .... While it is important to ensure that all persons eligible for the EIC receive it, equally important is the need to identify and deny erroneous claims, whether due to fraud, negligence, or confusion...

  15. Basal Cell Carcinoma With Matrical Differentiation: Clinicopathologic, Immunohistochemical, and Molecular Biological Study of 22 Cases.

    Science.gov (United States)

    Kyrpychova, Liubov; Carr, Richard A; Martinek, Petr; Vanecek, Tomas; Perret, Raul; Chottová-Dvořáková, Magdalena; Zamecnik, Michal; Hadravsky, Ladislav; Michal, Michal; Kazakov, Dmitry V

    2017-06-01

    Basal cell carcinoma (BCC) with matrical differentiation is a fairly rare neoplasm, with about 30 cases documented mainly as isolated case reports. We studied a series of this neoplasm, including cases with an atypical matrical component, a hitherto unreported feature. Lesions coded as BCC with matrical differentiation were reviewed; 22 cases were included. Immunohistochemical studies were performed using antibodies against BerEp4, β-catenin, and epithelial membrane antigen (EMA). Molecular genetic studies using Ion AmpliSeq Cancer Hotspot Panel v2 by massively parallel sequencing on Ion Torrent PGM were performed in 2 cases with an atypical matrical component (1 was previously subjected to microdissection to sample the matrical and BCC areas separately). There were 13 male and 9 female patients, ranging in age from 41 to 89 years. Microscopically, all lesions manifested at least 2 components, a BCC area (follicular germinative differentiation) and areas with matrical differentiation. A BCC component dominated in 14 cases, whereas a matrical component dominated in 4 cases. Matrical differentiation was recognized as matrical/supramatrical cells (n=21), shadow cells (n=21), bright red trichohyaline granules (n=18), and blue-gray corneocytes (n=18). In 2 cases, matrical areas manifested cytologic atypia, and a third case exhibited an infiltrative growth pattern, with the tumor metastasizing to a lymph node. BerEP4 labeled the follicular germinative cells, whereas it was markedly reduced or negative in matrical areas. The reverse pattern was seen with β-catenin. EMA was negative in BCC areas but stained a proportion of matrical/supramatrical cells. Genetic studies revealed mutations of the following genes: CTNNB1, KIT, CDKN2A, TP53, SMAD4, ERBB4, and PTCH1, with some differences between the matrical and BCC components. It is concluded that matrical differentiation in BCC in most cases occurs as multiple foci. Rare neoplasms manifest atypia in the matrical areas

  16. Dirac Matrices and Feynman’s Rest of the Universe

    Directory of Open Access Journals (Sweden)

    Young S. Kim

    2012-10-01

    Full Text Available There are two sets of four-by-four matrices introduced by Dirac. The first set consists of fifteen Majorana matrices derivable from his four γ matrices. These fifteen matrices can also serve as the generators of the group SL(4, r). The second set consists of ten generators of the Sp(4) group which Dirac derived from two coupled harmonic oscillators. It is shown possible to extend the symmetry of Sp(4) to that of SL(4, r) if the area of the phase space of one of the oscillators is allowed to become smaller without a lower limit. While there are no restrictions on the size of phase space in classical mechanics, Feynman’s rest of the universe makes this Sp(4)-to-SL(4, r) transition possible. The ten generators are for the world where quantum mechanics is valid. The remaining five generators belong to the rest of the universe. It is noted that the groups SL(4, r) and Sp(4) are locally isomorphic to the Lorentz groups O(3, 3) and O(3, 2) respectively. This allows us to interpret Feynman’s rest of the universe in terms of space-time symmetry.
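
    As a concrete companion to the two sets of four-by-four matrices discussed here, the sketch below builds the Dirac γ matrices in the standard Dirac representation and verifies the Clifford relation {γ^μ, γ^ν} = 2 η^{μν} I numerically; it does not reproduce the paper's group-theoretic construction.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
gamma = [np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), -I2]])]
for s in (sx, sy, sz):
    gamma.append(np.block([[np.zeros((2, 2)), s], [-s, np.zeros((2, 2))]]))

eta = np.diag([1, -1, -1, -1]).astype(complex)   # metric (+, -, -, -)

# Check the Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu} * identity.
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford algebra verified for the Dirac representation.")
```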

  17. Reduced Discrimination in the Tritanopic Confusion Line for Congenital Color Deficiency Adults.

    Science.gov (United States)

    Costa, Marcelo F; Goulart, Paulo R K; Barboni, Mirella T S; Ventura, Dora F

    2016-01-01

    In congenital color blindness the red-green discrimination is impaired resulting in an increased confusion between those colors with yellow. Our post-receptoral physiological mechanisms are organized in two pathways for color perception, a red-green (protanopic and deuteranopic) and a blue-yellow (tritanopic). We argue that the discrimination losses in the yellow area in congenital color vision deficiency subjects could generate a subtle loss of discriminability in the tritanopic channel considering discrepancies with yellow perception. We measured color discrimination thresholds for blue and yellow of tritanopic channel in congenital color deficiency subjects. Chromaticity thresholds were measured around a white background (0.1977 u', 0.4689 v' in the CIE 1976) consisting of a blue-white and white-yellow thresholds in a tritanopic color confusion line of 21 congenital colorblindness subjects (mean age = 27.7; SD = 5.6 years; 14 deuteranomalous and 7 protanomalous) and of 82 (mean age = 25.1; SD = 3.7 years) normal color vision subjects. Significant increase in the whole tritanopic axis was found for both deuteranomalous and protanomalous subjects compared to controls for the blue-white (F 2,100 = 18.80; p color confusion axis is significantly reduced in congenital color vision deficiency compared to normal subjects. Since yellow discrimination was impaired the balance of the blue-yellow channels is impaired justifying the increased thresholds found for blue-white discrimination. The weighting toward the yellow region of the color space with the deuteranomalous contributing to that perceptual distortion is discussed in terms of physiological mechanisms.

  18. Dynamical correlations for circular ensembles of random matrices

    International Nuclear Information System (INIS)

    Nagao, Taro; Forrester, Peter

    2003-01-01

    Circular Brownian motion models of random matrices were introduced by Dyson and describe the parametric eigenparameter correlations of unitary random matrices. For symmetric unitary, self-dual quaternion unitary and an analogue of antisymmetric Hermitian matrix initial conditions, Brownian dynamics toward the unitary symmetry is analyzed. The dynamical correlation functions of arbitrary number of Brownian particles at arbitrary number of times are shown to be written in the forms of quaternion determinants, similarly as in the case of Hermitian random matrix models
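
    Not the Brownian-motion calculation of the paper, but a short numerical aside: circular-ensemble (Haar-random unitary) matrices of the kind whose eigenvalue correlations are described here can be sampled with a QR decomposition plus a phase fix, as sketched below; sizes and seeds are arbitrary.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample an n x n unitary from the circular unitary ensemble (CUE)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the phases of R's diagonal so that the distribution is exactly Haar.
    q *= np.diag(r) / np.abs(np.diag(r))
    return q

rng = np.random.default_rng(2)
U = haar_unitary(200, rng)
theta = np.angle(np.linalg.eigvals(U))        # eigenphases on the unit circle
print("unitarity error:", np.abs(U.conj().T @ U - np.eye(200)).max())
```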

  19. Quantum Algorithms for Weighing Matrices and Quadratic Residues

    OpenAIRE

    van Dam, Wim

    2000-01-01

    In this article we investigate how we can employ the structure of combinatorial objects like Hadamard matrices and weighing matrices to devise new quantum algorithms. We show how the properties of a weighing matrix can be used to construct a problem for which the quantum query complexity is significantly lower than the classical one. It is pointed out that this scheme captures both Bernstein & Vazirani's inner-product protocol and Grover's search algorithm. In the second part of the ar...

  20. Quality assessment of baby food made of different pre-processed organic raw materials under industrial processing conditions.

    Science.gov (United States)

    Seidel, Kathrin; Kahl, Johannes; Paoletti, Flavio; Birlouez, Ines; Busscher, Nicolaas; Kretzschmar, Ursula; Särkkä-Tirkkonen, Marjo; Seljåsen, Randi; Sinesio, Fiorella; Torp, Torfinn; Baiamonte, Irene

    2015-02-01

    The market for processed food is rapidly growing. The industry needs methods for "processing with care" leading to high quality products in order to meet consumers' expectations. Processing influences the quality of the finished product through various factors. In carrot baby food, these are the raw material, the pre-processing and storage treatments as well as the processing conditions. In this study, a quality assessment was performed on baby food made from different pre-processed raw materials. The experiments were carried out under industrial conditions using fresh, frozen and stored organic carrots as raw material. Statistically significant differences were found for sensory attributes among the three autoclaved puree samples (e.g. overall odour F = 90.72, p processed from frozen carrots show increased moisture content and decrease of several chemical constituents. Biocrystallization identified changes between replications of the cooking. Pre-treatment of raw material has a significant influence on the final quality of the baby food.

  1. THE IMAGE REGISTRATION OF FOURIER-MELLIN BASED ON THE COMBINATION OF PROJECTION AND GRADIENT PREPROCESSING

    Directory of Open Access Journals (Sweden)

    D. Gao

    2017-09-01

    Full Text Available Image registration is one of the most important applications in the field of image processing. The Fourier-Mellin transform method, which has the advantages of high precision and good robustness to changes in light and shade, partial occlusion, noise and so on, is widely used. However, this method cannot obtain a unique mutual-power pulse function for non-parallel image pairs, and for some image pairs no mutual-power pulse can be obtained at all. In this paper, an image registration method based on the Fourier-Mellin transformation with projection-gradient preprocessing is proposed. According to the projection conformational equation, the method calculates the image projection transformation matrix to correct the tilted image; then, gradient preprocessing and the Fourier-Mellin transformation are performed on the corrected image to obtain the registration parameters. The experimental results show that the method makes Fourier-Mellin image registration applicable not only to parallel image pairs but also to non-parallel image pairs, and that a better registration result is obtained
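
    A minimal sketch of the phase-correlation core of Fourier-Mellin registration (translation recovery only; the full method adds a log-polar resampling of the magnitude spectra to recover rotation and scale, and the paper's projection and gradient pre-processing are not reproduced here). The test images are random placeholders.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the (row, col) shift of image a relative to image b."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12            # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real           # the "pulse" discussed in the abstract
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(3)
img = rng.random((128, 128))
shifted = np.roll(np.roll(img, 7, axis=0), -12, axis=1)  # known circular shift
print(phase_correlation(shifted, img))                   # expect approximately (7, -12)
```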

  2. Data Acquisition and Preprocessing in Studies on Humans: What Is Not Taught in Statistics Classes?

    Science.gov (United States)

    Zhu, Yeyi; Hernandez, Ladia M; Mueller, Peter; Dong, Yongquan; Forman, Michele R

    2013-01-01

    The aim of this paper is to address issues in research that may be missing from statistics classes and important for (bio-)statistics students. In the context of a case study, we discuss data acquisition and preprocessing steps that fill the gap between research questions posed by subject matter scientists and statistical methodology for formal inference. Issues include participant recruitment, data collection training and standardization, variable coding, data review and verification, data cleaning and editing, and documentation. Despite the critical importance of these details in research, most of these issues are rarely discussed in an applied statistics program. One reason for the lack of more formal training is the difficulty in addressing the many challenges that can possibly arise in the course of a study in a systematic way. This article can help to bridge this gap between research questions and formal statistical inference by using an illustrative case study for a discussion. We hope that reading and discussing this paper and practicing data preprocessing exercises will sensitize statistics students to these important issues and achieve optimal conduct, quality control, analysis, and interpretation of a study.

  3. Empowering first year (post-matric) students in basic research skills ...

    African Journals Online (AJOL)

    Post-matric students from under-resourced (historically disadvantaged) black high schools generally encounter difficulties in their academic work at university. The study reported here was intended to empower first year (post-matric) students from these schools with basic research skills in a bid to counteract the effects of ...

  4. Applicability of non-invasively collected matrices for human biomonitoring

    Directory of Open Access Journals (Sweden)

    Nickmilder Marc

    2009-03-01

    Full Text Available Abstract With its inclusion under Action 3 in the Environment and Health Action Plan 2004–2010 of the European Commission, human biomonitoring is currently receiving an increasing amount of attention from the scientific community as a tool to better quantify human exposure to, and health effects of, environmental stressors. Despite the policy support, however, there are still several issues that restrict the routine application of human biomonitoring data in environmental health impact assessment. One of the main issues is the obvious need to routinely collect human samples for large-scale surveys. In particular, the collection of invasive samples from susceptible populations may suffer from ethical and practical limitations. Children, pregnant women, elderly, or chronically-ill people are among those that would benefit the most from non-invasive, repeated or routine sampling. Therefore, the use of non-invasively collected matrices for human biomonitoring should be promoted as an ethically appropriate, cost-efficient and toxicologically relevant alternative for many biomarkers that are currently determined in invasively collected matrices. This review illustrates that several non-invasively collected matrices are widely used that can be a valuable addition to, or alternative for, invasively collected matrices such as peripheral blood sampling. Moreover, a well-informed choice of matrix can provide an added value for human biomonitoring, as different non-invasively collected matrices can offer opportunities to study additional aspects of exposure to and effects from environmental contaminants, such as repeated sampling, historical overview of exposure, mother-child transfer of substances, or monitoring of substances with short biological half-lives.

  5. Polymer Percolation Threshold in Multi-Component HPMC Matrices Tablets

    Directory of Open Access Journals (Sweden)

    Maryam Maghsoodi

    2011-06-01

    Full Text Available Introduction: Percolation theory studies the critical points, or percolation thresholds, of a system, at which one component of the system undergoes a geometrical phase transition and starts to connect the whole system. Applying this theory to the release rate of hydrophilic matrices helps explain changes in the release kinetics of swellable matrix-type systems and leads to a clear improvement in the design of controlled-release dosage forms. Methods: In this study, percolation theory was applied to multi-component hydroxypropylmethylcellulose (HPMC) hydrophilic matrices. Matrix tablets were prepared using phenobarbital as the drug and magnesium stearate as a lubricant, employing different amounts of lactose and HPMC K4M as a filler and matrix-forming material, respectively. Ethylcellulose (EC) as a polymeric excipient was also examined. Dissolution studies were carried out using the paddle method. In order to estimate the percolation threshold, the behaviour of the kinetic parameters with respect to the volumetric fraction of HPMC at time zero was studied. Results: In both HPMC/lactose and HPMC/EC/lactose matrices, from the point of view of percolation theory, the optimum concentration of HPMC to obtain a hydrophilic matrix system for the controlled release of phenobarbital is higher than 18.1% (v/v) HPMC. Above 18.1% (v/v) HPMC, an infinite cluster of HPMC would be formed, maintaining the integrity of the system and controlling drug release from the matrices. According to the results, EC had no significant influence on the HPMC percolation threshold. Conclusion: This may be related to the broad functionality of swelling hydrophilic matrices.

  6. Pre-processing of input files for the AZTRAN code

    International Nuclear Information System (INIS)

    Vargas E, S.; Ibarra, G.

    2017-09-01

    The AZTRAN code began to be developed in the Nuclear Engineering Department of the Escuela Superior de Fisica y Matematicas (ESFM) of the Instituto Politecnico Nacional (IPN) with the purpose of numerically solving various models arising from the physics and engineering of nuclear reactors. The code is still under development and is part of the AZTLAN platform: Development of a Mexican platform for the analysis and design of nuclear reactors. Because generating an input file for the code is complex, a script based on the D language was developed to make its preparation easier. It relies on a new input file format with specific cards divided into two blocks, mandatory cards and optional cards, and includes a pre-processing of the input file to identify possible errors within it, as well as an image generator for the specific problem based on the Python interpreter. (Author)
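
    The division into mandatory and optional cards with a pre-processing error check can be illustrated with a small, hypothetical validator, written here in Python rather than D; the card names and one-card-per-line format are invented and do not correspond to the actual AZTRAN input format. A real pre-processor would also validate the card values and report all errors before running the solver.

```python
# Hypothetical card-based input checker, inspired by the mandatory/optional split
# described above. Card names and format are illustrative only.
MANDATORY = {"GEOMETRY", "MATERIALS", "QUADRATURE"}
OPTIONAL = {"OUTPUT", "PLOT"}

def preprocess(path):
    cards, errors = {}, []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            line = line.split("#")[0].strip()        # strip comments and blanks
            if not line:
                continue
            name, *values = line.split()
            if name not in MANDATORY | OPTIONAL:
                errors.append(f"line {lineno}: unknown card '{name}'")
            elif name in cards:
                errors.append(f"line {lineno}: duplicate card '{name}'")
            else:
                cards[name] = values
    for card in MANDATORY - cards.keys():
        errors.append(f"missing mandatory card '{card}'")
    return cards, errors
```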

  7. Review of Data Preprocessing Methods for Sign Language Recognition Systems based on Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Zorins Aleksejs

    2016-12-01

    Full Text Available The article presents an introductory analysis of a research topic relevant to the Latvian deaf community: the development of a Latvian Sign Language recognition system. More specifically, data preprocessing methods are discussed and several approaches are presented, with a focus on systems based on artificial neural networks, which are among the most successful solutions for the sign language recognition task.

  8. Efficient linear algebra routines for symmetric matrices stored in packed form.

    Science.gov (United States)

    Ahlrichs, Reinhart; Tsereteli, Kakha

    2002-01-30

    Quantum chemistry methods require various linear algebra routines for symmetric matrices, for example, diagonalization or Cholesky decomposition for positive matrices. We present a small set of these basic routines that are efficient and minimize memory requirements.
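
    A small numpy illustration of the packed storage referred to here (keeping only the upper triangle of a symmetric matrix, then unpacking it for a standard eigendecomposition); it is not the authors' routines, which operate directly on the packed form.

```python
import numpy as np

def pack_upper(A):
    """Store the upper triangle of a symmetric matrix as a 1-D array (n(n+1)/2 entries)."""
    iu = np.triu_indices(A.shape[0])
    return A[iu]

def unpack_upper(packed, n):
    A = np.zeros((n, n))
    iu = np.triu_indices(n)
    A[iu] = packed
    return A + np.triu(A, 1).T            # mirror the strict upper triangle

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
S = (B + B.T) / 2                         # a symmetric test matrix

p = pack_upper(S)                         # 15 numbers instead of 25
w = np.linalg.eigvalsh(unpack_upper(p, 5))
print(np.allclose(w, np.linalg.eigvalsh(S)))   # True: packing loses nothing
```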

  9. Defensive medicine: No wonder policymakers are confused.

    Science.gov (United States)

    Kapp, Marshall B

    2016-01-01

    Discussions regarding defensive medical practice often result in proposals for public policy actions. Such proposals generally are premised on assumptions about defensive medicine, namely, that it (a) is driven by physicians' legal anxieties, (b) constitutes bad medical practice, (c) drives up health care costs, (d) varies depending on a jurisdiction's particular tort law climate, (e) depends on medical specialty and a physician's own prior experience as a malpractice defendant, and (f) is a rational response to actual legal risks confronting physicians. This article examines a sample of recent literature focusing on defensive medicine and finds that the messages conveyed vary widely, helping to explain the confusion experienced by many policymakers trying to improve the quality and affordability of health care.

  10. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    OpenAIRE

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan

    2012-01-01

    In compressed sensing, one takes n samples of an N-dimensional vector x_0 using an n x N matrix A, obtaining undersampled measurements y = A x_0. For random matrices with independent standard Gaussian entries, it is known that, when x_0 is k-sparse, there is a precisely determined phase transition: for a certain region of the phase diagram of undersampling and sparsity ratios, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the ...
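
    The convex optimization referred to (basis pursuit, minimize the l1 norm of x subject to Ax = y) can be sketched as a linear program; the sizes below are small and illustrative, far from the phase-transition experiments of the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, N, k = 40, 100, 5                      # measurements, dimension, sparsity

A = rng.standard_normal((n, N))
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Basis pursuit as an LP: x = u - v with u, v >= 0, minimize sum(u) + sum(v).
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(x_hat - x0))
```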

  11. Two-mode Gaussian density matrices and squeezing of photons

    International Nuclear Information System (INIS)

    Tucci, R.R.

    1992-01-01

    In this paper, the authors generalize to 2-mode states the 1-mode state results obtained in a previous paper. The authors study 2-mode Gaussian density matrices. The authors find a linear transformation which maps the two annihilation operators, one for each mode, into two new annihilation operators that are uncorrelated and unsqueezed. This allows the authors to express the density matrix as a product of two 1-mode density matrices. The authors find general conditions under which 2-mode Gaussian density matrices become pure states. Possible pure states include the 2-mode squeezed pure states commonly mentioned in the literature, plus other pure states never mentioned before. The authors discuss the entropy and thermodynamic laws (Second Law, Fundamental Equation, and Gibbs-Duhem Equation) for the 2-mode states being considered

  12. Confusion of tongues, trauma and hospitality in Sándor Ferenczi

    Directory of Open Access Journals (Sweden)

    Alan Osmo

    2012-06-01

    Full Text Available In this paper we discuss the ideas of confusion of tongues, trauma and hospitality in the psychoanalytic field. For Ferenczi, the adult-child relationship is marked by a confusion arising from a difference of languages, such that one often does not understand the other. In this context, pathogenic trauma may emerge. The analytic experience, instead of carrying the traumatic event to better psychic domains, can reproduce and even aggravate what was experienced as catastrophic in childhood. In this sense, the principle of hospitality in the analytic clinic is of utmost importance to avoid a possible reproduction of the trauma between analyst and analysand. In this article we use the work of Sándor Ferenczi as the main reference, relating it at some points to texts by Jacques Derrida and Walter Benjamin that discuss the origin of the confusion of tongues and the problem of the possibility of translation.

  13. Neural Online Filtering Based on Preprocessed Calorimeter Data

    CERN Document Server

    Torres, R C; The ATLAS collaboration; Simas Filho, E F; De Seixas, J M

    2009-01-01

    Among the LHC detectors, ATLAS copes with the high event rate by means of a three-level online triggering system. The first-level trigger output will be ~75 kHz; this level marks the regions where relevant events were found. The second level validates the LVL1 decision by looking only at the approved data with full granularity, reducing the event rate at its output to ~2 kHz. Finally, the third level looks at the full event information, and a rate of ~200 Hz of events is expected to be approved and stored on persistent media for further offline analysis. Many interesting events decay into electrons, which have to be identified against a huge background (jets). This work proposes a highly efficient LVL2 electron/jet discrimination system based on neural networks fed with preprocessed calorimeter information. The feature extraction part of the proposed system builds a ring-structured description of the data: a set of concentric rings centered at the highest-energy cell is generated ...
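
    The ring description mentioned above can be sketched as follows: concentric square rings around the hottest cell of a small calorimeter window are summed into a compact feature vector. This is a simplified, hypothetical version of the trigger's ring sums, ignoring calorimeter layers and the normalization details of the real system.

```python
import numpy as np

def ring_sums(window, n_rings):
    """Sum cell energies over concentric square rings centred on the hottest cell."""
    r0, c0 = np.unravel_index(np.argmax(window), window.shape)
    rows, cols = np.indices(window.shape)
    ring_index = np.maximum(np.abs(rows - r0), np.abs(cols - c0))   # Chebyshev distance
    return np.array([window[ring_index == r].sum() for r in range(n_rings)])

rng = np.random.default_rng(6)
window = rng.exponential(0.1, size=(11, 11))
window[5, 5] += 10.0                        # a fake electron-like energy deposit
features = ring_sums(window, n_rings=5)     # ring 0 is the hottest cell itself
print(features)
```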

  14. Data preprocessing methods for robust Fourier ptychographic microscopy

    Science.gov (United States)

    Zhang, Yan; Pan, An; Lei, Ming; Yao, Baoli

    2017-12-01

    Fourier ptychographic microscopy (FPM) is a recently developed computational imaging technique that achieves gigapixel images with both high resolution and large field-of-view. In the current FPM experimental setup, the dark-field images with high-angle illuminations are easily overwhelmed by stray lights and background noises due to the low signal-to-noise ratio, thus significantly degrading the achievable resolution of the FPM approach. We provide an overall and systematic data preprocessing scheme to enhance the FPM's performance, which involves sampling analysis, underexposed/overexposed treatments, background noises suppression, and stray lights elimination. It is demonstrated experimentally with both US Air Force (USAF) 1951 resolution target and biological samples that the benefit of the noise removal by these methods far outweighs the defect of the accompanying signal loss, as part of the lost signals can be compensated by the improved consistencies among the captured raw images. In addition, the reported nonparametric scheme could be further cooperated with the existing state-of-the-art algorithms with a great flexibility, facilitating a stronger noise-robust capability of the FPM approach in various applications.

  15. Positive projections of symmetric matrices and Jordan algebras

    DEFF Research Database (Denmark)

    Fuglede, Bent; Jensen, Søren Tolver

    2013-01-01

    An elementary proof is given that the projection from the space of all symmetric p×p matrices onto a linear subspace is positive if and only if the subspace is a Jordan algebra. This solves a problem in a statistical model.

  16. Statistical potential-based amino acid similarity matrices for aligning distantly related protein sequences.

    Science.gov (United States)

    Tan, Yen Hock; Huang, He; Kihara, Daisuke

    2006-08-15

    Aligning distantly related protein sequences is a long-standing problem in bioinformatics, and a key for successful protein structure prediction. Its importance is increasing recently in the context of structural genomics projects because more and more experimentally solved structures are available as templates for protein structure modeling. Toward this end, recent structure prediction methods employ profile-profile alignments, and various ways of aligning two profiles have been developed. More fundamentally, a better amino acid similarity matrix can improve a profile itself; thereby resulting in more accurate profile-profile alignments. Here we have developed novel amino acid similarity matrices from knowledge-based amino acid contact potentials. Contact potentials are used because the contact propensity to the other amino acids would be one of the most conserved features of each position of a protein structure. The derived amino acid similarity matrices are tested on benchmark alignments at three different levels, namely, the family, the superfamily, and the fold level. Compared to BLOSUM45 and the other existing matrices, the contact potential-based matrices perform comparably in the family level alignments, but clearly outperform in the fold level alignments. The contact potential-based matrices perform even better when suboptimal alignments are considered. Comparing the matrices themselves with each other revealed that the contact potential-based matrices are very different from BLOSUM45 and the other matrices, indicating that they are located in a different basin in the amino acid similarity matrix space.

  17. The optimal version of Hua's fundamental theorem of geometry of rectangular matrices

    CERN Document Server

    Semrl, Peter

    2014-01-01

    Hua's fundamental theorem of geometry of matrices describes the general form of bijective maps on the space of all m\\times n matrices over a division ring \\mathbb{D} which preserve adjacency in both directions. Motivated by several applications the author studies a long standing open problem of possible improvements. There are three natural questions. Can we replace the assumption of preserving adjacency in both directions by the weaker assumption of preserving adjacency in one direction only and still get the same conclusion? Can we relax the bijectivity assumption? Can we obtain an analogous result for maps acting between the spaces of rectangular matrices of different sizes? A division ring is said to be EAS if it is not isomorphic to any proper subring. For matrices over EAS division rings the author solves all three problems simultaneously, thus obtaining the optimal version of Hua's theorem. In the case of general division rings he gets such an optimal result only for square matrices and gives examples ...

  18. Numerical solutions of stochastic Lotka-Volterra equations via operational matrices

    Directory of Open Access Journals (Sweden)

    F. Hosseini Shekarabi

    2016-03-01

    Full Text Available In this paper, an efficient and convenient method for the numerical solution of the stochastic Lotka-Volterra dynamical system is proposed. Here, we consider block pulse functions and their operational matrices of integration. An illustrative example is included to demonstrate the procedure and the accuracy of the operational matrices based on block pulse functions.
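
    The operational matrix of integration for block pulse functions, which this kind of method rests on, has a simple closed form; the sketch below builds it and checks it against the exact integral of a test function. It is a generic illustration, not the paper's Lotka-Volterra solver.

```python
import numpy as np

def bpf_integration_matrix(m, T=1.0):
    """Operational matrix P with  integral_0^t psi(tau) dtau  ~  P @ psi(t)."""
    h = T / m
    return (h / 2) * (np.eye(m) + 2 * np.triu(np.ones((m, m)), k=1))

m, T = 64, 1.0
t_mid = (np.arange(m) + 0.5) * T / m          # block midpoints
f = t_mid**2                                  # BPF coefficients of f(t) = t^2
P = bpf_integration_matrix(m, T)
approx = f @ P                                # coefficients of the integral t^3 / 3
exact = t_mid**3 / 3
print("max error:", np.abs(approx - exact).max())
```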

  19. The performance of the Congruence Among Distance Matrices (CADM) test in phylogenetic analysis

    Science.gov (United States)

    2011-01-01

    Background CADM is a statistical test used to estimate the level of Congruence Among Distance Matrices. It has been shown in previous studies to have a correct rate of type I error and good power when applied to dissimilarity matrices and to ultrametric distance matrices. Contrary to most other tests of incongruence used in phylogenetic analysis, the null hypothesis of the CADM test assumes complete incongruence of the phylogenetic trees instead of congruence. In this study, we performed computer simulations to assess the type I error rate and power of the test. It was applied to additive distance matrices representing phylogenies and to genetic distance matrices obtained from nucleotide sequences of different lengths that were simulated on randomly generated trees of varying sizes, and under different evolutionary conditions. Results Our results showed that the test has an accurate type I error rate and good power. As expected, power increased with the number of objects (i.e., taxa), the number of partially or completely congruent matrices and the level of congruence among distance matrices. Conclusions Based on our results, we suggest that CADM is an excellent candidate to test for congruence and, when present, to estimate its level in phylogenomic studies where numerous genes are analysed simultaneously. PMID:21388552

  20. The performance of the Congruence Among Distance Matrices (CADM test in phylogenetic analysis

    Directory of Open Access Journals (Sweden)

    Lapointe François-Joseph

    2011-03-01

    Full Text Available Abstract Background CADM is a statistical test used to estimate the level of Congruence Among Distance Matrices. It has been shown in previous studies to have a correct rate of type I error and good power when applied to dissimilarity matrices and to ultrametric distance matrices. Contrary to most other tests of incongruence used in phylogenetic analysis, the null hypothesis of the CADM test assumes complete incongruence of the phylogenetic trees instead of congruence. In this study, we performed computer simulations to assess the type I error rate and power of the test. It was applied to additive distance matrices representing phylogenies and to genetic distance matrices obtained from nucleotide sequences of different lengths that were simulated on randomly generated trees of varying sizes, and under different evolutionary conditions. Results Our results showed that the test has an accurate type I error rate and good power. As expected, power increased with the number of objects (i.e., taxa, the number of partially or completely congruent matrices and the level of congruence among distance matrices. Conclusions Based on our results, we suggest that CADM is an excellent candidate to test for congruence and, when present, to estimate its level in phylogenomic studies where numerous genes are analysed simultaneously.
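
    A hedged sketch of the idea behind such a congruence test: unfold each distance matrix, rank the entries, measure concordance with Kendall's W, and build a null distribution by permuting the objects of each matrix independently. This follows the general logic described above, not the exact CADM implementation evaluated in the paper; the test data are synthetic.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(rows):
    """Kendall's coefficient of concordance for a (k, m) array of ranked vectors."""
    k, m = rows.shape
    S = ((rows.sum(axis=0) - rows.sum() / m) ** 2).sum()
    return 12 * S / (k**2 * (m**3 - m))

def cadm_like_test(matrices, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    n = matrices[0].shape[0]
    iu = np.triu_indices(n, k=1)
    ranks = np.array([rankdata(D[iu]) for D in matrices])
    w_obs = kendalls_w(ranks)
    count = 1
    for _ in range(n_perm):
        perm_ranks = []
        for D in matrices:
            p = rng.permutation(n)                 # permute objects, not cells
            perm_ranks.append(rankdata(D[np.ix_(p, p)][iu]))
        count += kendalls_w(np.array(perm_ranks)) >= w_obs
    return w_obs, count / (n_perm + 1)

# Example: three distance matrices on the same 12 objects, two of them congruent.
rng = np.random.default_rng(11)
pts = rng.random((12, 2))
D1 = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
D2 = D1 + 0.05 * rng.random(D1.shape); D2 = (D2 + D2.T) / 2; np.fill_diagonal(D2, 0)
D3 = rng.random((12, 12)); D3 = (D3 + D3.T) / 2; np.fill_diagonal(D3, 0)
print(cadm_like_test([D1, D2, D3]))
```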

  1. On families of anticommuting matrices

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel

    2016-01-01

    Roč. 493, March 15 (2016), s. 494-507 ISSN 0024-3795 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords : anticommuting matrices * sum-of-squares formulas Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S0024379515007296

  2. On families of anticommuting matrices

    Czech Academy of Sciences Publication Activity Database

    Hrubeš, Pavel

    2016-01-01

    Roč. 493, March 15 (2016), s. 494-507 ISSN 0024-3795 EU Projects: European Commission(XE) 339691 - FEALORA Institutional support: RVO:67985840 Keywords: anticommuting matrices * sum-of-squares formulas Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S0024379515007296

  3. Schur Complement Inequalities for Covariance Matrices and Monogamy of Quantum Correlations.

    Science.gov (United States)

    Lami, Ludovico; Hirche, Christoph; Adesso, Gerardo; Winter, Andreas

    2016-11-25

    We derive fundamental constraints for the Schur complement of positive matrices, which provide an operator strengthening to recently established information inequalities for quantum covariance matrices, including strong subadditivity. This allows us to prove general results on the monogamy of entanglement and steering quantifiers in continuous variable systems with an arbitrary number of modes per party. A powerful hierarchical relation for correlation measures based on the log-determinant of covariance matrices is further established for all Gaussian states, which has no counterpart among quantities based on the conventional von Neumann entropy.
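
    For reference, the object being constrained: for a positive matrix partitioned into blocks A, B, C, the Schur complement of the C block is A - B C^{-1} B^T, computed below for a random positive-definite "covariance matrix". This is a generic numerical illustration of the definition and of the standard determinant factorization, not the paper's operator inequality.

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((6, 6))
Sigma = M @ M.T + 1e-3 * np.eye(6)          # a positive-definite "covariance matrix"

nA = 3                                       # split into an A block and a C block
A, B = Sigma[:nA, :nA], Sigma[:nA, nA:]
C = Sigma[nA:, nA:]

schur = A - B @ np.linalg.solve(C, B.T)      # Schur complement of C in Sigma

# For positive-definite Sigma the Schur complement is itself positive definite,
# and det(Sigma) = det(C) * det(Schur complement).
print(np.linalg.eigvalsh(schur).min() > 0)
print("log det identity holds:",
      np.isclose(np.linalg.slogdet(Sigma)[1],
                 np.linalg.slogdet(C)[1] + np.linalg.slogdet(schur)[1]))
```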

  4. Application of Parallel Hierarchical Matrices in Spatial Statistics and Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2018-04-20

    Parallel H-matrices in spatial statistics: 1. Motivation: improve the statistical model; 2. Tools: hierarchical matrices [Hackbusch 1999]; 3. Matérn covariance function and joint Gaussian likelihood; 4. Identification of unknown parameters via maximizing the Gaussian log-likelihood; 5. Implementation with HLIBPro.
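
    Points 3 and 4 above can be made concrete with a dense (non-hierarchical) sketch: a Matérn covariance matrix and the joint Gaussian log-likelihood whose maximization identifies the unknown parameters. The hierarchical-matrix compression and the HLIBPro implementation are exactly what this small dense version does not do; locations, parameter values and the coarse grid search are illustrative.

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.spatial.distance import cdist

def matern_cov(X, sigma2, ell, nu):
    d = cdist(X, X)
    scaled = np.sqrt(2 * nu) * d / ell
    scaled[scaled == 0] = 1e-10               # avoid the singularity of kv at 0
    C = sigma2 * (2 ** (1 - nu) / gamma(nu)) * scaled**nu * kv(nu, scaled)
    C[d == 0] = sigma2                         # exact limit on the diagonal
    return C

def gaussian_loglik(z, C):
    n = len(z)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, z))
    return -0.5 * (z @ alpha) - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(8)
X = rng.random((300, 2))                       # spatial locations in the unit square
C_true = matern_cov(X, sigma2=1.0, ell=0.2, nu=1.5)
z = np.linalg.cholesky(C_true + 1e-10 * np.eye(300)) @ rng.standard_normal(300)

# Coarse grid search over the length scale (parameter identification in miniature).
for ell in (0.05, 0.1, 0.2, 0.4):
    print(ell, gaussian_loglik(z, matern_cov(X, 1.0, ell, 1.5)))
```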

  5. Higher dimensional unitary braid matrices: Construction, associated structures and entanglements

    International Nuclear Information System (INIS)

    Abdesselam, B.; Chakrabarti, A.; Dobrev, V.K.; Mihov, S.G.

    2007-03-01

    We construct (2n)² × (2n)² unitary braid matrices R̂ for n ≥ 2, generalizing the class known for n = 1. A set of (2n) × (2n) matrices (I, J, K, L) is defined. R̂ is expressed in terms of their tensor products (such as K ⊗ J), leading to a canonical formulation for all n. Complex projectors P± provide a basis for our real, unitary R̂. Baxterization is obtained. Diagonalizations and block-diagonalizations are presented. The loss of the braid property when R̂ (n > 1) is block-diagonalized in terms of R̂ (n = 1) is pointed out and explained. For odd dimension (2n+1)² × (2n+1)², a previously constructed braid matrix is complexified to obtain unitarity. R̂LL- and R̂TT-algebras, chain Hamiltonians, potentials for factorizable S-matrices, and complex non-commutative spaces are all studied briefly in the context of our unitary braid matrices. The Turaev construction of link invariants is formulated for our case. We conclude with comments concerning entanglements. (author)

  6. The Serbian idea in an era of confused historical consciousness

    OpenAIRE

    Mitrović Milovan M.

    2011-01-01

    This paper represents a hypothetical consideration of the phenomenology of the Serbian national idea within the traumatic circumstances of the breakup of the Yugoslav state at the end of the 20th century, when the Serbian national issue was reopened in an exceptionally unfavorable geopolitical context for the Serbian people. The author specifically analyzes the ideological and political factors behind the Serbian confusion with the theoretical framework of Agnes Heller's critical interpreta...

  7. Does working memory change with age? The interactions of concurrent articulation with the effects of word length and acoustic confusion.

    Science.gov (United States)

    Bireta, Tamra J; Fine, Hope C; Vanwormer, Lisa A

    2013-01-01

    The effects of acoustic confusion (phonological similarity), word length, and concurrent articulation (articulatory suppression) are cited as support for Working Memory's phonological loop component (e.g., Baddeley, 2000 , Psychonomic Bulletin and Review, 7, 544). Research has focused on younger adults, with no studies examining whether concurrent articulation reduces the word length and acoustic confusion effects among older adults. In the current study, younger and older adults were given lists of similar and dissimilar letters (Experiment 1) or long and short words (Experiment 2) for immediate serial reconstruction of order. Items were presented visually or auditorily, with or without concurrent articulation. As expected, younger and older adults demonstrated effects of acoustic confusion, word length, and concurrent articulation. Further, concurrent articulation reduced the effects of acoustic confusion and word length equally for younger and older adults. This suggests that age-related differences occur in overall performance, but do not reflect an age-related deficiency in the functioning of the phonological loop component of working memory.

  8. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure finds the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
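
    A sketch of the walking strategy in two dimensions, using scipy's Delaunay structure for the mesh and neighbour information but doing the walk by hand: a simplified visibility walk from a random starting simplex. The paper's random-sampling start selection, its analysis and its 3-D version are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay

def walk_locate(tri, p, rng):
    """Locate the simplex containing p by walking from a random start simplex."""
    s = rng.integers(len(tri.simplices))
    while True:
        T = tri.transform[s]                       # affine map to barycentric coords
        b = T[:2] @ (p - T[2])
        bary = np.append(b, 1 - b.sum())
        if (bary >= -1e-12).all():
            return s                               # p is inside this simplex
        worst = int(np.argmin(bary))               # cross the facet we violate most
        s = tri.neighbors[s][worst]
        if s == -1:
            return None                            # walked off the convex hull

rng = np.random.default_rng(9)
pts = rng.random((2000, 2))
tri = Delaunay(pts)
q = np.array([0.5, 0.5])
print(walk_locate(tri, q, rng) == tri.find_simplex(q))
```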

  9. Performance Comparison of Several Pre-Processing Methods in a Hand Gesture Recognition System based on Nearest Neighbor for Different Background Conditions

    Directory of Open Access Journals (Sweden)

    Regina Lionnie

    2013-09-01

    Full Text Available This paper presents a performance analysis and comparison of several pre-processing methods used in a hand gesture recognition system. The pre-processing methods are based on combinations of several image processing operations, namely edge detection, low-pass filtering, histogram equalization, thresholding and desaturation. The hand gesture recognition system is designed to classify an input image into one of six possible classes. The input images are taken with various background conditions. Our experiments showed that the best result is achieved when the pre-processing method consists of only a desaturation operation, achieving a classification accuracy of up to 83.15%.
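
    A minimal sketch of the best-performing combination reported above (desaturation only, followed by a nearest-neighbour classifier). The images are random placeholders, and the desaturation formula (lightness = (max + min) / 2) is one common choice, not necessarily the exact operation used by the authors.

```python
import numpy as np

def desaturate(rgb):
    """Lightness-style desaturation: (max(R, G, B) + min(R, G, B)) / 2 per pixel."""
    return (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0

def nearest_neighbour(train_X, train_y, x):
    d = np.linalg.norm(train_X - x.ravel(), axis=1)
    return train_y[np.argmin(d)]

rng = np.random.default_rng(10)
train_imgs = rng.random((60, 32, 32, 3))          # placeholder gesture images, 6 classes
train_y = np.repeat(np.arange(6), 10)
train_X = np.array([desaturate(im).ravel() for im in train_imgs])

test_img = train_imgs[13] + 0.02 * rng.standard_normal((32, 32, 3))  # noisy copy
print(nearest_neighbour(train_X, train_y, desaturate(test_img)))      # expect class 1
```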

  10. A Conceptual Cost Benefit Analysis of Tailings Matrices Use in Construction Applications

    OpenAIRE

    Mahmood Ali A.; Elektorowicz Maria

    2016-01-01

    As part of a comprehensive research program, new tailings matrices are formulated of combinations of tailings and binder materials. The research program encompasses experimental and numerical analysis of the tailings matrices to investigate the feasibility of using them as construction materials in cold climates. This paper discusses a conceptual cost benefit analysis for the use of these new materials. It is shown here that the financial benefits of using the proposed new tailings matrices i...

  11. IMAGING THE EPOCH OF REIONIZATION: LIMITATIONS FROM FOREGROUND CONFUSION AND IMAGING ALGORITHMS

    International Nuclear Information System (INIS)

    Vedantham, Harish; Udaya Shankar, N.; Subrahmanyan, Ravi

    2012-01-01

    Tomography of redshifted 21 cm transition from neutral hydrogen using Fourier synthesis telescopes is a promising tool to study the Epoch of Reionization (EoR). Limiting the confusion from Galactic and extragalactic foregrounds is critical to the success of these telescopes. The instrumental response or the point-spread function (PSF) of such telescopes is inherently three dimensional with frequency mapping to the line-of-sight (LOS) distance. EoR signals will necessarily have to be detected in data where continuum confusion persists; therefore, it is important that the PSF has acceptable frequency structure so that the residual foreground does not confuse the EoR signature. This paper aims to understand the three-dimensional PSF and foreground contamination in the same framework. We develop a formalism to estimate the foreground contamination along frequency, or equivalently LOS dimension, and establish a relationship between foreground contamination in the image plane and visibility weights on the Fourier plane. We identify two dominant sources of LOS foreground contamination—'PSF contamination' and 'gridding contamination'. We show that PSF contamination is localized in LOS wavenumber space, beyond which there potentially exists an 'EoR window' with negligible foreground contamination where we may focus our efforts to detect EoR. PSF contamination in this window may be substantially reduced by judicious choice of a frequency window function. Gridding and imaging algorithms create additional gridding contamination and we propose a new imaging algorithm using the Chirp Z Transform that significantly reduces this contamination. Finally, we demonstrate the analytical relationships and the merit of the new imaging algorithm for the case of imaging with the Murchison Widefield Array.

  12. Acoustic classification of anchovy (Engraulis ringens) and sardine (Strangomera bentincki) using support vector machines in central-southern Chile: effect of parameter calibration on the confusion matrix

    Directory of Open Access Journals (Sweden)

    Hugo Robotham

    2012-03-01

    C by analyzing the effect of the calibration on the confusion matrices resulting from the classification of the species under study. The SVM method correctly classified 95.3% of anchovy and sardine schools. The optimal parameters of the Gaussian-Kernel γ and penalty C obtained with the proposed methodology were γ = 450 and C = 0.95. These parameters have an important influence over the confusion matrix and the final classifications percentages, suggesting the development of experimental protocols for calibrating these parameters in future applications of this methodology. In all the confusion matrices, the common sardine showed the lowest classification error. The bottom depth was the descriptor that was most sensitive to the SVM, followed by school-shore distance.
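
    A hedged sketch of the workflow described: an RBF-kernel SVM, grid calibration of γ and C, and inspection of the resulting confusion matrix, using scikit-learn and synthetic two-class data in place of the acoustic school descriptors (depth, school-shore distance, and so on) used in the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.datasets import make_classification

# Synthetic stand-in for the acoustic school descriptors.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

grid = GridSearchCV(SVC(kernel="rbf"),
                    {"gamma": [0.01, 0.1, 1, 10, 100],
                     "C": [0.1, 0.5, 1, 5, 10]},
                    cv=5)
grid.fit(X_tr, y_tr)

print("best (gamma, C):", grid.best_params_)
print(confusion_matrix(y_te, grid.predict(X_te)))   # rows: true class, cols: predicted
```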

  13. Invertibility and Explicit Inverses of Circulant-Type Matrices with k-Fibonacci and k-Lucas Numbers

    Directory of Open Access Journals (Sweden)

    Zhaolin Jiang

    2014-01-01

    Full Text Available Circulant matrices have important applications in solving ordinary differential equations. In this paper, we consider circulant-type matrices with the k-Fibonacci and k-Lucas numbers. We discuss the invertibility of these circulant matrices and present the explicit determinant and inverse matrix by constructing the transformation matrices, which generalizes the results in Shen et al. (2011).
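
    The computational fact underlying such explicit inverses: a circulant matrix is diagonalized by the discrete Fourier transform, so its determinant and inverse follow from the FFT of its first column. The sketch below checks this numerically for a generic first column rather than for the k-Fibonacci / k-Lucas entries treated in the paper.

```python
import numpy as np
from scipy.linalg import circulant

c = np.array([5.0, 1.0, 2.0, 0.5, 3.0])      # first column (generic, not k-Fibonacci)
C = circulant(c)

lam = np.fft.fft(c)                           # eigenvalues of the circulant matrix
print("det check:", np.isclose(np.prod(lam).real, np.linalg.det(C)))

# The inverse is again circulant, with first column given by the inverse FFT of 1/lambda.
C_inv = circulant(np.fft.ifft(1 / lam).real)
print("inverse check:", np.allclose(C_inv @ C, np.eye(len(c))))
```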

  14. Integrated fMRI Preprocessing Framework Using Extended Kalman Filter for Estimation of Slice-Wise Motion

    OpenAIRE

    Basile Pinsard; Arnaud Boutin; Julien Doyon; Habib Benali

    2018-01-01

    Functional MRI acquisition is sensitive to subjects' motion that cannot be fully constrained. Therefore, signal corrections have to be applied a posteriori in order to mitigate the complex interactions between changing tissue localization and magnetic fields, gradients and readouts. To circumvent current preprocessing strategies limitations, we developed an integrated method that correct motion and spatial low-frequency intensity fluctuations at the level of each slice in order to better fit ...

  15. Subdural Empyema Presenting with Seizure, Confusion, and Focal Weakness

    Directory of Open Access Journals (Sweden)

    David I Bruner

    2012-12-01

    Full Text Available While sinusitis is a common ailment, intracranial suppurative complications of sinusitis are rare and difficult to diagnose and treat. The morbidity and mortality of intracranial complications of sinusitis have decreased significantly since the advent of antibiotics, but diseases such as subdural empyemas and intracranial abscesses still occur, and they require prompt diagnosis, treatment, and often surgical drainage to prevent death or long-term neurologic sequelae. We present a case of an immunocompetent adolescent male with a subdural empyema who presented with seizures, confusion, and focal arm weakness after a bout of sinusitis.

  16. Subdural Empyema Presenting with Seizure, Confusion, and Focal Weakness

    Science.gov (United States)

    Bruner, David I.; Littlejohn, Lanny; Pritchard, Amy

    2012-01-01

    While sinusitis is a common ailment, intracranial suppurative complications of sinusitis are rare and difficult to diagnose and treat. The morbidity and mortality of intracranial complications of sinusitis have decreased significantly since the advent of antibiotics, but diseases such as subdural empyemas and intracranial abscesses still occur, and they require prompt diagnosis, treatment, and often surgical drainage to prevent death or long-term neurologic sequelae. We present a case of an immunocompetent adolescent male with a subdural empyema who presented with seizures, confusion, and focal arm weakness after a bout of sinusitis. PMID:23358438

  17. Asymptotic Distribution of Eigenvalues of Weakly Dilute Wishart Matrices

    Energy Technology Data Exchange (ETDEWEB)

    Khorunzhy, A. [Institute for Low Temperature Physics (Ukraine)], E-mail: khorunjy@ilt.kharkov.ua; Rodgers, G. J. [Brunel University, Uxbridge, Department of Mathematics and Statistics (United Kingdom)], E-mail: g.j.rodgers@brunel.ac.uk

    2000-03-15

    We study the eigenvalue distribution of large random matrices that are randomly diluted. We consider two random matrix ensembles that in the pure (nondilute) case have a limiting eigenvalue distribution with a singular component at the origin. These include the Wishart random matrix ensemble and Gaussian random matrices with correlated entries. Our results show that the singularity in the eigenvalue distribution is rather unstable under dilution and that even weak dilution destroys it.

  18. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    Science.gov (United States)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the proposed approach to data aggregation can be interpreted as the frequency distribution of a variable. To study its properties, the density function is used. For this purpose, the authors propose using piecewise polynomial models. A suitable example of such an approach is the spline. The authors show that their approach to data aggregation allows reducing the level of data uncertainty and significantly increasing the efficiency of numerical calculations. To demonstrate the degree of correspondence of the proposed methods to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.
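
    As a hedged sketch of the general idea only (the authors' NPA code is not reproduced here), the snippet below aggregates a raw sample into a frequency distribution, represents its density with a piecewise polynomial (a cubic spline), and reuses that aggregate in a later calculation.

        # Sketch: aggregate raw data into a histogram, model the density with a
        # piecewise cubic polynomial (spline), and reuse it in a later computation.
        import numpy as np
        from scipy.interpolate import CubicSpline

        rng = np.random.default_rng(0)
        data = rng.normal(loc=10.0, scale=2.0, size=10_000)      # raw data to aggregate

        counts, edges = np.histogram(data, bins=30, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        density = CubicSpline(centers, counts)                   # piecewise polynomial model

        grid = np.linspace(centers[0], centers[-1], 1000)
        mean_from_aggregate = np.trapz(grid * density(grid), grid)
        print(f"mean estimated from the aggregated density: {mean_from_aggregate:.3f}")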

  19. Teaching Fourier optics through ray matrices

    International Nuclear Information System (INIS)

    Moreno, I; Sanchez-Lopez, M M; Ferreira, C; Davis, J A; Mateos, F

    2005-01-01

    In this work we examine the use of ray-transfer matrices for teaching and for deriving some topics in a Fourier optics course, exploiting the mathematical simplicity of ray matrices compared to diffraction integrals. A simple analysis of the physical meaning of the elements of the ray matrix provides a fast derivation of the conditions to obtain the optical Fourier transform. We extend this derivation to fractional Fourier transform optical systems, and derive the order of the transform from the ray matrix. Some examples are provided to stress this point of view, both with classical and with graded-index lenses. This formulation cannot replace the complete explanation of Fourier optics provided by the wave theory, but it is a complementary tool that is useful for simplifying many aspects of Fourier optics and relating them to geometrical optics.
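
    As an illustration of the style of derivation described, the sketch below composes ABCD ray-transfer matrices for a standard 2f system (free space of length f, thin lens of focal length f, free space of length f) and checks that the diagonal elements vanish, the condition associated with an exact optical Fourier transform.

        # Sketch: ray-transfer (ABCD) matrices of a 2f system; A = D = 0 signals the
        # Fourier-transforming configuration. The rightmost factor acts first.
        import numpy as np

        def free_space(d):
            return np.array([[1.0, d], [0.0, 1.0]])

        def thin_lens(f):
            return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

        f = 0.1                                          # focal length in metres (arbitrary)
        M = free_space(f) @ thin_lens(f) @ free_space(f)
        print(M)                                         # expected: [[0, f], [-1/f, 0]]
        assert np.isclose(M[0, 0], 0.0) and np.isclose(M[1, 1], 0.0)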

  20. Perceptual Confusions of the Manual Alphabet by Naive, Trained, and Familiar Users.

    Science.gov (United States)

    Hawes, M. Dixie; Danhauer, Jeffrey L.

    1980-01-01

    An investigation of the confusion resulting from reliance on visual perceptual teachers in the identification of dactylemes (handshapes) in the American Manual Alphabet (MA) is reported. A hierarchy of errors varying with subjects' degree of expertness in the MA is established. This can help manual communication teachers develop techniques for…

  1. Econometric models for predicting confusion crop ratios

    Science.gov (United States)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed the individual CD/CRD models. This result was expected partly because acreage statistics are based on sampling procedures, and the sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for the CD/CRD data introduced measurement error into the CD/CRD models.

  2. Fabrication of chemically cross-linked porous gelatin matrices.

    Science.gov (United States)

    Bozzini, Sabrina; Petrini, Paola; Altomare, Lina; Tanzi, Maria Cristina

    2009-01-01

    The aim of this study was to chemically cross-link gelatin, by reacting its free amino groups with an aliphatic diisocyanate. To produce hydrogels with controllable properties, the number of reacting amino groups was carefully determined. Porosity was introduced into the gelatin-based hydrogels through the lyophilization process. Porous and non-porous matrices were characterized with respect to their chemical structure, morphology, water uptake and mechanical properties. The physical, chemical and mechanical properties of the porous matrices are related to the extent of their cross-linking, showing that they can be controlled by varying the reaction parameters. Water uptake values (24 hours) vary between 160% and 200% as the degree of cross-linking increases. The flexibility of the samples also decreases by changing the extent of cross-linking. Young's modulus shows values between 0.188 kPa, for the highest degree, and 0.142 kPa for the lowest degree. The matrices are potential candidates for use as tissue-engineering scaffolds by modulating their physical chemical properties according to the specific application.

  3. Large-deviation theory for diluted Wishart random matrices

    Science.gov (United States)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economy. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ exp(−N Ψ_x(k)) follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
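
    The replica results themselves are analytical; as a purely numerical, hedged companion (parameters below are illustrative assumptions), the sketch samples diluted Wishart matrices and records the counting function I_N(x), giving empirical access to its mean and variance.

        # Sketch: empirical mean and variance of I_N(x), the number of eigenvalues of a
        # diluted Wishart matrix W = X X^T / M below x, with X Gaussian and diluted.
        import numpy as np

        rng = np.random.default_rng(1)
        N, M, p, x = 100, 200, 0.05, 0.5          # illustrative size and dilution
        counts = []
        for _ in range(200):
            X = rng.normal(size=(N, M)) * (rng.random((N, M)) < p)
            W = X @ X.T / M
            counts.append(np.sum(np.linalg.eigvalsh(W) < x))

        counts = np.array(counts)
        print("mean I_N(x):", counts.mean(), " variance:", counts.var())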

  4. Theoretical and experimental researches of methanol clusters in low - temperature matrices

    International Nuclear Information System (INIS)

    Chernolevs'ka, Je.A.; Doroshenko, Yi.Yu.; Pogorelov, V.Je.; Vas'kyivs'kij, Je.V.; Shablyinskas, V.; Balyavyichus, V.; Yasajev, O.

    2015-01-01

    Molecular vibrational spectra of methanol in argon and nitrogen matrices have been studied. Since methanol belongs to a class of substances with hydrogen bonds, there is a possibility of forming molecular associations and clusters with various numbers of molecules. IR spectra of methanol in Ar and N2 matrices experimentally obtained in the temperature range from 10 to 50 K are compared with the results of computer simulation using the ab initio Car-Parrinello molecular dynamics (CPMD) method. The results obtained for small clusters in model calculations demonstrate a good correlation with experimental data for various matrices at the corresponding temperatures.

  5. Multiple Regression Analysis of Unconfined Compression Strength of Mine Tailings Matrices

    Directory of Open Access Journals (Sweden)

    Mahmood Ali A.

    2017-01-01

    Full Text Available As part of a novel approach of sustainable development of mine tailings, experimental and numerical analysis is carried out on newly formulated tailings matrices. Several physical characteristic tests are carried out including the unconfined compression strength test to ascertain the integrity of these matrices when subjected to loading. The current paper attempts a multiple regression analysis of the unconfined compressive strength test results of these matrices to investigate the most pertinent factors affecting their strength. Results of this analysis showed that the suggested equation is reasonably applicable to the range of binder combinations used.

  6. The Python Spectral Analysis Tool (PySAT) for Powerful, Flexible, and Easy Preprocessing and Machine Learning with Point Spectral Data

    Science.gov (United States)

    Anderson, R. B.; Finch, N.; Clegg, S. M.; Graff, T.; Morris, R. V.; Laura, J.

    2018-04-01

    The PySAT point spectra tool provides a flexible graphical interface, enabling scientists to apply a wide variety of preprocessing and machine learning methods to point spectral data, with an emphasis on multivariate regression.

  7. To freeze or not to freeze embryos: clarity, confusion and conflict.

    Science.gov (United States)

    Goswami, Mohar; Murdoch, Alison P; Haimes, Erica

    2015-06-01

    Although embryo freezing is a routine clinical practice, there is little contemporary evidence on how couples make the decision to freeze their surplus embryos, or of their perceptions during that time. This article describes a qualitative study of 16 couples who have had in vitro fertilisation (IVF) treatment. The study question was 'What are the personal and social factors that patients consider when deciding whether to freeze embryos?' We show that while the desire for a baby is the dominant drive, couples' views revealed more nuanced and complex considerations in the decision-making process. It was clear that the desire to have a baby influenced couples' decision-making and that they saw freezing as 'part of the process'. However, there were confusions associated with the term 'freezing' related to concerns about the safety of the procedure. Despite being given written information, couples were confused about the practical aspects of embryo freezing, which suggests they were preoccupied with the immediate demands of IVF. Couples expressed ethical conflicts about freezing 'babies'. We hope the findings from this study will inform clinicians and assist them in providing support to couples confronted with this difficult decision-making.

  8. Integrated fMRI Preprocessing Framework Using Extended Kalman Filter for Estimation of Slice-Wise Motion

    Directory of Open Access Journals (Sweden)

    Basile Pinsard

    2018-04-01

    Full Text Available Functional MRI acquisition is sensitive to subjects' motion that cannot be fully constrained. Therefore, signal corrections have to be applied a posteriori in order to mitigate the complex interactions between changing tissue localization and magnetic fields, gradients and readouts. To circumvent the limitations of current preprocessing strategies, we developed an integrated method that corrects motion and spatial low-frequency intensity fluctuations at the level of each slice in order to better fit the acquisition processes. The registration of single or multiple simultaneously acquired slices is achieved online by an Iterated Extended Kalman Filter, favoring the robust estimation of continuous motion, while an intensity bias field is non-parametrically fitted. The proposed extraction of gray-matter BOLD activity from the acquisition space to an anatomical group template space, taking into account distortions, better preserves fine-scale patterns of activity. Importantly, the proposed unified framework generalizes to high-resolution multi-slice techniques. When tested on simulated and real data, the latter shows a reduction of motion-explained variance and signal variability when compared to the conventional preprocessing approach. These improvements provide more stable patterns of activity, facilitating investigation of cerebral information representation in healthy and/or clinical populations where motion is known to impact fine-scale data.
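
    The authors' method is an Iterated Extended Kalman Filter over rigid-body slice motion combined with a non-parametric bias field; the toy sketch below only illustrates the underlying filtering idea, smoothing noisy per-slice estimates of a single drifting motion parameter with a scalar random-walk Kalman filter (all parameters are assumptions, not the published implementation).

        # Toy illustration only (not the authors' IEKF): a scalar random-walk Kalman
        # filter smoothing noisy per-slice estimates of one motion parameter.
        import numpy as np

        rng = np.random.default_rng(2)
        n_slices = 200
        true_motion = np.cumsum(rng.normal(scale=0.01, size=n_slices))  # slow drift (mm)
        measured = true_motion + rng.normal(scale=0.1, size=n_slices)   # noisy slice-wise estimates

        q, r = 0.01**2, 0.1**2       # process and measurement noise variances
        x_est, p_est = 0.0, 1.0      # state estimate and its variance
        filtered = []
        for z in measured:
            p_pred = p_est + q                    # predict (random-walk model)
            k_gain = p_pred / (p_pred + r)        # Kalman gain
            x_est = x_est + k_gain * (z - x_est)  # update with the slice measurement
            p_est = (1.0 - k_gain) * p_pred
            filtered.append(x_est)

        print("RMS error raw: %.3f, filtered: %.3f" %
              (np.sqrt(np.mean((measured - true_motion) ** 2)),
               np.sqrt(np.mean((np.array(filtered) - true_motion) ** 2))))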

  9. Procrustes Problems for General, Triangular, and Symmetric Toeplitz Matrices

    Directory of Open Access Journals (Sweden)

    Juan Yang

    2013-01-01

    Full Text Available The Toeplitz Procrustes problems are the least squares problems for the matrix equation AX=B over some Toeplitz matrix sets. In this paper the necessary and sufficient conditions are obtained about the existence and uniqueness for the solutions of the Toeplitz Procrustes problems when the unknown matrices are constrained to the general, the triangular, and the symmetric Toeplitz matrices, respectively. The algorithms are designed and the numerical examples show that these algorithms are feasible.
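
    A hedged numerical sketch of the general (unstructured) Toeplitz case, not the paper's algorithms: write X as a linear combination of basis Toeplitz matrices, one per diagonal, so that min ||AX − B||_F becomes an ordinary least-squares problem in the diagonal values.

        # Sketch: least-squares Toeplitz Procrustes min ||A X - B||_F via a basis of
        # Toeplitz matrices (one basis matrix per diagonal of X).
        import numpy as np

        def toeplitz_basis(n):
            basis = []
            idx = np.arange(n)
            for d in range(-(n - 1), n):
                T = np.zeros((n, n))
                T[(idx[:, None] - idx[None, :]) == d] = 1.0   # ones on one diagonal
                basis.append(T)
            return basis

        rng = np.random.default_rng(0)
        n = 5
        basis = toeplitz_basis(n)
        A = rng.normal(size=(n, n))
        X_true = sum(c * T for c, T in zip(rng.normal(size=len(basis)), basis))
        B = A @ X_true

        design = np.column_stack([(A @ T).ravel() for T in basis])
        coeffs, *_ = np.linalg.lstsq(design, B.ravel(), rcond=None)
        X_hat = sum(c * T for c, T in zip(coeffs, basis))
        print("residual ||A X_hat - B||_F =", np.linalg.norm(A @ X_hat - B))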

  10. A Workshop on Algebraic Design Theory and Hadamard Matrices

    CERN Document Server

    2015-01-01

    This volume develops the depth and breadth of the mathematics underlying the construction and analysis of Hadamard matrices and their use in the construction of combinatorial designs. At the same time, it pursues current research in their numerous applications in security and cryptography, quantum information, and communications. Bridges among diverse mathematical threads and extensive applications make this an invaluable source for understanding both the current state of the art and future directions. The existence of Hadamard matrices remains one of the most challenging open questions in combinatorics. Substantial progress on their existence has resulted from advances in algebraic design theory using deep connections with linear algebra, abstract algebra, finite geometry, number theory, and combinatorics. Hadamard matrices arise in a very diverse set of applications. Starting with applications in experimental design theory and the theory of error-correcting codes, they have found unexpected and important ap...

  11. Inconsistent Distances in Substitution Matrices can be Avoided by Properly Handling Hydrophobic Residues

    Directory of Open Access Journals (Sweden)

    J. Baussand

    2008-01-01

    Full Text Available The adequacy of substitution matrices to model evolutionary relationships between amino acid sequences can be numerically evaluated by checking the mathematical property of triangle inequality for all triplets of residues. By converting substitution scores into distances, one can verify that a direct path between two amino acids is shorter than a path passing through a third amino acid in the amino acid space modeled by the matrix. If the triangle inequality is not verified, the intuition is that the evolutionary signal is not well modeled by the matrix, that the space is locally inconsistent and that the matrix construction was probably based on insufficient biological data. Previous analysis on several substitution matrices revealed that the number of triplets violating the triangle inequality increases with sequence divergence. Here, we compare matrices which are dedicated to the alignment of highly divergent proteins. The triangle inequality is tested on several classical substitution matrices as well as in a pair of “complementary” substitution matrices recording the evolutionary pressures inside and outside hydrophobic blocks in protein sequences. The analysis proves the crucial role of hydrophobic residues in substitution matrices dedicated to the alignment of distantly related proteins.
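
    A short sketch of the kind of check described: convert substitution scores s(a,b) into distances and count ordered triplets violating the triangle inequality. The conversion d(a,b) = s(a,a) + s(b,b) − 2·s(a,b) and the toy score matrix are assumptions; the abstract does not give the authors' exact formula or matrices.

        # Sketch: count triplets violating the triangle inequality after converting
        # substitution scores to distances (toy symmetric score matrix, assumed
        # conversion d(a, b) = s(a, a) + s(b, b) - 2*s(a, b)).
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        n = 20                                              # e.g. the 20 amino acids
        S = rng.integers(-4, 5, size=(n, n)).astype(float)
        S = (S + S.T) / 2.0
        np.fill_diagonal(S, rng.integers(4, 12, size=n))    # matches tend to score highest

        D = S.diagonal()[:, None] + S.diagonal()[None, :] - 2.0 * S   # score -> distance

        violations = sum(
            1 for a, b, c in itertools.permutations(range(n), 3)
            if D[a, b] > D[a, c] + D[c, b] + 1e-9
        )
        print("violating ordered triplets:", violations)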

  12. A Technique for Controlling Matric Suction on Filter Papers. Growth ...

    African Journals Online (AJOL)

    Abstract. Moist filter papers are widely used for seed germination tests but their water content and matric suction are not usually controlled. A technique for controlling filter paper matric suction is described and used for germination studies involving fresh and aged sorghum seed (Sorghum bicolor (L.) Moench). Filter papers ...

  13. ON MATRICES ARISING IN RETARDED DELAY DIFFERENTIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    S DJEZZAR

    2002-12-01

    Full Text Available In this article, we consider a class of retarded delay differential systems with which we associate a system matrix over R[s,z], the ring of polynomials in the two indeterminates s and z. Then, using the notion of the Smith form of a matrix over R[s,z], we extend a characterization result on canonical forms obtained previously [5] to a more general case.

  14. On the Wigner law in dilute random matrices

    Science.gov (United States)

    Khorunzhy, A.; Rodgers, G. J.

    1998-12-01

    We consider ensembles of N × N symmetric matrices whose entries are weakly dependent random variables. We show that random dilution can change the limiting eigenvalue distribution of such matrices. We prove that under general and natural conditions the normalised eigenvalue counting function coincides with the semicircle (Wigner) distribution in the limit N → ∞. This can be explained by the observation that dilution (or more generally, random modulation) eliminates the weak dependence (or correlations) between random matrix entries. It also supports our earlier conjecture that the Wigner distribution is stable to random dilution and modulation.
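
    A hedged numerical companion (illustrative parameters, and independent Gaussian entries rather than the weakly dependent ensemble treated in the paper): dilute a symmetric random matrix, rescale, and compare the empirical eigenvalue density with the semicircle.

        # Sketch: eigenvalue histogram of a randomly diluted symmetric Gaussian matrix
        # versus the Wigner semicircle density (independent entries; illustration only).
        import numpy as np

        rng = np.random.default_rng(3)
        N, p = 1000, 0.02                               # size and dilution probability
        upper = np.triu(rng.normal(size=(N, N)) * (rng.random((N, N)) < p), 1)
        A = (upper + upper.T) / np.sqrt(N * p)          # normalise so the bulk edge is at 2

        eigvals = np.linalg.eigvalsh(A)
        hist, edges = np.histogram(eigvals, bins=50, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        semicircle = np.sqrt(np.clip(4.0 - centers**2, 0.0, None)) / (2.0 * np.pi)
        print("max deviation from the semicircle density:", np.abs(hist - semicircle).max())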

  15. Lectures on matrices

    CERN Document Server

    M Wedderburn, J H

    1934-01-01

    It is the organization and presentation of the material, however, which make the peculiar appeal of the book. This is no mere compendium of results-the subject has been completely reworked and the proofs recast with the skill and elegance which come only from years of devotion. -Bulletin of the American Mathematical Society The very clear and simple presentation gives the reader easy access to the more difficult parts of the theory. -Jahrbuch über die Fortschritte der Mathematik In 1937, the theory of matrices was seventy-five years old. However, many results had only recently evolved from sp

  16. Theoretical origin of quark mass matrices

    International Nuclear Information System (INIS)

    Mohapatra, R.N.

    1987-01-01

    This paper presents the theoretical origin of specific quark mass matrices in grand unified theories. The author discusses the first natural derivation of the Stech-type mass matrix in unified gauge theories. A solution to the strong CP problem is provided.

  17. Regenerated cellulose micro-nano fiber matrices for transdermal drug release

    International Nuclear Information System (INIS)

    Liu, Yue; Nguyen, Andrew; Allen, Alicia; Zoldan, Janet; Huang, Yuxiang; Chen, Jonathan Y.

    2017-01-01

    In this work, biobased fibrous membranes with micro- and nano-fibers are fabricated for use as drug delivery carriers because of their biocompatibility, eco-friendly approach, and potential for scale-up. The cellulose micro-/nano-fiber (CMF) matrices were prepared by electrospinning of pulp in an ionic liquid, 1-butyl-3-methylimidazolium chloride. A model drug, ibuprofen (IBU), was loaded on the CMF matrices by a simple immersing method. The amount of IBU loading was about 6% based on the weight of cellulose membrane. The IBU-loaded CMF matrices were characterized by Fourier-transform infrared spectroscopy, thermal gravimetric analysis, and scanning electron microscopy. The test of ibuprofen release was carried out in an acetate buffer solution of pH 5.5 and examined by UV–Vis spectroscopy. Release profiles from the CMF matrices indicated that the drug release rate could be determined by a Fickian diffusion mechanism. - Highlights: • Cellulose micro-nano fiber matrix was prepared by dry-wet electrospinning. • Ibuprofen was loaded on the matrix by a simple immersing method. • The drug loaded matrix showed a biphasic release profile. • The drug release was determined by a Fickian diffusion mechanism.

  18. Regenerated cellulose micro-nano fiber matrices for transdermal drug release

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yue [School of Human Ecology, The University of Texas at Austin, Austin, TX (United States); Department of Chemistry, School of Science, Tianjin University, Tianjin (China); Nguyen, Andrew; Allen, Alicia; Zoldan, Janet [Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX (United States); Huang, Yuxiang [School of Human Ecology, The University of Texas at Austin, Austin, TX (United States); Chen, Jonathan Y., E-mail: jychen2@austin.utexas.edu [School of Human Ecology, The University of Texas at Austin, Austin, TX (United States)

    2017-05-01

    In this work, biobased fibrous membranes with micro- and nano-fibers are fabricated for use as drug delivery carriers because of their biocompatibility, eco-friendly approach, and potential for scale-up. The cellulose micro-/nano-fiber (CMF) matrices were prepared by electrospinning of pulp in an ionic liquid, 1-butyl-3-methylimidazolium chloride. A model drug, ibuprofen (IBU), was loaded on the CMF matrices by a simple immersing method. The amount of IBU loading was about 6% based on the weight of cellulose membrane. The IBU-loaded CMF matrices were characterized by Fourier-transform infrared spectroscopy, thermal gravimetric analysis, and scanning electron microscopy. The test of ibuprofen release was carried out in an acetate buffer solution of pH 5.5 and examined by UV–Vis spectroscopy. Release profiles from the CMF matrices indicated that the drug release rate could be determined by a Fickian diffusion mechanism. - Highlights: • Cellulose micro-nano fiber matrix was prepared by dry-wet electrospinning. • Ibuprofen was loaded on the matrix by a simple immersing method. • The drug loaded matrix showed a biphasic release profile. • The drug release was determined by a Fickian diffusion mechanism.

  19. A Technique for Controlling Matric Suction on Filter Papers Used in ...

    African Journals Online (AJOL)

    Moist filter papers are widely used for seed germination tests but their water content and matric suction are not usually controlled. A technique for controlling filter paper matric suction is described and used for germination studies involving fresh and aged sorghum seed (Sorghum bicolor (L.) Moench). Filter papers wetted to ...

  20. Countering Climate Confusion in the Classroom: New Methods and Initiatives

    Science.gov (United States)

    McCaffrey, M.; Berbeco, M.; Reid, A. H.

    2014-12-01

    Politicians and ideologues blocking climate education through legislative manipulation. Free marketeers promoting the teaching of doubt and controversy to head off regulation. Education standards and curricula that skim over, omit, or misrepresent the causes, effects, risks and possible responses to climate change. Teachers who unknowingly foster confusion by presenting "both sides" of a phony scientific controversy. All of these contribute to dramatic differences in the quality and quantity of climate education received by U.S. students. Most U.S. adults and teens fail basic quizzes on energy and climate basics, in large part, because climate science has never been fully accepted as a vital component of a 21st-century science education. Often skipped or skimmed over, human contributions to climate change are sometimes taught as controversy or through debate, perpetuating a climate of confusion in many classrooms. This paper will review recent history of opposition to climate science education, and explore initial findings from a new survey of science teachers on whether, where and how climate change is being taught. It will highlight emerging effective pedagogical practices identified in McCaffrey's Climate Smart & Energy Wise, including the role of new initiatives such as the Next Generation Science Standards and Green Schools, and detail efforts of the Science League of America in countering denial and doubt so that educators can teach consistently and confidently about climate change.

  1. Confusion and Agitation after a Recent Kidney Transplantation

    Directory of Open Access Journals (Sweden)

    Hussein Magdi

    2008-01-01

    Full Text Available A 51-year-old man, who received a living related transplant from his wife and anti-thymocyte globulin (ATG) as induction therapy, developed delayed graft function after transplantation. One day after he received an i.v. dose of ganciclovir, the patient developed hallucinations, confusion and agitation, which worsened the following day. CT-scan of the brain and cerebrospinal fluid were unremarkable. Ganciclovir-induced encephalopathy was considered the most likely reason for the patient's neurological condition, since he recovered completely a few days after discontinuation of this drug. Since anti-CMV prophylactic treatment is now widely used after transplantation, a high index of suspicion is required to diagnose ganciclovir- (or acyclovir-) induced neurotoxicity.

  2. High levels of confusion for cholesterol awareness campaigns.

    Science.gov (United States)

    Hall, Danika V

    2008-09-15

    Earlier this year, two industry-sponsored advertising campaigns for cholesterol awareness that target the general public were launched in Australia. These campaigns aimed to alert the public to the risks associated with having high cholesterol and encouraged cholesterol testing for wider groups than those specified by the National Heart Foundation. General practitioners should be aware of the potential for the two campaigns to confuse the general public as to who should be tested, and where. The campaign sponsors (Unilever Australasia and Pfizer) each have the potential to benefit by increased market share for their products, and increased profits. These disease awareness campaigns are examples of what is increasingly being termed "condition branding" by pharmaceutical marketing experts.

  3. chipPCR: an R package to pre-process raw data of amplification curves.

    Science.gov (United States)

    Rödiger, Stefan; Burdukiewicz, Michał; Schierack, Peter

    2015-09-01

    Both the quantitative real-time polymerase chain reaction (qPCR) and quantitative isothermal amplification (qIA) are standard methods for nucleic acid quantification. Numerous real-time read-out technologies have been developed. Despite the continuous interest in amplification-based techniques, there are only a few tools for pre-processing of amplification data. However, a transparent tool for precise control of raw data is indispensable in several scenarios, for example, during the development of new instruments. chipPCR is an R package for the pre-processing and quality analysis of raw data of amplification curves. The package takes advantage of R's S4 object model and offers an extensible environment. chipPCR contains tools for raw data exploration: normalization, baselining, imputation of missing values, a powerful wrapper for amplification curve smoothing and a function to detect the start and end of an amplification curve. The capabilities of the software are enhanced by the implementation of algorithms unavailable in R, such as a 5-point stencil for derivative interpolation. Simulation tools, statistical tests, plots for data quality management, amplification efficiency/quantification cycle calculation, and datasets from qPCR and qIA experiments are part of the package. Core functionalities are integrated in GUIs (web-based and standalone shiny applications), thus streamlining analysis and report generation. http://cran.r-project.org/web/packages/chipPCR. Source code: https://github.com/michbur/chipPCR. Contact: stefan.roediger@b-tu.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Reduced Discrimination in the Tritanopic Confusion Line for Congenital Color Deficiency Adults

    Directory of Open Access Journals (Sweden)

    Marcelo Fernandes Costa

    2016-03-01

    Full Text Available In congenital color blindness the red-green discrimination is impaired, resulting in increased confusion of those colors with yellow. Our post-receptoral physiological mechanisms are organized in two pathways for color perception, a red-green (protanopic and deuteranopic) and a blue-yellow (tritanopic). We argue that the discrimination losses in the yellow area in congenital color vision deficiency subjects could generate a subtle loss of discriminability in the tritanopic channel considering discrepancies with yellow perception. We measured color discrimination thresholds for blue and yellow of the tritanopic channel in congenital color deficiency subjects. Chromaticity thresholds were measured around a white background (0.1977 u', 0.4689 v' in the CIE 1976 diagram), consisting of blue-white and white-yellow thresholds in a tritanopic color confusion line, for 21 congenital colorblindness subjects (mean age = 27.7; SD = 5.6 years; 14 deuteranomalous and 7 protanomalous) and 82 normal color vision subjects (mean age = 25.1; SD = 3.7 years). Significant increase in the whole tritanopic axis was found for both deuteranomalous and protanomalous subjects compared to controls for the blue-white (F2,100 = 18.80; p < 0.0001) and white-yellow (F2,100 = 22.10; p < 0.0001) thresholds. A Principal Component Analysis found a weighting toward the yellow thresholds induced by deuteranomalous subjects. In conclusion, the discrimination in the tritanopic color confusion axis is significantly reduced in congenital color vision deficiency compared to normal subjects. Since yellow discrimination was impaired, the balance of the blue-yellow channels is impaired, justifying the increased thresholds found for blue-white discrimination. The weighting toward the yellow region of the color space, with the deuteranomalous contributing to that perceptual distortion, is discussed in terms of physiological mechanisms.

  5. Evolutionary Games with Randomly Changing Payoff Matrices

    Science.gov (United States)

    Yakushkina, Tatiana; Saakian, David B.; Bratus, Alexander; Hu, Chin-Kun

    2015-06-01

    Evolutionary games are used in various fields stretching from economics to biology. In most of these games a constant payoff matrix is assumed, although some works also consider dynamic payoff matrices. In this article we assume a possibility of switching the system between two regimes with different sets of payoff matrices. Potentially such a model can qualitatively describe the development of bacterial or cancer cells with a mutator gene present. A finite population evolutionary game is studied. The model describes the simplest version of annealed disorder in the payoff matrix and is exactly solvable at the large population limit. We analyze the dynamics of the model, and derive the equations for both the maximum and the variance of the distribution using the Hamilton-Jacobi equation formalism.
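
    A toy sketch of the scenario described, under stated assumptions: infinite-population replicator dynamics whose payoff matrix switches at random between two regimes. The payoff matrices and switching rate are illustrative only; the article analyses a finite-population model with annealed disorder analytically.

        # Toy sketch: replicator dynamics with a payoff matrix that switches at random
        # between two regimes (matrices and switching probability are assumptions).
        import numpy as np

        rng = np.random.default_rng(4)
        A1 = np.array([[1.0, 0.2], [0.8, 1.0]])   # regime 1 payoffs
        A2 = np.array([[1.0, 1.5], [0.1, 1.0]])   # regime 2 payoffs
        switch_prob, dt, steps = 0.02, 0.01, 20_000

        x = np.array([0.5, 0.5])                  # strategy frequencies
        A = A1
        for _ in range(steps):
            if rng.random() < switch_prob:
                A = A2 if A is A1 else A1         # randomly change the payoff regime
            fitness = A @ x
            x = x + dt * x * (fitness - x @ fitness)   # replicator (Euler) update
            x = np.clip(x, 1e-12, None)
            x /= x.sum()

        print("long-run strategy frequencies:", x)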

  6. The effect of cognitive load on social categorization in the category confusion paradigm

    NARCIS (Netherlands)

    Spears, R; Haslam, SA; Jansen, R

    1999-01-01

    The category confusion paradigm (Taylor, Fiske, Etcoff & Ruderman, 1978) was used to examine the relationship between cognitive load and the extent of social categorization. The original prediction made by Taylor et al. (1978; Experiment 2) and inferences from the cognitive miser model suggest that

  7. The separate roles of the reflective mind and involuntary inhibitory control in gatekeeping paranormal beliefs and the underlying intuitive confusions.

    Science.gov (United States)

    Svedholm, Annika M; Lindeman, Marjaana

    2013-08-01

    Intuitive thinking is known to predict paranormal beliefs, but the processes underlying this relationship, and the role of other thinking dispositions, have remained unclear. Study 1 showed that while an intuitive style increased and a reflective disposition counteracted paranormal beliefs, the ontological confusions suggested to underlie paranormal beliefs were predicted by individual differences in involuntary inhibitory processes. When the reasoning system was subjected to cognitive load, the ontological confusions increased, lost their relationship with paranormal beliefs, and their relationship with weaker inhibition was strongly accentuated. These findings support the argument that the confusions are mainly intuitive and that they therefore are most discernible under conditions in which inhibition is impaired, that is, when thinking is dominated by intuitive processing. Study 2 replicated the findings on intuitive and reflective thinking and paranormal beliefs. In Study 2, ontological confusions were also related to the same thinking styles as paranormal beliefs. The results support a model in which both intuitive and non-reflective thinking styles and involuntary inhibitory processes give way to embracing culturally acquired paranormal beliefs. ©2012 The British Psychological Society.

  8. Open vessel microwave digestion of food matrices (T6)

    International Nuclear Information System (INIS)

    Rhodes, L.; LeBlanc, G.

    2002-01-01

    Full text: Advancements in the field of open vessel microwave digestion continue to provide solutions for industries requiring acid digestion of large sample sizes. Those interested in digesting food matrices are particularly interested in working with large amounts of sample and then diluting to small final volumes. This paper will show the advantages of instantaneous reagent addition and post-digestion evaporation when performing an open vessel digestion, and evaporation methods for various food matrices will be presented along with analyte recovery data. (author)

  9. Software for Preprocessing Data from Rocket-Engine Tests

    Science.gov (United States)

    Cheng, Chiu-Fu

    2004-01-01

    Three computer programs have been written to preprocess digitized outputs of sensors during rocket-engine tests at Stennis Space Center (SSC). The programs apply exclusively to the SSC E test-stand complex and utilize the SSC file format. The programs are the following: Engineering Units Generator (EUGEN) converts sensor-output-measurement data to engineering units. The inputs to EUGEN are raw binary test-data files, which include the voltage data, a list identifying the data channels, and time codes. EUGEN effects conversion by use of a file that contains calibration coefficients for each channel. QUICKLOOK enables immediate viewing of a few selected channels of data, in contradistinction to viewing only after post-test processing (which can take 30 minutes to several hours depending on the number of channels and other test parameters) of data from all channels. QUICKLOOK converts the selected data into a form in which they can be plotted in engineering units by use of Winplot (a free graphing program written by Rick Paris). EUPLOT provides a quick means for looking at data files generated by EUGEN without the necessity of relying on the PV-WAVE based plotting software.
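
    As a hedged sketch of the kind of conversion EUGEN performs (the channel names and the polynomial calibration form below are hypothetical illustrations, not the SSC file format), per-channel calibration coefficients are applied to raw voltage traces to produce engineering units.

        # Sketch of converting raw sensor voltages to engineering units with
        # per-channel calibration coefficients (hypothetical channels and values).
        import numpy as np

        calibration = {                        # polynomial coefficients, highest power first
            "chamber_pressure": [250.0, 5.0],  # e.g. psi = 250*V + 5 (assumed)
            "fuel_flow":        [12.5, 0.0],
        }

        def to_engineering_units(channel, volts):
            """Convert a raw voltage trace to engineering units for one channel."""
            return np.polyval(calibration[channel], np.asarray(volts, dtype=float))

        raw = {"chamber_pressure": [0.0, 1.2, 2.4], "fuel_flow": [0.4, 0.5, 0.6]}
        converted = {ch: to_engineering_units(ch, v) for ch, v in raw.items()}
        print(converted["chamber_pressure"])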

  10. Square matrices of order 2 theory, applications, and problems

    CERN Document Server

    Pop, Vasile

    2017-01-01

    This unique and innovative book presents an exciting and complete detail of all the important topics related to the theory of square matrices of order 2. The readers exploring every detailed aspect of matrix theory are gently led toward understanding advanced topics. They will follow every notion of matrix theory with ease, accumulating a thorough understanding of algebraic and geometric aspects of matrices of order 2. The prime jewel of this book is its offering of an unusual collection of problems, theoretically motivated, most of which are new, original, and seeing the light of publication for the first time in the literature. Nearly all of the exercises are presented with detailed solutions and vary in difficulty from easy to more advanced. Many problems are particularly challenging. These, and not only these, invite the reader to unleash their creativity and research capabilities and to discover their own methods of attacking a problem. Matrices have a vast practical importance to mathematics, science, a...

  11. Linear algebra and matrices topics for a second course

    CERN Document Server

    Shapiro, Helene

    2015-01-01

    Linear algebra and matrix theory are fundamental tools for almost every area of mathematics, both pure and applied. This book combines coverage of core topics with an introduction to some areas in which linear algebra plays a key role, for example, block designs, directed graphs, error correcting codes, and linear dynamical systems. Notable features include a discussion of the Weyr characteristic and Weyr canonical forms, and their relationship to the better-known Jordan canonical form; the use of block cyclic matrices and directed graphs to prove Frobenius's theorem on the structure of the eigenvalues of a nonnegative, irreducible matrix; and the inclusion of such combinatorial topics as BIBDs, Hadamard matrices, and strongly regular graphs. Also included are McCoy's theorem about matrices with property P, the Bruck-Ryser-Chowla theorem on the existence of block designs, and an introduction to Markov chains. This book is intended for those who are familiar with the linear algebra covered in a typical first c...

  12. Selection of appropriate conditioning matrices for the safe disposal of radioactive waste

    International Nuclear Information System (INIS)

    Vance, E.R.

    2002-01-01

    The selection of appropriate solid conditioning matrices or wasteforms for the safe disposal of radioactive waste is dictated by many factors. The overriding issue is that the matrix incorporating the radionuclides, together with a set of engineered barriers in a near-surface or deep geological repository, should prevent significant groundwater transport of radionuclides to the biosphere. For high-level waste (HLW) from nuclear fuel reprocessing, the favored matrices are glasses, ceramics and glass-ceramics. Borosilicate glasses are presently being used in some countries, but there are strong scientific arguments why ceramics based on assemblages of natural minerals are advantageous for HLW. Much research has been carried out in the last 40 years around the world, and different matrices are more suitable than others for a given waste composition. However, a major stumbling block for HLW immobilisation is the small number of approved geological repositories for such matrices. The most appropriate matrices for intermediate- and low-level wastes are contentious and the selection criteria are not very well defined. The candidate matrices for these latter wastes are cements, bitumen, geopolymers, glasses, glass-ceramics and ceramics. After discussing the pros and cons of various candidate matrices for given kinds of radioactive wastes, the SYNROC research program at ANSTO will be briefly surveyed. Some of the potential applications of this work using a variety of SYNROC derivatives will be given. Finally the basic research program at ANSTO on radioactive waste immobilisation will be summarised. This comprises mainly work on solid state chemistry to understand ionic valences and co-ordinations for the chemical design of wasteforms, aqueous durability to study the pH and temperature dependence of solid-water reactions, and radiation damage effects on structure and solid-water reactions. (Author)

  13. Preparation and characterization of porous crosslinked collagenous matrices containing bioavailable chondroitin sulphate

    NARCIS (Netherlands)

    Pieper, J.S.; Oosterhof, A.; Dijkstra, Pieter J.; Veerkamp, J.H.; van Kuppevelt, T.H.

    1999-01-01

    Porous collagen matrices with defined physical, chemical and biological characteristics are interesting materials for tissue engineering. Attachment of glycosaminoglycans (GAGs) may add to these characteristics and valorize collagen. In this study, porous type I collagen matrices were crosslinked

  14. Efficiency of fly ash belite cement and zeolite matrices for immobilizing cesium

    International Nuclear Information System (INIS)

    Goni, S.; Guerrero, A.; Lorenzo, M.P.

    2006-01-01

    The efficiency of innovative matrices for immobilizing cesium is presented in this work. The matrix formulation included the use of fly ash belite cement (FABC-2-W) and gismondine-type Na-P1 zeolite, both of which are synthesized from fly ash of coal combustion. The efficiency for immobilizing cesium is evaluated from the leaching test ANSI/ANS 16.1-1986 at the temperature of 40 deg. C, from which the apparent diffusion coefficient of cesium is obtained. Matrices with 100% of FABC-2-W are used as a reference. The integrity of the matrices is evaluated by porosity and pore-size distribution from mercury intrusion porosimetry, X-ray diffraction and nitrogen adsorption analyses. Both matrices can be classified as good solidification systems for cesium, especially the FABC-2-W/zeolite matrix, in which the replacement of 50% of belite cement by the gismondine-type Na-P1 zeolite caused a decrease of two orders of magnitude in the mean effective diffusion coefficient (De) of cesium (2.8e-09 cm2/s versus 2.2e-07 cm2/s, for the FABC-2-W/zeolite and FABC-2-W matrices, respectively).

  15. Mental Rotation Does Not Account for Sex Differences in Left-Right Confusion

    Science.gov (United States)

    Ocklenburg, Sebastian; Hirnstein, Marco; Ohmann, Hanno Andreas; Hausmann, Markus

    2011-01-01

    Several studies have demonstrated that women believe they are more prone to left-right confusion (LRC) than men. However, while some studies report that there is also a sex difference in LRC tasks favouring men, others report that men and women perform equally well. Recently, it was suggested that sex differences only emerge in LRC tasks when they…

  16. Preparation to care for confused older patients in general hospitals: a study of UK health professionals.

    Science.gov (United States)

    Griffiths, Amanda; Knight, Alec; Harwood, Rowan; Gladman, John R F

    2014-07-01

    In the UK, two-thirds of patients in general hospitals are older than 70, of whom half have dementia or delirium or both. Our objective was to explore doctors', nurses' and allied health professionals' perceptions of their preparation to care for confused older patients on general hospital wards. Using a quota sampling strategy across 11 medical, geriatric and orthopaedic wards in a British teaching hospital, we conducted 60 semi-structured interviews with doctors, nurses and allied healthcare professionals and analysed the data using the Consensual Qualitative Research approach. There was consensus among participants that education, induction and in-service training left them inadequately prepared and under-confident to care for confused older patients. Many doctors reported initial assessments of confused older patients as difficult. They admitted inadequate knowledge of mental health disorders, including the diagnostic features of delirium and dementia. Handling agitation and aggression were considered top priorities for training, particularly for nurses. Multidisciplinary team meetings were highly valued but were reported as too infrequent. Participants valued specialist input but reported difficulties gaining such support. Communication with confused patients was regarded as particularly challenging, both in terms of patients making their needs known, and staff conveying information to patients. Participants reported emotional and behavioural responses including frustration, stress, empathy, avoidance and low job satisfaction. Our findings indicate that a revision of training across healthcare professions in the UK is required, and that increased specialist support should be provided, so that the workforce is properly prepared to care for older patients with cognitive problems. © The Author 2013. Published by Oxford University Press on behalf of the British Geriatrics Society.

  17. Evaluation of the technical feasibility of new conditioning matrices for long-lived radionuclides

    International Nuclear Information System (INIS)

    Deschanels, X.

    2004-01-01

    Several matrices have been selected for the conditioning of long-lived radioactive wastes: a compound made of an iodo-apatite core coated with a densified matrix of vanadium-phosphorus-lead apatite for iodine; the hollandite ceramic for cesium; the britholite, zirconolite, thorium phosphate diphosphate, and the monazite-brabantite solid solution for minor actinides; and a Nb-based metal alloy and phosphate or titanate-type ceramics for technetium. This report presents the results of the research carried out between 2002 and 2004 during the technical feasibility step. The main points described are: - the behaviour of the matrices under irradiation: these studies were performed thanks to an approach combining the characterization of natural analogues, the doping of matrices with short-lived radionuclides and the use of external irradiations; - the behaviour of these matrices with respect to water alteration; - the sensitivity of these structures to the incorporation of chemical impurities; - a package-process approach including the optimization of the process and preliminary studies of the package concept retained. These studies show that important work remains to be done to develop conditioning matrices suitable for iodine and technetium, while for cesium and minor actinides the first steps of the technical feasibility have been achieved. However, it remains impossible today to determine the structure having the best global behaviour. (J.S.)

  18. Matrices and society matrix algebra and its applications in the social sciences

    CERN Document Server

    Bradley, Ian

    2014-01-01

    Matrices offer some of the most powerful techniques in modem mathematics. In the social sciences they provide fresh insights into an astonishing variety of topics. Dominance matrices can show how power struggles in offices or committees develop; Markov chains predict how fast news or gossip will spread in a village; permutation matrices illuminate kinship structures in tribal societies. All these invaluable techniques and many more are explained clearly and simply in this wide-ranging book. Originally published in 1986. The Princeton Legacy Library uses the latest print-on-demand technology to

  19. Theory of quark mixing matrix and invariant functions of mass matrices

    International Nuclear Information System (INIS)

    Jarlskog, C.

    1987-10-01

    The outline of this talk is as follows: The origin of the quark mixing matrix. Super elementary theory of flavour projection operators. Equivalences and invariances. The commutator formalism and CP violation. CP conditions for any number of families. The 'angle' between the quark mass matrices. Application to Fritzsch and Stech matrices. References. (author)

  20. Cellular hemangioma and angioblastoma of the spine, originally classified as hemangioendothelioma. A confusing diagnosis

    NARCIS (Netherlands)

    Been, H. D.; Fidler, M. W.; Bras, J.

    1994-01-01

    The authors report two cases of vascular tumors of the spine, classified originally as benign and malignant hemangioendothelioma, and after revision, as cellular hemangioma and angioblastomatosis, respectively. Problems in interpretation of the confusing term hemangioendothelioma and treatment

  1. Geometry and arithmetic of factorized S-matrices

    International Nuclear Information System (INIS)

    Freund, P.G.O.

    1995-01-01

    In realistic four-dimensional quantum field theories integrability is elusive. Relativity, when combined with quantum theory does not permit an infinity of local conservation laws except for free fields, for which the S-matrix is trivial S = 1. In two space-time dimensions, where forward and backward scattering are the only possibilities, nontrivial S-matrices are possible even in integrable theories. Such S-matrices are known to factorize [1]. This means that there is no particle production, so that the 4-point amplitudes determine all higher n-point amplitudes. In our recent work [2, 3, 4, 5, 6] we found that in such integrable two-dimensional theories, even the input 4-point amplitudes are determined by a simple principle. Roughly speaking these amplitudes describe the S-wave scattering which one associates with free motion on certain quantum-symmetric spaces. The trivial S-matrix of free field theory describes the absence of scattering which one associates with free motion on a euclidean space, itself a symmetric space. As is well known [7, 8, 9], for curved symmetric spaces the S-matrices for S-wave scattering are no longer trivial, but rather they are determined by the Harish-Chandra c-functions of these spaces [10]. The quantum deformation of this situation is what appears when one considers excitation scattering in two-dimensional integrable models. (orig.)

  2. Nuclear data for fusion: Validation of typical pre-processing methods for radiation transport calculations

    International Nuclear Information System (INIS)

    Hutton, T.; Sublet, J.C.; Morgan, L.; Leadbeater, T.W.

    2015-01-01

    Highlights: • We quantify the effect of processing nuclear data from ENDF to ACE format. • We consider the differences between fission and fusion angular distributions. • C-nat(n,el) at 2.0 MeV has a 0.6% deviation between original and processed data. • Fe-56(n,el) at 14.1 MeV has an 11.0% deviation between original and processed data. • Processed data do not accurately depict ENDF distributions for fusion energies. - Abstract: Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically these data and codes are biased towards fission and high-energy physics applications yet are still applied to fusion problems. With increasing interest in fusion applications, the lack of fusion-specific codes and relevant data libraries is becoming increasingly apparent. Industry standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically these methods focus on speed of simulation at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work looks at the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission. The relative effects of applying this pre-processing mechanism to both fission- and fusion-relevant reaction channels are demonstrated, and as such the poor representation of these distributions for the fusion energy regime. For the natC(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the 56Fe(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%. We
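
    A hedged toy version of the comparison described: reconstruct a smooth angular distribution with a coarse histogram (binned) representation and compute the average relative deviation. The toy distribution below merely stands in for an evaluated ENDF/ACE angular distribution.

        # Sketch: average relative deviation between a smooth angular distribution and
        # a coarse binned reconstruction of it (toy p(mu), illustrative bin count).
        import numpy as np

        mu = np.linspace(-1.0, 1.0, 2001)                  # cosine of the scattering angle
        pdf = 0.5 + 0.45 * mu + 0.3 * (3 * mu**2 - 1) / 2  # smooth, forward-peaked toy p(mu)
        pdf /= np.trapz(pdf, mu)

        n_bins = 32
        edges = np.linspace(-1.0, 1.0, n_bins + 1)
        idx = np.clip(np.digitize(mu, edges) - 1, 0, n_bins - 1)
        binned = np.empty_like(pdf)
        for b in range(n_bins):                            # flat (histogram) value per bin
            sel = idx == b
            binned[sel] = np.trapz(pdf[sel], mu[sel]) / (edges[b + 1] - edges[b])

        rel_dev = np.abs(binned - pdf) / pdf
        print(f"average relative deviation: {rel_dev.mean():.3%}")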

  3. Random Matrices for Information Processing – A Democratic Vision

    DEFF Research Database (Denmark)

    Cakmak, Burak

    The thesis studies three important applications of random matrices to information processing. Our main contribution is that we consider probabilistic systems involving more general random matrix ensembles than the classical ensembles with iid entries, i.e. models that account for statistical...... dependence between the entries. Specifically, the involved matrices are invariant or fulfill a certain asymptotic freeness condition as their dimensions grow to infinity. Informally speaking, all latent variables contribute to the system model in a democratic fashion – there are no preferred latent variables...

  4. A Comparative Investigation of the Combined Effects of Pre-Processing, Wavelength Selection, and Regression Methods on Near-Infrared Calibration Model Performance.

    Science.gov (United States)

    Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N

    2017-07-01

    Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression. Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant
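
    A hedged sketch of the kind of factorial comparison the study describes, using simple stand-ins on synthetic spectra (standard normal variate pre-processing and PLS with a few latent-variable counts) rather than the OSC/EMSC/OPLEC, SFS/GAPLSSP, LASSO, LS-SVM and GPR implementations actually compared.

        # Sketch: compare pre-processing options and PLS model sizes by cross-validation
        # on synthetic spectra (stand-ins for the methods compared in the paper).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        def snv(X):
            """Standard normal variate: centre and scale each spectrum individually."""
            return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

        rng = np.random.default_rng(5)
        X = rng.random((120, 400))                               # synthetic NIR spectra
        y = 2.0 * X[:, 100] - 1.5 * X[:, 250] + rng.normal(scale=0.05, size=120)

        for name, Xp in [("raw", X), ("SNV", snv(X))]:
            for n_comp in (2, 5, 10):
                r2 = cross_val_score(PLSRegression(n_comp), Xp, y, cv=5, scoring="r2").mean()
                print(f"{name:>3} pre-processing, {n_comp:>2} LVs: R^2 = {r2:.3f}")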

  5. Controllable and reversible inversion of the electronic structure in nickel N-confused porphyrin: a case when MCD matters.

    Science.gov (United States)

    Sripothongnak, Saovalak; Ziegler, Christopher J; Dahlby, Michael R; Nemykin, Victor N

    2011-08-01

    Nickel N-confused tetraphenylporphyrin, 1, and nickel 2-N-methyl-N-confused tetraphenylporphyrin, 1-Me, exhibit unusual sign-reversed (positive-to-negative intensities in ascending energy) MCD spectra in the Q-type band region, suggesting a rare ΔHOMO ΔLUMO combination characteristic for the meso-(tetraaryl)porphyrins. DFT, time-dependent DFT, and semiempirical ZINDO/S calculations on 1, 1-Me, and 1(-) confirm the experimental finding and successfully explain the MCD pattern in the target compounds. © 2011 American Chemical Society

  6. Linguistic confusion in economics: utility, causality, product differentiation, and the supply of natural resources.

    Science.gov (United States)

    Simon, J L

    1982-01-01

    Lack of careful attention to the language used in the discussion of economic concepts has resulted in considerable confusion and error. 2 frequent sources of confusion include tautology and the absence of operational definitions of concepts. This paper outlines a more effective scientific practice through reference to 2 economic examples: 1) the concept of utility, where it is demonstrated that choice of an operational definition of the concept facilitates interpersonal comparisons; and 2) causality, where a multidimensional operational definition is needed to discriminate among the various meanings of the term in theoretical, empirical, and policy contexts. The paper further discusses the example of natural resource scarcity, where application of the term "finite" reveals that there is no empirical evidence of physical limits to growth in the use of resources. A more appropriate measure of scarcity is the economic concept of price.

  7. Studies of Catalytic Properties of Inorganic Rock Matrices in Redox Reactions

    Directory of Open Access Journals (Sweden)

    Nikolay M. Dobrynkin

    2017-09-01

    Full Text Available Intrinsic catalytic properties of mineral matrices of various kinds (basalts, clays, sandstones) were studied, which are of interest for in-situ (i.e., underground) heavy oil upgrading to create advanced technologies for enhanced oil recovery. The elemental, surface and phase composition and matrix particle morphology, surface and acidic properties were studied using elemental analysis, X-ray diffraction, adsorption and desorption of nitrogen and ammonia. The data on the catalytic activity of inorganic matrices in ammonium nitrate decomposition (a reaction with large gas release), oxidation of hydrocarbons and carbon monoxide, and hydrocracking of asphaltenes into maltenes (the conversion of heavy hydrocarbons into more valuable light hydrocarbons) were discussed. In order to check their applicability for the development of asphaltene hydrocracking catalytic systems, basalt and clay matrices were used as supports for iron/basalt, nickel/basalt and iron/clay catalysts. The catalytic activity of the matrices in the reactions of the decomposition of ammonium nitrate, oxidation of hydrocarbons and carbon monoxide, and hydrocracking of asphaltenes was observed for the first time.

  8. SUPERCRITICAL FLUID TREATMENT OF THREE-DIMENSIONAL HYDROGEL MATRICES, COMPOSED OF CHITOSAN DERIVATIVES

    Directory of Open Access Journals (Sweden)

    P. S. Timashev

    2016-01-01

    Aim. Controlled modification of the physico-chemical and mechanical properties of a three-dimensional crosslinked matrix based on reactive chitosan. Materials and methods. The three-dimensional matrices were obtained from a photosensitive composition based on allyl chitosan (5 wt%), poly(ethylene glycol) diacrylate (8 wt%) and the photoinitiator Irgacure 2959 (1 wt%) using a laser stereolithography setup. The kinetic swelling curves were constructed for structures in the base and salt forms of chitosan using the gravimetric method, and the contact angles were measured by droplet spreading. A supercritical fluid setup (40 °C, 12 MPa) was used to process the matrices for 1.5 hours. Using a Piuma Nanoindenter, we calculated values of Young’s modulus. The study of cytotoxicity was performed by direct contact with a culture of the NIH 3T3 mouse fibroblast cell line. Results. The architectonics of the matrices fully reproduces the programmed model. Matrices are uniform throughout and retain their shape after being transferred to the base form. Matrices compressed by 5% after treatment in supercritical carbon dioxide (scCO2). The elastic modulus of matrices after scCO2 treatment is 4 times higher than that of the original matrix. The kinetic swelling curves have a similar form. In this case the maximum degree of swelling for matrices in the base form is 2–2.5 times greater than that of matrices in the salt form. There was surface hydrophobization after the material was transferred to the base form: the contact angle is 94°, while for the salt form it is 66°. The base form absorbs liquid approximately 1.6 times faster. The film thickness in the area of contact with the liquid droplets increased after absorption by 133% and 87% for the base and salt forms, respectively. Treatment of samples in scCO2 reduces their cytotoxicity from reaction degree 2 (initial samples) down to reaction degree 1. Conclusion. The use of supercritical carbon dioxide for scaffolds

  9. The reflection of hierarchical cluster analysis of co-occurrence matrices in SPSS

    NARCIS (Netherlands)

    Zhou, Q.; Leng, F.; Leydesdorff, L.

    2015-01-01

    Purpose: To discuss the problems arising from hierarchical cluster analysis of co-occurrence matrices in SPSS, and the corresponding solutions. Design/methodology/approach: We design different methods of using the SPSS hierarchical clustering module for co-occurrence matrices in order to compare

  10. Revealing the consequences and errors of substance arising from the inverse confusion between the crystal (ligand) field quantities and the zero-field splitting ones

    Energy Technology Data Exchange (ETDEWEB)

    Rudowicz, Czesław, E-mail: crudowicz@zut.edu.pl [Institute of Physics, West Pomeranian University of Technology, Al. Piastów 17, 70-310 Szczecin (Poland); Karbowiak, Mirosław [Faculty of Chemistry, University of Wrocław, ul. F. Joliot-Curie 14, 50-383 Wrocław (Poland)

    2015-01-01

    Survey of recent literature has revealed a doubly-worrying tendency concerning the treatment of the two distinct types of Hamiltonians, namely, the physical crystal field (CF), or equivalently ligand field (LF), Hamiltonians and the zero-field splitting (ZFS) Hamiltonians, which appear in the effective spin Hamiltonians (SH). The nature and properties of the CF (LF) Hamiltonians have been mixed up in various ways with those of the ZFS Hamiltonians. Such cases have been identified in a rapidly growing number of studies of the transition-ion based systems using electron magnetic resonance (EMR), optical spectroscopy, and magnetic measurements. These findings have far ranging implications since these Hamiltonians are cornerstones for interpretation of magnetic and spectroscopic properties of the single transition ions in various crystals or molecules as well as the exchange coupled systems (ECS) of transition ions, e.g. single molecule magnets (SMM) or single ion magnets (SIM). The seriousness of the consequences of such conceptual problems and related terminological confusions has reached a level that goes far beyond simple semantic issues or misleading keyword classifications of papers in journals and scientific databases. The prevailing confusion, denoted as the CF=ZFS confusion, pertains to the cases of labeling the true ZFS quantities as purportedly the CF (LF) quantities. Here we consider the inverse confusion between the CF (LF) quantities and the SH (ZFS) ones, denoted the ZFS=CF confusion, which consists in referring to the parameters (or Hamiltonians), which are the true CF (LF) quantities, as purportedly the ZFS (or SH) quantities. Specific cases of the ZFS=CF confusion identified in recent textbooks, reviews and papers, especially SMM- and SIM-related ones, are surveyed and the pertinent misconceptions are clarified. The serious consequences of the terminological confusions include misinterpretation of data from a wide range of experimental techniques and

  11. Study on vulnerability matrices of masonry buildings of mainland China

    Science.gov (United States)

    Sun, Baitao; Zhang, Guixin

    2018-04-01

    The degree and distribution of damage to buildings subjected to earthquakes are a concern of the Chinese Government and the public. Seismic damage data indicates that seismic capacities of different types of building structures in various regions throughout mainland China are different. Furthermore, the seismic capacities of the same type of structure in different regions may vary. The contributions of this research are summarized as follows: 1) Vulnerability matrices and earthquake damage matrices of masonry structures in mainland China were chosen as research samples. The aim was to analyze the differences in seismic capacities of sample matrices and to present general rules for categorizing seismic resistance. 2) Curves relating the percentage of damaged masonry structures with different seismic resistances subjected to seismic demand in different regions of seismic intensity (VI to X) have been developed. 3) A method has been proposed to build vulnerability matrices of masonry structures. The damage ratio for masonry structures under high-intensity events such as the Ms 6.1 Panzhihua earthquake in Sichuan province on 30 August 2008, was calculated to verify the applicability of this method. This research offers a significant theoretical basis for predicting seismic damage and direct loss assessment of groups of buildings, as well as for earthquake disaster insurance.

  12. The Antitriangular Factorization of Saddle Point Matrices

    KAUST Repository

    Pestana, J.; Wathen, A. J.

    2014-01-01

    Mastronardi and Van Dooren [SIAM J. Matrix Anal. Appl., 34 (2013), pp. 173-196] recently introduced the block antitriangular ("Batman") decomposition for symmetric indefinite matrices. Here we show the simplification of this factorization for saddle

  13. Diagonalization of quark mass matrices and the Cabibbo-Kobayashi-Maskawa matrix

    International Nuclear Information System (INIS)

    Rasin, A.

    1997-08-01

    I discuss some general aspects of diagonalizing the quark mass matrices and list all possible parametrizations of the Cabibbo-Kobayashi-Maskawa matrix (CKM) in terms of three rotation angles and a phase. I systematically study the relation between the rotations needed to diagonalize the Yukawa matrices and various parametrizations of the CKM. (author). 17 refs, 1 tab
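
    As a purely numerical illustration of the diagonalization step discussed above, the sketch below builds the CKM matrix as the mismatch between the left-handed rotations that diagonalize two toy complex mass matrices via the singular value decomposition; the random matrices are assumptions for demonstration, not physical Yukawa textures.

      # Sketch: CKM matrix from the bi-unitary diagonalization M = U diag(m) W^dagger
      # of toy up- and down-type mass matrices (random, illustrative values only).
      import numpy as np

      rng = np.random.default_rng(1)
      M_u = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
      M_d = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

      U_u, m_u, _ = np.linalg.svd(M_u)       # left-handed up-quark rotation and masses
      U_d, m_d, _ = np.linalg.svd(M_d)       # left-handed down-quark rotation and masses

      V_ckm = U_u.conj().T @ U_d             # mixing matrix, defined up to unphysical phases
      print(np.round(np.abs(V_ckm), 3))      # moduli are parametrization independent
      print(np.allclose(V_ckm @ V_ckm.conj().T, np.eye(3)))   # unitarity check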

  14. Concrete minimal 3 × 3 Hermitian matrices and some general cases

    Directory of Open Access Journals (Sweden)

    Klobouk Abel H.

    2017-12-01

    Given a Hermitian matrix M ∈ M3(ℂ), we describe explicitly the real diagonal matrices DM such that ║M + DM║ ≤ ║M + D║ for all real diagonal matrices D ∈ M3(ℂ), where ║ · ║ denotes the operator norm. Moreover, we generalize our techniques to some n × n cases.

  15. Arabic text preprocessing for the natural language processing applications

    International Nuclear Information System (INIS)

    Awajan, A.

    2007-01-01

    A new approach for processing vowelized and unvowelized Arabic texts in order to prepare them for Natural Language Processing (NLP) purposes is described. The developed approach is rule-based and made up of four phases: text tokenization, word light stemming, word morphological analysis and text annotation. The first phase preprocesses the input text in order to isolate the words and represent them in a formal way. The second phase applies a light stemmer in order to extract the stem of each word by eliminating the prefixes and suffixes. The third phase is a rule-based morphological analyzer that determines the root and the morphological pattern for each extracted stem. The last phase produces an annotated text where each word is tagged with its morphological attributes. The preprocessor presented in this paper is capable of dealing with vowelized and unvowelized words, and provides the input words along with the relevant linguistic information needed by different applications. It is designed to be used with different NLP applications such as machine translation, text summarization, text correction, information retrieval and automatic vowelization of Arabic text. (author)
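
    The light-stemming phase described above lends itself to a compact illustration. The sketch below strips a few common Arabic prefixes and suffixes from a word; the affix lists and the minimum-stem-length rule are illustrative assumptions, not the rule set developed in the paper.

      # Minimal sketch of rule-based light stemming (phase 2): strip one common prefix and
      # one common suffix. The affix lists below are illustrative, not the paper's rules.
      PREFIXES = ["وال", "بال", "كال", "فال", "ال", "لل", "و"]
      SUFFIXES = ["ها", "ان", "ات", "ون", "ين", "ية", "ه", "ة", "ي"]

      def light_stem(word: str) -> str:
          for prefix in PREFIXES:                      # lists are ordered longest-first
              if word.startswith(prefix) and len(word) - len(prefix) >= 3:
                  word = word[len(prefix):]
                  break
          for suffix in SUFFIXES:
              if word.endswith(suffix) and len(word) - len(suffix) >= 3:
                  word = word[:-len(suffix)]
                  break
          return word

      print(light_stem("الكتاب"))                      # strips the definite article -> كتاب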

  16. Preparation and characterization of chitosan-heparin composite matrices for blood contacting tissue engineering

    International Nuclear Information System (INIS)

    He Qing; Gong Kai; Gong Yandao; Zhang Xiufang; Ao Qiang; Zhang Lihai; Hu Min

    2010-01-01

    Chitosan has been widely used for biomaterial scaffolds in tissue engineering because of its good mechanical properties and cytocompatibility. However, the poor blood compatibility of chitosan has greatly limited its biomedical utilization, especially for blood contacting tissue engineering. In this study, we exploited a polymer blending procedure to heparinize the chitosan material under simple and mild conditions to improve its antithrombogenic property. By an optimized procedure, a macroscopically homogeneous chitosan-heparin (Chi-Hep) blended suspension was obtained, with which Chi-Hep composite films and porous scaffolds were fabricated. X-ray photoelectron spectroscopy and sulfur elemental analysis confirmed the successful immobilization of heparin in the composite matrices (i.e. films and porous scaffolds). Toluidine blue staining indicated that heparin was distributed homogeneously in the composite matrices. Only a small amount of heparin was released from the matrices during incubation in normal saline for 10 days. The composite matrices showed improved blood compatibility, as well as good mechanical properties and endothelial cell compatibility. These results suggest that the Chi-Hep composite matrices are promising candidates for blood contacting tissue engineering.

  17. Calculation of controllability and observability matrices for special case of continuous-time multi-order fractional systems.

    Science.gov (United States)

    Hassanzadeh, Iman; Tabatabaei, Mohammad

    2017-03-28

    In this paper, controllability and observability matrices for pseudo upper or lower triangular multi-order fractional systems are derived. It is demonstrated that these systems are controllable and observable if and only if their controllability and observability matrices are full rank. In other words, the rank of these matrices should be equal to the inner dimension of their corresponding state space realizations. To reduce the computational complexities, these matrices are converted to simplified matrices with smaller dimensions. Numerical examples are provided to show the usefulness of the mentioned matrices for controllability and observability analysis of this case of multi-order fractional systems. These examples clarify that the duality concept is not necessarily true for these special systems. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
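
    For readers unfamiliar with the rank criterion invoked above, the sketch below shows the classical integer-order version of the test: build the controllability matrix [B, AB, A²B, ...] and the observability matrix and check whether their rank equals the state dimension. It is only an analogy for the multi-order fractional construction derived in the paper, and the example system is an arbitrary assumption.

      # Classical (integer-order) rank test: the system (A, B, C) is controllable iff
      # rank[B, AB, ..., A^(n-1)B] = n and observable iff rank[C; CA; ...; CA^(n-1)] = n.
      import numpy as np

      def controllability_matrix(A, B):
          blocks = [B]
          for _ in range(A.shape[0] - 1):
              blocks.append(A @ blocks[-1])
          return np.hstack(blocks)

      def observability_matrix(A, C):
          # classical duality trick; as noted above, duality need not hold in the fractional case
          return controllability_matrix(A.T, C.T).T

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      B = np.array([[0.0], [1.0]])
      C = np.array([[1.0, 0.0]])

      n = A.shape[0]
      print("controllable:", np.linalg.matrix_rank(controllability_matrix(A, B)) == n)
      print("observable:  ", np.linalg.matrix_rank(observability_matrix(A, C)) == n)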

  18. EBM, HTA, and CER: clearing the confusion.

    Science.gov (United States)

    Luce, Bryan R; Drummond, Michael; Jönsson, Bengt; Neumann, Peter J; Schwartz, J Sanford; Siebert, Uwe; Sullivan, Sean D

    2010-06-01

    The terms evidence-based medicine (EBM), health technology assessment (HTA), comparative effectiveness research (CER), and other related terms lack clarity and so could lead to miscommunication, confusion, and poor decision making. The objective of this article is to clarify their definitions and the relationships among key terms and concepts. This article used the relevant methods and policy literature as well as the websites of organizations engaged in evidence-based activities to develop a framework to explain the relationships among the terms EBM, HTA, and CER. This article proposes an organizing framework and presents a graphic demonstrating the differences and relationships among these terms and concepts. More specific terminology and concepts are necessary for an informed and clear public policy debate. They are even more important to inform decision making at all levels and to engender more accountability by the organizations and individuals responsible for these decisions.

  19. The Development of Novel Nanodiamond Based MALDI Matrices for the Analysis of Small Organic Pharmaceuticals

    Science.gov (United States)

    Chitanda, Jackson M.; Zhang, Haixia; Pahl, Erica; Purves, Randy W.; El-Aneed, Anas

    2016-10-01

    The utility of novel functionalized nanodiamonds (NDs) as matrices for matrix-assisted laser desorption ionization-mass spectrometry (MALDI-MS) is described herein. MALDI-MS analysis of small organic compounds (<1000 Da) is typically complex because of interferences from numerous cluster ions formed when using conventional matrices. To expand the use of MALDI for the analysis of small molecules, novel matrices were designed by covalently linking conventional matrices (or a lysine moiety) to detonated NDs. Four new functionalized NDs were evaluated for their ionization capabilities using five pharmaceuticals with varying molecular structures. Two ND matrices were able to ionize all tested pharmaceuticals in the negative ion mode, producing the deprotonated ions [M - H]-. Ion intensity for target analytes was generally strong with enhanced signal-to-noise ratios compared with conventional matrices. The negative ion mode is of great importance for biological samples as interference from endogenous compounds is inherently minimized in the negative ion mode. Since the molecular structures of the tested pharmaceuticals did not suggest that negative ion mode would be preferable, this result magnifies the importance of these findings. On the other hand, conventional matrices primarily facilitated the ionization as expected in the positive ion mode, producing either the protonated molecules [M + H]+ or cationic adducts (typically producing complex spectra with numerous adduct peaks). The data presented in this study suggests that these matrices may offer advantages for the analysis of low molecular weight pharmaceuticals/metabolites.

  20. Stabilization of chromium-bearing electroplating sludge with MSWI fly ash-based Friedel matrices.

    Science.gov (United States)

    Qian, Guangren; Yang, Xiaoyan; Dong, Shixiang; Zhou, Jizhi; Sun, Ying; Xu, Yunfeng; Liu, Qiang

    2009-06-15

    This work investigated the feasibility and effectiveness of MSWI fly ash-based Friedel matrices for stabilizing/solidifying industrial chromium-bearing electroplating sludge, using MSWI fly ash as the main raw material with a small addition of active aluminum. The compressive strength, leaching behavior and chemical speciation of heavy metals and hydration phases of the matrices were characterized by TCLP, XRD, FTIR and other experimental methods. The results revealed that MSWI fly ash-based Friedel matrices could effectively stabilize chromium-bearing electroplating sludge; the formed ettringite and Friedel phases played a significant role in the fixation of heavy metals in the electroplating sludge. The co-disposal of chromium-bearing electroplating sludge and MSWI fly ash-based Friedel matrices with a small addition of active aluminum promises to be an effective way of stabilizing chromium-bearing electroplating sludge.

  1. Autonomous technology - sources of confusion: a model for explanation and prediction of conceptual shifts.

    Science.gov (United States)

    Stensson, Patrik; Jansson, Anders

    2014-01-01

    Today, autonomous is often used for technology with a more intelligent self-management capability than common automation. This concept usage is maladaptive, ignoring both the distinction between autonomy and heteronomy according to Kant's categorical imperative and that the meaning of autonomy implies qualities technology cannot have. Being autonomous is about having the right to be wrong, a right justified by accountability and insightful understanding of real-life values, and it is about being externally uncontrollable. The contemporary use of autonomy as well as similar concepts is discussed and a model is presented showing how six sources of confusion interact in a vicious circle that impede human authority and autonomy. Our goal is to sort out these confusions and contribute to a development in which the different roles of machines and people, and human responsibilities, are explicated rather than blurred, which should facilitate the forming of truly beneficial and complementary systems.

  2. Use of spectral pre-processing methods to compensate for the presence of packaging film in visible–near infrared hyperspectral images of food products

    Directory of Open Access Journals (Sweden)

    A.A. Gowen

    2010-10-01

    The presence of polymeric packaging film in images of food products may modify spectra obtained in hyperspectral imaging (HSI) experiments, leading to undesirable image artefacts which may impede image classification. Some pre-processing of the image is typically required to reduce the presence of such artefacts. The objective of this research was to investigate the use of spectral pre-processing techniques to compensate for the presence of packaging film in hyperspectral images obtained in the visible–near infrared wavelength range (445–945 nm), with application in food quality assessment. A selection of commonly used pre-processing methods, used individually and in combination, were applied to hyperspectral images of flat homogeneous samples, imaged in the presence and absence of different packaging films (polyvinyl chloride and polyethylene terephthalate). Effects of the selected pre-treatments on variation due to the film’s presence were examined in principal components score space. The results show that the combination of first derivative Savitzky–Golay followed by standard normal variate transformation was useful in reducing variations in spectral response caused by the presence of packaging film. Compared to other methods examined, this combination has the benefits of being computationally fast and not requiring a priori knowledge about the sample or film used.
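
    The pre-treatment combination reported as most useful above is easy to express in code. The sketch below applies a first-derivative Savitzky–Golay filter followed by standard normal variate (SNV) scaling to each spectrum; the window length, polynomial order and synthetic data are illustrative assumptions.

      # First-derivative Savitzky-Golay filtering followed by SNV, applied row-wise.
      # Window length, polynomial order and the synthetic spectra are assumed values.
      import numpy as np
      from scipy.signal import savgol_filter

      def first_derivative_snv(spectra, window_length=11, polyorder=2):
          """spectra: 2-D array with one spectrum per row (e.g. 445-945 nm reflectance)."""
          d1 = savgol_filter(spectra, window_length, polyorder, deriv=1, axis=1)
          return (d1 - d1.mean(axis=1, keepdims=True)) / d1.std(axis=1, keepdims=True)

      rng = np.random.default_rng(0)
      spectra = rng.normal(size=(4, 101)).cumsum(axis=1)   # four rough synthetic spectra
      print(first_derivative_snv(spectra).shape)           # (4, 101)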

  3. Detailed Investigation and Comparison of the XCMS and MZmine 2 Chromatogram Construction and Chromatographic Peak Detection Methods for Preprocessing Mass Spectrometry Metabolomics Data.

    Science.gov (United States)

    Myers, Owen D; Sumner, Susan J; Li, Shuzhao; Barnes, Stephen; Du, Xiuxia

    2017-09-05

    XCMS and MZmine 2 are two widely used software packages for preprocessing untargeted LC/MS metabolomics data. Both construct extracted ion chromatograms (EICs) and detect peaks from the EICs, the first two steps in the data preprocessing workflow. While both packages have performed admirably in peak picking, they also detect a problematic number of false positive EIC peaks and can also fail to detect real EIC peaks. The former and latter translate downstream into spurious and missing compounds and present significant limitations with most existing software packages that preprocess untargeted mass spectrometry metabolomics data. We seek to understand the specific reasons why XCMS and MZmine 2 find the false positive EIC peaks that they do and in what ways they fail to detect real compounds. We investigate differences of EIC construction methods in XCMS and MZmine 2 and find several problems in the XCMS centWave peak detection algorithm which we show are partly responsible for the false positive and false negative compound identifications. In addition, we find a problem with MZmine 2's use of centWave. We hope that a detailed understanding of the XCMS and MZmine 2 algorithms will allow users to work with them more effectively and will also help with future algorithmic development.
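
    To make the notion of an extracted ion chromatogram concrete, the toy sketch below groups centroided (m/z, intensity) peaks across scans into EICs by a simple m/z tolerance. It is only a generic illustration of the first preprocessing step discussed above, not the XCMS centWave or MZmine 2 implementation, and the tolerance and data are invented.

      # Toy EIC construction: group centroided (m/z, intensity) peaks across scans by an
      # m/z tolerance. This is a generic illustration, not the XCMS/MZmine 2 algorithms.
      def build_eics(scans, mz_tol=0.01):
          """scans: list of (retention_time, [(mz, intensity), ...]) tuples, in time order."""
          eics = []                              # each EIC: {"mz": mean m/z, "trace": [(rt, i), ...]}
          for rt, peaks in scans:
              for mz, intensity in peaks:
                  for eic in eics:
                      if abs(eic["mz"] - mz) <= mz_tol:
                          eic["trace"].append((rt, intensity))
                          eic["mz"] += (mz - eic["mz"]) / len(eic["trace"])   # running mean
                          break
                  else:
                      eics.append({"mz": mz, "trace": [(rt, intensity)]})
          return eics

      scans = [(0.10, [(100.002, 5e4), (250.010, 1e3)]),
               (0.15, [(100.003, 8e4), (250.012, 2e3)]),
               (0.20, [(100.001, 6e4)])]
      for eic in build_eics(scans):
          print(round(eic["mz"], 4), eic["trace"])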

  4. Properties of Zero-Free Transfer Function Matrices

    Science.gov (United States)

    Anderson, Brian D. O.; Deistler, Manfred

    Transfer functions of linear, time-invariant finite-dimensional systems with more outputs than inputs, as arise in factor analysis (for example in econometrics), have, for state-variable descriptions with generic entries in the relevant matrices, no finite zeros. This paper gives a number of characterizations of such systems (and indeed square discrete-time systems with no zeros), using state-variable, impulse response, and matrix-fraction descriptions. Key properties include the ability to recover the input values at any time from a bounded interval of output values, without any knowledge of an initial state, and an ability to verify the no-zero property in terms of a property of the impulse response coefficient matrices. Results are particularized to cases where the transfer function matrix in question may or may not have a zero at infinity or a zero at zero.

  5. Sports drug testing using complementary matrices: Advantages and limitations.

    Science.gov (United States)

    Thevis, Mario; Geyer, Hans; Tretzel, Laura; Schänzer, Wilhelm

    2016-10-25

    Today, routine doping controls largely rely on testing whole blood, serum, and urine samples. These matrices allow comprehensive coverage of inorganic as well as low and high molecular mass organic analytes relevant to doping controls, and are collected and transferred from sampling sites to accredited anti-doping laboratories under standardized conditions. Various aspects, including time and cost-effectiveness as well as intrusiveness and invasiveness of the sampling procedure, but also analyte stability and breadth of the contained information, have been the motivation to consider and assess the value potentially provided and added to modern sports drug testing programs by alternative matrices. Such alternatives could be dried blood spots (DBS), dried plasma spots (DPS), oral fluid (OF), exhaled breath (EB), and hair. In this review, recent developments and test methods concerning these alternative matrices and expected or proven contributions as well as limitations of these specimens in the context of the international anti-doping fight are presented and discussed, guided by current regulations for prohibited substances and methods of doping as established by the World Anti-Doping Agency (WADA). Focusing on literature published between 2011 and 2015, examples of doping control analytical assays concerning non-approved substances, anabolic agents, peptide hormones/growth factors/related substances and mimetics, β2-agonists, hormone and metabolic modulators, diuretics and masking agents, stimulants, narcotics, cannabinoids, glucocorticoids, and beta-blockers were selected to outline the advantages and limitations of the aforementioned alternative matrices as compared to conventional doping control samples (i.e. urine and blood/serum). Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Review of the Nomenclature of the Retaining Ligaments of the Cheek: Frequently Confused Terminology

    Directory of Open Access Journals (Sweden)

    Yeui Seok Seo

    2017-07-01

    Since the time of its inception within facial anatomy, wide variability in the terminology as well as the location and extent of retaining ligaments has resulted in confusion over nomenclature. Confusion over nomenclature also arises with regard to the subcutaneous ligamentous attachments, and in the anatomic location and extent described, particularly for zygomatic and masseteric ligaments. Certain historical terms—McGregor’s patch, the platysma auricular ligament, parotid cutaneous ligament, platysma auricular fascia, temporoparotid fascia (Lore’s fascia), anterior platysma-cutaneous ligament, and platysma cutaneous ligament—delineate retaining ligaments of related anatomic structures that have been conceptualized in various ways. Confusion around the masseteric cutaneous ligaments arises from inconsistencies in their reported locations in the literature because the size and location of the parotid gland varies so much, and this affects the relationship between the parotid gland and the fascia of the masseter muscle. For the zygomatic ligaments, there is disagreement over how far they extend, with descriptions varying over whether they extend medially beyond the zygomaticus minor muscle. Even the ‘main’ zygomatic ligament’s denotation may vary depending on which subcutaneous plane is used as a reference for naming it. Recent popularity in procedures using threads or injectables has required not only an accurate understanding of the nomenclature of retaining ligaments, but also of their location and extent. The authors have here summarized each retaining ligament with a survey of the different nomenclature that has been introduced by different authors within the most commonly cited published papers.

  7. Revealing the consequences and errors of substance arising from the inverse confusion between the crystal (ligand) field quantities and the zero-field splitting ones

    Science.gov (United States)

    Rudowicz, Czesław; Karbowiak, Mirosław

    2015-01-01

    Survey of recent literature has revealed a doubly-worrying tendency concerning the treatment of the two distinct types of Hamiltonians, namely, the physical crystal field (CF), or equivalently ligand field (LF), Hamiltonians and the zero-field splitting (ZFS) Hamiltonians, which appear in the effective spin Hamiltonians (SH). The nature and properties of the CF (LF) Hamiltonians have been mixed up in various ways with those of the ZFS Hamiltonians. Such cases have been identified in a rapidly growing number of studies of the transition-ion based systems using electron magnetic resonance (EMR), optical spectroscopy, and magnetic measurements. These findings have far ranging implications since these Hamiltonians are cornerstones for interpretation of magnetic and spectroscopic properties of the single transition ions in various crystals or molecules as well as the exchange coupled systems (ECS) of transition ions, e.g. single molecule magnets (SMM) or single ion magnets (SIM). The seriousness of the consequences of such conceptual problems and related terminological confusions has reached a level that goes far beyond simple semantic issues or misleading keyword classifications of papers in journals and scientific databases. The prevailing confusion, denoted as the CF=ZFS confusion, pertains to the cases of labeling the true ZFS quantities as purportedly the CF (LF) quantities. Here we consider the inverse confusion between the CF (LF) quantities and the SH (ZFS) ones, denoted the ZFS=CF confusion, which consists in referring to the parameters (or Hamiltonians), which are the true CF (LF) quantities, as purportedly the ZFS (or SH) quantities. Specific cases of the ZFS=CF confusion identified in recent textbooks, reviews and papers, especially SMM- and SIM-related ones, are surveyed and the pertinent misconceptions are clarified. The serious consequences of the terminological confusions include misinterpretation of data from a wide range of experimental techniques and

  8. The underexposed role of food matrices in probiotic products: reviewing the relationship between carrier matrices and product parameters

    NARCIS (Netherlands)

    Flach, J.; van der Waal, M.B.; van den Nieuwboer, M.; Claassen, H.J.H.M.; Larsen, O.F.A.

    2017-01-01

     Full Article  Figures & data References  Supplemental  Citations Metrics  Reprints & Permissions  PDF ABSTRACT Probiotic microorganisms are increasingly incorporated into food matrices in order to confer proposed health benefits on the consumer. It is important that the health benefits,

  9. Matrices and linear algebra

    CERN Document Server

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t

  10. Inversion of General Cyclic Heptadiagonal Matrices

    Directory of Open Access Journals (Sweden)

    A. A. Karawia

    2013-01-01

    We describe a reliable symbolic computational algorithm for inverting general cyclic heptadiagonal matrices by using parallel computing along with recursion. The computational cost of it is operations. The algorithm is implementable in Computer Algebra Systems (CAS) such as MAPLE, MATLAB, and MATHEMATICA. Two examples are presented for the sake of illustration.

  11. Inclusion of salt form on prescription medication labeling as a source of patient confusion: a pilot study

    Directory of Open Access Journals (Sweden)

    McDougall DJ

    2016-03-01

    Background: It has been estimated that 10,000 patient injuries occur in the US annually due to confusion involving drug names. An unexplored source of patient misunderstandings may be medication salt forms. Objective: The objective of this study was to assess patient knowledge and comprehension regarding the salt forms of medications as a potential source of medication errors. Methods: A 12-item questionnaire which assessed patient knowledge of medication names on prescription labels was administered to a convenience sample of patients presenting to a family practice clinic. Descriptive statistics were calculated and multivariate analyses were performed. Results: There were 308 responses. Overall, 41% of patients agreed that they find their medication names confusing. Participants answered the salt form questions correctly between 12.1% and 56.9% of the time. Taking more prescription medications and higher education level were positively associated with providing more correct answers to 3 medication salt form knowledge questions, while age was negatively associated. Conclusions: Patient misconceptions about medication salt forms are common. These findings support recommendations to standardize the inclusion or exclusion of salt forms. Increasing patient education is another possible approach to reducing confusion.

  12. Critical statistics for non-Hermitian matrices

    International Nuclear Information System (INIS)

    Garcia-Garcia, A.M.; Verbaarschot, J.J.M.; Nishigaki, S.M.

    2002-01-01

    We introduce a generalized ensemble of non-Hermitian matrices interpolating between the Gaussian Unitary Ensemble, the Ginibre ensemble, and the Poisson ensemble. The joint eigenvalue distribution of this model is obtained by means of an extension of the Itzykson-Zuber formula to general complex matrices. Its correlation functions are studied both in the case of weak non-Hermiticity and in the case of strong non-Hermiticity. In the weak non-Hermiticity limit we show that the spectral correlations in the bulk of the spectrum display critical statistics: the asymptotic linear behavior of the number variance is already approached for energy differences of the order of the eigenvalue spacing. To lowest order, its slope does not depend on the degree of non-Hermiticity. Close to the edge, the spectral correlations are similar to the Hermitian case. In the strong non-Hermiticity limit the crossover behavior from the Ginibre ensemble to the Poisson ensemble first appears close to the surface of the spectrum. Our model may be relevant for the description of the spectral correlations of an open disordered system close to an Anderson transition.

  13. Quantification of iodine in porous hydroxyapatite matrices for application as radioactive sources in brachytherapy

    OpenAIRE

    Lacerda, Kássio André; Lameiras, Fernando Soares; Silva, Viviane Viana

    2007-01-01

    In this study, non-radioactive iodine was incorporated into two types of biodegradable hydroxyapatite-based porous matrices (HA and HACL) through an impregnation process from sodium iodide aqueous solutions of varying concentrations (0.5 and 1.0 mol/L). The results revealed that both systems presented a high capacity for incorporating iodine into their matrices. The quantity of incorporated iodine was measured through Neutron Activation Analysis (NAA). The porous ceramic matrices based on hydrox...

  14. Classification of mass matrices and the calculability of the Cabibbo angle

    International Nuclear Information System (INIS)

    Rizzo, T.G.

    1981-01-01

    We have analyzed all possible 2 x 2 mass matrices with two nonzero elements in an attempt to find which matrices yield a reasonable value of the Cabibbo angle upon diagonalization. We do not concern ourselves with the origin of these mass matrices (spontaneous symmetry breaking, bare-mass term, etc.). We find that, in the limit m_u/m_c → 0, only four possible relationships exist between sin²θ_C and the quark mass ratio m_d/m_s, only one of which is reasonable for the usual value of m_d/m_s (approx. 1/20). This limits the possible forms of the quark mass matrix to be two in number, both of which have been discussed previously in the literature
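
    As a worked illustration of the kind of relationship referred to above, the block below writes out the well-known texture with a vanishing (1,1) entry; the specific matrix form is an assumed example for illustration, not a restatement of the paper's full classification.

      % Assumed illustrative texture: a 2 x 2 down-quark mass matrix with two nonzero
      % elements and a vanishing (1,1) entry. Its eigenvalues are -m_d and m_s with
      % m_d m_s = A^2, and the diagonalizing rotation obeys tan^2(theta_C) = m_d/m_s.
      \[
        M_d = \begin{pmatrix} 0 & A \\ A & B \end{pmatrix}, \qquad
        \tan^2\theta_C = \frac{m_d}{m_s}
        \;\Longrightarrow\;
        \sin\theta_C \simeq \sqrt{\frac{m_d}{m_s}} \approx \sqrt{\tfrac{1}{20}} \approx 0.22 .
      \]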

  15. Physical properties of the Schur complement of local covariance matrices

    International Nuclear Information System (INIS)

    Haruna, L F; Oliveira, M C de

    2007-01-01

    General properties of global covariance matrices representing bipartite Gaussian states can be decomposed into properties of local covariance matrices and their Schur complements. We demonstrate that given a bipartite Gaussian state ρ12 described by a 4 x 4 covariance matrix V, the Schur complement of a local covariance submatrix V1 of it can be interpreted as a new covariance matrix representing a Gaussian operator of party 1 conditioned to local parity measurements on party 2. The connection with a partial parity measurement over a bipartite quantum state and the determination of the reduced Wigner function is given and an operational process of parity measurement is developed. Generalization of this procedure to an n-partite Gaussian state is given, and it is demonstrated that the n - 1 system state conditioned to a partial parity projection is given by a covariance matrix such that its 2 x 2 block elements are Schur complements of special local matrices
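
    A minimal numerical sketch of the block operation involved is given below: for a 4 x 4 two-mode covariance matrix written in 2 x 2 blocks, it computes V1 - C V2^{-1} C^T, the Schur-type complement taken over party 2's block (conventions for which block is said to be "complemented" differ between texts). The toy matrix is an assumption chosen only to be symmetric and positive definite.

      # Schur-type complement over party 2's block of a 4 x 4 two-mode covariance matrix
      # V = [[V1, C], [C^T, V2]]: returns V1 - C V2^{-1} C^T (a 2 x 2 matrix for party 1).
      import numpy as np

      def complement_over_party2(V):
          V1, C, V2 = V[:2, :2], V[:2, 2:], V[2:, 2:]
          return V1 - C @ np.linalg.inv(V2) @ C.T

      V = np.array([[2.0, 0.3, 0.5, 0.1],     # toy symmetric, positive-definite matrix
                    [0.3, 2.0, 0.0, 0.4],
                    [0.5, 0.0, 1.5, 0.2],
                    [0.1, 0.4, 0.2, 1.5]])
      print(complement_over_party2(V))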

  16. Attempt at a de-confusion. Units, biological effects of radiation limits and their meaning

    Energy Technology Data Exchange (ETDEWEB)

    Mueck, K

    1986-01-01

    In the wake of the Chernobyl accident the public was greatly confused by the press, because units that were previously quite unknown were used and large numbers suggested danger; the meaning of limits for radionuclide concentrations in foods was also misunderstood. The present paper attempts a clarification. (G.Q.).

  17. Stabilization and solidification of Pb in cement matrices

    International Nuclear Information System (INIS)

    Gollmann, Maria A.C.; Silva, Marcia M. da; Santos, Joao H. Z. dos; Masuero, Angela B.

    2010-01-01

    Pb was incorporated into a series of cement matrices, which were submitted to different cure times and pH. The Pb content leached into aqueous solution was monitored by atomic absorption spectroscopy. The block resistance was evaluated by unconfined compressive strength at ages of 7 and 28 days. Data are discussed in terms of metal mobility along the cement block monitored by X-ray fluorescence (XRF) spectrometry. The Pb-incorporated matrices have shown that a long cure time is more suitable for avoiding metal leaching. For a longer cure period the action of the metal is higher and there is a decrease in the compressive strength. The XRF analyses show that there is a lower Ca concentration in the matrix to which Pb was added. (author)

  18. Preconditioners for regularized saddle point matrices

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe

    2011-01-01

    Vol. 19, No. 2 (2011), pp. 91-112. ISSN 1570-2820. Institutional research plan: CEZ:AV0Z30860518. Keywords: saddle point matrices * preconditioning * regularization * eigenvalue clustering. Subject RIV: BA - General Mathematics. Impact factor: 0.533, year: 2011. http://www.degruyter.com/view/j/jnma.2011.19.issue-2/jnum.2011.005/jnum.2011.005.xml

  19. Evaluation of the efficiency of continuous wavelet transform as processing and preprocessing algorithm for resolution of overlapped signals in univariate and multivariate regression analyses; an application to ternary and quaternary mixtures

    Science.gov (United States)

    Hegazy, Maha A.; Lotfy, Hayam M.; Mowaka, Shereen; Mohamed, Ekram Hany

    2016-07-01

    Wavelets have been adapted for a vast number of signal-processing applications due to the amount of information that can be extracted from a signal. In this work, a comparative study was conducted on the efficiency of the continuous wavelet transform (CWT) as a signal-processing tool in univariate regression and as a pre-processing tool in multivariate analysis using partial least squares (CWT-PLS). These were applied to complex spectral signals of ternary and quaternary mixtures. The CWT-PLS method succeeded in the simultaneous determination of a quaternary mixture of drotaverine (DRO), caffeine (CAF), paracetamol (PAR) and p-aminophenol (PAP, the major impurity of paracetamol). The univariate CWT, in contrast, failed to determine the quaternary mixture simultaneously; it was able to determine only PAR and PAP, and the ternary mixtures of DRO, CAF and PAR, and of CAF, PAR and PAP. During the CWT calculations, different wavelet families were tested. The univariate CWT method was validated according to the ICH guidelines. For the development of the CWT-PLS model, a calibration set was prepared by means of an orthogonal experimental design and its absorption spectra were recorded and processed by CWT. The CWT-PLS model was constructed by regression between the wavelet coefficients and concentration matrices, and validation was performed with both cross-validation and external validation sets. Both methods were successfully applied to the determination of the studied drugs in pharmaceutical formulations.
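
    The CWT-PLS idea above can be sketched compactly: transform each spectrum with a continuous wavelet, then regress the coefficients against the concentration matrix with PLS. The library choices (PyWavelets, scikit-learn), the Mexican-hat wavelet, the single scale and the synthetic two-component spectra are illustrative assumptions rather than the paper's experimental setup.

      # Sketch of CWT as a pre-processing step followed by PLS regression.
      # Wavelet, scale, data and libraries are assumptions for illustration only.
      import numpy as np
      import pywt
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(0)
      wavelengths = np.linspace(200, 400, 301)
      conc = rng.uniform(0.1, 1.0, size=(40, 2))                  # two-component mixtures
      bands = np.vstack([np.exp(-0.5 * ((wavelengths - 260) / 10) ** 2),
                         np.exp(-0.5 * ((wavelengths - 300) / 12) ** 2)])  # overlapped bands
      spectra = conc @ bands + rng.normal(scale=0.005, size=(40, 301))

      # Pre-process every spectrum with a CWT at a single scale (Mexican-hat wavelet)
      cwt_coeffs = np.array([pywt.cwt(s, scales=[8], wavelet="mexh")[0][0] for s in spectra])

      pls = PLSRegression(n_components=3).fit(cwt_coeffs[:30], conc[:30])
      print(np.round(pls.predict(cwt_coeffs[30:33]), 3))          # predicted concentrations
      print(np.round(conc[30:33], 3))                             # reference values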

  20. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    Science.gov (United States)

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation compared with the

  1. A base composition analysis of natural patterns for the preprocessing of metagenome sequences.

    Science.gov (United States)

    Bonham-Carter, Oliver; Ali, Hesham; Bastola, Dhundy

    2013-01-01

    On the premise that sequence reads and contigs often exhibit the same kinds of base usage that are also observed in the sequences from which they are derived, we offer a base composition analysis tool. Our tool uses these natural patterns to determine relatedness across sequence data. We introduce spectrum sets (sets of motifs) which are permutations of bacterial restriction sites, and the base composition analysis framework to measure their proportional content in sequence data. We suggest that this framework will increase efficiency during the pre-processing stages of metagenome sequencing and assembly projects. Our method is able to differentiate organisms and their reads or contigs. The framework shows how to successfully determine the relatedness between these reads or contigs by comparison of base composition. In particular, we show that two types of organismal-sequence data are fundamentally different by analyzing their spectrum set motif proportions (coverage). By the application of one of the four possible spectrum sets, encompassing all known restriction sites, we provide the evidence to claim that each set has a different ability to differentiate sequence data. Furthermore, we show that the spectrum set selection having relevance to one organism, but not to the others of the data set, will greatly improve performance of sequence differentiation even if the fragment size of the read, contig or sequence is not lengthy. We show the proof of concept of our method by its application to ten trials of two or three freshly selected sequence fragments (reads and contigs) for each experiment across the six organisms of our set. Here we describe a novel and computationally effective pre-processing step for metagenome sequencing and assembly tasks. Furthermore, our base composition method has applications in phylogeny, where it can be used to infer evolutionary distances between organisms based on the notion that related organisms often have much conserved code.
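
    A toy version of the proportional-content measurement described above is sketched below: it reports the fraction of sequence positions covered by occurrences of a small motif set. The two 6-mers (EcoRI- and BamHI-like sites) stand in for a full spectrum set, which in the paper is built from permutations of bacterial restriction sites.

      # Fraction of sequence positions covered by a small motif set (toy spectrum set).
      def motif_coverage(sequence, motifs):
          covered = set()
          for motif in motifs:
              start = sequence.find(motif)
              while start != -1:
                  covered.update(range(start, start + len(motif)))
                  start = sequence.find(motif, start + 1)
          return len(covered) / len(sequence)

      seq = "GAATTCAGGATCCTTGAATTCCGTACG"
      print(motif_coverage(seq, ["GAATTC", "GGATCC"]))   # EcoRI- and BamHI-like 6-mers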

  2. Persistent Confusions about Hypothesis Testing in the Social Sciences

    Directory of Open Access Journals (Sweden)

    Christopher Thron

    2015-05-01

    This paper analyzes common confusions involving basic concepts in statistical hypothesis testing. One-third of the social science statistics textbooks examined in the study contained false statements about significance level and/or p-value. We infer that a large proportion of social scientists are being miseducated about these concepts. We analyze the causes of these persistent misunderstandings, and conclude that the conventional terminology is prone to abuse because it does not clearly represent the conditional nature of probabilities and events involved. We argue that modifications in terminology, as well as the explicit introduction of conditional probability concepts and notation into the statistics curriculum in the social sciences, are necessary to prevent the persistence of these errors.
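
    The conditional statement at the heart of the confusion can be made concrete with a short simulation: when the null hypothesis is true, the probability that the p-value falls below the significance level is (approximately) that level. The test, data model and alpha below are illustrative assumptions.

      # Simulation of P(p-value < alpha | H0 true) for a one-sample t-test with alpha = 0.05.
      # The normal data model and test choice are illustrative assumptions.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      alpha, n_sims, n = 0.05, 20000, 30
      rejections = 0
      for _ in range(n_sims):
          sample = rng.normal(loc=0.0, scale=1.0, size=n)      # H0 (mean = 0) is true
          p_value = stats.ttest_1samp(sample, popmean=0.0).pvalue
          rejections += p_value < alpha

      print("empirical P(reject | H0) =", round(rejections / n_sims, 3), " nominal alpha =", alpha)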

  3. Characteristic Polynomials of Sample Covariance Matrices: The Non-Square Case

    OpenAIRE

    Kösters, Holger

    2009-01-01

    We consider the sample covariance matrices of large data matrices which have i.i.d. complex matrix entries and which are non-square in the sense that the difference between the number of rows and the number of columns tends to infinity. We show that the second-order correlation function of the characteristic polynomial of the sample covariance matrix is asymptotically given by the sine kernel in the bulk of the spectrum and by the Airy kernel at the edge of the spectrum. Similar results are g...

  4. Parallelizing flow-accumulation calculations on graphics processing units—From iterative DEM preprocessing algorithm to recursive multiple-flow-direction algorithm

    Science.gov (United States)

    Qin, Cheng-Zhi; Zhan, Lijun

    2012-06-01

    As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU
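
    To show the recursive dependency that makes flow accumulation awkward to parallelize, the sketch below computes accumulations serially with a single-flow-direction (D8) rule on a tiny depression-free DEM. It is only a didactic baseline; the DEM preprocessing, the MFD-md algorithm and the CUDA strategies discussed above are not reproduced here.

      # Serial D8 flow-accumulation sketch on a small, depression-free DEM. Each cell drains
      # to its steepest lower neighbour; accumulation(cell) = 1 + accumulation of upstream cells.
      import numpy as np

      def d8_flow_accumulation(dem):
          rows, cols = dem.shape
          neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
          receiver = {}                                  # cell -> the cell it drains into
          for r in range(rows):
              for c in range(cols):
                  drops = [((dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc), (r + dr, c + dc))
                           for dr, dc in neighbours
                           if 0 <= r + dr < rows and 0 <= c + dc < cols]
                  slope, cell = max(drops)
                  if slope > 0:                          # interior sinks are assumed absent
                      receiver[(r, c)] = cell

          acc = np.ones(dem.shape)                       # every cell contributes itself
          memo = {}
          def accumulate(cell):                          # recursion follows cells draining in
              if cell not in memo:
                  memo[cell] = 1.0 + sum(accumulate(up) for up, down in receiver.items() if down == cell)
                  acc[cell] = memo[cell]
              return memo[cell]

          for r in range(rows):
              for c in range(cols):
                  accumulate((r, c))
          return acc

      dem = np.array([[5.0, 4.0, 3.0],
                      [4.0, 3.0, 2.0],
                      [3.0, 2.0, 1.0]])
      print(d8_flow_accumulation(dem))                   # all flow converges on the lowest corner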

  5. Reliability and Validity of Colored Progressive Matrices for 4-6 Age Children

    Directory of Open Access Journals (Sweden)

    Ahmet Bildiren

    2017-06-01

    In this research, the aim was to test the reliability and validity of the Colored Progressive Matrices for children between the ages of 4 and 6 from 15 schools. The sample of the study consisted of 640 kindergarten children. Test-retest and parallel-form methods were used for the reliability analyses. For the validity analysis, the relations between the Colored Progressive Matrices Test and the Bender Gestalt Visual Motor Sensitivity Test, WISC-R and TONI-3 tests were examined. The results showed that there was a significant relation between the test-retest results and the parallel forms in all the age groups. Validity analyses showed strong correlations between the Colored Progressive Matrices and all the other measures.

  6. Unified triminimal parametrizations of quark and lepton mixing matrices

    International Nuclear Information System (INIS)

    He Xiaogang; Li Shiwen; Ma Boqiang

    2009-01-01

    We present a detailed study on triminimal parametrizations of quark and lepton mixing matrices with different basis matrices. We start with a general discussion on the triminimal expansion of the mixing matrix and on possible unified quark and lepton parametrization using quark-lepton complementarity. We then consider several interesting basis matrices and compare the triminimal parametrizations with the Wolfenstein-like parametrizations. The usual Wolfenstein parametrization for quark mixing is a triminimal expansion around the unit matrix as the basis. The corresponding quark-lepton complementarity lepton mixing matrix is a triminimal expansion around the bimaximal basis. Current neutrino oscillation data show that the lepton mixing matrix is very well represented by the tribimaximal mixing. It is natural to take it as an expanding basis. The corresponding zeroth order basis for quark mixing in this case makes the triminimal expansion converge much faster than the usual Wolfenstein parametrization. The triminimal expansion based on tribimaximal mixing can be converted to the Wolfenstein-like parametrizations discussed in the literature. We thus have a unified description between different kinds of parametrizations for quark and lepton sectors: the standard parametrizations, the Wolfenstein-like parametrizations, and the triminimal parametrizations.

  7. Raven's matrices and working memory: a dual-task approach.

    Science.gov (United States)

    Rao, K Venkata; Baddeley, Alan

    2013-01-01

    Raven's Matrices Test was developed as a "pure" measure of Spearman's concept of general intelligence, g. Subsequent research has attempted to specify the processes underpinning performance, some relating it to the concept of working memory and proposing a crucial role for the central executive, with the nature of other components currently unclear. Up to this point, virtually all work has been based on correlational analysis of number of correct solutions, sometimes related to possible strategies. We explore the application to this problem of the concurrent task methodology used widely in developing the concept of multicomponent working memory. Participants attempted to solve problems from the matrices under baseline conditions, or accompanied by backward counting or verbal repetition tasks, assumed to disrupt the central executive and phonological loop components of working memory, respectively. As in other uses of this method, number of items correct showed little effect, while solution time measures gave very clear evidence of an important role for the central executive, but no evidence for phonological loop involvement. We conclude that this and related concurrent task techniques hold considerable promise for the analysis of Raven's matrices and potentially for other established psychometric tests.

  8. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    Science.gov (United States)

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina, (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods: subtracting or dividing by the average-filtered image, homomorphic filtering and a wavelet-based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages: a coarse stage using cross-correlation, followed by fine registration using two approaches, parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. best 75% images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased
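
    The sub-pixel step mentioned above (parabolic interpolation on the cross-correlation peak) is illustrated below in one dimension for brevity: an integer shift is taken from the correlation peak and refined by fitting a parabola through the peak and its two neighbours. The FFT-based correlation, the synthetic signal and the 2.6-sample shift are assumptions for the demonstration.

      # 1-D sketch of coarse-to-fine shift estimation: integer peak of the circular
      # cross-correlation, then sub-pixel refinement by parabolic interpolation.
      import numpy as np

      def subpixel_shift(reference, target):
          n = len(reference)
          xcorr = np.fft.ifft(np.fft.fft(target) * np.conj(np.fft.fft(reference))).real
          k = int(np.argmax(xcorr))                       # coarse (integer) shift
          c_m1, c_0, c_p1 = xcorr[k - 1], xcorr[k], xcorr[(k + 1) % n]
          delta = 0.5 * (c_m1 - c_p1) / (c_m1 - 2.0 * c_0 + c_p1)   # parabola vertex offset
          shift = k + delta
          return shift - n if shift > n / 2 else shift    # map to a signed shift

      x = np.linspace(0, 6 * np.pi, 512)
      ref = np.sin(x) + 0.3 * np.sin(3.1 * x)
      tgt = np.interp(x - 2.6 * (x[1] - x[0]), x, ref)    # target shifted by 2.6 samples
      print(round(subpixel_shift(ref, tgt), 2))           # close to 2.6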

  9. Products of random matrices from fixed trace and induced Ginibre ensembles

    Science.gov (United States)

    Akemann, Gernot; Cikovic, Milan

    2018-05-01

    We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. Like for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which are now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, that maps back the fixed trace to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint—which are clearly non-Gaussian—and M  ‑  m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.

  10. A basis independent formulation of the connection between quark mass matrices, CP violation and experiment

    International Nuclear Information System (INIS)

    Jarlskog, C.; Stockholm Univ.; Bergen Univ.

    1985-01-01

    In the standard electroweak model, with three families, a one-to-one correspondence between certain determinants involving quark mass matrices (m and m' for charge 2/3 and -1/3 quarks respectively) and the presence/absence of CP violation is given. In an arbitrary basis for mass matrices, the quantity Im det[mm^+, m'm'^+] appropriately normalized is introduced as a measure of CP violation. By this measure, CP is not maximally violated in any transition in Nature. Finally, constraints on quark mass matrices are derived from experiment. Any model of mass matrices, with the ambition to explain Nature, must satisfy these conditions. (orig.)
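
    The basis independence of the determinant quantity quoted above is easy to check numerically. The sketch below evaluates Im det of the commutator [mm^+, m'm'^+] for random complex 3 x 3 matrices and verifies that it is unchanged when both matrices are rotated by a common unitary; the random matrices are illustrative, not fitted to quark data.

      # Numerical check of the CP measure Im det[m m^+, m' m'^+] and its invariance
      # under a common unitary rotation of both mass matrices (random toy matrices).
      import numpy as np

      def im_det_commutator(m, mp):
          A = m @ m.conj().T            # m m^+
          B = mp @ mp.conj().T          # m' m'^+
          return np.imag(np.linalg.det(A @ B - B @ A))

      rng = np.random.default_rng(0)
      m  = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
      mp = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
      U  = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))[0]

      print(round(im_det_commutator(m, mp), 6))            # generically non-zero
      print(round(im_det_commutator(U @ m, U @ mp), 6))     # same value: basis independent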

  11. Dazzle camouflage and the confusion effect: the influence of varying speed on target tracking.

    Science.gov (United States)

    Hogan, Benedict G; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2017-01-01

    The formation of groups is a common strategy to avoid predation in animals, and recent research has indicated that there may be interactions between some forms of defensive coloration, notably high-contrast 'dazzle camouflage', and one of the proposed benefits of grouping: the confusion effect. However, research into the benefits of dazzle camouflage has largely used targets moving with constant speed. This simplification may not generalize well to real animal systems, where a number of factors influence both within- and between-individual variation in speed. Departure from the speed of your neighbours in a group may be predicted to undermine the confusion effect. This is because individual speed may become a parameter through which the observer can individuate otherwise similar targets: an 'oddity effect'. However, dazzle camouflage patterns are thought to interfere with predator perception of speed and trajectory. The current experiment investigated the possibility that such patterns could ameliorate the oddity effect caused by within-group differences in prey speed. We found that variation in speed increased the ease with which participants could track targets in all conditions. However, we found no evidence that motion dazzle camouflage patterns reduced oddity effects based on this variation in speed, a result that may be informative about the mechanisms behind this form of defensive coloration. In addition, results from those conditions most similar to those of published studies replicated previous results, indicating that targets with stripes parallel to the direction of motion are harder to track, and that this pattern interacts with the confusion effect to a greater degree than background matching or orthogonal-to-motion striped patterns.

  12. Dense tissue-like collagen matrices formed in cell-free conditions.

    Science.gov (United States)

    Mosser, Gervaise; Anglo, Anny; Helary, Christophe; Bouligand, Yves; Giraud-Guille, Marie-Madeleine

    2006-01-01

    A new protocol was developed to produce dense organized collagen matrices hierarchically ordered on a large scale. It consists of a two-stage process: (1) the organization of a collagen solution and (2) the stabilization of the organizations by a sol-gel transition that leads to the formation of collagen fibrils. This new protocol relies on the continuous injection of an acid-soluble collagen solution into glass microchambers. It leads to extended concentration gradients of collagen, ranging from 5 to 1000 mg/ml. The self-organization of collagen solutions into a wide array of spatial organizations was investigated. The final matrices obtained by this procedure varied in concentration, structure and density. Changes in the liquid state of the samples were followed by polarized light microscopy, and the final stabilized gel states obtained after fibrillogenesis were analyzed by both light and electron microscopy. Typical organizations extended homogeneously by up to three centimetres in one direction and several hundreds of micrometers in other directions. Fibrillogenesis of collagen solutions of high and low concentrations led to fibrils spatially arranged as has been described in bone and dermis, respectively. Moreover, a relationship was revealed between the collagen concentration and both the lateral aggregation of fibrils and the rotational angles between them. These results constitute a strong base from which to further develop highly enriched collagen matrices that could lead to substitutes that mimic connective tissues. The matrices thus obtained may also be good candidates for the study of the three-dimensional migration of cells.

  13. Research of high speed data readout and pre-processing system based on xTCA for silicon pixel detector

    International Nuclear Information System (INIS)

    Zhao Jingzhou; Lin Haichuan; Guo Fang; Liu Zhen'an; Xu Hao; Gong Wenxuan; Liu Zhao

    2012-01-01

    With the development of detectors, silicon pixel detectors have been widely used in high energy physics experiments. Reading out the increasingly large data volumes generated by silicon pixel detectors requires a data processing system with high speed, high bandwidth and high availability. The same question arises for the Belle II Pixel Detector, a new type of silicon pixel detector used at the high-luminosity SuperKEKB accelerator. This paper describes research on a high speed data readout and pre-processing system based on xTCA for silicon pixel detectors. The system consists of High Performance Computer Nodes (HPCN) based on xTCA housed in an ATCA frame. Each HPCN consists of 4 XFPs based on AMC, 1 AMC Carrier ATCA Board (ACAB) and 1 Rear Transmission Module. It is characterized by 5 high performance FPGAs, 16 fiber links based on RocketIO, 5 Gbit Ethernet ports and DDR2 with capacity up to 18 GB. In an ATCA frame, 14 HPCNs make up a system that uses the high speed backplane to perform data pre-processing and triggering. This system will be used in the trigger and data acquisition system of the Belle II Pixel detector. (authors)

  14. Classical r-matrices for the generalised Chern–Simons formulation of 3d gravity

    Science.gov (United States)

    Osei, Prince K.; Schroers, Bernd J.

    2018-04-01

    We study the conditions for classical r-matrices to be compatible with the generalised Chern–Simons action for 3d gravity. Compatibility means solving the classical Yang–Baxter equations with a prescribed symmetric part for each of the real Lie algebras and bilinear pairings arising in the generalised Chern–Simons action. We give a new construction of r-matrices via a generalised complexification and derive a non-linear set of matrix equations determining the most general compatible r-matrix. We exhibit new families of solutions and show that they contain some known r-matrices for special parameter values.
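
    As a sketch of what compatibility means here (generic notation, not taken from the paper, and with normalisation conventions left loose): the r-matrix r in g tensor g must solve the classical Yang-Baxter equation while its symmetric part is prescribed by the invariant bilinear pairing entering the generalised Chern-Simons action,

        [[r, r]] \;:=\; [r_{12}, r_{13}] + [r_{12}, r_{23}] + [r_{13}, r_{23}] \;=\; 0,
        \qquad
        r + \sigma(r) \;=\; K,

    where sigma swaps the two tensor factors and K denotes the symmetric invariant element determined by the chosen bilinear pairing.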

  15. MGI-oriented High-throughput Measurement of Interdiffusion Coefficient Matrices in Ni-based Superalloys

    Directory of Open Access Journals (Sweden)

    TANG Ying

    2017-01-01

    One of the research hotspots in the field of high-temperature alloys is the search for elements that can substitute for Re, in order to prepare single-crystal Ni-based superalloys with less or even no Re addition. Finding elements with diffusion coefficients similar to or even lower than that of Re is one effective strategy. In multicomponent alloys, the interdiffusivity matrix is used to comprehensively characterize the diffusion ability of the alloying elements. Therefore, accurate determination of the composition-dependent and temperature-dependent interdiffusivity matrices of different elements in the γ and γ' phases of Ni-based superalloys is a high priority. This paper briefly introduces the status of interdiffusivity matrix determination in Ni-based superalloys and the methods for determining interdiffusivities in multicomponent alloys, including the traditional Matano-Kirkaldy method and the recently proposed numerical inverse method. Because the traditional Matano-Kirkaldy method is of low efficiency, experimental reports on interdiffusivity matrices in ternary and higher-order sub-systems of Ni-based superalloys are very scarce in the literature. In contrast, the numerical inverse method newly proposed in our research group, based on Fick's second law, can be utilized for high-throughput measurement of accurate interdiffusivity matrices in alloys with any number of components. After that, the successful application of the numerical inverse method to the high-throughput measurement of interdiffusivity matrices is demonstrated in the fcc (γ) phase of the ternary Ni-Al-Ta system. Moreover, the resulting composition-dependent and temperature-dependent interdiffusivity matrices are comprehensively validated. Then, this paper summarizes the recent progress in the measurement of interdiffusivity matrices in γ and γ' phases of a series of core ternary Ni-based superalloys achieved in
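
    To make the role of an interdiffusivity matrix concrete, the sketch below runs a forward simulation of a one-dimensional ternary diffusion couple using Fick's second law in matrix form, with an arbitrarily chosen, constant 2x2 interdiffusivity matrix. It is a conceptual illustration only, not the numerical inverse method described above, which works in the opposite direction by fitting such matrices to measured composition profiles.

        import numpy as np

        # Arbitrary, illustrative 2x2 interdiffusivity matrix (m^2/s) for the two
        # independent solutes in a ternary alloy; the solvent is the dependent species.
        D = np.array([[1.0e-14, 2.0e-15],
                      [5.0e-16, 8.0e-15]])

        nx, L, dt, steps = 201, 1.0e-3, 5.0, 20000   # grid points, length (m), time step (s), steps
        dx = L / (nx - 1)                            # dt is well below the stability limit dx^2 / (2*max(D))

        # Diffusion couple: step profiles in both solute mole fractions.
        c = np.zeros((2, nx))
        c[0, : nx // 2], c[0, nx // 2:] = 0.10, 0.02
        c[1, : nx // 2], c[1, nx // 2:] = 0.01, 0.08

        for _ in range(steps):
            lap = np.zeros_like(c)
            lap[:, 1:-1] = (c[:, 2:] - 2.0 * c[:, 1:-1] + c[:, :-2]) / dx**2
            # Matrix form of Fick's second law: dc_i/dt = sum_j D_ij * d2c_j/dx2
            c += dt * (D @ lap)

        print(c[:, ::50])   # coarse view of the interdiffused composition profiles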

  16. Characteristics of phosphorus adsorption by sediment mineral matrices with different particle sizes

    Directory of Open Access Journals (Sweden)

    Yang Xiao

    2013-07-01

    The particle size of sediment is one of the main factors that influence the physical adsorption of phosphorus on sediment. In order to eliminate the effect of other sediment components on the physical adsorption of phosphorus, sediment mineral matrices were obtained by removing inorganic matter, metal oxides, and organic matter from natural sediments collected from the Nantong reach of the Yangtze River. The results show that an exponential relationship exists between the median particle size (D50) and specific surface area (Sg) of the sediment mineral matrices, and that the fine sediment mineral matrix samples have a larger specific surface area and pore volume than the coarse sediment particles. Kinetic equations were used to describe the phosphorus adsorption process of the sediment mineral matrices, including the Elovich equation, the quasi-first-order adsorption kinetic equation, and the quasi-second-order adsorption kinetic equation. The results show that the quasi-second-order adsorption kinetic equation gives the best fit. Using the mass conservation and Langmuir adsorption kinetic equations, a formula was deduced to calculate the equilibrium adsorption capacity of the sediment mineral matrices. The results of this study show that the phosphorus adsorption capacity decreases with increasing D50, indicating that the specific surface area and pore volume are the main factors determining the phosphorus adsorption capacity of the sediment mineral matrices. This study will help in understanding the important role of sediment in the transformation of phosphorus in aquatic environments.
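
    The quasi-second-order (often called pseudo-second-order) model mentioned above has the closed form q(t) = k*qe^2*t / (1 + k*qe*t), which can be fitted directly to uptake data. The sketch below does this with synthetic data; the rate constant, equilibrium capacity, sampling times and noise level are all invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def pseudo_second_order(t, qe, k):
            """Pseudo-second-order adsorption kinetics: q(t) = k*qe^2*t / (1 + k*qe*t)."""
            return k * qe**2 * t / (1.0 + k * qe * t)

        # Synthetic "measured" uptake data (mg/g) at sampling times (min); values invented.
        t_data = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
        true_qe, true_k = 0.85, 0.02
        rng = np.random.default_rng(1)
        q_data = pseudo_second_order(t_data, true_qe, true_k) + rng.normal(0, 0.01, t_data.size)

        # Fit the equilibrium capacity qe and rate constant k to the data.
        (qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t_data, q_data, p0=[1.0, 0.01])
        print(f"q_e = {qe_fit:.3f} mg/g, k = {k_fit:.4f} g/(mg*min)")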

  17. Flexible high-speed FASTBUS master for data read-out and preprocessing

    International Nuclear Information System (INIS)

    Wurz, A.; Manner, R.

    1990-01-01

    This paper describes a single slot FASTBUS master module. It can be used for read-out and preprocessing of data that are read out from FASTBUS modules, e.g., an ADC system. The module consists of a 25 MHz, 32-bit MC68030 processor with cache memory and memory management, an MC68882 floating point coprocessor, 4 MBytes of main memory, and FASTBUS master and slave interfaces. In addition, a DMA controller for read-out of FASTBUS data is provided. The processor allows I/O via serial ports, a 16-bit parallel port, and a transputer link. Additional interfaces are planned. The main memory is multi-ported and can be accessed directly by the CPU, the FASTBUS, and external masters via the high-speed local bus that is accessible by way of a connector. The FASTBUS interface supports most of the standard operations in master and slave mode.

  18. Dimension from covariance matrices.

    Science.gov (United States)

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
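
    A minimal sketch of the general idea, not the authors' statistical test: delay-embed the time series, compute the eigenvalue spectrum of the covariance matrix of the embedded points, and compare it with the spectrum obtained from a Gaussian random process of matching size and variance. Eigenvalues rising clearly above the Gaussian baseline mark the directions actually explored by the dynamics; the threshold used below is an arbitrary placeholder for the paper's probabilistic test.

        import numpy as np

        def delay_embed(x, dim, lag=1):
            """Time-delay embedding of a scalar series into `dim` dimensions."""
            n = len(x) - (dim - 1) * lag
            return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

        def covariance_spectrum(points):
            """Eigenvalues of the covariance matrix of `points`, sorted descending."""
            cov = np.cov(points, rowvar=False)
            return np.sort(np.linalg.eigvalsh(cov))[::-1]

        rng = np.random.default_rng(2)

        # Example signal: a noisy sine wave, whose embedded trajectory is ~2-dimensional.
        t = np.arange(5000) * 0.05
        signal = np.sin(t) + 0.05 * rng.normal(size=t.size)

        dim, lag = 8, 10          # lag chosen so the oscillation spreads across the embedding
        eig_signal = covariance_spectrum(delay_embed(signal, dim, lag))

        # Baseline: a Gaussian random process with matching size and variance.
        gaussian = rng.normal(scale=signal.std(), size=signal.size)
        eig_noise = covariance_spectrum(delay_embed(gaussian, dim, lag))

        # Count eigenvalues clearly above the noise baseline as a crude dimension estimate.
        estimate = int(np.sum(eig_signal > 2.0 * eig_noise))
        print("eigenvalues (signal):", np.round(eig_signal, 4))
        print("eigenvalues (noise): ", np.round(eig_noise, 4))
        print("crude dimension estimate:", estimate)   # expected: 2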

  19. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)

    2013-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and

  20. Moment matrices, border bases and radical computation

    NARCIS (Netherlands)

    B. Mourrain; J.B. Lasserre; M. Laurent (Monique); P. Rostalski; P. Trebuchet (Philippe)

    2011-01-01

    In this paper, we describe new methods to compute the radical (resp. real radical) of an ideal, assuming its complex (resp. real) variety is finite. The aim is to combine approaches for solving a system of polynomial equations with dual methods which involve moment matrices and