Prediction of Complex Human Traits Using the Genomic Best Linear Unbiased Predictor
de los Campos, Gustavo; Vazquez, Ana I; Fernando, Rohan
2013-01-01
Despite important advances from Genome Wide Association Studies (GWAS), for most complex human traits and diseases, a sizable proportion of genetic variance remains unexplained and prediction accuracy (PA) is usually low. Evidence suggests that PA can be improved using Whole-Genome Regression (WGR) models where phenotypes are regressed on hundreds of thousands of variants simultaneously. The Genomic Best Linear Unbiased Prediction (G-BLUP, a ridge-regression type method) is a commonly used WGR method and has shown good predictive performance when applied to plant and animal breeding populations. However, breeding and human populations differ greatly in a number of factors that can affect the predictive performance of G-BLUP. Using theory, simulations, and real data analysis, we study the performance of G-BLUP when applied to data from related and unrelated human subjects. Under perfect linkage…
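The ridge-regression character of G-BLUP noted in this abstract can be sketched in a few lines. A minimal, self-contained simulation follows; the sample sizes, allele frequencies, heritability, and the variance ratio λ are all assumptions chosen for illustration, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated genotypes: n individuals x m SNPs coded 0/1/2 (hypothetical toy data)
n, m = 100, 500
p = rng.uniform(0.1, 0.9, m)                   # assumed allele frequencies
M = rng.binomial(2, p, size=(n, m)).astype(float)

# VanRaden-style genomic relationship matrix (GRM)
Z = M - 2 * p                                  # center genotypes by 2p
G = Z @ Z.T / (2 * np.sum(p * (1 - p)))        # scale so diag(G) is near 1

# Simulate a trait with heritability ~0.5
beta = rng.normal(0, np.sqrt(0.5 / m), m)
g_true = Z @ beta
y = g_true + rng.normal(0, g_true.std(), n)

# G-BLUP as ridge regression on the GRM:
# g_hat = G (G + lambda*I)^-1 (y - mean), lambda = sigma_e^2 / sigma_g^2
lam = 1.0                                      # assumes h^2 = 0.5
y_c = y - y.mean()
g_hat = G @ np.linalg.solve(G + lam * np.eye(n), y_c)

print(round(float(np.corrcoef(g_hat, g_true)[0, 1]), 2))
```

The single shrinkage parameter λ is what makes G-BLUP a ridge-type method: all markers contribute, each shrunk equally toward zero.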
Genetic evaluation using single-step genomic best linear unbiased predictor in American Angus.
Lourenco, D A L; Tsuruta, S; Fragomeni, B O; Masuda, Y; Aguilar, I; Legarra, A; Bertrand, J K; Amen, T S; Wang, L; Moser, D W; Misztal, I
2015-06-01
Predictive ability of genomic EBV when using single-step genomic BLUP (ssGBLUP) in Angus cattle was investigated. Over 6 million records were available on birth weight (BiW) and weaning weight (WW), almost 3.4 million on postweaning gain (PWG), and over 1.3 million on calving ease (CE). Genomic information was available on, at most, 51,883 animals, which included high and low EBV accuracy animals. Traditional EBV were computed by BLUP, genomic EBV by ssGBLUP, and indirect predictions based on SNP effects were derived from ssGBLUP; SNP effects were calculated based on the following reference populations: ref_2k (contains top bulls and top cows that had an EBV accuracy for BiW ≥0.85), ref_8k (contains all parents that were genotyped), and ref_33k (contains all genotyped animals born up to 2012). Indirect prediction was obtained as direct genomic value (DGV) or as an index of DGV and parent average (PA). Additionally, runs with ssGBLUP used the inverse of the genomic relationship matrix calculated by an algorithm for proven and young animals (APY) that uses recursions on a small subset of reference animals. An extra reference subset included 3,872 genotyped parents of genotyped animals (ref_4k). Cross-validation was used to assess predictive ability on a validation population of 18,721 animals born in 2013. Computations for growth traits used a multiple-trait linear model and, for CE, a bivariate CE-BiW threshold-linear model. With BLUP, predictivities were 0.29, 0.34, 0.23, and 0.12 for BiW, WW, PWG, and CE, respectively. With ssGBLUP and ref_2k, predictivities were 0.34, 0.35, 0.27, and 0.13 for BiW, WW, PWG, and CE, respectively, and with ssGBLUP and ref_33k, predictivities were 0.39, 0.38, 0.29, and 0.13 for BiW, WW, PWG, and CE, respectively. Low predictivity for CE was due to the low incidence rate of difficult calving. Indirect predictions with ref_33k were as accurate as with full ssGBLUP. Using the APY and recursions on ref_4k gave 88% gains of full ssGBLUP and
Minimum variance linear unbiased estimators of loss and inventory
Stewart, K.B.
1977-01-01
The article illustrates a number of approaches for estimating the material balance inventory and a constant loss amount from the accountability data from a sequence of accountability periods. The approaches all lead to linear estimates that have minimum variance. Techniques are shown whereby ordinary least squares, weighted least squares and generalized least squares computer programs can be used. Two approaches are recursive in nature and lend themselves to small specialized computer programs. Another approach is developed that is easy to program, could be used with a desk calculator, and can be applied recursively from accountability period to accountability period. Some previous results are also reviewed that are very similar in approach to the present ones and vary only in the way net throughput measurements are statistically modeled.
Cell Mean Versus Best Linear Unbiased Predictors in Biplot ...
In multi-environment trials, accurate estimation of yields in individual environments … AMMI analysis of variance based on cell means depicted the first five … means in their GGE and AMMI biplot analysis of GE for wheat yield in Canada. … GGE biplot only) principal components were partitioned to the respective genotype.
Environment contributed to 65%, GE to 26.6% and G to 8.4% of the G + E + GE sum … Of these, genotypes 3371, ehil and fer projected the most towards … had the highest mean grain yield, the lowest lodging score (4%), the most number of … multiplicative interaction model: I. theory on variance components for predicting.
Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis
Luo, Wen; Azen, Razia
2013-01-01
Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
Zhe Zhang
2010-09-01
With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome, genetic merit of genotyped individuals can be predicted directly within the framework of mixed model equations, by using a matrix of relationships among individuals that is derived from the markers. Here we extend that approach by deriving a marker-based relationship matrix specifically for the trait of interest. In the framework of mixed model equations, a new best linear unbiased prediction (BLUP) method including a trait-specific relationship matrix (TA) was presented and termed TABLUP. The TA matrix was constructed on the basis of marker genotypes and their weights in relation to the trait of interest. A simulation study with 1,000 individuals as the training population and five successive generations as the candidate population was carried out to validate the proposed method. The proposed TABLUP method outperformed the ridge regression BLUP (RRBLUP) and BLUP with realized relationship matrix (GBLUP). It performed slightly worse than BayesB, with an accuracy of 0.79 in the standard scenario. The proposed TABLUP method is an improvement of the RRBLUP and GBLUP methods. It might be equivalent to the BayesB method, but it has additional benefits like the calculation of accuracies for individual breeding values. The results also showed that the TA-matrix performs better in predictive ability than the classical numerator relationship matrix and the realized relationship matrix, which are derived solely from pedigree or markers without regard to the trait. This is because the TA-matrix not only accounts for the Mendelian sampling term, but also puts the greater emphasis on those markers that explain more of the genetic variance in the trait.
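The trait-specific relationship matrix described above replaces the uniform marker weighting of the realized relationship matrix with trait-dependent weights. The sketch below uses an assumed weighting scheme (squared stand-in marker effects scaled by heterozygosity) purely for illustration; in the paper the weights come from estimated marker effects for the trait:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy genotypes: n individuals x m markers coded 0/1/2 (hypothetical data)
n, m = 50, 200
p = rng.uniform(0.2, 0.8, m)
Z = rng.binomial(2, p, size=(n, m)) - 2 * p    # centered genotypes

# Hypothetical marker weights, e.g. proportional to the variance each
# marker explains for the trait (random stand-ins for estimated effects)
u = rng.normal(0, 1, m)                        # stand-in marker effects
w = u**2 * 2 * p * (1 - p)
w = w / w.sum()                                # normalize the weights

# Trait-specific relationship matrix (TA): a weighted analogue of the GRM
TA = Z @ np.diag(w) @ Z.T
TA = TA / np.mean(np.diag(TA))                 # rescale so mean diagonal is 1

print(TA.shape)
```

With uniform weights w = 1/m this construction reduces to the ordinary realized relationship matrix, which is the sense in which TABLUP generalizes GBLUP.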
High-Order Sparse Linear Predictors for Audio Processing
Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll
2010-01-01
Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set… of interesting features that make the idea of using it in audio processing not far-fetched, e.g., the strong ability of modeling the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors… in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to model efficiently the different…
Baba, Toshimi; Gotoh, Yusaku; Yamaguchi, Satoshi; Nakagawa, Satoshi; Abe, Hayato; Masuda, Yutaka; Kawahara, Takayoshi
2017-08-01
This study aimed to evaluate a validation reliability of single-step genomic best linear unbiased prediction (ssGBLUP) with a multiple-lactation random regression test-day model and investigate an effect of adding genotyped cows on the reliability. Two data sets for test-day records from the first three lactations were used: full data from February 1975 to December 2015 (60 850 534 records from 2 853 810 cows) and reduced data cut off in 2011 (53 091 066 records from 2 502 307 cows). We used marker genotypes of 4480 bulls and 608 cows. Genomic enhanced breeding values (GEBV) of 305-day milk yield in all the lactations were estimated for at least 535 young bulls using two marker data sets: bull genotypes only and both bull and cow genotypes. The realized reliability (R²) from linear regression analysis was used as an indicator of validation reliability. Using only genotyped bulls, R² ranged from 0.41 to 0.46 and was always higher than that of parent averages. Very similar R² values were observed when genotyped cows were added. An application of ssGBLUP to a multiple-lactation random regression model is feasible, and adding a limited number of genotyped cows has no significant effect on reliability of GEBV for genotyped bulls. © 2016 Japanese Society of Animal Science.
Zhou, L; Lund, M S; Wang, Y; Su, G
2014-08-01
This study investigated genomic predictions across Nordic Holstein and Nordic Red using various genomic relationship matrices. Different sources of information, such as consistencies of linkage disequilibrium (LD) phase and marker effects, were used to construct the genomic relationship matrices (G-matrices) across these two breeds. A single-trait genomic best linear unbiased prediction (GBLUP) model and a two-trait GBLUP model were used for single-breed and two-breed genomic predictions. The data included 5215 Nordic Holstein bulls and 4361 Nordic Red bulls, the latter composed of three populations: Danish Red, Swedish Red and Finnish Ayrshire. The bulls were genotyped with a 50,000-SNP chip. Using the two-breed predictions with a joint Nordic Holstein and Nordic Red reference population, accuracies increased slightly for all traits in Nordic Red, but only for some traits in Nordic Holstein. Among the three subpopulations of Nordic Red, accuracies increased more for Danish Red than for Swedish Red and Finnish Ayrshire. This is because closer genetic relationships exist between Danish Red and Nordic Holstein. Among Danish Red, individuals with higher genomic relationship coefficients with Nordic Holstein showed more increased accuracies in the two-breed predictions. Weighting the two-breed G-matrices by LD phase consistencies, marker effects or both did not further improve accuracies of the two-breed predictions. © 2014 Blackwell Verlag GmbH.
Mutually unbiased bases play an important role in quantum cryptography [2] and in the optimal determination of the density operator of an ensemble [3,4]. A density operator ρ in N dimensions depends on N² - 1 real quantities. With the help of MUBs, any such density operator can be encoded, in an optimal way, in terms of ...
Identifying predictors of physics item difficulty: A linear regression approach
Mesic, Vanes; Muratovic, Hasnija
2011-06-01
Large-scale assessments of student achievement in physics are often approached with an intention to discriminate students based on the attained level of their physics competencies. Therefore, for purposes of test design, it is important that items display an acceptable discriminatory behavior. To that end, it is recommended to avoid extraordinarily difficult and very easy items. Knowing the factors that influence physics item difficulty makes it possible to model the item difficulty even before the first pilot study is conducted. Thus, by identifying predictors of physics item difficulty, we can improve the test-design process. Furthermore, we get additional qualitative feedback regarding the basic aspects of student cognitive achievement in physics that are directly responsible for the obtained, quantitative test results. In this study, we conducted a secondary analysis of data that came from two large-scale assessments of student physics achievement at the end of compulsory education in Bosnia and Herzegovina. Foremost, we explored the concept of “physics competence” and performed a content analysis of 123 physics items that were included within the above-mentioned assessments. Thereafter, an item database was created. Items were described by variables which reflect some basic cognitive aspects of physics competence. For each of the assessments, Rasch item difficulties were calculated in separate analyses. In order to make the item difficulties from different assessments comparable, a virtual test equating procedure had to be implemented. Finally, a regression model of physics item difficulty was created. It has been shown that 61.2% of item difficulty variance can be explained by factors which reflect the automaticity, complexity, and modality of the knowledge structure that is relevant for generating the most probable correct solution, as well as by the divergence of required thinking and interference effects between intuitive and formal physics knowledge.
Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling
Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm
2014-01-01
In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate… saturations when this is used to synthesize speech. In this paper, we introduce two new methods to obtain intrinsically stable predictors with the 1-norm minimization. The first method is based on constraining the roots of the predictor to lie within the unit circle by reducing the numerical range… based linear prediction for modeling and coding of speech…
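The 1-norm criterion discussed above can be cast as a linear program. The sketch below fits a plain 1-norm linear predictor to a toy autoregressive signal; the stability constraints that are the paper's contribution are not included, and all data and orders are simulated stand-ins:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Toy AR(2) signal standing in for a speech frame (hypothetical data)
N, P = 200, 2
x = np.zeros(N)
for k in range(2, N):
    x[k] = 1.5 * x[k - 1] - 0.7 * x[k - 2] + rng.normal(0, 0.1)

# Linear prediction: minimize ||b - X a||_1 over predictor coefficients a
b = x[P:]
X = np.column_stack([x[P - j - 1: N - j - 1] for j in range(P)])
n = len(b)

# LP reformulation: min sum(t) subject to -t <= b - X a <= t
c = np.concatenate([np.zeros(P), np.ones(n)])
A_ub = np.block([[-X, -np.eye(n)], [X, -np.eye(n)]])
b_ub = np.concatenate([-b, b])
bounds = [(None, None)] * P + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
a = res.x[:P]
print(np.round(a, 1))
```

On this clean synthetic signal the 1-norm fit recovers coefficients close to the generating values (1.5, -0.7); its advantage over the 2-norm criterion shows up mainly with impulsive, sparse residuals.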
Cheung, Y M; Leung, W M; Xu, L
1997-01-01
We propose a prediction model called Rival Penalized Competitive Learning (RPCL) and Combined Linear Predictor method (CLP), which involves a set of local linear predictors such that a prediction is made by the combination of some activated predictors through a gating network (Xu et al., 1994). Furthermore, we present its improved variant named Adaptive RPCL-CLP that includes an adaptive learning mechanism as well as a data pre- and post-processing scheme. We compare them with some existing models by demonstrating their performance on two real-world financial time series: a Chinese stock price series and an exchange-rate series of US Dollar (USD) versus Deutschmark (DEM). Experiments have shown that Adaptive RPCL-CLP not only outperforms the other approaches with the smallest prediction error and training costs, but also yields considerably high profits in the trading simulation of the foreign exchange market.
Subedi, Bidya Raj; Reese, Nancy; Powell, Randy
2015-01-01
This study explored significant predictors of students' Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on the proportion of variance explained by the teacher-level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data in the level-1 and level-2 models, respectively.…
Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi
2012-01-01
The objective of the present study was to assess the comparable applicability of orthogonal projections to latent structures (OPLS) statistical models vs traditional linear regression in order to investigate the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation in the first week of admission and again six months later. All data was primarily analyzed using simple linear regression and later considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression analysis results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation in the computation of the inverse of the genomic relationship matrix with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix (GAPY⁻¹) based on a direct inversion of the genomic relationship matrix on a small subset of genotyped animals (core animals) and extended that information to noncore animals by recursion. We tested several sets of core animals including 9,406 bulls with at least 1 classified daughter, 9,406 bulls and 1,052 classified dams of bulls, 9,406 bulls and 7,422 classified cows, and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up GAPY⁻¹ for 569,404 genotyped animals with 10,000 core animals took 1.3 h and 57 GB of memory. The validation reliability with APY reaches a plateau when the number of core animals is at least 10,000. Predictions with APY show little difference in reliability across definitions of core animals. Single-step genomic BLUP with APY is applicable to millions of genotyped animals. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
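The APY recursion above admits a compact sketch: invert the genomic relationship matrix directly only for the core animals and extend to noncore animals through their regression on the core, with a diagonal "Mendelian sampling" term. The toy matrix below is deliberately constructed so that noncore animals are conditionally independent given the core, the case in which the recursion is exact; all names and sizes are illustrative, not the production implementation:

```python
import numpy as np

rng = np.random.default_rng(3)

def apy_inverse(G, core):
    """Inverse of G via the APY recursion: direct inversion only on the
    core block; noncore animals are handled by recursion on the core."""
    n = G.shape[0]
    noncore = [i for i in range(n) if i not in set(core)]
    idx = core + noncore
    Gp = G[np.ix_(idx, idx)]                   # reorder: core first
    nc = len(core)
    Gcc, Gcn = Gp[:nc, :nc], Gp[:nc, nc:]
    Gcc_inv = np.linalg.inv(Gcc)
    P = Gcn.T @ Gcc_inv                        # regression of noncore on core
    # Diagonal "Mendelian sampling" terms for the noncore animals
    m = np.diag(Gp[nc:, nc:]) - np.einsum("ij,ji->i", P, Gcn)
    Minv = np.diag(1.0 / m)
    top = np.hstack([Gcc_inv + P.T @ Minv @ P, -P.T @ Minv])
    bot = np.hstack([-Minv @ P, Minv])
    Ginv = np.vstack([top, bot])
    back = np.argsort(idx)                     # restore the original order
    return Ginv[np.ix_(back, back)]

# Toy G built so noncore animals are conditionally independent given the
# core (the case where APY is exact); sizes are arbitrary stand-ins
nc, nn = 5, 8
A = rng.normal(size=(nc, nc))
Gcc = A @ A.T + nc * np.eye(nc)
P = rng.normal(size=(nn, nc))
G = np.block([[Gcc, (P @ Gcc).T],
              [P @ Gcc, P @ Gcc @ P.T + np.diag(rng.uniform(1, 2, nn))]])

Ginv = apy_inverse(G, core=list(range(nc)))
print(np.allclose(Ginv @ G, np.eye(nc + nn)))
```

The payoff is the cost profile the abstract reports: only the core block (e.g., 10,000 animals) is inverted directly, while each noncore animal adds just one diagonal element and one row of regressions.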
Fragomeni, B O; Lourenco, D A L; Tsuruta, S; Bradford, H L; Gray, K A; Huang, Y; Misztal, I
2016-12-01
The purposes of this study were to analyze the impact of seasonal losses due to heat stress in pigs from different breeds raised in different environments and to evaluate the accuracy improvement from adding genomic information to genetic evaluations. Data were available for 2 different swine populations: purebred Duroc animals raised in Texas and North Carolina and commercial crosses of Duroc and F1 females (Landrace × Large White) raised in Missouri and North Carolina; pedigrees provided links for animals from different states. Pedigree information was available for 553,442 animals, of which 8,232 purebred animals were genotyped. Traits were BW at 170 d for purebred animals and HCW for crossbred animals. Analyses were done with an animal model as either single- or 2-trait models using phenotypes measured in different states as separate traits. Additionally, reaction norm models were fitted for 1 or 2 traits using heat load index as a covariable. Heat load was calculated as temperature-humidity index greater than 70 and was averaged over 30 d prior to data collection. Variance components were estimated with average information REML, and EBV and genomic EBV (GEBV) with BLUP or single-step genomic BLUP (ssGBLUP). Validation was assessed for 146 genotyped sires with progeny in the last generation. Accuracy was calculated as a correlation between EBV and GEBV using reduced data (all animals, except the last generation) and using complete data. Heritability estimates for purebred animals were similar across states (varying from 0.23 to 0.26), and reaction norm models did not show evidence of a heat stress effect. Genetic correlations between states for heat loads were always strong (>0.91). For crossbred animals, no differences in heritability were found in single- or 2-trait analysis (from 0.17 to 0.18), and genetic correlations between states were moderate (0.43).
In the reaction norm for crossbreeds, heritabilities ranged from 0.15 to 0.30 and genetic correlations between heat loads were as weak as 0.36, with heat load ranging from 0 to 12. Accuracies with ssGBLUP were, on average, 25% greater than with BLUP. Accuracies were greater in 2-trait reaction norm models and at extreme heat load values. Impacts of seasonality are evident only for crossbred animals. Genomic information can help producers mitigate heat stress in swine by identifying superior sires that are more resistant to heat stress.
Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa
2008-01-01
This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values and use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation, sample reduction, and insight characterisation. Multiple linear regression (for building the TAT indicator predictor) and clustering techniques were used to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to such model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
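Building such a multiple linear regression predictor can be sketched with ordinary least squares. The data below are invented; only the coefficient values (0.415, 0.734, 0.21, 0.06) echo those reported in the abstract, while the variable distributions, intercept, and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical work-order data; variable names follow the abstract
# (CE response time, stock response time, priority, service time)
n = 500
X = np.column_stack([
    rng.exponential(5, n),    # CE(rt): clinical engineering response time
    rng.exponential(8, n),    # Stock(rt): stock service response time
    rng.integers(1, 4, n),    # priority level
    rng.exponential(3, n),    # service time
]).astype(float)
true_coef = np.array([0.415, 0.734, 0.21, 0.06])   # reported coefficients
tat = 1.0 + X @ true_coef + rng.normal(0, 0.5, n)  # assumed intercept + noise

# Ordinary least squares fit of the TAT predictor
A = np.column_stack([np.ones(n), X])               # add an intercept column
coef, *_ = np.linalg.lstsq(A, tat, rcond=None)
print(np.round(coef[1:], 2))
```

With enough work orders the fitted coefficients recover the generating ones, mirroring the abstract's observation that Stock(rt) carries the largest weight.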
Entanglement in mutually unbiased bases
Wiesniak, M; Zeilinger, A [Vienna Center for Quantum Science and Technology (VCQ), Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Vienna (Austria); Paterek, T, E-mail: tomasz.paterek@nus.edu.sg [Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore (Singapore)
2011-05-15
One of the essential features of quantum mechanics is that most pairs of observables cannot be measured simultaneously. This phenomenon manifests itself most strongly when observables are related to mutually unbiased bases. In this paper, we shed some light on the connection between mutually unbiased bases and another essential feature of quantum mechanics, quantum entanglement. It is shown that a complete set of mutually unbiased bases of a bipartite system contains a fixed amount of entanglement, independent of the choice of the set. This has implications for entanglement distribution among the states of a complete set. In prime-squared dimensions we present an explicit experiment-friendly construction of a complete set with a particularly simple entanglement distribution. Finally, we describe the basic properties of mutually unbiased bases composed of product states only. The constructions are illustrated with explicit examples in low dimensions. We believe that the properties of entanglement in mutually unbiased bases may be one of the ingredients to be taken into account to settle the question of the existence of complete sets. We also expect that they will be relevant to applications of bases in the experimental realization of quantum protocols in higher-dimensional Hilbert spaces.
UNBIASED ESTIMATORS OF SPECIFIC CONNECTIVITY
Jean-Paul Jernot
2011-05-01
This paper deals with the estimation of the specific connectivity of a stationary random set in ℝ^d. It turns out that the "natural" estimator is only asymptotically unbiased. The example of a Boolean model of hypercubes illustrates the amplitude of the bias produced when the measurement field is relatively small with respect to the range of the random set. For that reason unbiased estimators are desired. Such an estimator can be found in the literature in the case where the measurement field is a right parallelotope. In this paper, this estimator is extended to apply to measurement fields of various shapes, and to possess a smaller variance. Finally an example from quantitative metallography (specific connectivity of a population of sintered bronze particles) is given.
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes
Li, Xiaoyu; Fan, Guodong; Rizzoni, Giorgio; Canova, Marcello; Zhu, Chunbo; Wei, Guo
2016-01-01
The design of a simplified yet accurate physics-based battery model enables researchers to accelerate the processes of battery design, aging analysis and remaining useful life prediction. In order to reduce the computational complexity of the Pseudo Two-Dimensional mathematical model without sacrificing accuracy, this paper proposes a simplified multi-particle model via a predictor-corrector strategy and quasi-linearization. In this model, a predictor-corrector strategy is used for updating two internal states, especially for solving the electrolyte concentration approximation, to reduce the computational complexity while preserving a high accuracy of the approximation. Quasi-linearization is applied to the approximations of the Butler-Volmer kinetics equation and the pore wall flux distribution to predict the non-uniform electrochemical reaction effects without using any nonlinear iterative solver. Simulation and experimental results show that both the isothermal model and the model coupled with thermal behavior greatly improve computational efficiency with almost no loss of accuracy. - Highlights: • A simplified multi-particle model with high accuracy and computation efficiency is proposed. • The electrolyte concentration is solved based on a predictor-corrector strategy. • The non-uniform electrochemical reaction is solved based on quasi-linearization. • The model is verified by simulations and experiments at various operating conditions.
Camilo, Daniela Castro
2017-08-30
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow investigating the same areas while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically-oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides which we assume can only limitedly be explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., 95% quantile of Elevation and 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaë l
2017-01-01
Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow investigating the same areas while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically-oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides which we assume can only limitedly be explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., 95% quantile of Elevation and 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
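The LASSO-penalized GLM step described above, shrinking a large predictor set down to a few covariates, can be sketched as follows. This is not the authors' LUDARA script (which is in R); it is a minimal logistic-link version with a hand-rolled proximal-gradient (ISTA) solver on synthetic data, all names and numbers invented for illustration:

```python
import numpy as np

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """L1-penalised logistic regression fitted by proximal gradient (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (prob - y) / n          # gradient of the logistic loss
        w = w - lr * grad                    # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Synthetic data: only the first two of ten covariates matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
logit = 2.0 * X[:, 0] - 1.5 * X[:, 1]
y = (rng.uniform(size=300) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

w = lasso_logistic(X, y)
print((np.abs(w) > 1e-6).sum())  # typically 2: the penalty zeroes the rest
```

The soft-thresholding step is what sets irrelevant coefficients exactly to zero, which is why the fitted model carries only the most informative covariates.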
A differential-geometric approach to generalized linear models with grouped predictors
Augugliaro, Luigi; Mineo, Angelo M.; Wit, Ernst C.
We propose an extension of the differential-geometric least angle regression method to perform sparse group inference in a generalized linear model. An efficient algorithm is proposed to compute the solution curve. The proposed group differential-geometric least angle regression method has important
Molina, J. M.; Zaitchik, B. F.
2016-12-01
Recent findings considering high CO2 emission scenarios (RCP8.5) suggest that the tropical Andes may experience massive warming and a significant precipitation increase (decrease) during the wet (dry) seasons by the end of the 21st century. Variations in rainfall-streamflow relationships and seasonal crop yields significantly affect human development in this region and make local communities highly vulnerable to climate change and variability. We developed an expert-informed empirical statistical downscaling (ESD) algorithm to explore and construct robust global climate predictors to perform skillful RCP8.5 projections of in-situ March-May (MAM) precipitation required for impact modeling and adaptation studies. We applied our framework to a topographically-complex region of the Colombian Andes where a number of previous studies have reported El Niño-Southern Oscillation (ENSO) as the main driver of climate variability. Supervised machine learning algorithms were trained with customized and bias-corrected predictors from NCEP reanalysis, and a cross-validation approach was implemented to assess both predictive skill and model selection. We found weak, statistically nonsignificant teleconnections between precipitation and lagged seasonal surface temperatures over the Niño 3.4 domain, which suggests that ENSO fails to explain MAM rainfall variability in the study region. In contrast, series of Sea Level Pressure (SLP) over American Samoa (likely associated with the South Pacific Convergence Zone, SPCZ) explain more than 65% of the precipitation variance. The best prediction skill was obtained with Selected Generalized Additive Models (SGAM), given their ability to capture linear/nonlinear relationships present in the data. While SPCZ-related series exhibited a positive linear effect on the rainfall response, SLP predictors in the north Atlantic and central equatorial Pacific showed nonlinear effects. A multimodel (MIROC, CanESM2 and CCSM) ensemble of ESD projections revealed
Markovian description of unbiased polymer translocation
Mondaini, Felipe; Moriconi, L.
2012-01-01
We perform, with the help of cloud computing resources, extensive Langevin simulations which provide compelling evidence in favor of a general Markovian framework for unbiased three-dimensional polymer translocation. Our statistical analysis consists of careful evaluations of (i) two-point correlation functions of the translocation coordinate and (ii) the empirical probabilities of complete polymer translocation (taken as a function of the initial number of monomers on a given side of the membrane). We find good agreement with predictions derived from the Markov chain approach recently addressed in the literature by the present authors. -- Highlights: ► We investigate unbiased polymer translocation through membrane pores. ► Large statistical ensembles have been produced with the help of cloud computing resources. ► We evaluate the two-point correlation function of the translocation coordinate. ► We evaluate empirical probabilities for complete polymer translocation. ► Unbiased polymer translocation is described as a Markov stochastic process.
Markovian description of unbiased polymer translocation
Mondaini, Felipe [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ (Brazil); Centro Federal de Educação Tecnológica Celso Suckow da Fonseca, UnED Angra dos Reis, Angra dos Reis, 23953-030, RJ (Brazil); Moriconi, L., E-mail: moriconi@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21945-970 Rio de Janeiro, RJ (Brazil)
2012-10-01
We perform, with the help of cloud computing resources, extensive Langevin simulations which provide compelling evidence in favor of a general Markovian framework for unbiased three-dimensional polymer translocation. Our statistical analysis consists of careful evaluations of (i) two-point correlation functions of the translocation coordinate and (ii) the empirical probabilities of complete polymer translocation (taken as a function of the initial number of monomers on a given side of the membrane). We find good agreement with predictions derived from the Markov chain approach recently addressed in the literature by the present authors. -- Highlights: ► We investigate unbiased polymer translocation through membrane pores. ► Large statistical ensembles have been produced with the help of cloud computing resources. ► We evaluate the two-point correlation function of the translocation coordinate. ► We evaluate empirical probabilities for complete polymer translocation. ► Unbiased polymer translocation is described as a Markov stochastic process.
Unbiased Sampling and Meshing of Isosurfaces
Yan, Dongming
2014-05-07
In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface F(x, y, z) = c of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is to treat the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and to integrate/sample from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
Unbiased Sampling and Meshing of Isosurfaces
Yan, Dongming; Wallner, Johannes; Wonka, Peter
2014-01-01
In this paper, we present a new technique to generate unbiased samples on isosurfaces. An isosurface F(x, y, z) = c of a function F is implicitly defined by trilinear interpolation of background grid points. The key idea of our approach is to treat the isosurface within a grid cell as a graph (height) function in one of the three coordinate axis directions, restricted to where the slope is not too high, and to integrate/sample from each of these three. We use this unbiased sampling algorithm for applications in Monte Carlo integration, Poisson-disk sampling, and isosurface meshing.
Personalized recommendation based on unbiased consistence
Zhu, Xuzhen; Tian, Hui; Zhang, Ping; Hu, Zheng; Zhou, Tao
2015-08-01
Recently, in physical dynamics, mass-diffusion-based recommendation algorithms on bipartite network provide an efficient solution by automatically pushing possible relevant items to users according to their past preferences. However, traditional mass-diffusion-based algorithms just focus on unidirectional mass diffusion from objects having been collected to those which should be recommended, resulting in a biased causal similarity estimation and not-so-good performance. In this letter, we argue that in many cases, a user's interests are stable, and thus bidirectional mass diffusion abilities, no matter originated from objects having been collected or from those which should be recommended, should be consistently powerful, showing unbiased consistence. We further propose a consistence-based mass diffusion algorithm via bidirectional diffusion against biased causality, outperforming the state-of-the-art recommendation algorithms in disparate real data sets, including Netflix, MovieLens, Amazon and Rate Your Music.
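The unidirectional mass-diffusion baseline that the letter builds on (often called ProbS) can be sketched on a bipartite user-item network: resource starts on a user's collected items, spreads to users, then flows back to items. The tiny adjacency matrix below is invented for illustration, and the letter's bidirectional consistence-based variant is not reproduced here:

```python
import numpy as np

# Toy user-item adjacency matrix (3 users x 4 items), invented for illustration.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

ku = A.sum(axis=1)  # user degrees
ki = A.sum(axis=0)  # item degrees

# Recommend for user 0: unit resource on collected items, diffuse
# items -> users (divide by item degree), then users -> items (divide
# by user degree).
f = A[0].copy()
r = (A / ki) @ f                  # resource landing on each user
scores = (A / ku[:, None]).T @ r  # resource flowing back to each item
scores[A[0] > 0] = -np.inf        # do not re-recommend collected items
print(int(np.argmax(scores)))     # item 2 is ranked first
```

Reversing the direction of this diffusion gives a different (biased) similarity estimate; the letter's point is that averaging the two directions yields a consistent, unbiased score.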
Black-Box Search by Unbiased Variation
Lehre, Per Kristian; Witt, Carsten
2012-01-01
The complexity theory for black-box algorithms, introduced by Droste, Jansen, and Wegener (Theory Comput. Syst. 39:525–544, 2006), describes common limits on the efficiency of a broad class of randomised search heuristics. There is an obvious trade-off between the generality of the black-box model...... and the strength of the bounds that can be proven in such a model. In particular, the original black-box model provides relatively small lower bounds for well-known benchmark problems, which seem unrealistic in certain cases and are typically not met by popular search heuristics. In this paper, we introduce a more...... restricted black-box model for optimisation of pseudo-Boolean functions which we claim captures the working principles of many randomised search heuristics including simulated annealing, evolutionary algorithms, randomised local search, and others. The key concept worked out is an unbiased variation operator...
Taylor, Ian M; Ntoumanis, Nikos; Standage, Martyn; Spray, Christopher M
2010-02-01
Grounded in self-determination theory (SDT; Deci & Ryan, 2000), the current study explored whether physical education (PE) students' psychological needs and their motivational regulations toward PE predicted mean differences and changes in effort in PE, exercise intentions, and leisure-time physical activity (LTPA) over the course of one UK school trimester. One hundred and seventy-eight students (69% male) aged between 11 and 16 years completed a multisection questionnaire at the beginning, middle, and end of a school trimester. Multilevel growth models revealed that students' perceived competence and self-determined regulations were the most consistent predictors of the outcome variables at the within- and between-person levels. The results of this work add to the extant SDT-based literature by examining change in PE students' motivational regulations and psychological needs, as well as underscoring the importance of disaggregating within- and between-student effects.
Mutually unbiased bases and semi-definite programming
Brierley, Stephen; Weigert, Stefan, E-mail: steve.brierley@ulb.ac.be, E-mail: stefan.weigert@york.ac.uk
2010-11-01
A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Groebner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
Mutually unbiased bases and semi-definite programming
Brierley, Stephen; Weigert, Stefan
2010-01-01
A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Groebner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
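For context, two orthonormal bases of C^d are mutually unbiased when every cross overlap satisfies |⟨e_i|f_j⟩|² = 1/d. A quick numerical check in dimension six, using the computational and discrete-Fourier bases (a standard mutually unbiased pair, not the spectral-matrix construction analyzed above):

```python
import numpy as np

d = 6
# Computational basis (columns of I) and the discrete Fourier basis (columns of F).
I = np.eye(d)
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)

overlaps = np.abs(I.conj().T @ F) ** 2  # |<e_i|f_j>|^2 for all pairs
print(np.allclose(overlaps, 1 / d))     # True: the two bases are mutually unbiased
```

The open question in dimension six is not whether such pairs exist, but whether more than three bases can be pairwise unbiased, which is what the semidefinite-programming approach above addresses.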
Lobos, G; Schnettler, B; Grunert, K G; Adasme, C
2017-01-01
The main objective of this study is to show why perceived resources are a strong predictor of satisfaction with food-related life in Chilean older adults. Design, sampling and participants: A survey was conducted in rural and urban areas in 30 communes of the Maule Region with 785 participants over 60 years of age who live in their own homes. The Satisfaction with Food-related Life (SWFL) scale was used. Generalized linear models (GLM) were used for the regression analysis. The results led to different considerations: First, older adults' perceived levels of resources are a good reflection of their actual levels of resources. Second, the individuals rated the sum of the perceived resources as 'highly important' to explain older adults' satisfaction with food-related life. Third, SWFL was predicted by satisfaction with economic situation, family importance, quantity of domestic household goods and a relative health indicator. Fourth, older adults who believe they have more resources compared to others are more satisfied with their food-related life. Finally, Poisson and binomial logistic models showed that the sum of perceived resources significantly increased the prediction of SWFL. The main conclusion is that perceived personal resources are a strong predictor of SWFL in Chilean older adults.
Price, Matthew; Anderson, Page; Henrich, Christopher C; Rothbaum, Barbara Olasov
2008-12-01
A client's expectation that therapy will be beneficial has long been considered an important factor contributing to therapeutic outcomes, but recent empirical work examining this hypothesis has primarily yielded null findings. The present study examined the contribution of expectancies for treatment outcome to actual treatment outcome from the start of therapy through 12-month follow-up in a clinical sample of individuals (n=72) treated for fear of flying with either in vivo exposure or virtual reality exposure therapy. Using a piecewise hierarchical linear model, outcome expectancy predicted treatment gains made during therapy but not during follow-up. Compared to lower levels, higher expectations for treatment outcome yielded stronger rates of symptom reduction from the beginning to the end of treatment on 2 standardized self-report questionnaires on fear of flying. The analytic approach of the current study is one potential reason that findings contrast with prior literature. The advantages of using hierarchical linear modeling to assess interindividual differences in longitudinal data are discussed.
Quantifying high dimensional entanglement with two mutually unbiased bases
Paul Erker
2017-07-01
We derive a framework for quantifying entanglement in multipartite and high dimensional systems using only correlations in two unbiased bases. We furthermore develop such bounds in cases where the second basis is not characterized beyond being unbiased, thus enabling entanglement quantification with minimal assumptions. Furthermore, we show that it is feasible to experimentally implement our method with readily available equipment and even conservative estimates of physical parameters.
Quantum process reconstruction based on mutually unbiased basis
Fernandez-Perez, A.; Saavedra, C.; Klimov, A. B.
2011-01-01
We study a quantum process reconstruction based on the use of mutually unbiased projectors (MUB projectors) as input states for a D-dimensional quantum system, with D being a power of a prime number. This approach connects the results of quantum-state tomography using mutually unbiased bases with the coefficients of a quantum process, expanded in terms of MUB projectors. We also study the performance of the reconstruction scheme against random errors when measuring probabilities at the MUB projectors.
Unbiased metal oxide semiconductor ionising radiation dosemeter
Kumurdjian, N.; Sarrabayrouse, G.J.
1995-01-01
For the application of MOS devices as low-dose-rate dosemeters, sensitivity is the major factor, although few studies have addressed it. It is studied here, along with the thermal stability and linearity of the response curve. Other advantages are noted, such as the large measurable dose range, low cost, small size, and possibility of integration. (D.L.)
Unbiased diffusion of Brownian particles on disordered correlated potentials
Salgado-Garcia, Raúl; Maldonado, Cesar
2015-01-01
In this work we study the diffusion of non-interacting overdamped particles, moving on unbiased disordered correlated potentials, subjected to Gaussian white noise. We obtain an exact expression for the diffusion coefficient which allows us to prove that the unbiased diffusion of overdamped particles on a random polymer does not depend on the correlations of the disordered potentials. This universal behavior of the unbiased diffusivity is a direct consequence of the validity of the Einstein relation and the decay of correlations of the random polymer. We test the independence on correlations of the diffusion coefficient for correlated polymers produced by two different stochastic processes, a one-step Markov chain and the expansion-modification system. Within the accuracy of our simulations, we found that the numerically obtained diffusion coefficient for these systems agree with the analytically calculated ones, confirming our predictions. (paper)
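For orientation, the classical Lifson-Jackson expression for the effective diffusivity of an overdamped particle in a potential V(x) (a standard textbook result, not necessarily the exact formula derived in the paper) makes the claimed independence from correlations plausible, since only single-point averages of the potential enter:

```latex
D_{\mathrm{eff}} \;=\; \frac{D_0}{\left\langle e^{\beta V}\right\rangle \left\langle e^{-\beta V}\right\rangle},
\qquad \beta = (k_B T)^{-1},
```

where D_0 is the free diffusion coefficient and ⟨·⟩ denotes the spatial average, which by ergodicity reduces to a single-site ensemble average, so pair correlations of the disorder drop out.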
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
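The distinction the study turns on, variance as a descriptive statistic (divide by n, biased) versus as an inferential estimator (divide by n−1, unbiased), shows up directly in a small simulation; the sample sizes and repetition counts below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_var = 5, 4.0  # small samples from N(0, sd=2), so true variance is 4
biased, unbiased = [], []
for _ in range(20000):
    x = rng.normal(0.0, 2.0, size=n)
    biased.append(np.var(x))            # divides by n      -> systematically low
    unbiased.append(np.var(x, ddof=1))  # divides by n - 1  -> unbiased
print(float(np.mean(biased)), float(np.mean(unbiased)))  # ≈ 3.2 and ≈ 4.0
```

Averaged over many samples, the n-divisor estimate converges to (n−1)/n times the true variance, while the n−1 divisor recovers it, which is exactly the inferential point teachers need to convey about standard deviation.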
Unbiased stereologic techniques for practical use in diagnostic histopathology
Sørensen, Flemming Brandt
1995-01-01
by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding......Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may...... of solid tumors. This new, unbiased attitude to malignancy grading is associated with excellent virtues, which ultimately may help the clinician in the choice of optimal treatment of the individual patient suffering from cancer. Stereologic methods are not solely applicable to the field of malignancy...
Triangulation based inclusion probabilities: a design-unbiased sampling approach
Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph
2011-01-01
A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...
Unbiased estimators for spatial distribution functions of classical fluids
Adib, Artur B.; Jarzynski, Christopher
2005-01-01
We use a statistical-mechanical identity closely related to the familiar virial theorem, to derive unbiased estimators for spatial distribution functions of classical fluids. In particular, we obtain estimators for both the fluid density ρ(r) in the vicinity of a fixed solute and the pair correlation g(r) of a homogeneous classical fluid. We illustrate the utility of our estimators with numerical examples, which reveal advantages over traditional histogram-based methods of computing such distributions.
Mafu, M
2013-09-01
We present an experimental study of higher-dimensional quantum key distribution protocols based on mutually unbiased bases, implemented by means of photons carrying orbital angular momentum. We perform (d + 1) mutually unbiased measurements in a...
Peng, Yijie; Fu, Michael C.; Hu, Jian Qiang; Heidergott, Bernd
In this paper, we propose a new unbiased stochastic derivative estimator in a framework that can handle discontinuous sample performances with structural parameters. This work extends the three most popular unbiased stochastic derivative estimators: (1) infinitesimal perturbation analysis (IPA), (2)
Quantum circuit implementation of cyclic mutually unbiased bases
Seyfarth, Ulrich; Dittmann, Niklas; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany)
2013-07-01
Complete sets of mutually unbiased bases (MUBs) play an important role in the areas of quantum state tomography and quantum cryptography. Sets which can be generated cyclically may eliminate certain side-channel attacks. To profit from the advantages of these MUBs we propose a method for deriving a quantum circuit that implements the generator of a set into an experimental setup. For some dimensions this circuit is minimal. The presented method is in principle applicable for a larger set of operations and generalizes recently published results.
Characteristic properties of Fibonacci-based mutually unbiased bases
Seyfarth, Ulrich; Alber, Gernot [Institut fuer Angewandte Physik, Technische Universitaet Darmstadt, 64289 Darmstadt (Germany); Ranade, Kedar [Institut fuer Quantenphysik, Universitaet Ulm, Albert-Einstein-Allee 11, 89069 Ulm (Germany)
2012-07-01
Complete sets of mutually unbiased bases (MUBs) offer interesting applications in quantum information processing ranging from quantum cryptography to quantum state tomography. Different construction schemes provide different perspectives on these bases which are typically also deeply connected to various mathematical research areas. In this talk we discuss characteristic properties resulting from a recently established connection between construction methods for cyclic MUBs and Fibonacci polynomials. As a remarkable fact this connection leads to construction methods which do not involve any relations to mathematical properties of finite fields.
On the mathematical foundations of mutually unbiased bases
Thas, Koen
2018-02-01
In order to describe a setting to handle Zauner's conjecture on mutually unbiased bases (MUBs) (stating that in C^d, a set of MUBs of the theoretical maximal size d + 1 exists only if d is a prime power), we pose some fundamental questions which naturally arise. Some of these questions have important consequences for the construction theory of (new) sets of maximal MUBs. Partial answers will be provided in particular cases; more specifically, we will analyze MUBs with associated operator groups that have nilpotence class 2, and consider MUBs of height 1. We will also confirm Zauner's conjecture for MUBs with associated finite nilpotent operator groups.
Biased and unbiased perceptual decision-making on vocal emotions.
Dricu, Mihai; Ceravolo, Leonardo; Grandjean, Didier; Frühholz, Sascha
2017-11-24
Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region of interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data shows that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.
Unbiased classification of spatial strategies in the Barnes maze.
Illouz, Tomer; Madar, Ravit; Clague, Charlotte; Griffioen, Kathleen J; Louzoun, Yoram; Okun, Eitan
2016-11-01
Spatial learning is one of the most widely studied cognitive domains in neuroscience. The Morris water maze and the Barnes maze are the most commonly used techniques to assess spatial learning and memory in rodents. Despite the fact that these tasks are well-validated paradigms for testing spatial learning abilities, manual categorization of performance into behavioral strategies is subject to individual interpretation, and thus to bias. We have previously described an unbiased machine-learning algorithm to classify spatial strategies in the Morris water maze. Here, we offer a support vector machine-based, automated, Barnes-maze unbiased strategy (BUNS) classification algorithm, as well as a cognitive score scale that can be used for memory acquisition, reversal training and probe trials. The BUNS algorithm can greatly benefit Barnes maze users as it provides a standardized method of strategy classification and cognitive scoring scale, which cannot be derived from typical Barnes maze data analysis. Freely available on the web at http://okunlab.wix.com/okunlab as a MATLAB application. Contact: eitan.okun@biu.ac.il. Supplementary information: Supplementary data are available at Bioinformatics online.
Unbiased water and methanol maser surveys of NGC 1333
Lyo, A-Ran; Kim, Jongsoo; Byun, Do-Young; Lee, Ho-Gyu, E-mail: arl@kasi.re.kr [Korea Astronomy and Space Science Institute, 776, Daedeokdae-ro Yuseong-gu, Daejeon 305-348 (Korea, Republic of)
2014-11-01
We present the results of unbiased 22 GHz H₂O water and 44 GHz class I CH₃OH methanol maser surveys in the central 7' × 10' area of NGC 1333 and two additional mapping observations of a 22 GHz water maser in a ∼3' × 3' area of the IRAS4A region. In the 22 GHz water maser survey of NGC 1333 with a sensitivity of σ ∼ 0.3 Jy, we confirmed the detection of masers toward H₂O(B) in the region of HH 7-11 and IRAS4B. We also detected new water masers located ∼20'' away in the western direction of IRAS4B or ∼25'' away in the southern direction of IRAS4A. We could not, however, find young stellar objects or molecular outflows associated with them. They showed two different velocity components of ∼0 and ∼16 km s⁻¹, which are blue- and redshifted relative to the adopted systemic velocity of ∼7 km s⁻¹ for NGC 1333. They also showed time variabilities in both intensity and velocity from multi-epoch observations and an anti-correlation between the intensities of the blue- and redshifted velocity components. We suggest that the unidentified power source of these masers might be found in the earliest evolutionary stage of star formation, before the onset of molecular outflows. Finding this kind of water maser is only possible through an unbiased blind survey. In the 44 GHz methanol maser survey with a sensitivity of σ ∼ 0.5 Jy, we confirmed masers toward IRAS4A2 and the eastern shock region of IRAS2A. Both sources are also detected in 95 and 132 GHz methanol maser lines. In addition, we had new detections of methanol masers at 95 and 132 GHz toward IRAS4B. In terms of the isotropic luminosity, we detected methanol maser sources brighter than ∼5 × 10²⁵ erg s⁻¹ from our unbiased survey.
Prada-Sanchez, J.M.; Febrero-Bande, M.; Gonzalez-Manteiga, W. [Universidad de Santiago de Compostela, Dept. de Estadistica e Investigacion Operativa, Santiago de Compostela (Spain); Costos-Yanez, T. [Universidad de Vigo, Dept. de Estadistica e Investigacion Operativa, Orense (Spain); Bermudez-Cela, J.L.; Lucas-Dominguez, T. [Laboratorio, Central Termica de As Pontes, La Coruna (Spain)
2000-07-01
Atmospheric SO₂ concentrations at sampling stations near the fossil fuel fired power station at As Pontes (La Coruna, Spain) were predicted using a model for the corresponding time series consisting of a self-explicative term and a linear combination of exogenous variables. In a supplementary simulation study, models of this kind behaved better than the corresponding pure self-explicative or pure linear regression models. (Author)
Prada-Sanchez, J.M.; Febrero-Bande, M.; Gonzalez-Manteiga, W.; Costos-Yanez, T.; Bermudez-Cela, J.L.; Lucas-Dominguez, T.
2000-01-01
Atmospheric SO₂ concentrations at sampling stations near the fossil fuel fired power station at As Pontes (La Coruna, Spain) were predicted using a model for the corresponding time series consisting of a self-explicative term and a linear combination of exogenous variables. In a supplementary simulation study, models of this kind behaved better than the corresponding pure self-explicative or pure linear regression models. (Author)
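The model described in this record combines a self-explicative (autoregressive) term with a linear combination of exogenous variables. As a minimal sketch of that idea, an ARX-type regression y_t = a·y_{t−1} + b·x_t can be fitted by ordinary least squares; the coefficients and the synthetic series below are illustrative, not the authors' actual semiparametric SO₂ model.

```python
import math

def fit_arx(y, x):
    """Fit y_t = a*y_{t-1} + b*x_t by least squares (2x2 normal equations)."""
    S11 = S12 = S22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        u, v = y[t - 1], x[t]
        S11 += u * u; S12 += u * v; S22 += v * v
        r1 += u * y[t]; r2 += v * y[t]
    det = S11 * S22 - S12 * S12
    a = (S22 * r1 - S12 * r2) / det
    b = (S11 * r2 - S12 * r1) / det
    return a, b

# Synthetic series with known coefficients (noise-free for clarity):
# the exogenous variable is a slow oscillation, the response follows the ARX law.
x = [1.0 + 0.5 * math.sin(0.3 * t) for t in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.7 * y[t - 1] + 2.0 * x[t])

a, b = fit_arx(y, x)
```

With noise-free data the normal equations recover the generating coefficients (0.7 and 2.0) exactly up to floating-point error.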
Pyroelectric photovoltaic spatial solitons in unbiased photorefractive crystals
Jiang, Qichang; Su, Yanli; Ji, Xuanmang
2012-01-01
A new type of spatial soliton, the pyroelectric photovoltaic spatial soliton, based on the combined pyroelectric and photovoltaic effects, is predicted theoretically. We show that bright, dark and grey spatial solitons can exist in unbiased photovoltaic photorefractive crystals with an appreciable pyroelectric effect. In particular, bright solitons can form in self-defocusing photovoltaic crystals if the self-focusing pyroelectric effect is larger. -- Highlights: ► A new type of spatial soliton, the pyroelectric photovoltaic spatial soliton, is predicted. ► Bright, dark and grey pyroelectric photovoltaic spatial solitons can form. ► Bright solitons can also exist in self-defocusing photovoltaic crystals.
Unbiased stereologic techniques for practical use in diagnostic histopathology
Sørensen, Flemming Brandt
1995-01-01
Grading of malignancy by the examination of morphologic and cytologic details in histologic sections from malignant neoplasms is based exclusively on qualitative features, associated with significant subjectivity, and thus rather poor reproducibility. The traditional way of malignancy grading may...... by introducing quantitative techniques in the histopathologic discipline of malignancy grading. Unbiased stereologic methods, especially based on measurements of nuclear three-dimensional mean size, have during the last decade proved their value in this regard. In this survey, the methods are reviewed regarding...... the basic technique involved, sampling, efficiency, and reproducibility. Various types of cancers, where stereologic grading of malignancy has been used, are reviewed and discussed with regard to the development of a new objective and reproducible basis for carrying out prognosis-related malignancy grading...
Unextendible Mutually Unbiased Bases (after Mandayam, Bandyopadhyay, Grassl and Wootters)
Koen Thas
2016-11-01
We consider questions posed in a recent paper of Mandayam et al. (2014) on the nature of “unextendible mutually unbiased bases.” We describe a conceptual framework to study these questions, using a connection proved by the author in Thas (2009) between the set of nonidentity generalized Pauli operators on the Hilbert space of N d-level quantum systems, d a prime, and the geometry of non-degenerate alternating bilinear forms of rank N over the finite field F_d. We then supply alternative and short proofs of results obtained in Mandayam et al. (2014), as well as new general bounds for the problems considered in loc. cit. In this setting, we also solve Conjecture 1 of Mandayam et al. (2014) and speculate on variations of this conjecture.
Rethinking economy-wide rebound measures: An unbiased proposal
Guerra, Ana-Isabel; Sancho, Ferran
2010-01-01
In spite of having first been introduced in the second half of the nineteenth century, the debate about possible rebound effects from energy efficiency improvements is still an open question in the economic literature. This paper contributes to the existing research on this issue by proposing an unbiased measure for economy-wide rebound effects. The novelty of this economy-wide rebound measure stems from the fact that not only actual energy savings but also potential energy savings are quantified under general equilibrium conditions. Our findings indicate that using engineering savings instead of general-equilibrium potential savings biases economy-wide rebound effects downward and backfire effects upward. The discrepancies between the traditional indicator and our proposed measure are analysed in the context of the Spanish economy.
Unbiased determination of polarized parton distributions and their uncertainties
Ball, Richard D.; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan
2013-01-01
We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, ...
Coupé, Christophe
2018-01-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for 'difficult' variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships. Relying on GAMLSS, we
Christophe Coupé
2018-04-01
As statistical approaches are getting increasingly used in linguistics, attention must be paid to the choice of methods and algorithms used. This is especially true since they require assumptions to be satisfied to provide valid results, and because scientific articles still often fall short of reporting whether such assumptions are met. Progress is, however, being made in various directions, one of them being the introduction of techniques able to model data that cannot be properly analyzed with simpler linear regression models. We report recent advances in statistical modeling in linguistics. We first describe linear mixed-effects regression models (LMM), which address grouping of observations, and generalized linear mixed-effects models (GLMM), which offer a family of distributions for the dependent variable. Generalized additive models (GAM) are then introduced, which allow modeling non-linear parametric or non-parametric relationships between the dependent variable and the predictors. We then highlight the possibilities offered by generalized additive models for location, scale, and shape (GAMLSS). We explain how they make it possible to go beyond common distributions, such as Gaussian or Poisson, and offer the appropriate inferential framework to account for ‘difficult’ variables such as count data with strong overdispersion. We also demonstrate how they offer interesting perspectives on data when not only the mean of the dependent variable is modeled, but also its variance, skewness, and kurtosis. As an illustration, the case of phonemic inventory size is analyzed throughout the article. For over 1,500 languages, we consider as predictors the number of speakers, the distance from Africa, an estimation of the intensity of language contact, and linguistic relationships. We discuss the use of random effects to account for genealogical relationships, the choice of appropriate distributions to model count data, and non-linear relationships
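The GAMLSS idea highlighted above, modeling not only the mean but also the variance (and higher moments) as functions of predictors, can be sketched with a toy two-stage least-squares fit on synthetic heteroscedastic data. This is an illustrative simplification, not the article's analysis of phonemic inventories, and the coefficients are invented.

```python
import random, math

random.seed(42)
n = 2000
x = [random.uniform(0, 1) for _ in range(n)]
# Heteroscedastic data: mean 2 + 3x, standard deviation exp(-1 + 2x)
y = [2 + 3 * xi + random.gauss(0, math.exp(-1 + 2 * xi)) for xi in x]

def ols(u, v):
    """Simple-regression intercept and slope of v on u."""
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    b = sum((a - mu) * (c - mv) for a, c in zip(u, v)) / \
        sum((a - mu) ** 2 for a in u)
    return mv - b * mu, b

# Stage 1: model the mean (ordinary regression)
a0, a1 = ols(x, y)
resid = [yi - (a0 + a1 * xi) for xi, yi in zip(x, y)]
# Stage 2: model the log-variance from squared residuals (the "scale" part
# of a GAMLSS-style fit); its slope estimates 2 * 2 = 4
g0, g1 = ols(x, [math.log(r * r + 1e-12) for r in resid])
```

The mean-model slope recovers roughly 3 and the log-variance slope roughly 4, showing that the spread of the response carries its own signal about the predictor.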
Mutually Unbiased Maximally Entangled Bases for the Bipartite System C^d ⊗ C^{dk}
Nan, Hua; Tao, Yuan-Hong; Wang, Tian-Jiao; Zhang, Jun
2016-10-01
The construction of maximally entangled bases for the bipartite system C^d ⊗ C^d is discussed first, and some mutually unbiased bases with maximally entangled bases are given, where 2 ≤ d ≤ 5. Moreover, we study a systematic way of constructing mutually unbiased maximally entangled bases for the bipartite system C^d ⊗ C^{dk}.
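"Mutually unbiased" means that every vector of one orthonormal basis has squared overlap 1/d with every vector of the other. A quick numerical check of this defining property for the standard pair (computational and Fourier bases) in d = 3, purely as an illustration of the definition rather than the paper's maximally entangled construction:

```python
import cmath

d = 3
w = cmath.exp(2j * cmath.pi / d)
# Computational basis and the discrete-Fourier basis: a standard mutually
# unbiased pair in prime dimension
comp = [[1.0 if i == j else 0.0 for i in range(d)] for j in range(d)]
four = [[w ** (j * k) / d ** 0.5 for k in range(d)] for j in range(d)]

def overlap2(u, v):
    """Squared magnitude of the inner product <u|v>."""
    return abs(sum(a.conjugate() * b for a, b in zip(u, v))) ** 2

# Mutual unbiasedness: every cross overlap equals 1/d
ok = all(abs(overlap2(u, v) - 1 / d) < 1e-12 for u in comp for v in four)
```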
Unbiased methods for removing systematics from galaxy clustering measurements
Elsner, Franz; Leistedt, Boris; Peiris, Hiranya V.
2016-02-01
Measuring the angular clustering of galaxies as a function of redshift is a powerful method for extracting information from the three-dimensional galaxy distribution. The precision of such measurements will dramatically increase with ongoing and future wide-field galaxy surveys. However, these are also increasingly sensitive to observational and astrophysical contaminants. Here, we study the statistical properties of three methods proposed for controlling such systematics - template subtraction, basic mode projection, and extended mode projection - all of which make use of externally supplied template maps, designed to characterize and capture the spatial variations of potential systematic effects. Based on a detailed mathematical analysis, and in agreement with simulations, we find that the template subtraction method in its original formulation returns biased estimates of the galaxy angular clustering. We derive closed-form expressions that should be used to correct results for this shortcoming. Turning to the basic mode projection algorithm, we prove it to be free of any bias, whereas we conclude that results computed with extended mode projection are biased. Within a simplified setup, we derive analytical expressions for the bias and discuss the options for correcting it in more realistic configurations. Common to all three methods is an increased estimator variance induced by the cleaning process, albeit at different levels. These results enable unbiased high-precision clustering measurements in the presence of spatially varying systematics, an essential step towards realizing the full potential of current and planned galaxy surveys.
Unbiased determination of polarized parton distributions and their uncertainties
Ball, Richard D. [Tait Institute, University of Edinburgh, JCMB, KB, Mayfield Rd, Edinburgh EH9 3JZ, Scotland (United Kingdom); Forte, Stefano, E-mail: forte@mi.infn.it [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Guffanti, Alberto [The Niels Bohr International Academy and Discovery Center, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Nocera, Emanuele R. [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Genova (Italy); Rojo, Juan [PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland)
2013-09-01
We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.
Unbiased determination of polarized parton distributions and their uncertainties
Ball, Richard D.; Forte, Stefano; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan
2013-01-01
We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations
Mutually unbiased bases and trinary operator sets for N qutrits
Lawrence, Jay
2004-01-01
A complete orthonormal basis of N-qutrit unitary operators drawn from the Pauli group consists of the identity and 9^N − 1 traceless operators. The traceless ones partition into 3^N + 1 maximally commuting subsets (MCSs) of 3^N − 1 operators each, whose joint eigenbases are mutually unbiased. We prove that Pauli factor groups of order 3^N are isomorphic to all MCSs and show how this result applies in specific cases. For two qutrits, the 80 traceless operators partition into 10 MCSs. We prove that 4 of the corresponding basis sets must be separable, while 6 must be totally entangled (and Bell-like). For three qutrits, 728 operators partition into 28 MCSs with less rigid structure, allowing for the coexistence of separable, partially entangled, and totally entangled (GHZ-like) bases. However, a minimum of 16 GHZ-like bases must occur. Every basis state is described by an N-digit trinary number consisting of the eigenvalues of N observables constructed from the corresponding MCS.
Within-subject template estimation for unbiased longitudinal image analysis.
Reuter, Martin; Schmansky, Nicholas J; Rosas, H Diana; Fischl, Bruce
2012-07-16
Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects. Copyright © 2012 Elsevier Inc. All rights reserved.
SU(2) nonstandard bases: the case of mutually unbiased bases
Albouy, Olivier; Kibler, Maurice R.
2007-02-01
This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme {j², j_z} by a scheme {j², v_ra}, where the two-parameter operator v_ra is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators {j², v_ra} are adapted to a tower of chains SO(3) ⊃ C_{2j+1} (2j ∈ N*), where C_{2j+1} is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are exposed in three appendices. (authors)
Unbiased contaminant removal for 3D galaxy power spectrum measurements
Kalus, B.; Percival, W. J.; Bacon, D. J.; Samushia, L.
2016-11-01
We assess and develop techniques to remove contaminants when calculating the 3D galaxy power spectrum. We separate the process into three separate stages: (I) removing the contaminant signal, (II) estimating the uncontaminated cosmological power spectrum and (III) debiasing the resulting estimates. For (I), we show that removing the best-fitting contaminant (mode subtraction) and setting the contaminated components of the covariance to be infinite (mode deprojection) are mathematically equivalent. For (II), performing a quadratic maximum likelihood (QML) estimate after mode deprojection gives an optimal unbiased solution, although it requires the manipulation of large N_mode × N_mode matrices (N_mode being the total number of modes), which is unfeasible for recent 3D galaxy surveys. Measuring a binned average of the modes for (II) as proposed by Feldman, Kaiser & Peacock (FKP) is faster and simpler, but is sub-optimal and gives rise to a biased solution. We present a method to debias the resulting FKP measurements that does not require any large matrix calculations. We argue that the sub-optimality of the FKP estimator compared with the QML estimator, caused by contaminants, is less severe than that commonly ignored due to the survey window.
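The stage-(I) equivalence stated in this abstract, that removing the best-fitting contaminant amplitude (mode subtraction) and projecting the contaminated mode out of the data (mode deprojection) coincide, can be checked numerically in a toy setting: subtracting the least-squares amplitude of a template t from data d gives the same vector as applying the projector P = I − t tᵀ/(tᵀ t). The 4-vectors below are arbitrary illustrations, not survey data.

```python
d = [1.0, 2.0, 0.5, -1.0]   # toy "data" vector
t = [1.0, 1.0, -1.0, 0.0]   # toy contaminant template

dot = lambda a, b: sum(x * y for x, y in zip(a, b))

# Mode subtraction: remove the least-squares amplitude of the template
alpha = dot(d, t) / dot(t, t)
sub = [di - alpha * ti for di, ti in zip(d, t)]

# Mode deprojection: apply the projector P = I - t t^T / (t^T t) to d
proj = [di - ti * dot(t, d) / dot(t, t) for di, ti in zip(d, t)]
```

Both vectors agree componentwise, and the cleaned data are exactly orthogonal to the template, i.e. the contaminated mode carries no remaining signal.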
Szadkowski, Zbigniew [University of Lodz, Department of Physics and Applied Informatics, 90-236 Lodz, (Poland)
2015-07-01
We present a new approach to the filtering of radio frequency interference (RFI) in the Auger Engineering Radio Array (AERA), which studies the electromagnetic part of extensive air showers. The radio stations can observe radio signals caused by coherent emission due to geomagnetic radiation and charge-excess processes. AERA observes the frequency band from 30 to 80 MHz. This range is highly contaminated by human-made RFI. In order to improve the signal-to-noise ratio, RFI filters are used in AERA to suppress this contamination. The first kind of filter used by AERA was the median filter, based on the Fast Fourier Transform (FFT) technique. The second one, which is currently in use, is an infinite impulse response (IIR) notch filter. The proposed new filter is a finite impulse response (FIR) filter based on linear prediction (LP). A periodic contamination hidden in a registered signal (digitized in the ADC) can be extracted and then subtracted to make the signal cleaner. The FIR filter requires the calculation of n = 32, 64 or even 128 coefficients (depending on the required speed or accuracy) by solving n linear equations whose coefficients form a covariance Toeplitz matrix. This system can be solved by the Levinson recursion, which is much faster than the Gauss procedure. The filter has already been tested in the real AERA radio stations on the Argentinean pampas with very successful results. The linear equations were solved either in the virtual soft-core NIOS processor (implemented in the FPGA chip as a net of logic elements) or in the external Voipac PXA270M ARM processor. The NIOS processor is relatively slow (50 MHz internal clock); calculations performed in an external processor consume a significant amount of time for data exchange between the FPGA and the processor. Tests showed a very good efficiency of the RFI suppression for stationary (long-term) contaminations. However, we observed short-time contaminations, which could not be suppressed either by the
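The Levinson recursion mentioned above solves the Toeplitz normal equations of linear prediction in O(n²) operations instead of the O(n³) of Gaussian elimination. A compact sketch of the generic Levinson-Durbin algorithm (not AERA's FPGA implementation), verified on the exact Yule-Walker autocorrelations of a known AR(2) process:

```python
def levinson_durbin(r, p):
    """Solve the order-p Yule-Walker/LP Toeplitz system via Levinson-Durbin.

    r: autocorrelation sequence r[0..p].
    Returns (prediction coefficients a[1..p], residual power).
    """
    a = [0.0] * (p + 1)
    err = r[0]
    for i in range(1, p + 1):
        # Reflection coefficient from the current prediction error
        k = (r[i] - sum(a[j] * r[i - j] for j in range(1, i))) / err
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= 1.0 - k * k
    return a[1:], err

# Exact autocorrelations of the AR(2) process
# x_t = 0.5 x_{t-1} - 0.25 x_{t-2} + e_t
# (from the Yule-Walker relations: rho_1 = 0.4, rho_2 = -0.05)
coeffs, err = levinson_durbin([1.0, 0.4, -0.05], 2)
```

On exact autocorrelations the recursion returns the generating coefficients 0.5 and −0.25, illustrating why it is the standard fast solver for LP coefficient sets of size 32–128.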
UNBIASED INCLINATION DISTRIBUTIONS FOR OBJECTS IN THE KUIPER BELT
Gulbis, A. A. S.; Elliot, J. L.; Adams, E. R.; Benecchi, S. D.; Buie, M. W.; Trilling, D. E.; Wasserman, L. H.
2010-01-01
Using data from the Deep Ecliptic Survey (DES), we investigate the inclination distributions of objects in the Kuiper Belt. We present a derivation for observational bias removal and use this procedure to generate unbiased inclination distributions for Kuiper Belt objects (KBOs) of different DES dynamical classes, with respect to the Kuiper Belt plane. Consistent with previous results, we find that the inclination distribution for all DES KBOs is well fit by the sum of two Gaussians, or a Gaussian plus a generalized Lorentzian, multiplied by sin i. Approximately 80% of KBOs are in the high-inclination grouping. We find that Classical object inclinations are well fit by sin i multiplied by the sum of two Gaussians, with roughly even distribution between Gaussians of widths 2.0° (+0.6°, −0.5°) and 8.1° (+2.6°, −2.1°). Objects in different resonances exhibit different inclination distributions. The inclinations of Scattered objects are best matched by sin i multiplied by a single Gaussian that is centered at 19.1° (+3.9°, −3.6°) with a width of 6.9° (+4.1°, −2.7°). Centaur inclinations peak just below 20°, with one exceptionally high-inclination object near 80°. The currently observed inclination distribution of the Centaurs is not dissimilar to that of the Scattered Extended KBOs and Jupiter-family comets, but is significantly different from the Classical and Resonant KBOs. While the sample sizes of some dynamical classes are still small, these results should begin to serve as a critical diagnostic for models of solar system evolution.
Unbiased roughness measurements: the key to better etch performance
Liang, Andrew; Mack, Chris; Sirard, Stephen; Liang, Chen-wei; Yang, Liu; Jiang, Justin; Shamma, Nader; Wise, Rich; Yu, Jengyi; Hymes, Diane
2018-03-01
Edge placement error (EPE) has become an increasingly critical metric to enable Moore's Law scaling. Stochastic variations, as characterized for lines by line width roughness (LWR) and line edge roughness (LER), are dominant factors in EPE and known to increase with the introduction of EUV lithography. However, despite recommendations from ITRS, NIST, and SEMI standards, the industry has not agreed upon a methodology to quantify these properties. Thus, differing methodologies applied to the same image often result in different roughness measurements and conclusions. To standardize LWR and LER measurements, Fractilia has developed an unbiased measurement that uses a raw unfiltered line scan to subtract out image noise and distortions. By using Fractilia's inverse linescan model (FILM) to guide development, we will highlight the key influences of roughness metrology on plasma-based resist smoothing processes. Test wafers were deposited to represent a 5 nm node EUV logic stack. The patterning stack consists of a core Si target layer with spin-on carbon (SOC) as the hardmask and spin-on glass (SOG) as the cap. Next, these wafers were exposed through an ASML NXE 3350B EUV scanner with an advanced chemically amplified resist (CAR). Afterwards, these wafers were etched through a variety of plasma-based resist smoothing techniques using a Lam Kiyo conductor etch system. Dense line and space patterns on the etched samples were imaged through advanced Hitachi CDSEMs and the LER and LWR were measured through both Fractilia and an industry standard roughness measurement software. By employing Fractilia to guide plasma-based etch development, we demonstrate that Fractilia produces accurate roughness measurements on resist in contrast to an industry standard measurement software. These results highlight the importance of subtracting out SEM image noise to obtain quicker developmental cycle times and lower target layer roughness.
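The core of an "unbiased" roughness measurement, as described above, is that SEM image noise adds in quadrature to the true edge variance, so a raw LWR estimate overestimates the physical roughness unless the noise contribution is subtracted. A toy numerical illustration with synthetic numbers (this is the variance-subtraction principle only, not Fractilia's FILM algorithm):

```python
import random, math

random.seed(7)
true_sigma, noise_sigma = 2.0, 1.5   # illustrative "nm" values, not real data
n = 100_000

# Each measured edge deviation = physical roughness + SEM metrology noise
measured = [random.gauss(0, true_sigma) + random.gauss(0, noise_sigma)
            for _ in range(n)]
var_meas = sum(v * v for v in measured) / n

# Biased estimate: take the measured variance at face value
biased = math.sqrt(var_meas)
# Unbiased estimate: subtract the (independently estimated) noise variance
unbiased = math.sqrt(var_meas - noise_sigma ** 2)
```

The biased estimate lands near sqrt(2.0² + 1.5²) = 2.5, while the noise-subtracted one recovers the true 2.0, which is why biased roughness numbers can mislead etch-process comparisons.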
Pablo Martinez-Martín
To estimate the magnitude in which Parkinson's disease (PD) symptoms and health-related quality of life (HRQoL) determined PD costs over a 4-year period. Data collected during 3 months each year, for 4 years, from the ELEP study included sociodemographic, clinical and use-of-resources information. Costs were calculated yearly, as mean 3-month costs/patient, and updated to Spanish €, 2012. Mixed linear models were performed to analyze total, direct and indirect costs based on symptoms and HRQoL. One hundred and seventy-four patients were included. Mean (SD) age: 63 (11) years; mean (SD) disease duration: 8 (6) years. Ninety-three percent were HY I, II or III (mild or moderate disease). Forty-nine percent remained in the same stage during the study period. Clinical evaluation and HRQoL scales showed relatively slight changes over time, demonstrating a stable group overall. Mean (SD) PD total costs increased by 92.5%, from €2,082.17 (€2,889.86) in year 1 to €4,008.6 (€7,757.35) in year 4. Total, direct and indirect costs increased by 45.96%, 35.63%, and 69.69% for mild disease, respectively, whereas they increased by 166.52% for total, 55.68% for direct and 347.85% for indirect costs in patients with moderate PD. For severe patients, costs remained almost the same throughout the study. For each additional point on the SCOPA-Motor scale, total costs increased by €75.72 (p = 0.0174); for each additional point on the SCOPA-Motor and the SCOPA-COG, direct costs increased by €49.21 (p = 0.0094) and €44.81 (p = 0.0404), respectively; and for each extra point on the pain scale, indirect costs increased by €16.31 (p = 0.0228). PD is an expensive disease in Spain. Disease progression and severity as well as motor and cognitive dysfunctions are major drivers of cost increments. Therapeutic measures aimed at controlling progression and symptoms could help contain disease expenses.
Madala, NE
2012-08-01
Metabolomics entails identification and quantification of all metabolites within a biological system with a given physiological status; as such, it should be unbiased. A variety of techniques are used to measure the metabolite content of living...
Faraway, Julian J
2014-01-01
A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz
Unbiased structural search of small copper clusters within DFT
Cogollo-Olivo, Beatriz H.; Seriani, Nicola; Montoya, Javier A.
2015-01-01
Highlights: • We have been able to identify novel metastable structures for small Cu clusters. • We have shown that a linear structure reported for Cu_3 is actually a local maximum. • Some of the structures reported in the literature are actually unstable within DFT. • Some of the isomer structures found show the limits of educated guesses. - Abstract: The atomic structure of small Cu clusters with 3–6 atoms has been investigated by density functional theory and a random search algorithm. New metastable structures have been found that lie merely tens of meV/atom above the corresponding ground state, and could therefore be present at thermodynamic equilibrium at room temperature or slightly above. Moreover, we show that the previously proposed linear configuration for Cu_3 is in fact a local maximum of the energy. Finally, we argue that the random search algorithm also provides qualitative information about the attraction basin of each structure in the energy landscape.
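The random-search strategy of this abstract, i.e. sampling random starting geometries and relaxing each one to its nearest local minimum, can be sketched with a classical toy: 3-atom clusters under a Lennard-Jones potential standing in for DFT (which is far too costly to reproduce here). All parameters are illustrative; the global minimum of a 3-atom LJ cluster is the equilateral triangle at energy −3ε, while the linear chain is only a shallower stationary arrangement, echoing the Cu_3 finding in spirit.

```python
import random

random.seed(1)

def lj_energy(pos):
    """Total Lennard-Jones energy (epsilon = sigma = 1) of a small cluster."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = sum((a - b) ** 2 for a, b in zip(pos[i], pos[j]))
            inv6 = 1.0 / r2 ** 3
            e += 4.0 * (inv6 * inv6 - inv6)
    return e

def relax(pos, steps=3000, h=1e-4, step=0.02):
    """Greedy descent with a numeric gradient and backtracking step control."""
    e = lj_energy(pos)
    for _ in range(steps):
        grad = []
        for i in range(len(pos)):
            g = []
            for k in range(3):
                pos[i][k] += h
                ep = lj_energy(pos)
                pos[i][k] -= h
                g.append((ep - e) / h)
            grad.append(g)
        new = [[pos[i][k] - step * grad[i][k] for k in range(3)]
               for i in range(len(pos))]
        en = lj_energy(new)
        if en < e:                 # accept only energy-lowering moves
            pos, e = new, en
            step *= 1.1
        else:
            step *= 0.5
    return pos, e

# Random search: several random 3-atom starts, keep the best relaxed energy
best = 0.0
for trial in range(5):
    start = [[random.uniform(0, 2.5) for _ in range(3)] for _ in range(3)]
    _, e = relax([p[:] for p in start])
    best = min(best, e)
```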
Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M
2013-02-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α = 0.05, power = 80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
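Power analyses of the kind quoted above typically follow the standard two-sample formula n = 2σ²(z₁₋α/₂ + z_power)²/Δ², where Δ is the detectable difference in the mean rate of change and σ its standard deviation. A sketch with purely illustrative effect-size numbers (these are not the ADNI-derived variances that produce the 39/95 figures):

```python
import math
from statistics import NormalDist

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Subjects per arm to detect `effect` with a two-sided z-approximation."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (sd * (z(1 - alpha / 2) + z(power)) / effect) ** 2)

# Illustrative: detect a 25% slowing of a hypothetical 1.5 %/yr atrophy rate,
# assuming the rate has SD 0.5 %/yr across subjects
n = n_per_arm(0.25 * 1.5, 0.5)
```

Halving the measurement variance quarters the required sample size, which is why a more precise (and unbiased) change measure directly shrinks trial size.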
Nicodemus, Kristin K; Malley, James D; Strobl, Carolin; Ziegler, Andreas
2010-02-27
Random forests (RF) have been increasingly used in applications such as genome-wide association and microarray studies where predictor correlation is frequently observed. Recent works on permutation-based variable importance measures (VIMs) used in RF have come to apparently contradictory conclusions. We present an extended simulation study to synthesize results. In the case when both predictor correlation was present and predictors were associated with the outcome (HA), the unconditional RF VIM attributed a higher share of importance to correlated predictors, while under the null hypothesis that no predictors are associated with the outcome (H0) the unconditional RF VIM was unbiased. Conditional VIMs showed a decrease in VIM values for correlated predictors versus the unconditional VIMs under HA and were unbiased under H0. Scaled VIMs were clearly biased under HA and H0. Unconditional unscaled VIMs are a computationally tractable choice for large datasets and are unbiased under the null hypothesis. Whether the observed increased VIMs for correlated predictors may be considered a "bias" - because they do not directly reflect the coefficients in the generating model - or if it is a beneficial attribute of these VIMs is dependent on the application. For example, in genetic association studies, where correlation between markers may help to localize the functionally relevant variant, the increased importance of correlated predictors may be an advantage. On the other hand, we show examples where this increased importance may result in spurious signals.
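The unconditional permutation VIM discussed above can be sketched outside the RF setting. Here a plain least-squares fit stands in for the learner and all variable names are illustrative; the point is the mechanic of permuting one column and measuring the loss increase:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x0 = rng.normal(size=n)                                      # active predictor
x1 = 0.9 * x0 + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=n)   # correlated, inactive
x2 = rng.normal(size=n)                                      # independent, inactive
X = np.column_stack([x0, x1, x2])
y = 2.0 * x0 + rng.normal(size=n)

# least-squares fit standing in for any predictive model
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
base_mse = np.mean((y - X @ beta) ** 2)

def perm_vim(j, reps=50):
    """Unconditional permutation importance of column j: mean increase
    in MSE after randomly permuting that column."""
    losses = []
    for _ in range(reps):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        losses.append(np.mean((y - Xp @ beta) ** 2))
    return np.mean(losses) - base_mse

vims = [perm_vim(j) for j in range(3)]
# the active predictor dominates; permuting an inactive column barely moves the loss
```

The conditional VIM of the paper instead permutes within strata of the correlated covariates, which is what removes the extra importance assigned to correlated-but-inactive predictors.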
Unbiased structural search of small copper clusters within DFT
Cogollo-Olivo, Beatriz H., E-mail: bcogolloo@unicartagena.edu.co [Maestría en Ciencias Físicas, Universidad de Cartagena, 130001 Cartagena de Indias, Bolívar (Colombia); Seriani, Nicola, E-mail: nseriani@ictp.it [Condensed Matter and Statistical Physics Section, The Abdus Salam ICTP, Strada Costiera 11, 34151 Trieste (Italy); Montoya, Javier A., E-mail: jmontoyam@unicartagena.edu.co [Instituto de Matemáticas Aplicadas, Universidad de Cartagena, 130001 Cartagena de Indias, Bolívar (Colombia); Associates Program, The Abdus Salam ICTP, Strada Costiera 11, 34151 Trieste (Italy)
2015-11-05
Highlights: • We have been able to identify novel metastable structures for small Cu clusters. • We have shown that a linear structure reported for Cu_3 is actually a local maximum. • Some of the structures reported in literature are actually unstable within DFT. • Some of the isomer structures found show the limits of educated guesses. - Abstract: The atomic structure of small Cu clusters with 3–6 atoms has been investigated by density functional theory and a random search algorithm. New metastable structures have been found that lie merely tens of meV/atom above the corresponding ground state, and could therefore be present at thermodynamic equilibrium at room temperature or slightly above. Moreover, we show that the previously proposed linear configuration for Cu_3 is in fact a local maximum of the energy. Finally, we argue that the random search algorithm also provides qualitative information about the attraction basin of each structure in the energy landscape.
Nikita A. Moiseev
2017-01-01
Full Text Available The paper is devoted to a new randomization method that yields unbiased adjustments of p-values for the predictors of linear regression models by incorporating the number of potential explanatory variables, their variance-covariance matrix and its uncertainty, based on the number of observations. This adjustment helps to control type I errors in scientific studies, significantly decreasing the number of publications that report false relations as authentic ones. Comparative analysis with existing methods such as the Bonferroni correction and the Shehata and White adjustments explicitly shows their imperfections, especially when the number of observations and the number of potential explanatory variables are approximately equal. The comparative analysis also showed that when the variance-covariance matrix of a set of potential predictors is diagonal, i.e. the data are independent, the proposed simple correction is the best and easiest way to obtain unbiased corrections of traditional p-values. However, in the presence of strongly correlated data, the simple correction overestimates the true p-values, which can lead to type II errors. It was also found that the corrected p-values depend on the number of observations, the number of potential explanatory variables and the sample variance-covariance matrix. For example, if there are only two potential explanatory variables competing for one position in the regression model and they are weakly correlated, the corrected p-value will be lower when the number of observations is smaller, and vice versa; if the data are highly correlated, the case with a larger number of observations will show a lower corrected p-value. With increasing correlation, all corrections, regardless of the number of observations, tend to the original p-value. This phenomenon is easy to explain: as the correlation coefficient tends to one, the two variables become almost linearly dependent on each other.
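The paper's randomization adjustment itself is not reproduced in the abstract. For orientation, the Bonferroni baseline it is compared against can be sketched in two lines, together with the Šidák variant (our addition, not named in the abstract), which is exact when the m candidate predictors are independent:

```python
def bonferroni(p, m):
    """Bonferroni-adjusted p-value when the best of m candidate
    predictors was selected: min(m * p, 1)."""
    return min(m * p, 1.0)

def sidak(p, m):
    """Sidak adjustment, exact for m independent candidate predictors."""
    return 1.0 - (1.0 - p) ** m

print(round(bonferroni(0.01, 10), 4))  # → 0.1
print(round(sidak(0.01, 10), 4))       # → 0.0956
```

Both corrections ignore the predictor correlation structure, which is exactly the gap the proposed variance-covariance-aware adjustment targets.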
Photonic quantum simulator for unbiased phase covariant cloning
Knoll, Laura T.; López Grande, Ignacio H.; Larotonda, Miguel A.
2018-01-01
We present the results of a linear optics photonic implementation of a quantum circuit that simulates a phase covariant cloner, using two different degrees of freedom of a single photon. We experimentally simulate the action of two mirrored 1→ 2 cloners, each of them biasing the cloned states into opposite regions of the Bloch sphere. We show that by applying a random sequence of these two cloners, an eavesdropper can mitigate the amount of noise added to the original input state and therefore, prepare clones with no bias, but with the same individual fidelity, masking its presence in a quantum key distribution protocol. Input polarization qubit states are cloned into path qubit states of the same photon, which is identified as a potential eavesdropper in a quantum key distribution protocol. The device has the flexibility to produce mirrored versions that optimally clone states on either the northern or southern hemispheres of the Bloch sphere, as well as to simulate optimal and non-optimal cloning machines by tuning the asymmetry on each of the cloning machines.
Raquel Lorenzo
2007-07-01
Full Text Available Knowledge of talent predictors is the starting point for building diagnostic and encouragement procedures in this field. To predict means to anticipate the future, to divine. Early prediction of high performance is a complex problem that science has not yet resolved, and there are many discrepancies about what to measure and how to measure it. The article analyzes the state of the art on this problem, since excellence is determined by the interaction between internal and environmental factors.
Ridge regression estimator: combining unbiased and ordinary ridge regression methods of estimation
Sharad Damodar Gore
2009-10-01
Full Text Available Statistical literature has several methods for coping with multicollinearity. This paper introduces a new shrinkage estimator, called modified unbiased ridge (MUR). This estimator is obtained from unbiased ridge regression (URR) in the same way that ordinary ridge regression (ORR) is obtained from ordinary least squares (OLS). Properties of MUR are derived. Results on its matrix mean squared error (MMSE) are obtained. MUR is compared with ORR and URR in terms of MMSE. These results are illustrated with an example based on data generated by Hoerl and Kennard (1975).
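For context, the ORR and URR estimators named above can be sketched in a few lines. The prior-guess form of URR shown here follows the common presentation, and the symbols k (ridge constant) and J (prior guess for beta) are our notation, not the paper's:

```python
import numpy as np

def orr(X, y, k):
    """Ordinary ridge regression: (X'X + kI)^(-1) X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def urr(X, y, k, J):
    """Unbiased ridge regression: shrink toward a prior guess J with
    E[J] = beta, which removes the ridge shrinkage bias."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * J)
```

With k = 0 both reduce to OLS, and setting J to the OLS estimate makes URR return the OLS estimate for any k, since (X'X + kI)^(-1)((X'X)b + kb) = b. MUR then combines these two constructions.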
Hierarchical thinking in network biology: the unbiased modularization of biochemical networks.
Papin, Jason A; Reed, Jennifer L; Palsson, Bernhard O
2004-12-01
As reconstructed biochemical reaction networks continue to grow in size and scope, there is a growing need to describe the functional modules within them. Such modules facilitate the study of biological processes by deconstructing complex biological networks into conceptually simple entities. The definition of network modules is often based on intuitive reasoning. As an alternative, methods are being developed for defining biochemical network modules in an unbiased fashion. These unbiased network modules are mathematically derived from the structure of the whole network under consideration.
Giovannini, D
2013-06-01
Full Text Available: QELS Fundamental Science, San Jose, California, United States, 9-14 June 2013. Reconstruction of High-Dimensional States Entangled in Orbital Angular Momentum Using Mutually Unbiased Measurements. D. Giovannini, J. Romero, J. Leach, A...
Encoding mutually unbiased bases in orbital angular momentum for quantum key distribution
Dudley, Angela L
2013-07-01
Full Text Available We encode mutually unbiased bases (MUBs) using the higher-dimensional orbital angular momentum (OAM) degree of freedom associated with optical fields. We illustrate how these states are encoded with the use of a spatial light modulator (SLM). We...
Critical point relascope sampling for unbiased volume estimation of downed coarse woody debris
Jeffrey H. Gove; Michael S. Williams; Mark J. Ducey
2005-01-01
Critical point relascope sampling is developed and shown to be design-unbiased for the estimation of log volume when used with point relascope sampling for downed coarse woody debris. The method is closely related to critical height sampling for standing trees when trees are first sampled with a wedge prism. Three alternative protocols for determining the critical...
An Unbiased Survey of 500 Nearby Stars for Debris Disks: A JCMT Legacy Program
Matthews, B.C.; Greaves, J.S.; Holland, W.S.; Wyatt, M.C.; Barlow, M.J.; Bastien, P.; Beichman, C.A.; Biggs, A.; Butner, H.M.; Dent, W.R.F.; Francesco, J. Di; Dominik, C.; Fissel, L.; Friberg, P.; Gibb, A.G.; Halpern, M.; Ivison, R.J.; Jayawardhana, R.; Jenness, T.; Johnstone, D.; Kavelaars, J.J.; Marshall, J.L.; Phillips, N.; Schieven, G.; Snellen, I.A.G.; Walker, H.J.; Ward-Thompson, D.; Weferling, B.; White, G.J.; Yates, J.; Zhu, M.; Craigon, A.
2007-01-01
We present the scientific motivation and observing plan for an upcoming detection survey for debris disks using the James Clerk Maxwell Telescope. The SCUBA-2 Unbiased Nearby Stars (SUNS) survey will observe 500 nearby main-sequence and subgiant stars (100 of each of the A, F, G, K, and M spectral classes).
An asymptotically unbiased minimum density power divergence estimator for the Pareto-tail index
Dierckx, Goedele; Goegebeur, Yuri; Guillou, Armelle
2013-01-01
We introduce a robust and asymptotically unbiased estimator for the tail index of Pareto-type distributions. The estimator is obtained by fitting the extended Pareto distribution to the relative excesses over a high threshold with the minimum density power divergence criterion. Consistency...
Hansen, Michael Edberg; Andersen, Birgitte; Smedsgaard, Jørn
2005-01-01
In this paper we present a method for unbiased/unsupervised classification and identification of closely related fungi, using chemical analysis of secondary metabolite profiles created by HPLC with UV diode array detection. For two chromatographic data matrices a vector of locally aligned full sp...
Andersen, Birgitte; Hansen, Michael Edberg; Smedsgaard, Jørn
2005-01-01
often has been broadly applied to various morphologically and chemically distinct groups of isolates from different hosts. The purpose of this study was to develop and evaluate automated and unbiased image analysis systems that will analyze different phenotypic characters and facilitate testing...
Application of Singh et al.'s unbiased estimator in a dual to ratio-cum ...
This paper applied an unbiased estimator in a dual to ratio–cum-product estimator in sample surveys to double sampling design. Its efficiency over the conventional biased double sampling design estimator was determined based on the conditions attached to its supremacy. Three different data sets were used to testify to ...
Unbiased stereological methods used for the quantitative evaluation of guided bone regeneration
Aaboe, Else Merete; Pinholt, E M; Schou, S
1998-01-01
The present study describes the use of unbiased stereological methods for the quantitative evaluation of the amount of regenerated bone. Using the principle of guided bone regeneration the amount of regenerated bone after placement of degradable or non-degradable membranes covering defects...
Linear Estimation of Standard Deviation of Logistic Distribution ...
The paper presents a theoretical method based on order statistics and a FORTRAN program for computing the variance and relative efficiencies of the standard deviation of the logistic population with respect to the Cramer-Rao lower variance bound and the best linear unbiased estimators (BLUEs) when the mean is ...
Advanced statistics: linear regression, part I: simple linear regression.
Marill, Keith A
2004-01-01
Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
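The method of least squares mentioned above has a closed form in the simple (one-predictor) case: the slope is the covariance of x and y over the variance of x, and the fitted line passes through the point of means. A minimal sketch:

```python
def simple_linreg(x, y):
    """Least-squares intercept a and slope b for the line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx          # slope: covariance over variance
    a = my - b * mx        # line passes through (mean x, mean y)
    return a, b

a, b = simple_linreg([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)  # → 1.0 2.0 (the data lie exactly on y = 1 + 2x)
```

The same normal-equations idea generalizes to multiple predictors, the subject of the second article in the series.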
Zhe Zhang, Z.; Liu, J.F.; Ding, Z.; Bijma, P.; Koning, de D.J.
2010-01-01
With the availability of high density whole-genome single nucleotide polymorphism chips, genomic selection has become a promising method to estimate genetic merit with potentially high accuracy for animal, plant and aquaculture species of economic importance. With markers covering the entire genome,
Hagger, C
1992-07-01
Two generations of selection on restricted BLUP breeding values were applied in an experiment with laying hens. Selection had been on phenotype of income minus feed cost (IFC) between 21 and 40 wk of age in the previous five generations. The restriction of no genetic change in egg weight was included in the EBV for power-transformed IFC (i.e., IFCt, with t-values of 3.7 and 3.6 in the two generations, respectively). The experiment consisted of two selection lines plus a randomly bred control of 20 male and 80 female breeders each. Observations on 8,844 survivors to 40 wk were available. Relative to the base population average, the restriction reduced genetic gain in IFC from 4.1 and 3.9% to 2.0 and 2.2% per generation in the two selection lines, respectively. Average EBV for egg weight remained nearly constant after a strong increase in the previous five generations. Rates of genetic gain for egg number, body weight, and feed conversion (feed/egg mass) were not affected significantly. In the seventh generation, a genetic gain in feed conversion of 10.3% relative to the phenotypic mean of the base population was obtained.
Rohde, Palle Duun; Demontis, Ditte; Børglum, Anders
is enriched for causal variants. Here we apply the GFBLUP model to a small schizophrenia case-control study to test the promise of this model on psychiatric disorders, and hypothesize that the performance will be increased when applying the model to a larger ADHD case-control study if the genomic feature...... contains the causal variants. Materials and Methods: The schizophrenia study consisted of 882 controls and 888 schizophrenia cases genotyped for 520,000 SNPs. The ADHD study contained 25,954 controls and 16,663 ADHD cases with 8.4 million imputed genotypes. Results: The predictive ability for schizophrenia.......6% for the null model). Conclusion: The improvement in predictive ability for schizophrenia was marginal, however, greater improvement is expected for the larger ADHD data....
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Optimal Trading with Alpha Predictors
Filippo Passerini; Samuel E. Vazquez
2015-01-01
We study the problem of optimal trading using general alpha predictors with linear costs and temporary impact. We do this within the framework of stochastic optimization with finite horizon using both limit and market orders. Consistently with other studies, we find that the presence of linear costs induces a no-trading zone when using market orders, and a corresponding market-making zone when using limit orders. We show that, when combining both market and limit orders, the problem is furthe...
Mutually unbiased coarse-grained measurements of two or more phase-space variables
Paul, E. C.; Walborn, S. P.; Tasca, D. S.; Rudnicki, Łukasz
2018-05-01
Mutual unbiasedness of the eigenstates of phase-space operators—such as position and momentum, or their standard coarse-grained versions—exists only in the limiting case of infinite squeezing. In Phys. Rev. Lett. 120, 040403 (2018), 10.1103/PhysRevLett.120.040403, it was shown that mutual unbiasedness can be recovered for periodic coarse graining of these two operators. Here we investigate mutual unbiasedness of coarse-grained measurements for more than two phase-space variables. We show that mutual unbiasedness can be recovered between periodic coarse graining of any two nonparallel phase-space operators. We illustrate these results through optics experiments, using the fractional Fourier transform to prepare and measure mutually unbiased phase-space variables. The differences between two and three mutually unbiased measurements is discussed. Our results contribute to bridging the gap between continuous and discrete quantum mechanics, and they could be useful in quantum-information protocols.
An Unbiased Distance-based Outlier Detection Approach for High-dimensional Data
Nguyen, Hoang Vu; Gopalkrishnan, Vivekanand; Assent, Ira
2011-01-01
than a global property. Different from existing approaches, it is not grid-based and dimensionality unbiased. Thus, its performance is impervious to grid resolution as well as the curse of dimensionality. In addition, our approach ranks the outliers, allowing users to select the number of desired...... outliers, thus mitigating the issue of high false alarm rate. Extensive empirical studies on real datasets show that our approach efficiently and effectively detects outliers, even in high-dimensional spaces....
Monofunctional stealth nanoparticle for unbiased single molecule tracking inside living cells.
Lisse, Domenik; Richter, Christian P; Drees, Christoph; Birkholz, Oliver; You, Changjiang; Rampazzo, Enrico; Piehler, Jacob
2014-01-01
On the basis of a protein cage scaffold, we have systematically explored intracellular application of nanoparticles for single molecule studies and discovered that recognition by the autophagy machinery plays a key role for rapid metabolism in the cytosol. Intracellular stealth nanoparticles were achieved by heavy surface PEGylation. By combination with a generic approach for nanoparticle monofunctionalization, efficient labeling of intracellular proteins with high fidelity was accomplished, allowing unbiased long-term tracking of proteins in the outer mitochondrial membrane.
Padilla, Alberto
2009-01-01
Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza
2017-09-27
Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
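The BLUP-ridge equivalence invoked above is easy to verify numerically. A minimal sketch with toy data and no fixed effects, where λ stands for the variance ratio σ_e²/σ_u² (all names and numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 10
Z = rng.normal(size=(n, p))                      # toy design matrix for random effects
y = Z @ rng.normal(size=p) + rng.normal(size=n)  # toy phenotype

lam = 3.0                                        # ratio sigma_e^2 / sigma_u^2

# BLUP of the random effects from the mixed-model equations
u_blup = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

# ridge regression with penalty lam, via the augmented least-squares trick:
# minimizing ||Zu - y||^2 + lam * ||u||^2
Z_aug = np.vstack([Z, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
u_ridge, *_ = np.linalg.lstsq(Z_aug, y_aug, rcond=None)

assert np.allclose(u_blup, u_ridge)
```

Because both solve (Z'Z + λI)u = Z'y, estimating the variance components of the mixed model fixes the ridge penalty directly, which is the shortcut the paper exploits instead of cross-validation.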
Hubbard, Rebecca A; Johnson, Eric; Chubak, Jessica; Wernli, Karen J; Kamineni, Aruna; Bogart, Andy; Rutter, Carolyn M
2017-06-01
Exposures derived from electronic health records (EHR) may be misclassified, leading to biased estimates of their association with outcomes of interest. An example of this problem arises in the context of cancer screening where test indication, the purpose for which a test was performed, is often unavailable. This poses a challenge to understanding the effectiveness of screening tests because estimates of screening test effectiveness are biased if some diagnostic tests are misclassified as screening. Prediction models have been developed for a variety of exposure variables that can be derived from EHR, but no previous research has investigated appropriate methods for obtaining unbiased association estimates using these predicted probabilities. The full likelihood incorporating information on both the predicted probability of exposure-class membership and the association between the exposure and outcome of interest can be expressed using a finite mixture model. When the regression model of interest is a generalized linear model (GLM), the expectation-maximization algorithm can be used to estimate the parameters using standard software for GLMs. Using simulation studies, we compared the bias and efficiency of this mixture model approach to alternative approaches including multiple imputation and dichotomization of the predicted probabilities to create a proxy for the missing predictor. The mixture model was the only approach that was unbiased across all scenarios investigated. Finally, we explored the performance of these alternatives in a study of colorectal cancer screening with colonoscopy. These findings have broad applicability in studies using EHR data where gold-standard exposures are unavailable and prediction models have been developed for estimating proxies.
Abbiendi, G.; Akesson, P.F.; Alexander, G.; Allison, John; Amaral, P.; Anagnostou, G.; Anderson, K.J.; Arcelli, S.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barlow, R.J.; Batley, R.J.; Bechtle, P.; Behnke, T.; Bell, Kenneth Watson; Bell, P.J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, Robert M.; Buesser, K.; Burckhart, H.J.; Campana, S.; Carnegie, R.K.; Caron, B.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Csilling, A.; Cuffiani, M.; Dado, S.; De Roeck, A.; De Wolf, E.A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Furtjes, A.; Gagnon, P.; Gary, John William; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, Marina; Goldberg, J.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G.G.; Harder, K.; Harel, A.; Harin-Dirac, M.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hill, J.C.; Hoffman, Kara Dion; Horvath, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karapetian, G.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klein, K.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Kormos, Laura L.; Kramer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Layter, J.G.; Leins, A.; Lellouch, D.; Letts, J.; Levinson, L.; Lillich, J.; Lloyd, S.L.; Loebinger, F.K.; Lu, J.; Ludwig, J.; Macpherson, A.; Mader, W.; Marcellini, S.; Martin, A.J.; Masetti, G.; Mashimo, T.; Mattig, Peter; McDonald, W.J.; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menges, W.; Merritt, F.S.; Mes, H.; Michelini, A.; Mihara, S.; Mikenberg, G.; 
Miller, D.J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oh, A.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pahl, C.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Pooth, O.; Przybycien, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Rick, H.; Roney, J.M.; Rosati, S.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E.K.G.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schoerner-Sadenius, Thomas; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Sherwood, P.; Siroli, G.; Skuja, A.; Smith, A.M.; Sobie, R.; Soldner-Rembold, S.; Spano, F.; Stahl, A.; Stephens, K.; Strom, David M.; Strohmer, R.; Tarem, S.; Tasevsky, M.; Taylor, R.J.; Teuscher, R.; Thomson, M.A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Ujvari, B.; Vollmer, C.F.; Vannerem, P.; Vertesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Waller, D.; Ward, C.P.; Ward, D.R.; Warsinsky, M.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, D.; Wilson, G.W.; Wilson, J.A.; Wolf, G.; Wyatt, T.R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, Lidija
2004-01-01
We present the first experimental results based on the jet boost algorithm, a technique to select unbiased samples of gluon jets in e+e- annihilations, i.e. gluon jets free of biases introduced by event selection or jet finding criteria. Our results are derived from hadronic Z0 decays observed with the OPAL detector at the LEP e+e- collider at CERN. First, we test the boost algorithm through studies with Herwig Monte Carlo events and find that it provides accurate measurements of the charged particle multiplicity distributions of unbiased gluon jets for jet energies larger than about 5 GeV, and of the jet particle energy spectra (fragmentation functions) for jet energies larger than about 14 GeV. Second, we apply the boost algorithm to our data to derive unbiased measurements of the gluon jet multiplicity distribution for energies between about 5 and 18 GeV, and of the gluon jet fragmentation function at 14 and 18 GeV. In conjunction with our earlier results at 40 GeV, we then test QCD calculations for the en...
Losing the rose tinted glasses: neural substrates of unbiased belief updating in depression
Neil Garrett
2014-08-01
Full Text Available Recent evidence suggests that a state of good mental health is associated with biased processing of information that supports a positively skewed view of the future. Depression, on the other hand, is associated with unbiased processing of such information. Here, we use brain imaging in conjunction with a belief update task administered to clinically depressed patients and healthy controls to characterize brain activity that supports unbiased belief updating in clinically depressed individuals. Our results reveal that unbiased belief updating in depression is mediated by strong neural coding of estimation errors in response to both good news (in left inferior frontal gyrus and bilateral superior frontal gyrus and bad news (in right inferior parietal lobule and right inferior frontal gyrus regarding the future. In contrast, intact mental health was linked to a relatively attenuated neural coding of bad news about the future. These findings identify a neural substrate mediating the breakdown of biased updating in Major Depressive Disorder, which may be essential for mental health.
About mutually unbiased bases in even and odd prime power dimensions
Durt, Thomas
2005-06-01
Mutually unbiased bases generalize the X, Y and Z qubit bases. They possess numerous applications in quantum information science. It is well known that in prime power dimensions N = p^m (with p prime and m a positive integer), there exists a maximal set of N + 1 mutually unbiased bases. In the present paper, we derive an explicit expression for those bases, in terms of the (operations of the) associated finite field (Galois division ring) of N elements. This expression is shown to be equivalent to the expressions previously obtained by Ivanovic (1981 J. Phys. A: Math. Gen. 14 3241) in odd prime dimensions, and Wootters and Fields (1989 Ann. Phys. 191 363) in odd prime power dimensions. In even prime power dimensions, we derive a new explicit expression for the mutually unbiased bases. The new ingredients of our approach are, basically, the following: we provide a simple expression of the generalized Pauli group in terms of the additive characters of the field, and we derive an exact group composition law between the elements of the commuting subsets of the generalized Pauli group, renormalized by a well-chosen phase-factor.
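For the qubit case d = 2, the opening statement that the X, Y and Z eigenbases are mutually unbiased (every cross overlap satisfies |⟨e_i|f_j⟩|² = 1/d) can be checked directly:

```python
import numpy as np

# eigenbases of the Pauli Z, X and Y operators (dimension d = 2)
Z_basis = [np.array([1, 0]), np.array([0, 1])]
X_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]
Y_basis = [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]

def mutually_unbiased(B1, B2, d=2):
    """Two orthonormal bases are mutually unbiased iff every cross
    overlap satisfies |<e_i|f_j>|^2 = 1/d."""
    return all(np.isclose(abs(np.vdot(e, f)) ** 2, 1 / d)
               for e in B1 for f in B2)

assert mutually_unbiased(Z_basis, X_basis)
assert mutually_unbiased(Z_basis, Y_basis)
assert mutually_unbiased(X_basis, Y_basis)
```

These three bases form the maximal set of N + 1 = 3 mutually unbiased bases for N = 2; the paper's finite-field construction generalizes this to all prime power dimensions.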
Building unbiased estimators from non-Gaussian likelihoods with application to shear estimation
Madhavacheril, Mathew S.; Sehgal, Neelima; McDonald, Patrick; Slosar, Anže
2015-01-01
We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g|=0.2
Xu, Yan; Liu, Biao; Ding, Fengan; Zhou, Xiaodie; Tu, Pin; Yu, Bo; He, Yan; Huang, Peilin
2017-06-01
Circulating tumor cells (CTCs), isolated as a 'liquid biopsy', may provide important diagnostic and prognostic information. Therefore, rapid, reliable and unbiased detection of CTCs is required for routine clinical analyses. It was demonstrated that negative enrichment, an epithelial marker-independent technique for isolating CTCs, exhibits a better efficiency in the detection of CTCs compared with positive enrichment techniques that only use specific anti-epithelial cell adhesion molecule antibodies. However, negative enrichment techniques incur significant cell loss during the isolation procedure, and as methods that use only one type of antibody, they are inherently biased. The detection procedure and identification of cell types also rely on skilled and experienced technicians. In the present study, the detection sensitivity of negative enrichment was compared with that of a previously described unbiased detection method. The results revealed that unbiased detection methods may efficiently detect >90% of cancer cells in blood samples containing CTCs. By contrast, only 40-60% of CTCs were detected by negative enrichment. Additionally, CTCs were identified in >65% of patients with stage I/II lung cancer. This simple yet efficient approach may achieve a high level of sensitivity. It demonstrates a potential for the large-scale clinical implementation of CTC-based diagnostic and prognostic strategies.
Genomic prediction based on data from three layer lines: a comparison between linear methods
Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.
2014-01-01
Background: The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods: Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we
Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just
2002-01-01
In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part…
Levin, Bruce; Leu, Cheng-Shiun
2013-01-01
We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.
Advanced statistics: linear regression, part II: multiple linear regression.
Marill, Keith A
2004-01-01
The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
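The abstract above describes multiple linear regression conceptually. As a minimal illustration (synthetic data, invented for this sketch and not taken from the article), ordinary least squares with two predictor variables can be computed from the normal equations:

```python
# Hedged sketch: OLS for multiple linear regression via the normal equations.
# The design matrix X includes an intercept column; the data are synthetic.

def ols(X, y):
    """Return coefficients minimizing ||X b - y||^2 (X includes intercept column)."""
    n, p = len(X), len(X[0])
    # Normal equations A b = c, with A = X^T X and c = X^T y.
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    c = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        c[i], c[piv] = c[piv], c[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for j in range(i, p):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * p
    for i in reversed(range(p)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, p))) / A[i][i]
    return b

# y = 1 + 2*x1 + 3*x2 exactly, so OLS should recover the coefficients.
X = [[1, x1, x2] for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]]
y = [1 + 2 * x1 + 3 * x2 for _, x1, x2 in X]
b = ols(X, y)
print([round(v, 6) for v in b])  # → [1.0, 2.0, 3.0]
```

With correlated predictors (the multicollinearity the article discusses), the matrix X^T X becomes ill-conditioned and the coefficients unstable, which is why the choice of predictor variables matters.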
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar. If the input signal is bipolar, the negative portion will be filtered. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
An Unbiased Unscented Transform Based Kalman Filter for 3D Radar
WANG Guohong; XIU Jianjuan; HE You
2004-01-01
As a derivative-free alternative to the Extended Kalman filter (EKF) in the framework of state estimation, the Unscented Kalman filter (UKF) has potential applications in nonlinear filtering. By noting the fact that the unscented transform is generally biased when converting the radar measurements from spherical coordinates into Cartesian coordinates, a new filtering algorithm for 3D radar, called Unbiased unscented Kalman filter (UUKF), is proposed. The new algorithm is validated by Monte Carlo simulation runs. Simulation results show that the UUKF is more effective than the UKF, EKF and the Converted measurement Kalman filter (CMKF).
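The bias in converting noisy polar radar measurements to Cartesian coordinates, which motivates the UUKF above, can be seen in a toy Monte Carlo. This is only an illustration of the classic conversion bias, not the paper's filter; all noise levels below are invented:

```python
# For Gaussian bearing noise w ~ N(0, s^2), E[cos(theta + w)] = cos(theta)*exp(-s^2/2),
# so the naive conversion x = r*cos(theta_meas) is biased toward the origin on average.
import math
import random

random.seed(0)
r_true, th_true = 1000.0, 0.5   # true range [m] and bearing [rad] (assumed values)
s_r, s_th = 1.0, 0.1            # measurement noise standard deviations (assumed)
N = 200_000

x_naive = 0.0
for _ in range(N):
    r = r_true + random.gauss(0.0, s_r)
    th = th_true + random.gauss(0.0, s_th)
    x_naive += r * math.cos(th)     # naive spherical-to-Cartesian conversion
x_naive /= N

x_true = r_true * math.cos(th_true)
# Classic debiasing: divide out the known attenuation factor exp(-s_th^2/2).
x_debiased = x_naive / math.exp(-s_th ** 2 / 2)

print(round(x_true, 1), round(x_naive, 1), round(x_debiased, 1))
```

The naive average falls a few metres short of the true x-coordinate, while the debiased average recovers it; unbiased filter variants apply the same kind of correction inside the measurement update.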
Towards an unbiased, full-sky clustering search with IceCube in real time
Bernardini, Elisa; Franckowiak, Anna; Kintscher, Thomas; Kowalski, Marek; Stasik, Alexander [DESY, Zeuthen (Germany); Collaboration: IceCube-Collaboration
2016-07-01
The IceCube neutrino observatory is a 1 km³ detector for Cherenkov light in the ice at the South Pole. Although a diffuse astrophysical neutrino flux has been observed, static point-source searches have come up empty-handed. Thus, transient and variable objects emerge as promising, detectable source candidates. An unbiased, full-sky clustering search - run in real time - can find neutrino events with close temporal and spatial proximity. The most significant of these clusters serve as alerts to third-party observatories in order to obtain a complete picture of cosmic accelerators. The talk showcases the status and prospects of this project.
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristics of linac beam dynamics.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity using the algebraic expression of the algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity, so the linear complexity is generally given only as an estimate. Because the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of linear complexity.
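The Berlekamp-Massey algorithm that the abstract compares against, with its O(N²) cost in the sequence length N, can be sketched over GF(2) as follows. This is a standard textbook implementation for illustration, not the paper's linearization method:

```python
# Berlekamp-Massey over GF(2): returns the linear complexity of a bit sequence,
# i.e. the length of the shortest LFSR that generates it.

def berlekamp_massey(s):
    """Return the linear complexity of the bit sequence s (list of 0/1)."""
    n = len(s)
    c = [0] * n; b = [0] * n   # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1               # L = current complexity, m = index of last update
    for i in range(n):
        # Discrepancy: does the current LFSR predict s[i]?
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:                  # prediction failed: update the connection polynomial
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# The period-7 m-sequence of a primitive degree-3 polynomial has linear complexity 3.
seq = [1, 0, 0, 1, 0, 1, 1] * 2
print(berlekamp_massey(seq))  # → 3
```

Note that the result depends on the observed output bits, which is exactly the initial-value dependence the abstract contrasts with the linearization method.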
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Li Fengguo; Ai Baoquan
2011-01-01
Graphical abstract: The current J as a function of the phase shift φ and ε at a = 1/(2π), b = 0.5/(2π), k_BT = 0.5, α = 0.1, and F_0 = 0.5. Highlights: → Unbiased forces and spatially modulated white noises affect the current. → In the adiabatic limit, the analytical expression of the directed current is obtained. → Their competition will induce current reversals. → For negative asymmetric parameters of the force, there exists an optimum parameter. → The current increases monotonically for positive asymmetric parameters. - Abstract: Transport of Brownian particles in a symmetrically periodic tube is investigated in the presence of asymmetric unbiased external forces and spatially modulated Gaussian white noises. In the adiabatic limit, we obtain the analytical expression of the directed current. It is found that the temporal asymmetry can break thermodynamic equilibrium and induce a net current. The competition between the temporal asymmetry of the force and the phase shift between the noise modulation and the tube shape induces some peculiar phenomena, for example, current reversals. The current varies with the phase shift as a sine function. For negative asymmetric parameters of the force, there exists an optimum parameter at which the current takes its maximum value; the current increases monotonically for positive asymmetric parameters.
The Herschel/HIFI unbiased spectral survey of the solar-mass protostar IRAS16293
Bottinelli, S.; Caux, E.; Cecarelli, C.; Kahane, C.
2012-03-01
Unbiased spectral surveys are powerful tools to study the chemistry and the physics of star forming regions, because they can provide a complete census of the molecular content and the observed lines probe the physical structure of the source. While unbiased surveys at the millimeter and sub-millimeter wavelengths observable from ground-based telescopes have previously been performed towards several high-mass protostars, very little data exist on low-mass protostars, with only one such ground-based survey carried out towards this kind of object. However, since low-mass protostars are believed to resemble our own Sun's progenitor, the information provided by spectral surveys is crucial in order to uncover the birth mechanisms of low-mass stars and hence of our Sun. To help fill this gap in our understanding, we carried out an almost complete spectral survey towards the solar-type protostar IRAS16293-2422 with the HIFI instrument onboard Herschel. The observations covered a range of about 700 GHz, in which a few hundred lines were detected with more than 3σ certainty and identified. All the detected lines which were free from obvious blending effects were fitted with Gaussians to estimate their basic kinematic properties. Contrary to what is observed in the millimeter range, no lines from complex organic molecules have been observed. In this work, we characterize the different components of IRAS16293-2422 (known to be at least a binary) by analyzing the numerous emission and absorption lines identified.
Unbiased multi-fidelity estimate of failure probability of a free plane jet
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low-fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions on the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (the failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low-fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high-fidelity model. In the presence of multiple low-fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
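As a one-dimensional stand-in for the importance-sampling idea described above (not the paper's multi-fidelity estimator), a rare-event probability can be estimated without bias by drawing from a biasing distribution centered on the failure region and reweighting by the likelihood ratio:

```python
# Hedged sketch: plain importance sampling for a rare-event probability.
# We estimate p = P(X > 3) for X ~ N(0,1) by sampling from the biasing
# distribution N(3,1) and applying likelihood-ratio weights.
import math
import random

random.seed(2)

def phi(x, mu=0.0):
    """Standard-width Gaussian density centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

N = 50_000
est = 0.0
for _ in range(N):
    x = random.gauss(3.0, 1.0)           # draw from the biasing density
    if x > 3.0:                          # indicator of the failure event
        est += phi(x) / phi(x, mu=3.0)   # likelihood-ratio weight keeps it unbiased
est /= N

exact = 0.5 * math.erfc(3.0 / math.sqrt(2))  # P(X > 3) ≈ 1.35e-3
print(f"{est:.2e} vs exact {exact:.2e}")
```

A crude Monte Carlo with the same budget would see only ~70 failure samples; the reweighted estimator achieves a relative error of around one percent, which is the variance reduction the biasing distribution buys.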
Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.
2013-01-01
Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties. This provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D) using an Unbiased Ensemble Square Root Filter (UnEnSRF), originating from the Ensemble Kalman Filter (EnKF), to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and ensemble perturbations separately. In order to keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One, called a damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, called an inflation factor, relaxes the posterior ensemble perturbations toward the prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate further analysis. This study demonstrated that data assimilation of both model parameters and states gives a smaller model prediction error but with larger uncertainty, while data assimilation of only model states provides a smaller predictive uncertainty but a larger model prediction error. Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a successful basis for applications in real-time modelling or real-time control strategies.
Correlation and simple linear regression.
Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G
2003-06-01
In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
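The Pearson/Spearman distinction reviewed above can be illustrated with a small synthetic example (monotonic but non-linear data, not the article's CT-guided data set): Spearman's rho works on ranks, so it reports a perfect monotonic association where Pearson's r does not.

```python
# Hedged sketch: Pearson r versus Spearman rho on y = x^3.
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks (no ties in this demo)."""
    rank = lambda v: [sorted(v).index(a) + 1 for a in v]
    return pearson(rank(x), rank(y))

x = list(range(1, 11))
y = [a ** 3 for a in x]   # monotonic but strongly non-linear
print(round(pearson(x, y), 3), round(spearman(x, y), 3))
```

Pearson comes out around 0.93 because the relationship is curved, while Spearman is exactly 1, matching the tutorial's point that the two coefficients measure linear versus monotonic association.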
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be useful both to students of mathematics and to those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Linear zonal atmospheric prediction for adaptive optics
McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael
2000-07-01
We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2-meter 16-subaperture AO telescope with 5 millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back-Propagation training of non-linear multilayer neural networks is quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad²), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad²), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad²). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
Simulation experiment on total ionization dose effects of linear CCD
Tang Benqi; Zhang Yong; Xiao Zhigang; Wang Zujun; Huang Shaoyan
2004-01-01
We carried out ionizing radiation experiments on linear CCDs operated in unbiased, biased, and biased-and-driven modes, respectively, using a Co-60 γ source and our self-designed test system, and tested offline how the dark signal, saturation voltage, and SNR of the TCD132D varied with total dose, obtaining some valuable results. On the basis of this work, we set forth a preliminary experimental approach to simulating the total-dose radiation effects of charge-coupled devices. (authors)
The New Peabody Picture Vocabulary Test-III: An Illusion of Unbiased Assessment?
Stockman, Ida J
2000-10-01
This article examines whether changes in the ethnic minority composition of the standardization sample for the latest edition of the Peabody Picture Vocabulary Test (PPVT-III, Dunn & Dunn, 1997) can be used as the sole explanation for children's better test scores when compared to an earlier edition, the Peabody Picture Vocabulary Test-Revised (PPVT-R, Dunn & Dunn, 1981). Results from a comparative analysis of these two test editions suggest that other factors may explain improved performances. Among these factors are the number of words and age levels sampled, the types of words and pictures used, and characteristics of the standardization sample other than its ethnic minority composition. This analysis also raises questions regarding the usefulness of converting scores from one edition to the other and the type of criteria that could be used to evaluate whether the PPVT-III is an unbiased test of vocabulary for children from diverse cultural and linguistic backgrounds.
Estimating Unbiased Treatment Effects in Education Using a Regression Discontinuity Design
William C. Smith
2014-08-01
The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague Randomized Controlled Trials (RCTs) makes them a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education Sciences to meet the prerequisites of a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aid conceptual understanding, the article walks readers through the essential steps of a Sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.
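A Sharp RD estimate of the kind the article walks through can be sketched on synthetic district intervention data (all numbers below are invented for illustration): fit a separate line on each side of the cutoff within a bandwidth, and read the treatment effect off as the jump between the two fits at the cutoff.

```python
# Hedged sketch of a Sharp RD estimate with local linear fits on synthetic data.
import random

random.seed(1)
cutoff, bandwidth, effect = 50.0, 15.0, 5.0

# Synthetic data: the running variable (score) fully determines assignment,
# and the true treatment effect is 5.
data = []
for _ in range(2000):
    score = random.uniform(0, 100)
    treated = score >= cutoff
    outcome = 10 + 0.2 * score + (effect if treated else 0) + random.gauss(0, 1)
    data.append((score, outcome))

def fit_line(points):
    """Simple OLS slope and intercept for (x, y) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    slope = sxy / sxx
    return slope, my - slope * mx

left = [(s, y) for s, y in data if cutoff - bandwidth <= s < cutoff]
right = [(s, y) for s, y in data if cutoff <= s <= cutoff + bandwidth]
bl, al = fit_line(left)
br, ar = fit_line(right)
rd_effect = (ar + br * cutoff) - (al + bl * cutoff)   # jump at the cutoff
print(round(rd_effect, 2))   # close to the true effect of 5
```

The bandwidth choice trades bias against variance, which is one of the judgment calls the article's visual-analysis walkthrough is meant to build intuition for.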
Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures
Procacci, Piero
2015-01-01
In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems
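The Jarzynski exponential average that such Gaussian-mixture methods refine can be checked on a purely Gaussian work distribution, where the free energy difference is known in closed form (ΔF = μ − σ²/2 for kT = 1). A minimal sketch with synthetic work values (not the paper's deca-alanine data):

```python
# Hedged sketch: Jarzynski estimator ΔF = -kT ln< exp(-W/kT) > on synthetic
# Gaussian work samples, computed with a log-sum-exp shift for stability.
import math
import random

random.seed(3)
mu, s, kT = 10.0, 2.0, 1.0          # work distribution parameters (assumed), kT = 1
W = [random.gauss(mu, s) for _ in range(200_000)]

m = min(W)                           # shift so the largest exponential term is 1
dF = m - kT * math.log(sum(math.exp(-(w - m) / kT) for w in W) / len(W))

print(round(dF, 2), "exact:", mu - s * s / 2)   # exact value here is 8.0
```

The estimate is dominated by rare low-work samples, which is exactly the asymmetry between forward and reverse work distributions that the Crooks-weighted Gaussian mixture exploits.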
Overy, Catherine; Blunt, N. S.; Shepherd, James J.; Booth, George H.; Cleland, Deidre; Alavi, Ali
2014-01-01
Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems
Balci, Soner; Czaplewski, David A.; Jung, Il Woong; Kim, Ju-Hyung; Hatami, Fariba; Kung, Patrick; Kim, Seongsin Margaret
2017-07-01
Besides having perfect control over structural features, such as vertical alignment and uniform distribution, by fabricating the wires via e-beam lithography and an etching process, we also investigated the THz emission from these fabricated nanowires under an applied DC bias voltage. To apply a voltage bias, an interdigitated gold (Au) electrode was patterned on a high-quality InGaAs epilayer grown on an InP substrate by molecular beam epitaxy. Afterwards, perfectly vertically aligned and uniformly distributed nanowires were fabricated in between the electrodes of this interdigitated pattern so that we could apply a voltage bias to improve the THz emission. As a result, we achieved an enhancement of the emitted THz radiation by about four times, an approximately 12 dB increase in power ratio at 0.25 THz with a DC bias electric field compared with unbiased NWs.
Unbiased, complete solar charging of a neutral flow battery by a single Si photocathode
Wedege, Kristina; Bae, Dowon; Dražević, Emil
2018-01-01
Solar redox flow batteries have attracted attention as a possible integrated technology for simultaneous conversion and storage of solar energy. In this work, we review current efforts to design aqueous solar flow batteries in terms of battery electrolyte capacity, solar conversion efficiency and depth of solar charge. From a materials cost and design perspective, a simple, cost-efficient, aqueous solar redox flow battery will most likely incorporate only one semiconductor, and we demonstrate here a system where a single photocathode is accurately matched to the redox couples to allow for a complete solar charge. The single TiO2-protected Si photocathode with a catalytic Pt layer can fully solar charge a neutral TEMPO-sulfate/ferricyanide battery with a cell voltage of 0.35 V. An unbiased solar conversion efficiency of 1.6% is obtained and this system represents a new strategy in solar RFBs…
Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs
van Dam, Wim; Howard, Mark
2011-07-01
We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.
Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.
Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis
2015-04-01
Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to achieving efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, the estimation of economies of scale is hampered by several statistical issues preventing causal inference, making the estimation complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners in the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that reductions in the average cost per person reached are achievable when scaling up HIV prevention in low- and middle-income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
Griaud, François; Denefeld, Blandine; Lang, Manuel; Hensinger, Héloïse; Haberl, Peter; Berg, Matthias
2017-07-01
Characterization of charge-based variants by mass spectrometry (MS) is required for the analytical development of a new biologic entity and its marketing approval by health authorities. However, standard peak-based data analysis approaches are time-consuming and biased toward the detection, identification, and quantification of main variants only. The aim of this study was to characterize in depth the acidic and basic species of a stressed IgG1 monoclonal antibody using comprehensive and unbiased MS data evaluation tools. Fractions collected from cation exchange (CEX) chromatography were analyzed intact, after reduction of disulfide bridges, and after proteolytic cleavage using Lys-C. Data from both intact and reduced samples were evaluated consistently using a time-resolved deconvolution algorithm. Peptide mapping data were processed simultaneously, quantified, and compared in a systematic manner for all MS signals and fractions. Differences observed between the fractions were then further characterized and assigned. Time-resolved deconvolution enhanced pattern visualization and data interpretation of main and minor modifications in 3-dimensional maps across CEX fractions. Relative quantification of all MS signals across CEX fractions before peptide assignment enabled the detection of fraction-specific chemical modifications at abundances below 1%. Acidic fractions were shown to be heterogeneous, containing antibody fragments and glycated as well as deamidated forms of the heavy and light chains. In contrast, the basic fractions contained mainly modifications of the C-terminus and pyroglutamate formation at the N-terminus of the heavy chain. Systematic data evaluation was performed to investigate multiple data sets and comprehensively extract main and minor differences between each CEX fraction in an unbiased manner.
Verifying mixing in dilution tunnels How to ensure cookstove emissions samples are unbiased
Wilson, Daniel L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Caubel, Julien J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Sharon S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gadgil, Ashok J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-12-15
A well-mixed diluted sample is essential for unbiased measurement of cookstove emissions. Most cookstove testing labs employ a dilution tunnel, also referred to as a “duct,” to mix clean dilution air with cookstove emissions before sampling. It is important that the emissions be well-mixed and unbiased at the sampling port so that instruments can take representative samples of the emission plume. Some groups have employed mixing baffles to ensure the gaseous and aerosol emissions from cookstoves are well-mixed before reaching the sampling location [2, 4]. The goal of these baffles is to dilute and mix the emissions stream with the room air entering the fume hood by creating a local zone of high turbulence. However, potential drawbacks of mixing baffles include increased flow resistance (larger blowers needed for the same exhaust flow), nuisance cleaning of baffles as soot collects, and, importantly, the potential for loss of PM2.5 particles on the baffles themselves, thus biasing results. A cookstove emission monitoring system with baffles will collect particles faster than the duct’s walls alone. This is mostly driven by the available surface area for deposition by processes of Brownian diffusion (through the boundary layer) and turbophoresis (i.e., impaction). The greater the surface area available for diffusive and advection-driven deposition to occur, the greater the particle loss will be at the sampling port. As a layer of larger particle “fuzz” builds on the mixing baffles, even greater PM2.5 loss could occur. The microstructure of the deposited aerosol will lead to increased rates of particle loss by interception and a tendency for smaller particles to deposit due to impaction on small features of the microstructure. If the flow stream could be well-mixed without the need for baffles, these drawbacks could be avoided and the cookstove emissions sampling system would be more robust.
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
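The idea of a prediction interval that works when the error distribution is unknown can be illustrated with a distribution-free, percentile-of-residuals interval for simple linear regression. This sketch uses hypothetical skewed-noise data and is a simplified stand-in for the text's methods, not the book's own procedure:

```python
import random

# Hypothetical data: y = 2 + 3x + skewed (non-Gaussian) noise with mean 0
random.seed(1)
xs = [i / 10 for i in range(100)]
ys = [2 + 3 * x + (random.expovariate(1.0) - 1.0) for x in xs]

# Ordinary least squares for a single predictor
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
slope = sxy / sxx
intercept = ybar - slope * xbar

# Distribution-free 90% prediction interval from sorted residual quantiles
resid = sorted(y - (intercept + slope * x) for x, y in zip(xs, ys))
lo = resid[int(0.05 * n)]
hi = resid[int(0.95 * n) - 1]

x_new = 5.0
fit = intercept + slope * x_new
print(f"fit={fit:.2f}, 90% PI=({fit + lo:.2f}, {fit + hi:.2f})")
```

Because the interval comes from empirical residual quantiles rather than a normality assumption, it remains asymmetric when the errors are skewed, as here.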
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Mühlfeld, Christian; Papadakis, Tamara; Krasteva, Gabriela
2010-01-01
Quantitative information about the innervation is essential to analyze the structure-function relationships of organs. So far, there has been no unbiased stereological tool for this purpose. This study presents a new unbiased and efficient method to quantify the total length of axons in a given reference volume, illustrated on the left ventricle of the mouse heart. The method is based on the following steps: 1) estimation of the reference volume; 2) randomization of location and orientation using appropriate sampling techniques; 3) counting of nerve fiber profiles hit by a defined test area within...
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, Gretchen G.; Edwards, Thomas C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
Characterization of Radiation Hardened Bipolar Linear Devices for High Total Dose Missions
McClure, Steven S.; Harris, Richard D.; Rax, Bernard G.; Thorbourn, Dennis O.
2012-01-01
Radiation hardened linear devices are characterized for performance in combined total dose and displacement damage environments for a mission scenario with a high radiation level. Performance at low and high dose rate for both biased and unbiased conditions is compared, and the impact on hardness assurance methodology is discussed.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Aldehyde-Selective Wacker-Type Oxidation of Unbiased Alkenes Enabled by a Nitrite Co-Catalyst
Wickens, Zachary K.; Morandi, Bill; Grubbs, Robert H.
2013-01-01
Breaking the rules: Reversal of the high Markovnikov selectivity of Wacker-type oxidations was accomplished using a nitrite co-catalyst. Unbiased aliphatic alkenes can be oxidized with high yield and aldehyde selectivity, and several functional groups are tolerated. 18O-labeling experiments indicate that the aldehydic O atom is derived from the nitrite salt.
Dumonteil, E.; Diop, C. M.
2009-01-01
This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. The last section will present numerical results on a simple example. (authors)
Dumonteil, E.; Diop, C.M.
2011-01-01
External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without proper group structure) and Monte Carlo burn up may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic material systems that do not contain benchmark data. Providing a reference scheme implies being able to associate statistical errors to any neutronic value of interest like k(eff), reaction rates, fluxes, etc. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that consists of coupling transport and depletion codes because Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities being always measured with an uncertainty). Therefore, we have to quantify and correct this bias. This will be achieved by deriving an unbiased minimum variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. Numerical tests will be performed with an ad hoc Monte Carlo code on a very simple depletion case and will be compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation
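The nonlinearity bias described above has a simple scalar analogue: if x̄ is the mean of n draws from N(mu, sigma^2), then E[exp(x̄)] = exp(mu + sigma^2/(2n)), not exp(mu). The following minimal Monte Carlo sketch (illustrative only, not the paper's matrix-exponential estimator) shows the bias and, when sigma is known, the multiplicative correction:

```python
import math
import random

random.seed(42)
mu, sigma, n = 0.0, 1.0, 10   # target quantity: exp(mu) = 1
reps = 100_000

naive_sum = 0.0
corrected_sum = 0.0
for _ in range(reps):
    xbar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    est = math.exp(xbar)  # biased: E[exp(xbar)] = exp(mu + sigma**2 / (2 * n))
    naive_sum += est
    # unbiased when sigma is known: multiply by exp(-sigma^2/(2n))
    corrected_sum += est * math.exp(-sigma**2 / (2 * n))

naive = naive_sum / reps
corrected = corrected_sum / reps
print(f"naive={naive:.4f} (theory {math.exp(sigma**2 / (2 * n)):.4f}), "
      f"corrected={corrected:.4f}")
```

The same phenomenon, applied to the matrix exponential of fluxes estimated with statistical uncertainty, is what makes the coupled transport-depletion scheme biased.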
High levels of absorption in orientation-unbiased, radio-selected 3CR Active Galaxies
Wilkes, Belinda J.; Haas, Martin; Barthel, Peter; Leipski, Christian; Kuraszkiewicz, Joanna; Worrall, Diana; Birkinshaw, Mark; Willner, Steven P.
2014-08-01
A critical problem in understanding active galaxies (AGN) is the separation of intrinsic physical differences from observed differences that are due to orientation. Obscuration of the active nucleus is anisotropic and strongly frequency dependent, leading to complex selection effects for observations in most wavebands. These can only be quantified using a sample that is sufficiently unbiased to test orientation effects. Low-frequency radio emission is one way to select a close-to orientation-unbiased sample, albeit limited to the minority of AGN with strong radio emission. Recent Chandra, Spitzer and Herschel observations combined with multi-wavelength data for a complete sample of high-redshift (1 < z < 2) 3CR sources show that more than half the sample is significantly obscured, with ratios of unobscured : Compton-thin (22 < log N_H < 24.2) : Compton-thick = 2.5:1.4:1 in these high-luminosity (log L(0.3-8 keV) ~ 44-46) sources. These ratios are consistent with current expectations based on modeling the Cosmic X-ray Background. A strong correlation with radio orientation constrains the geometry of the obscuring disk/torus to have a ~60 degree opening angle and ~12 degree Compton-thick cross-section. The deduced ~50% obscured fraction of the population contrasts with typical estimates of ~20% obscured in optically- and X-ray-selected high-luminosity samples. Once the primary nuclear emission is obscured, AGN X-ray spectra are frequently dominated by unobscured non-nuclear or scattered nuclear emission, which cannot be distinguished from direct nuclear emission with a lower obscuration level unless high-quality data are available. As a result, both the level of obscuration and the estimated intrinsic luminosities of highly-obscured AGN are likely to be significantly (×10-1000) underestimated for 25-50% of the population. This may explain the lower obscured fractions reported for optical and X-ray samples, which have no independent measure of the AGN luminosity. Correcting AGN samples for these underestimated luminosities would result in
Towards an unbiased comparison of CC, BCC, and FCC lattices in terms of prealiasing
Vad, Viktor; Csébfalvi, Balázs; Rautek, Peter; Gröller, Eduard M.
2014-06-01
In the literature on optimal regular volume sampling, the Body-Centered Cubic (BCC) lattice has been proven to be optimal for sampling spherically band-limited signals above the Nyquist limit. On the other hand, if the sampling frequency is below the Nyquist limit, the Face-Centered Cubic (FCC) lattice was demonstrated to be optimal in reducing the prealiasing effect. In this paper, we confirm that the FCC lattice is indeed optimal in this sense in a certain interval of the sampling frequency. By theoretically estimating the prealiasing error in a realistic range of the sampling frequency, we show that in other frequency intervals, the BCC lattice and even the traditional Cartesian Cubic (CC) lattice are expected to minimize the prealiasing. The BCC lattice is superior over the FCC lattice if the sampling frequency is not significantly below the Nyquist limit. Interestingly, if the original signal is drastically undersampled, the CC lattice is expected to provide the lowest prealiasing error. Additionally, we give a comprehensible clarification that the sampling efficiency of the FCC lattice is lower than that of the BCC lattice. Although this is a well-known fact, the exact percentage has been erroneously reported in the literature. Furthermore, for the sake of an unbiased comparison, we propose to rotate the Marschner-Lobb test signal such that an undue advantage is not given to either lattice. © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
Linhoff, Michael W; Laurén, Juha; Cassidy, Robert M; Dobie, Frederick A; Takahashi, Hideto; Nygaard, Haakon B; Airaksinen, Matti S; Strittmatter, Stephen M; Craig, Ann Marie
2009-03-12
Delineating the molecular basis of synapse development is crucial for understanding brain function. Cocultures of neurons with transfected fibroblasts have demonstrated the synapse-promoting activity of candidate molecules. Here, we performed an unbiased expression screen for synaptogenic proteins in the coculture assay using custom-made cDNA libraries. Reisolation of NGL-3/LRRC4B and neuroligin-2 accounts for a minority of positive clones, indicating that current understanding of mammalian synaptogenic proteins is incomplete. We identify LRRTM1 as a transmembrane protein that induces presynaptic differentiation in contacting axons. All four LRRTM family members exhibit synaptogenic activity, LRRTMs localize to excitatory synapses, and artificially induced clustering of LRRTMs mediates postsynaptic differentiation. We generate LRRTM1(-/-) mice and reveal altered distribution of the vesicular glutamate transporter VGLUT1, confirming an in vivo synaptic function. These results suggest a prevalence of LRR domain proteins in trans-synaptic signaling and provide a cellular basis for the reported linkage of LRRTM1 to handedness and schizophrenia.
The role of fire in UK peatland and moorland management: the need for informed, unbiased debate.
Davies, G Matt; Kettridge, Nicholas; Stoof, Cathelijne R; Gray, Alan; Ascoli, Davide; Fernandes, Paulo M; Marrs, Rob; Allen, Katherine A; Doerr, Stefan H; Clay, Gareth D; McMorrow, Julia; Vandvik, Vigdis
2016-06-05
Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend of simplifying the narrative around burning to present it as a solely ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media, and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions. This article is part of the themed issue 'The interaction of fire and mankind'. © 2016 The Authors.
Multifractals embedded in short time series: An unbiased estimation of probability moment
Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie
2016-12-01
An exact estimation of probability moments is the basis for several essential concepts, such as multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory, we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it provides an unbiased estimation of probability moments of continuous order. Calculations on a probability redistribution model verify that it can extract multifractal behaviors exactly from several hundred recordings. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series of the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools displays significant advantages of its performance over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors over a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima has unacceptable fluctuations. Besides the scaling invariance focused on in the present paper, the proposed factorial moment of continuous order can find various uses, such as detecting nonextensive behaviors of a complex system and reconstructing the causality network between elements of a complex system.
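The quantity at stake here is the probability moment M(q) = sum_i p_i^q of continuous order q. The following sketch shows only the naive plug-in estimator from binned data (whose small-sample bias is exactly what the factorial-moment approach is designed to correct); the data and bin count are hypothetical:

```python
import random
from collections import Counter

random.seed(7)
# A short hypothetical series, binned into discrete states;
# each p_i is estimated by its relative frequency.
series = [random.gauss(0, 1) for _ in range(500)]
lo, hi = min(series), max(series)
nbins = 20
counts = Counter(min(int((x - lo) / (hi - lo) * nbins), nbins - 1)
                 for x in series)
p = [c / len(series) for c in counts.values()]

def moment(p, q):
    """Plug-in estimate of the probability moment sum_i p_i**q."""
    return sum(pi ** q for pi in p)

for q in (0.5, 1.0, 2.0):
    print(f"M({q}) = {moment(p, q):.4f}")
# M(1.0) is exactly 1 by normalization; for q != 1 the plug-in estimate
# is biased in short series, motivating the factorial-moment estimator.
```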
Unbiased estimators of coincidence and correlation in non-analogous Monte Carlo particle transport
Szieberth, M.; Kloosterman, J.L.
2014-01-01
Highlights:
• The history splitting method was developed for non-Boltzmann Monte Carlo estimators.
• The method allows variance reduction for pulse-height and higher moment estimators.
• It works in highly multiplicative problems but Russian roulette has to be replaced.
• Estimation of higher moments allows the simulation of neutron noise measurements.
• Biased sampling of fission helps the effective simulation of neutron noise methods.
Abstract: The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions. Therefore, they are not suited to non-Boltzmann problems such as the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method, and Russian roulette turned out to be ineffective in high multiplicity systems. An alternative history control method is applied instead. Simulation results of an auto-correlation (Rossi-α) measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible.
Mutually orthogonal Latin squares from the inner products of vectors in mutually unbiased bases
Hall, Joanne L; Rao, Asha
2010-01-01
Mutually unbiased bases (MUBs) are important in quantum information theory. While constructions of complete sets of d + 1 MUBs in C^d are known when d is a prime power, it is unknown if such complete sets exist in non-prime power dimensions. It has been conjectured that complete sets of MUBs only exist in C^d if a maximal set of mutually orthogonal Latin squares (MOLS) of side length d also exists. There are several constructions (Roy and Scott 2007 J. Math. Phys. 48 072110; Paterek, Dakic and Brukner 2009 Phys. Rev. A 79 012109) of complete sets of MUBs from specific types of MOLS, which use Galois fields to construct the vectors of the MUBs. In this paper, two known constructions of MUBs (Alltop 1980 IEEE Trans. Inf. Theory 26 350-354; Wootters and Fields 1989 Ann. Phys. 191 363-381), both of which use polynomials over a Galois field, are used to construct complete sets of MOLS in the odd prime case. The MOLS come from the inner products of pairs of vectors in the MUBs.
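The defining property of mutual unbiasedness, |⟨u|v⟩|² = 1/d for every u in one basis and v in the other, is easy to check numerically. A minimal sketch for d = 3 using the computational and Fourier bases (a standard MUB pair chosen for illustration, not the MOLS-based construction of the paper):

```python
import cmath
import math

d = 3
omega = cmath.exp(2j * math.pi / d)  # primitive d-th root of unity

# Computational basis e_j and Fourier basis f_k = (1/sqrt(d)) sum_j omega^(jk) e_j
comp = [[1.0 if i == j else 0.0 for i in range(d)] for j in range(d)]
fourier = [[omega ** (j * k) / math.sqrt(d) for j in range(d)] for k in range(d)]

def inner(u, v):
    """Hermitian inner product <u|v>."""
    return sum(a.conjugate() * b for a, b in zip(u, v))

# Mutual unbiasedness: |<e_j|f_k>|^2 = 1/d for every pair of vectors
ok = all(abs(abs(inner(u, v)) ** 2 - 1 / d) < 1e-12
         for u in comp for v in fourier)
print("mutually unbiased:", ok)
```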
Unbiased estimation of the liver volume by the Cavalieri principle using magnetic resonance images
Sahin, Buenyamin; Emirzeoglu, Mehmet; Uzun, Ahmet; Incesu, Luetfi; Bek, Yueksel; Bilgic, Sait; Kaplan, Sueleyman
2003-01-01
Objective: It is often useful to know the exact volume of the liver, such as in monitoring the effects of a disease, treatment, dieting regime, training program or surgical application. Some non-invasive methodologies have been previously described which estimate the volume of the liver. However, these preliminary techniques need special software or skilled performers and they are not ideal for daily use in clinical practice. Here, we describe a simple, accurate and practical technique for estimating liver volume without changing the routine magnetic resonance imaging scanning procedure. Materials and methods: In this study, five normal livers, obtained from cadavers, were scanned by a 0.5 T MR machine in horizontal and sagittal planes. Consecutive sections of 10 mm thickness were used to estimate the whole volume of the liver by means of the Cavalieri principle. The volume estimations were done by three different performers to evaluate reproducibility. Results: There were no statistically significant differences between the performers' estimates and the real liver volumes (P > 0.05). There was also high correlation between the estimates of the performers and the real liver volume (r = 0.993). Conclusion: We conclude that the combination of MR imaging with the Cavalieri principle is a non-invasive, direct and unbiased technique that can be safely applied to estimate liver volume with a very moderate workload per individual.
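The Cavalieri estimator itself is a one-liner: the volume estimate is the section spacing multiplied by the sum of the measured cross-sectional areas. A minimal sketch (the section areas below are hypothetical, not from the study):

```python
def cavalieri_volume(section_areas_cm2, thickness_cm):
    """Unbiased Cavalieri volume estimate: V = t * sum(A_i),
    where t is the distance between consecutive sections."""
    return thickness_cm * sum(section_areas_cm2)

# hypothetical cross-sectional areas (cm^2) from consecutive
# 10 mm (= 1.0 cm) MR slices through a liver
areas = [12.1, 55.4, 98.2, 121.7, 110.3, 74.9, 21.5]
print(cavalieri_volume(areas, 1.0))  # estimated volume in cm^3
```

In practice the areas A_i are themselves estimated by point counting on each section, which is what keeps the workload per individual moderate.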
Illouz, Tomer; Madar, Ravit; Louzon, Yoram; Griffioen, Kathleen J; Okun, Eitan
2016-02-01
The assessment of spatial cognitive learning in rodents is a central approach in neuroscience, as it enables one to assess and quantify the effects of treatments and genetic manipulations from a broad perspective. Although the Morris water maze (MWM) is a well-validated paradigm for testing spatial learning abilities, manual categorization of performance in the MWM into behavioral strategies is subject to individual interpretation, and thus to biases. Here we offer a support vector machine (SVM) - based, automated, MWM unbiased strategy classification (MUST-C) algorithm, as well as a cognitive score scale. This model was examined and validated by analyzing data obtained from five MWM experiments with changing platform sizes, revealing a limitation in the spatial capacity of the hippocampus. We have further employed this algorithm to extract novel mechanistic insights on the impact of members of the Toll-like receptor pathway on cognitive spatial learning and memory. The MUST-C algorithm can greatly benefit MWM users as it provides a standardized method of strategy classification as well as a cognitive scoring scale, which cannot be derived from typical analysis of MWM data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Julian, Lisa D.; Hartwig, John F.
2010-01-01
We report a rhodium catalyst that exhibits high reactivity for the hydroamination of primary aminoalkenes that are unbiased toward cyclization and that possess functional groups that would not be tolerated in hydroaminations catalyzed by more electrophilic systems. This catalyst contains an unusual diaminophosphine ligand that binds to rhodium in a κ3-P,O,P mode. The reactions catalyzed by this complex typically proceed at mild temperatures (room temperature to 70 °C), occur with primary aminoalkenes lacking substituents on the alkyl chain that bias the system toward cyclization, occur with primary aminoalkenes containing chloride, ester, ether, enolizable ketone, nitrile, and unprotected alcohol functionality, and occur with primary aminoalkenes containing internal olefins. Mechanistic data imply that these reactions occur with a turnover-limiting step that is different from that of reactions catalyzed by late transition metal complexes of Pd, Pt, and Ir. This change in the turnover-limiting step and resulting high activity of the catalyst stem from favorable relative rates for protonolysis of the M-C bond to release the hydroamination product vs reversion of the aminoalkyl intermediate to regenerate the acyclic precursor. Probes for the origin of the reactivity of the rhodium complex of L1 imply that the aminophosphine groups lead to these favorable rates by effects beyond steric demands and simple electron donation to the metal center. PMID:20839807
Development of an unbiased statistical method for the analysis of unigenic evolution
Shilton Brian H
2006-03-01
Background: Unigenic evolution is a powerful genetic strategy involving random mutagenesis of a single gene product to delineate functionally important domains of a protein. This method involves selection of variants of the protein which retain function, followed by statistical analysis comparing expected and observed mutation frequencies of each residue. Resultant mutability indices for each residue are averaged across a specified window of codons to identify hypomutable regions of the protein. As originally described, the effect of changes to the length of this averaging window was not fully elucidated. In addition, it was unclear when sufficient functional variants had been examined to conclude that residues conserved in all variants have important functional roles. Results: We demonstrate that the length of the averaging window dramatically affects identification of individual hypomutable regions and delineation of region boundaries. Accordingly, we devised a region-independent chi-square analysis that eliminates the loss of information incurred during window averaging and removes the arbitrary assignment of window length. We also present a method to estimate the probability that conserved residues have not been mutated simply by chance. In addition, we describe an improved estimation of the expected mutation frequency. Conclusion: Overall, these methods significantly extend the analysis of unigenic evolution data over existing methods to allow comprehensive, unbiased identification of domains and possibly even individual residues that are essential for protein function.
SU(2) nonstandard bases: the case of mutually unbiased bases
Olivier, Albouy; Kibler, Maurice R. [Universite de Lyon, Institut de Physique Nucleaire de Lyon, Universite Lyon, CNRS/IN2P3, 43 bd du 11 novembre 1918, F-69622 Villeurbanne Cedex (France)
2007-02-15
This paper deals with bases in a finite-dimensional Hilbert space. Such a space can be realized as a subspace of the representation space of SU(2) corresponding to an irreducible representation of SU(2). The representation theory of SU(2) is reconsidered via the use of two truncated deformed oscillators. This leads to replacing the familiar scheme [j^2, j_z] by a scheme [j^2, v_ra], where the two-parameter operator v_ra is defined in the universal enveloping algebra of the Lie algebra su(2). The eigenvectors of the commuting set of operators [j^2, v_ra] are adapted to a tower of chains SO(3) ⊃ C_(2j+1) (2j ∈ N*), where C_(2j+1) is the cyclic group of order 2j + 1. In the case where 2j + 1 is prime, the corresponding eigenvectors generate a complete set of mutually unbiased bases. Some useful relations on generalized quadratic Gauss sums are exposed in three appendices. (authors)
The role of fire in UK peatland and moorland management: the need for informed, unbiased debate
Davies, G. Matt; Kettridge, Nicholas; Stoof, Cathelijne R.; Gray, Alan; Ascoli, Davide; Fernandes, Paulo M.; Marrs, Rob; Clay, Gareth D.; McMorrow, Julia; Vandvik, Vigdis
2016-01-01
Fire has been used for centuries to generate and manage some of the UK's cultural landscapes. Despite its complex role in the ecology of UK peatlands and moorlands, there has been a trend of simplifying the narrative around burning to present it as a solely ecologically damaging practice. That fire modifies peatland characteristics at a range of scales is clearly understood. Whether these changes are perceived as positive or negative depends upon how trade-offs are made between ecosystem services and the spatial and temporal scales of concern. Here we explore the complex interactions and trade-offs in peatland fire management, evaluating the benefits and costs of managed fire as they are currently understood. We highlight the need for (i) distinguishing between the impacts of fires occurring with differing severity and frequency, and (ii) improved characterization of ecosystem health that incorporates the response and recovery of peatlands to fire. We also explore how recent research has been contextualized within both scientific publications and the wider media and how this can influence non-specialist perceptions. We emphasize the need for an informed, unbiased debate on fire as an ecological management tool that is separated from other aspects of moorland management and from political and economic opinions. This article is part of the themed issue ‘The interaction of fire and mankind’. PMID:27216512
Cornely, Pierre-Richard; Hughes, John
2018-02-01
Earthquakes are among the most dangerous events that occur on earth and many scientists have been investigating the underlying processes that take place before earthquakes occur. These investigations are fueling efforts towards developing both single and multiple parameter earthquake forecasting methods based on earthquake precursors. One potential earthquake precursor parameter that has received significant attention within the last few years is the ionospheric total electron content (TEC). Despite its growing popularity as an earthquake precursor, TEC has been under great scrutiny because of the underlying biases associated with the process of acquiring and processing TEC data. Future work in the field will need to demonstrate our ability to acquire TEC data with the least amount of biases possible, thereby preserving the integrity of the data. This paper describes a process for removing biases using raw TEC data from the standard RINEX files obtained from any global positioning satellite system. The process is based on developing unbiased TEC (UTEC) data and a model that can be more adaptable to serving as a precursor signal for earthquake forecasting. The model was used during the days and hours leading up to the earthquake off the coast of Tohoku, Japan, on March 11, 2011, with interesting results. The model takes advantage of the large amount of data available from the GPS Earth Observation Network of Japan to display near real-time UTEC data as the earthquake approaches and for a period of time after the earthquake occurred.
Lukas, Manuel; Hillebrand, Eric
Relations between economic variables can often not be exploited for forecasting, suggesting that predictors are weak in the sense that estimation uncertainty is larger than bias from ignoring the relation. In this paper, we propose a novel bagging predictor designed for such weak predictor variab...
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
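The well-known forward direction (Chebyshev approximation reduces to a linear program) is easy to sketch: minimizing ||Ax - b||_inf is equivalent to minimizing t subject to -t·1 ≤ Ax - b ≤ t·1. The code below is an illustrative sketch of that standard reduction using `scipy.optimize.linprog` and made-up data; the paper's contribution is the converse direction, which is not shown here.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_fit(A, b):
    """Solve min_x ||Ax - b||_inf by the standard LP reduction:
    variables (x, t); minimize t s.t. Ax - t*1 <= b and -Ax - t*1 <= -b."""
    m, n = A.shape
    c = np.r_[np.zeros(n), 1.0]                      # objective: minimize t
    A_ub = np.block([[A, -np.ones((m, 1))],
                     [-A, -np.ones((m, 1))]])
    b_ub = np.r_[b, -b]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n], res.x[n]                       # minimizer and min-max error

# hypothetical example: best constant fit to (0, 1, 4) is the midrange 2,
# with worst-case error 2
A = np.ones((3, 1))
b = np.array([0.0, 1.0, 4.0])
x, t = chebyshev_fit(A, b)
print(x, t)
```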
Multiple Imputation of Predictor Variables Using Generalized Additive Models
de Jong, Roel; van Buuren, Stef; Spiess, Martin
2016-01-01
The sensitivity of multiple imputation methods to deviations from their distributional assumptions is investigated using simulations, where the parameters of scientific interest are the coefficients of a linear regression model, and values in predictor variables are missing at random. The
Nonlinear vs. linear biasing in Trp-cage folding simulations
Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka [Department of Biochemistry and Microbiology, University of Chemistry and Technology, Prague, Technická 3, Prague 6 166 28 (Czech Republic); Pazúriková, Jana [Institute of Computer Science, Masaryk University, Botanická 554/68a, 602 00 Brno (Czech Republic); Křenek, Aleš [Institute of Computer Science, Masaryk University, Botanická 554/68a, 602 00 Brno (Czech Republic); Center CERIT-SC, Masaryk Univerzity, Šumavská 416/15, 602 00 Brno (Czech Republic)
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
Unbiased and non-supervised learning methods for disruption prediction at JET
Murari, A.; Vega, J.; Ratta, G.A.; Vagliasindi, G.; Johnson, M.F.; Hong, S.H.
2009-01-01
The importance of predicting the occurrence of disruptions is going to increase significantly in the next generation of tokamak devices. The expected energy content of ITER plasmas, for example, is such that disruptions could have a significant detrimental impact on various parts of the device, ranging from erosion of plasma facing components to structural damage. Early detection of disruptions is therefore needed with ever-increasing urgency. In this paper, the results of a series of methods to predict disruptions at JET are reported. The main objective of the investigation consists of trying to determine how early before a disruption it is possible to perform acceptable predictions on the basis of the raw data, keeping to a minimum the number of 'ad hoc' hypotheses. Therefore, the chosen learning techniques have the common characteristic of requiring a minimum number of assumptions. Classification and Regression Trees (CART) is a supervised yet completely unbiased and nonlinear method, since it simply constructs the best classification tree by working directly on the input data. A series of unsupervised techniques, mainly K-means and hierarchical, have also been tested, to investigate to what extent they can autonomously distinguish between disruptive and non-disruptive groups of discharges. All these independent methods indicate that, in general, prediction with a success rate above 80% can be achieved not earlier than 180 ms before the disruption. The agreement between various completely independent methods increases the confidence in the results, which are also confirmed by a visual inspection of the data performed with pseudo Grand Tour algorithms.
Galili, Tal; Meilijson, Isaac
2016-01-02
The Rao-Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao-Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao-Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.].
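The Rao-Blackwell procedure is easy to demonstrate numerically in the one-sample uniform setting. The sketch below is the textbook illustration (not the paper's specific improvable counterexample): the crude unbiased estimator 2·X₁ of θ for Uniform(0, θ) data is conditioned on the sufficient statistic max(X), giving E[2X₁ | max = m] = (n+1)/n · m, which has much smaller variance.

```python
import random

def crude(sample):
    """Crude unbiased estimator of theta: 2 * X_1."""
    return 2.0 * sample[0]

def rao_blackwell(sample):
    """Rao-Blackwell improvement: E[2*X_1 | max(X)] = (n+1)/n * max(X)."""
    n = len(sample)
    return (n + 1) / n * max(sample)

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / len(v)

random.seed(0)
theta, n, reps = 1.0, 5, 20000
crude_vals, rb_vals = [], []
for _ in range(reps):
    s = [random.uniform(0, theta) for _ in range(n)]
    crude_vals.append(crude(s))
    rb_vals.append(rao_blackwell(s))

# both estimators are unbiased (means near theta = 1),
# but the Rao-Blackwellized one has far smaller variance
print(mean(crude_vals), mean(rb_vals))
print(var(crude_vals) > var(rb_vals))
```

Theoretically Var(2X₁) = θ²/3 while Var((n+1)/n · max) = θ²/(n(n+2)), so for n = 5 the variance drops by more than a factor of ten.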
Chen, Charles H; Wiedman, Gregory; Khan, Ayesha; Ulmschneider, Martin B
2014-09-01
Unbiased molecular simulation is a powerful tool to study the atomic details driving functional structural changes or folding pathways of highly fluid systems, which present great challenges experimentally. Here we apply unbiased long-timescale molecular dynamics simulation to study the ab initio folding and partitioning of melittin, a template amphiphilic membrane active peptide. The simulations reveal that the peptide binds strongly to the lipid bilayer in an unstructured configuration. Interfacial folding results in a localized bilayer deformation. Akin to purely hydrophobic transmembrane segments the surface bound native helical conformer is highly resistant against thermal denaturation. Circular dichroism spectroscopy experiments confirm the strong binding and thermostability of the peptide. The study highlights the utility of molecular dynamics simulations for studying transient mechanisms in fluid lipid bilayer systems. This article is part of a Special Issue entitled: Interfacially Active Peptides and Proteins. Guest Editors: William C. Wimley and Kalina Hristova. Copyright © 2014. Published by Elsevier B.V.
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The differences in EQD2 between the software and the manual calculations were not statistically significant (0.00%), with p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
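The core EQD2 conversion in the plain linear-quadratic model is a one-line formula: EQD2 = D · (d + α/β) / (2 + α/β), where D is total dose, d dose per fraction, and α/β the tissue ratio. A minimal sketch with hypothetical numbers (the Isobio software uses the LQL extension for large fraction sizes, which is not reproduced here):

```python
def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy):
    """Equivalent dose in 2 Gy fractions from the linear-quadratic model:
    EQD2 = D * (d + a/b) / (2 + a/b).
    Sketch of the plain LQ conversion only; the paper's Isobio software
    applies the linear-quadratic-linear (LQL) extension per voxel."""
    ab = alpha_beta_gy
    return total_dose_gy * (dose_per_fraction_gy + ab) / (2.0 + ab)

# hypothetical example: 28 Gy in 4 fractions (7 Gy/fx), tumour a/b = 10 Gy
print(round(eqd2(28.0, 7.0, 10.0), 2))  # 39.67 Gy EQD2
```

Applying this conversion voxel by voxel to a physical dose grid yields the biological dose distribution from which the biological DVH is accumulated.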
Jensen, Eva B. Vedel; Kiêu, K
1994-01-01
Unbiased stereological estimators of d-dimensional volume in R(n) are derived, based on information from an isotropic random r-slice through a specified point. The content of the slice can be subsampled by means of a spatial grid. The estimators depend only on spatial distances. As a fundamental ...... lemma, an explicit formula for the probability that an isotropic random r-slice in R(n) through 0 hits a fixed point in R(n) is given....
Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon
2014-05-27
Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus had been placed on the structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date these ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could bring the biases to the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCRs targets. To be more specific, our methods can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
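A standard single-level diagnostic underlying such analyses is the variance inflation factor, VIF_j = 1/(1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors; the paper's top-down recommendation applies this kind of check level by level. A minimal sketch with simulated data (not from the study):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X:
    VIF_j = 1 / (1 - R^2_j), with R^2_j from regressing column j
    on the other columns (plus an intercept)."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(X.shape[0]), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.05 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)               # independent predictor
print(vif(np.column_stack([x1, x2, x3])))  # first two large, third near 1
```

Values well above the conventional threshold of 10 for x1 and x2 flag the collinear pair, while x3 stays near 1.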
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig; Al-Naffouri, Tareq Y.
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.
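The general effect the paper exploits is easy to reproduce: at low SNR, even a crude ridge-type regularization with a hand-picked parameter beats plain LS in MSE. The sketch below uses simulated data and a fixed regularization parameter; the paper's BDU iteration, which estimates that parameter from the data, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 40
A = rng.normal(size=(n, m))
x = rng.normal(size=m)                       # unknown vector to estimate
y = A @ x + rng.normal(scale=3.0, size=n)    # low-SNR observations

# plain (unbiased) least squares
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# ridge-type regularized LS with a fixed, hand-picked parameter
lam = 2.0
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

mse = lambda xhat: float(np.mean((xhat - x) ** 2))
print(mse(x_ls), mse(x_reg))   # regularization lowers MSE at low SNR
```

The improvement comes from the small eigenvalues of AᵀA, which inflate the variance of plain LS far more than the bias that shrinkage introduces.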
Zaretzki, J.; Bergeron, C.; Huang, T.-W.
2013-01-01
Regioselectivity-WebPredictor (RS-WebPredictor) is a server that predicts isozyme-specific cytochrome P450 (CYP)-mediated sites of metabolism (SOMs) on drug-like molecules. Predictions may be made for the promiscuous 2C9, 2D6 and 3A4 CYP isozymes, as well as CYPs 1A2, 2A6, 2B6, 2C8, 2C19 and 2E1....... RS-WebPredictor is the first freely accessible server that predicts the regioselectivity of the last six isozymes. Server execution time is fast, taking on average 2s to encode a submitted molecule and 1s to apply a given model, allowing for high-throughput use in lead optimization projects.......Availability: RS-WebPredictor is accessible for free use at http://reccr.chem.rpi.edu/Software/RS-WebPredictor....
Predictors of transformational leadership of nurse managers.
Echevarria, Ilia M; Patterson, Barbara J; Krouse, Anne
2017-04-01
The aim of this study was to examine the relationships among education, leadership experience, emotional intelligence and transformational leadership of nurse managers. Nursing leadership research provides limited evidence of predictors of transformational leadership style in nurse managers. A predictive correlational design was used with a sample of nurse managers (n = 148) working in varied health care settings. Data were collected using the Genos Emotional Intelligence Inventory, the Multi-factor Leadership Questionnaire and a demographic questionnaire. Simple linear and multiple regression analyses were used to examine relationships. A statistically significant relationship was found between emotional intelligence and transformational leadership (r = 0.59, P transformational leadership. Nurse managers should be well informed of the predictors of transformational leadership in order to pursue continuing education and development opportunities related to those predictors. The results of this study emphasise the need for emotional intelligence continuing education, leadership development and leader assessment programmes. © 2016 John Wiley & Sons Ltd.
Comparing linear probability model coefficients across groups
Holm, Anders; Ejrnæs, Mette; Karlson, Kristian Bernt
2015-01-01
This article offers a formal identification analysis of the problem in comparing coefficients from linear probability models between groups. We show that differences in coefficients from these models can result not only from genuine differences in effects, but also from differences in one or more of the following three components: outcome truncation, scale parameters and distributional shape of the predictor variable. These results point to limitations in using linear probability model coefficients for group comparisons. We also provide Monte Carlo simulations and real examples to illustrate these limitations, and we suggest a restricted approach to using linear probability model coefficients in group comparisons.
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Arevalo, P. A.; Olofsson, P.; Woodcock, C. E.
2017-12-01
Unbiased estimation of the areas of conversion between land categories ("activity data") and their uncertainty is crucial for providing more robust calculations of carbon emissions to the atmosphere, as well as their removals. This is particularly important for the REDD+ mechanism of UNFCCC where an economic compensation is tied to the magnitude and direction of such fluxes. Dense time series of Landsat data and statistical protocols are becoming an integral part of forest monitoring efforts, but there are relatively few studies in the tropics focused on using these methods to advance operational MRV systems (Monitoring, Reporting and Verification). We present the results of a prototype methodology for continuous monitoring and unbiased estimation of activity data that is compliant with the IPCC Approach 3 for representation of land. We used a break detection algorithm (Continuous Change Detection and Classification, CCDC) to fit pixel-level temporal segments to time series of Landsat data in the Colombian Amazon. The segments were classified using a Random Forest classifier to obtain annual maps of land categories between 2001 and 2016. Using these maps, a biannual stratified sampling approach was implemented and unbiased stratified estimators constructed to calculate area estimates with confidence intervals for each of the stable and change classes. Our results provide evidence of a decrease in primary forest as a result of conversion to pastures, as well as increase in secondary forest as pastures are abandoned and the forest allowed to regenerate. Estimating areas of other land transitions proved challenging because of their very small mapped areas compared to stable classes like forest, which corresponds to almost 90% of the study area. Implications on remote sensing data processing, sample allocation and uncertainty reduction are also discussed.
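The unbiased stratified estimator referred to above is, in its simplest form, Â = Σ_h N_h · (n_hk / n_h): stratum size times the sampled proportion of the target class, summed over strata. A minimal sketch with hypothetical numbers (the study's actual strata, sample sizes and confidence-interval computation are not reproduced):

```python
def stratified_area(strata):
    """Unbiased stratified estimator of the area of one target class:
    A_hat = sum_h N_h * (n_hk / n_h), where N_h is the stratum area,
    n_h the sample size in stratum h, and n_hk the number of sampled
    units in stratum h whose reference label is the target class."""
    return sum(N_h * n_hk / n_h for N_h, n_h, n_hk in strata)

# hypothetical strata for the class "forest converted to pasture":
# (stratum_area_ha, sample_size, samples_labelled_forest_to_pasture)
strata = [(90000, 100, 2),    # stable-forest stratum (rare commission errors)
          (8000, 100, 70),    # mapped forest-to-pasture stratum
          (2000, 100, 10)]    # other-change stratum
print(stratified_area(strata))  # estimated class area in ha
```

Because the estimate is driven by the reference sample rather than the map alone, it corrects for the map's omission and commission errors, which is why very small change classes inside a ~90% forest landscape are hard to estimate precisely.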
Multivariate covariance generalized linear models
Bonat, W. H.; Jørgensen, Bent
2016-01-01
We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...
Banhatti, D.G.; Ananthakrishnan, S.
1989-01-01
We present 327-MHz interplanetary scintillation (IPS) observations of an unbiased sample of 90 extragalactic radio sources selected from the ninth Ooty lunar occultation list. The sources are brighter than 0.75 Jy at 327 MHz and lie outside the galactic plane. We derive the fraction of scintillating flux density and the equivalent Gaussian diameter for the scintillating structure. Various correlations are found between the observed parameters. In particular, the scintillating component weakens and broadens with increasing largest angular size, and stronger scintillators have more compact scintillating components. (author)
Linear regression and the normality assumption.
Schmidt, Amand F; Finan, Chris
2017-12-16
Researchers often perform arbitrary outcome transformations to fulfill the normality assumption of a linear regression model. This commentary explains and illustrates that in large data settings, such transformations are often unnecessary and, worse, may bias model estimates. Linear regression assumptions are illustrated using simulated data and an empirical example on the relation between time since type 2 diabetes diagnosis and glycated hemoglobin levels. Simulation results were evaluated on coverage, i.e., the number of times the 95% confidence interval included the true slope coefficient. Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. In contrast, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings. Given that modern healthcare research typically includes thousands of subjects, focusing on the normality assumption is often unnecessary, does not guarantee valid results, and may even bias estimates due to the practice of outcome transformations. Copyright © 2017 Elsevier Inc. All rights reserved.
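The simulation design sketched in this commentary, checking coverage of the 95% confidence interval for the slope when errors are non-normal, can be reproduced in a few lines. This is a hedged sketch with arbitrary parameter choices, not the authors' exact simulation; it uses strongly skewed centred-exponential errors to illustrate that coverage stays near nominal in moderately large samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_ci_covers(n=200, true_slope=0.5):
    """Fit OLS and check whether the 95% CI for the slope covers the truth.
    Errors are centred-exponential: strongly skewed, i.e. clearly non-normal."""
    x = rng.uniform(0, 10, n)
    y = 1.0 + true_slope * x + (rng.exponential(1.0, n) - 1.0)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)                     # residual variance
    se = np.sqrt(sigma2 / np.sum((x - x.mean()) ** 2))   # slope standard error
    return abs(beta[1] - true_slope) < 1.96 * se

# Fraction of simulated CIs that cover the true slope.
coverage = np.mean([slope_ci_covers() for _ in range(2000)])
```

Despite the skewed error distribution, `coverage` lands close to the nominal 0.95, which is the commentary's point about the normality assumption in large samples.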
Predictors of Video Game Console Aggression
Bean, Anthony Martin; Ferro, Lauren
2016-01-01
This study was designed to investigate the aggression levels of college students found in the Northeastern part of the United States following exposure to video games. The 59 participants played their assigned game, Mortal Kombat on Nintendo Wii or Halo 2 on the Xbox, for 45 minutes with a partner. The researchers employed twelve t-tests (alpha adjusted to .004) and three multiple linear regressions to explore the difference of aggression levels in gender, violent video game, and predictors o...
Malm, S; Sørensen, Anders Christian; Fikse, W F
2013-01-01
Breeding to reduce the prevalence of categorically scored hip dysplasia (HD), based on phenotypic assessment of radiographic hip status, has had limited success. The aim of this study was to evaluate two selection strategies for improved hip status: truncation selection based on phenotypic record...
Moreira, Joao; Zeng, Xiaohan; Amaral, Luis
2013-03-01
Assessing the career performance of scientists has become essential to modern science. Bibliometric indicators, like the h-index, are becoming more and more decisive in evaluating grants and approving publication of articles. However, many of the most used indicators can be manipulated or falsified, for instance by publishing with very prolific researchers or by self-citing papers with a certain number of citations. Accounting for these factors is possible, but it introduces unwanted complexity that drives us further from the purpose of the indicator: to represent in a clear way the prestige and importance of a given scientist. Here we try to overcome this challenge. We used Thomson Reuters' Web of Science database and analyzed all the papers published until 2000 by ~1500 researchers in the top 30 departments of seven scientific fields. We find that over 97% of them have a citation distribution that is consistent with a discrete lognormal model. This suggests that our model can be used to accurately predict the performance of a researcher. Furthermore, this predictor does not depend on the individual number of publications and is not easily ``gamed''. The authors acknowledge support from FCT Portugal, and NSF grants
Russell, Joseph A; Campos, Brittany; Stone, Jennifer; Blosser, Erik M; Burkett-Cadena, Nathan; Jacobs, Jonathan L
2018-04-03
The future of infectious disease surveillance and outbreak response is trending towards smaller hand-held solutions for point-of-need pathogen detection. Here, samples of Culex cedecei mosquitoes collected in Southern Florida, USA were tested for Venezuelan Equine Encephalitis Virus (VEEV), a previously-weaponized arthropod-borne RNA-virus capable of causing acute and fatal encephalitis in animal and human hosts. A single 20-mosquito pool tested positive for VEEV by quantitative reverse transcription polymerase chain reaction (RT-qPCR) on the Biomeme two3. The virus-positive sample was subjected to unbiased metatranscriptome sequencing on the Oxford Nanopore MinION and shown to contain Everglades Virus (EVEV), an alphavirus in the VEEV serocomplex. Our results demonstrate, for the first time, the use of unbiased sequence-based detection and subtyping of a high-consequence biothreat pathogen directly from an environmental sample using field-forward protocols. The development and validation of methods designed for field-based diagnostic metagenomics and pathogen discovery, such as those suitable for use in mobile "pocket laboratories", will address a growing demand for public health teams to carry out their mission where it is most urgent: at the point-of-need.
Geng, Xiujuan; Gu, Hong; Shin, Wanyong; Ross, Thomas J; Yang, Yihong
2011-10-01
We propose an unbiased implicit-reference group-wise (IRG) image registration method and demonstrate its applications in the construction of a brain white matter fiber tract atlas and the analysis of resting-state functional MRI (fMRI) connectivity. Most image registration techniques pair-wise align images to a selected reference image and group analyses are performed in the reference space, which may produce bias. The proposed method jointly estimates transformations, with an elastic deformation model, registering all images to an implicit reference corresponding to the group average. The unbiased registration is applied to build a fiber tract atlas by registering a group of diffusion tensor images. Compared to reference-based registration, the IRG registration improves the fiber tract overlap within the group. After applying the method in the fMRI connectivity analysis, results suggest a general improvement in functional connectivity maps at a group level in terms of larger cluster size and higher average t-scores.
Ip, Hon S.; Wiley, Michael R.; Long, Renee; Palacios, Gustavo; Shearn-Bochsler, Valerie; Whitehouse, Chris A.
2014-01-01
Advances in massively parallel DNA sequencing platforms, commonly termed next-generation sequencing (NGS) technologies, have greatly reduced time, labor, and cost associated with DNA sequencing. Thus, NGS has become a routine tool for new viral pathogen discovery and will likely become the standard for routine laboratory diagnostics of infectious diseases in the near future. This study demonstrated the application of NGS for the rapid identification and characterization of a virus isolated from the brain of an endangered Mississippi sandhill crane. This bird was part of a population restoration effort and was found in an emaciated state several days after Hurricane Isaac passed over the refuge in Mississippi in 2012. Post-mortem examination had identified trichostrongyliasis as the possible cause of death, but because a virus with morphology consistent with a togavirus was isolated from the brain of the bird, an arboviral etiology was strongly suspected. Because individual molecular assays for several known arboviruses were negative, unbiased NGS by Illumina MiSeq was used to definitively identify and characterize the causative viral agent. Whole genome sequencing and phylogenetic analysis revealed the viral isolate to be the Highlands J virus, a known avian pathogen. This study demonstrates the use of unbiased NGS for the rapid detection and characterization of an unidentified viral pathogen and the application of this technology to wildlife disease diagnostics and conservation medicine.
AN UNBIASED 1.3 mm EMISSION LINE SURVEY OF THE PROTOPLANETARY DISK ORBITING LkCa 15
Punzi, K. M.; Kastner, J. H. [Center for Imaging Science, School of Physics and Astronomy, and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623 (United States); Hily-Blant, P.; Forveille, T. [UJF—Grenoble 1/CNRS-INSU, Institut de Planétologie et d’Astrophysique de Grenoble (IPAG) UMR 5274, F-38041, Grenoble (France); Sacco, G. G. [INAF—Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125, Firenze (Italy)
2015-06-01
The outer (>30 AU) regions of the dusty circumstellar disk orbiting the ∼2–5 Myr old, actively accreting solar analog LkCa 15 are known to be chemically rich, and the inner disk may host a young protoplanet within its central cavity. To obtain a complete census of the brightest molecular line emission emanating from the LkCa 15 disk over the 210–270 GHz (1.4–1.1 mm) range, we have conducted an unbiased radio spectroscopic survey with the Institut de Radioastronomie Millimétrique (IRAM) 30 m telescope. The survey demonstrates that in this spectral region, the most readily detectable lines are those of CO and its isotopologues 13CO and C18O, as well as HCO+, HCN, CN, C2H, CS, and H2CO. All of these species had been previously detected in the LkCa 15 disk; however, the present survey includes the first complete coverage of the CN (2–1) and C2H (3–2) hyperfine complexes. Modeling of these emission complexes indicates that the CN and C2H either reside in the coldest regions of the disk or are subthermally excited, and that their abundances are enhanced relative to molecular clouds and young stellar object environments. These results highlight the value of unbiased single-dish line surveys in guiding future high-resolution interferometric imaging of disks.
Truncated predictor feedback for time-delay systems
Zhou, Bin
2014-01-01
This book provides a systematic approach to the design of predictor-based controllers for (time-varying) linear systems with either (time-varying) input or state delays. Unlike traditional predictor-based controllers, which are infinite-dimensional static feedback laws and may cause difficulties in their practical implementation, this book develops a truncated predictor feedback (TPF) approach which involves only finite-dimensional static state feedback. Features and topics: a novel approach, referred to as truncated predictor feedback, for the stabilization of (time-varying) time-delay systems is built systematically in both the continuous-time and the discrete-time settings; semi-global and global stabilization problems of linear time-delay systems subject to either magnitude saturation or energy constraints are solved in a systematic manner; both stabilization of a single system and consensus of a group of systems (multi-agent systems) are treated in a unified manner by applying the truncated pre...
Predictor feedback for delay systems implementations and approximations
Karafyllis, Iasson
2017-01-01
This monograph bridges the gap between the nonlinear predictor as a concept and as a practical tool, presenting a complete theory of the application of predictor feedback to time-invariant, uncertain systems with constant input delays and/or measurement delays. It supplies several methods for generating the necessary real-time solutions to the systems’ nonlinear differential equations, which the authors refer to as approximate predictors. Predictor feedback for linear time-invariant (LTI) systems is presented in Part I to provide a solid foundation on the necessary concepts, as LTI systems pose fewer technical difficulties than nonlinear systems. Part II extends all of the concepts to nonlinear time-invariant systems. Finally, Part III explores extensions of predictor feedback to systems described by integral delay equations and to discrete-time systems. The book’s core is the design of control and observer algorithms with which global stabilization, guaranteed in the previous literature with idealized (b...
Predictors of depression stigma
Jorm Anthony F
2008-04-01
Background: To investigate and compare the predictors of personal and perceived stigma associated with depression. Method: Three samples were surveyed to investigate the predictors: a national sample of 1,001 Australian adults; a local community sample of 5,572 residents of the Australian Capital Territory and Queanbeyan aged 18 to 50 years; and a psychologically distressed subset (n = 487) of the latter sample. Personal and perceived stigma were measured using the two subscales of the Depression Stigma Scale. Potential predictors included demographic variables (age, gender, education, country of birth, remoteness of residence), psychological distress, awareness of Australia's national depression initiative beyondblue, depression literacy, and level of exposure to depression. Not all predictors were used for all samples. Results: Personal stigma was consistently higher among men, those with less education, and those born overseas. It was also associated with greater current psychological distress, lower prior contact with depression, not having heard of a national awareness raising initiative, and lower depression literacy. These findings differed from those for perceived stigma, except for psychological distress, which was associated with both higher personal and higher perceived stigma. Remoteness of residence was not associated with either type of stigma. Conclusion: The findings highlight the importance of treating the concepts of personal and perceived stigma separately in designing measures of stigma, in interpreting the pattern of findings in studies of the predictors of stigma, and in designing, interpreting the impact of, and disseminating interventions for stigma.
Linearly constrained minimax optimization
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
Wirenfeldt, Martin; Dalmau, Ishar; Finsen, Bente
2003-01-01
Stereology offers a set of unbiased principles to obtain precise estimates of total cell numbers in a defined region. In terms of microglia, which in the traumatized and diseased CNS is an extremely dynamic cell population, the strength of stereology is that the resultant estimate is unaffected...... of microglia, although with this thickness, the intensity of the staining is too high to distinguish single cells. Lectin histochemistry does not visualize microglia throughout the section and, accordingly, is not suited for the optical fractionator. The mean total number of Mac-1+ microglial cells...... in the unilateral dentate gyrus of the normal young adult male C57BL/6 mouse was estimated to be 12,300 (coefficient of variation (CV)=0.13) with a mean coefficient of error (CE) of 0.06. The perspective of estimating microglial cell numbers using stereology is to establish a solid basis for studying the dynamics...
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
......the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation to be positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator......
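The core sampling idea in this abstract, fields drawn with probability proportional to a weight followed by an unbiased estimate of the total, can be illustrated with a with-replacement (Hansen-Hurwitz) sketch. The field weights and counts below are synthetic, and the estimator is a simplified stand-in for the proportionator's actual design, but it exhibits the key property: unbiasedness holds for any positive weight-structure relation, while the quality of that relation only governs efficiency.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic fields: a "weight" (e.g. stain intensity) that is noisily but
# positively related to the true particle count in each field.
true_counts = rng.poisson(5.0, size=400) + 1
weights = true_counts + rng.uniform(0, 3, size=400)
true_total = int(true_counts.sum())

def pps_estimate(n_fields=40):
    """Sample fields with probability proportional to weight (with
    replacement) and return the Hansen-Hurwitz estimate of the total count:
    the mean of count_i / p_i over sampled fields, which is unbiased."""
    p = weights / weights.sum()
    idx = rng.choice(len(weights), size=n_fields, replace=True, p=p)
    return float(np.mean(true_counts[idx] / p[idx]))

# Averaging many independent estimates recovers the true total.
estimates = [pps_estimate() for _ in range(3000)]
```

Because the weight tracks the true count well here, each `count_i / p_i` term is nearly constant, so individual estimates already sit close to the total; with an uninformative weight they would still be unbiased, just noisier.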
Darré, Leonardo; Machado, Matías Rodrigo; Brandner, Astrid Febe; González, Humberto Carlos; Ferreira, Sebastián; Pantano, Sergio
2015-02-10
Modeling of macromolecular structures and interactions represents an important challenge for computational biology, involving different time and length scales. However, this task can be facilitated through the use of coarse-grained (CG) models, which reduce the number of degrees of freedom and allow efficient exploration of complex conformational spaces. This article presents a new CG protein model named SIRAH, developed to work with explicit solvent and to capture sequence, temperature, and ionic strength effects in a topologically unbiased manner. SIRAH is implemented in GROMACS, and interactions are calculated using a standard pairwise Hamiltonian for classical molecular dynamics simulations. We present a set of simulations that test the capability of SIRAH to produce a qualitatively correct solvation on different amino acids, hydrophilic/hydrophobic interactions, and long-range electrostatic recognition leading to spontaneous association of unstructured peptides and stable structures of single polypeptides and protein-protein complexes.
Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua
2018-05-01
A high-precision navigation algorithm is essential for the future Mars pinpoint landing mission. The unknown inputs caused by large uncertainties of atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, may cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum variance filter for Mars entry navigation. The filter solves this problem by estimating the state and the unknown measurement biases simultaneously, with a derivative-free character, leading to a high-precision algorithm for Mars entry navigation. IMU/radio beacons integrated navigation is introduced in the simulation, and the result shows that with or without radio blackout, the proposed filter achieves an accurate state estimation, much better than the conventional unscented Kalman filter, demonstrating its capability as a high-precision Mars entry navigation algorithm. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
2014-01-01
Background: Descendants of the extinct aurochs (Bos primigenius), taurine (Bos taurus) and zebu cattle (Bos indicus) were domesticated 10,000 years ago in Southwestern and Southern Asia, respectively, and colonized the world undergoing complex events of admixture and selection. Molecular data, in particular genome-wide single nucleotide polymorphism (SNP) markers, can complement historic and archaeological records to elucidate these past events. However, SNP ascertainment in cattle has been optimized for taurine breeds, imposing limitations on the study of diversity in zebu cattle. As amplified fragment length polymorphism (AFLP) markers are discovered and genotyped as the samples are assayed, this type of marker is free of ascertainment bias. In order to obtain unbiased assessments of genetic differentiation and structure in taurine and zebu cattle, we analyzed a dataset of 135 AFLP markers in 1,593 samples from 13 zebu and 58 taurine breeds, representing nine continental areas. Results: We found a geographical pattern of expected heterozygosity in European taurine breeds decreasing with the distance from the domestication centre, arguing against a large-scale introgression from European or African aurochs. Zebu cattle were found to be at least as diverse as taurine cattle. Western African zebu cattle were found to have diverged more from Indian zebu than South American zebu. Model-based clustering and ancestry informative markers analyses suggested that this is due to taurine introgression. Although a large part of South American zebu cattle also descend from taurine cows, we did not detect significant levels of taurine ancestry in these breeds, probably because of systematic backcrossing with zebu bulls. Furthermore, limited zebu introgression was found in Podolian taurine breeds in Italy. Conclusions: The assessment of cattle diversity reported here contributes an unbiased global view of genetic differentiation and structure of taurine and zebu cattle.
Shota Nakamura
With the severe acute respiratory syndrome epidemic of 2003 and renewed attention on avian influenza viral pandemics, new surveillance systems are needed for the earlier detection of emerging infectious diseases. We applied a "next-generation" parallel sequencing platform for viral detection in nasopharyngeal and fecal samples collected during seasonal influenza virus (Flu) infections and norovirus outbreaks from 2005 to 2007 in Osaka, Japan. Random RT-PCR was performed to amplify RNA extracted from 0.1-0.25 ml of nasopharyngeal aspirates (N = 3) and fecal specimens (N = 5), and more than 10 microg of cDNA was synthesized. Unbiased high-throughput sequencing of these 8 samples yielded 15,298-32,335 (average 24,738) reads in a single 7.5 h run. In nasopharyngeal samples, although whole genome analysis was not available because the majority (>90%) of reads were host genome-derived, 20-460 Flu reads were detected, which was sufficient for subtype identification. In fecal samples, bacteria and host cells were removed by centrifugation, resulting in a gain of 484-15,260 reads of norovirus sequence (78-98% of the whole genome was covered), except for one specimen that was under-detectable by RT-PCR. These results suggest that our unbiased high-throughput sequencing approach is useful for directly detecting pathogenic viruses without advance genetic information. Although its cost and technological availability make it unlikely that this system will very soon be the diagnostic standard worldwide, this system could be useful for the earlier discovery of novel emerging viruses and bioterrorism agents, which are difficult to detect with conventional procedures.
Bragg, Elise M; Briggs, Farran
2017-02-15
This protocol outlines large-scale reconstructions of neurons combined with the use of independent and unbiased clustering analyses to create a comprehensive survey of the morphological characteristics observed among a selective neuronal population. Combination of these techniques constitutes a novel approach for the collection and analysis of neuroanatomical data. Together, these techniques enable large-scale, and therefore more comprehensive, sampling of selective neuronal populations and establish unbiased quantitative methods for describing morphologically unique neuronal classes within a population. The protocol outlines the use of modified rabies virus to selectively label neurons. G-deleted rabies virus acts like a retrograde tracer following stereotaxic injection into a target brain structure of interest and serves as a vehicle for the delivery and expression of EGFP in neurons. Large numbers of neurons are infected using this technique and express GFP throughout their dendrites, producing "Golgi-like" complete fills of individual neurons. Accordingly, the virus-mediated retrograde tracing method improves upon traditional dye-based retrograde tracing techniques by producing complete intracellular fills. Individual well-isolated neurons spanning all regions of the brain area under study are selected for reconstruction in order to obtain a representative sample of neurons. The protocol outlines procedures to reconstruct cell bodies and complete dendritic arborization patterns of labeled neurons spanning multiple tissue sections. Morphological data, including positions of each neuron within the brain structure, are extracted for further analysis. Standard programming functions were utilized to perform independent cluster analyses and cluster evaluations based on morphological metrics. To verify the utility of these analyses, statistical evaluation of a cluster analysis performed on 160 neurons reconstructed in the thalamic reticular nucleus of the thalamus
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
Foundations of linear and generalized linear models
Agresti, Alan
2015-01-01
A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,
Diagnostics for Linear Models With Functional Responses
Xu, Hongquan; Shen, Qing
2005-01-01
Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...
Correlations and Non-Linear Probability Models
Breen, Richard; Holm, Anders; Karlson, Kristian Bernt
2014-01-01
Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.
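The central derivation in this abstract, recovering the correlation between the latent dependent variable and a predictor from probit-style parameters, can be checked numerically for a one-predictor latent model. The parameter values below are arbitrary illustrations; with y* = b·x + e and standard-normal e, the implied correlation is b·sd(x)/sqrt(b²·var(x) + 1).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical latent-variable model underlying a probit: y* = b*x + e, e ~ N(0,1).
b, sigma_x = 0.8, 1.5
x = rng.normal(0, sigma_x, 200_000)
y_star = b * x + rng.normal(0, 1, x.size)   # latent continuous outcome
y = (y_star > 0).astype(int)                # only this binary y is observed

# Correlation implied by the model parameters (the paper's derived quantity):
rho_derived = b * sigma_x / np.sqrt(b**2 * sigma_x**2 + 1)

# Empirical correlation between the (normally unobservable) latent outcome
# and the predictor, for comparison with the derived value.
rho_empirical = np.corrcoef(x, y_star)[0, 1]
```

The two quantities agree closely, illustrating why the derived correlation is comparable across samples even though raw probit coefficients are scaled by the residual standard deviation.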
Jensen Just
2002-05-01
In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates) with associated fixed or random effects. In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection, and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model), or a generalised version of heritability, plays a central role in these formulas.
Exploratory regression analysis: a tool for selecting models and determining predictor importance.
Braun, Michael T; Oswald, Frederick L
2011-06-01
Linear regression analysis is one of the most important tools in a researcher's toolbox for creating and testing predictive models. Although linear regression analysis indicates how strongly a set of predictor variables, taken together, will predict a relevant criterion (i.e., the multiple R), the analysis cannot indicate which predictors are the most important. Although there is no definitive or unambiguous method for establishing predictor variable importance, there are several accepted methods. This article reviews those methods for establishing predictor importance and provides a program (in Excel) for implementing them (available for direct download at http://dl.dropbox.com/u/2480715/ERA.xlsm?dl=1). The program investigates all 2^p - 1 submodels and produces several indices of predictor importance. This exploratory approach to linear regression, similar to other exploratory data analysis techniques, has the potential to yield both theoretical and practical benefits.
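The all-submodels strategy described in this abstract (fitting all 2^p - 1 submodels and deriving importance indices from them) can be sketched with ordinary least squares. The simulated data and the simple averaged-R²-gain importance index below are illustrative assumptions, not the article's Excel implementation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Simulated data: three predictors with decreasing true influence on y.
n, p = 500, 3
X = rng.normal(size=(n, p))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(size=n)

def r_squared(cols):
    """R^2 of the OLS fit of y on the given predictor columns (plus intercept)."""
    Xs = np.column_stack([np.ones(n), X[:, cols]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# R^2 for every one of the 2^p - 1 non-empty submodels.
submodels = {cols: r_squared(list(cols))
             for k in range(1, p + 1)
             for cols in itertools.combinations(range(p), k)}

def importance(j):
    """Dominance-style index: average R^2 gain predictor j adds across all
    submodels that exclude it, including its gain over the empty model."""
    gains = [submodels[tuple(sorted(set(cols) | {j}))] - submodels[cols]
             for cols in submodels if j not in cols]
    gains.append(submodels[(j,)])
    return float(np.mean(gains))
```

With independent predictors this index simply ranks predictors by their true coefficients; its value lies in also being well defined when predictors are correlated, which is where single-model coefficients mislead.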
Gender and distance influence performance predictors in young swimmers
Paulo Victor Mezzaroba
2013-12-01
Predictors of performance in adult swimmers are constantly changing during youth, especially because in this sport the training routine begins even before puberty. Therefore, this study aimed to determine the group of parameters that best predicts short and middle distance swimming performances of young swimmers of both genders. Thirty-three 10- to 16-year-old male and female competitive swimmers participated in the study. Multiple linear regression (MLR) was used considering mean speed of maximum 100, 200 and 400 m efforts as dependent variables, and five parameter groups as possible predictors (anthropometry, body composition, physiological and biomechanical parameters, chronological age/pubic hair). The main results revealed explanatory powers of almost 100% for both genders and all performances, but with different predictors entered in the MLR models of each parameter group or of all variables. Thus, there are considerable differences between short and middle distance predictors and between male and female predictors, which should be considered in training programs.
Predictors of nurses' experience of verbal abuse by nurse colleagues.
Keller, Ronald; Krainovich-Miller, Barbara; Budin, Wendy; Djukic, Maja
Between 45% and 94% of registered nurses (RNs) experience verbal abuse, which is associated with physical and psychological harm. Although several studies have examined predictors of RNs' verbal abuse, none has examined predictors of RNs' experiences of verbal abuse by RN colleagues. To examine individual, workplace, dispositional, contextual, and interpersonal predictors of RNs' reported experiences of verbal abuse from RN colleagues. In this secondary analysis, a cross-sectional design with multiple linear regression analysis was used to examine the effect of 23 predictors on verbal abuse by RN colleagues in a sample of 1,208 early-career RNs. Selected variables in the empirical intragroup conflict model explained 23.8% of the variance in RNs' experiences of verbal abuse by RN colleagues. The study identified a number of previously unstudied factors that organizational leaders can monitor, and policies they can develop or modify, to prevent early-career RNs' experiences of verbal abuse by RN colleagues. Copyright © 2017 Elsevier Inc. All rights reserved.
Predictors of relationship power among drug-involved women.
Campbell, Aimee N C; Tross, Susan; Hu, Mei-chen; Pavlicova, Martina; Nunes, Edward V
2012-08-01
Gender-based relationship power is frequently linked to women's capacity to reduce sexual risk behaviors. This study offers an exploration of predictors of relationship power, as measured by the multidimensional and theoretically grounded sexual relationship power scale, among women in outpatient substance abuse treatment. Linear models were used to test nine predictors (age, race/ethnicity, education, time in treatment, economic dependence, substance use, sexual concurrency, partner abuse, and sex role orientation) of relationship power among 513 women participating in a multi-site HIV risk reduction intervention study. Significant predictors of relationship control included having a non-abusive male partner, only one male partner, and endorsing traditional masculine (or both masculine and feminine) sex role attributes. Predictors of decision-making dominance were interrelated, with substance use × partner abuse and age × sex role orientation interactions. Results contribute to the understanding of factors which may influence relationship power and to their potential role in HIV sexual risk reduction interventions.
PCX, Interior-Point Linear Programming Solver
Czyzyk, J.
2004-01-01
1 - Description of program or function: PCX solves linear programming problems using the Mehrotra predictor-corrector interior-point algorithm. PCX can be called as a subroutine or used in stand-alone mode, with data supplied from an MPS file. The software incorporates modules that can be used separately from the linear programming solver, including a pre-solve routine and data structure definitions. 2 - Methods: The Mehrotra predictor-corrector method is a primal-dual interior-point method for linear programming. The starting point is determined from a modified least-squares heuristic. Linear systems of equations are solved at each interior-point iteration via a sparse Cholesky algorithm native to the code. A pre-solver is incorporated in the code to eliminate inefficiencies in the user's formulation of the problem. 3 - Restrictions on the complexity of the problem: There are no size limitations built into the program. The size of problems that can be solved is limited by the RAM and swap space on the user's computer.
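As a concrete (if tiny) instance of the problems PCX targets, here is a toy LP solved with SciPy's `linprog` — an assumption of this sketch, since PCX itself is the stand-alone package described above:

```python
from scipy.optimize import linprog

# maximize x0 + 2*x1  subject to  x0 + x1 <= 4,  x0 + 3*x1 <= 6,  x >= 0
# (linprog minimizes, so negate the objective)
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum at x = (3, 1), objective value 5
```

Interior-point codes like PCX and simplex-type codes solve the same standard form; the MPS file PCX reads encodes exactly these c, A, and b data.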
Rosenblum, Michael; van der Laan, Mark J.
2010-01-01
Models, such as logistic regression and Poisson regression models, are often used to estimate treatment effects in randomized trials. These models leverage information in variables collected before randomization, in order to obtain more precise estimates of treatment effects. However, there is the danger that model misspecification will lead to bias. We show that certain easy to compute, model-based estimators are asymptotically unbiased even when the working model used is arbitrarily misspecified. Furthermore, these estimators are locally efficient. As a special case of our main result, we consider a simple Poisson working model containing only main terms; in this case, we prove the maximum likelihood estimate of the coefficient corresponding to the treatment variable is an asymptotically unbiased estimator of the marginal log rate ratio, even when the working model is arbitrarily misspecified. This is the log-linear analog of ANCOVA for linear models. Our results demonstrate one application of targeted maximum likelihood estimation. PMID:20628636
Predictors of weight maintenance
Pasman, W.J.; Saris, W.H.M.; Westerterp-Plantenga, M.S.
1999-01-01
Objective: To obtain predictors of weight maintenance after a weight-loss intervention. Research Methods and Procedures: An overall analysis of data from two long-term intervention studies [n = 67 women; age: 37.9±1.0 years; body weight (BW): 87.0±1.2 kg; body mass index: 32.1±0.5 kg·m-2; % body fat:
Predictors of Adolescent Breakfast Consumption: Longitudinal Findings from Project EAT
Bruening, Meg; Larson, Nicole; Story, Mary; Neumark-Sztainer, Dianne; Hannan, Peter
2011-01-01
Objective: To identify predictors of breakfast consumption among adolescents. Methods: Five-year longitudinal study Project EAT (Eating Among Teens). Baseline surveys were completed in Minneapolis-St. Paul schools and by mail at follow-up by youth (n = 800) transitioning from middle to high school. Linear regression models examined associations…
Epistemological Predictors of Prospective Biology Teachers' Nature of Science Understandings
Köseoglu, Pinar; Köksal, Mustafa Serdar
2015-01-01
The purpose of this study was to investigate epistemological predictors of nature of science understandings of 281 prospective biology teachers surveyed using the Epistemological Beliefs Scale Regarding Science and the Nature of Science Scale. The findings on multiple linear regression showed that understandings about definition of science and…
Tuey, R. C.
1972-01-01
Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
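The core task outlined above — solving simultaneous linear equations through matrix algebra — in a minimal NumPy sketch:

```python
import numpy as np

# Solve the simultaneous equations  2x + y = 3,  x + 3y = 5  as A x = b.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = np.linalg.solve(A, b)
print(x)  # → [0.8 1.4]
```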
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Classifying Linear Canonical Relations
Lorand, Jonathan
2015-01-01
In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.
Malevergne, Yannick; Pisarenko, Vladilen; Sornette, Didier
2011-03-01
Fat-tail distributions of sizes abound in natural, physical, economic, and social systems. The lognormal and the power laws have historically competed for recognition with sometimes closely related generating processes and hard-to-distinguish tail properties. This state of affairs is illustrated with the debate between Eeckhout [Amer. Econ. Rev. 94, 1429 (2004)] and Levy [Amer. Econ. Rev. 99, 1672 (2009)] on the validity of Zipf's law for US city sizes. By using a uniformly most powerful unbiased (UMPU) test between the lognormal and the power laws, we show that conclusive results can be achieved to end this debate. We advocate the UMPU test as a systematic tool to address similar controversies in the literature of many disciplines involving power laws, scaling, "fat" or "heavy" tails. In order to demonstrate that our procedure works for data sets other than the US city size distribution, we also briefly present the results obtained for the power-law tail of the distribution of personal identity (ID) losses, which constitute one of the major emergent risks at the interface between cyberspace and reality.
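The UMPU test itself is specialized, but the underlying question — does a lognormal or a power law fit better? — can be illustrated with a plain maximized-likelihood comparison on synthetic power-law data (a simplification, not the authors' procedure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = stats.pareto.rvs(b=2.0, size=2000, random_state=rng)  # true model: power law

# Maximized log-likelihood under each candidate (location pinned at 0 for both).
pareto_params = stats.pareto.fit(data, floc=0, fscale=1)
ll_pareto = stats.pareto.logpdf(data, *pareto_params).sum()
lognorm_params = stats.lognorm.fit(data, floc=0)
ll_lognorm = stats.lognorm.logpdf(data, *lognorm_params).sum()
print(ll_pareto - ll_lognorm)  # positive here: the power law wins on this sample
```

A likelihood gap alone is not a calibrated test — the UMPU construction is precisely what supplies valid error rates for this nested-adjacent comparison.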
Jo Nishino
2018-04-01
Genome-wide association studies (GWAS) suggest that the genetic architecture of complex diseases consists of unexpectedly numerous variants with small effect sizes. However, the polygenic architectures of many diseases have not been well characterized due to the lack of simple and fast methods for unbiased estimation of the underlying proportion of disease-associated variants and their effect-size distribution. Applying empirical Bayes estimation of semi-parametric hierarchical mixture models to GWAS summary statistics, we confirmed that schizophrenia was extremely polygenic [~40% of independent genome-wide SNPs are risk variants, most within odds ratio (OR) = 1.03], whereas rheumatoid arthritis was less polygenic (~4 to 8% risk variants, a significant portion reaching OR = 1.05 to 1.1). For rheumatoid arthritis, stratified estimations revealed that expression quantitative trait loci in blood explained large genetic variance, and low- and high-frequency derived alleles were prone to be risk and protective, respectively, suggesting a predominance of deleterious-risk and advantageous-protective mutations. Despite genetic correlation, effect-size distributions for schizophrenia and bipolar disorder differed across allele frequency. These analyses distinguished disease polygenic architectures and provided clues for etiological differences in complex diseases.
Lawson, C. L.; Krogh, F. T.; Gold, S. S.; Kincaid, D. R.; Sullivan, J.; Williams, E.; Hanson, R. J.; Haskell, K.; Dongarra, J.; Moler, C. B.
1982-01-01
The Basic Linear Algebra Subprograms (BLAS) library is a collection of 38 FORTRAN-callable routines for performing basic operations of numerical linear algebra. The BLAS library is a portable and efficient source of basic operations for designers of programs involving linear algebraic computations. The BLAS library is supplied in portable FORTRAN and Assembler code versions for IBM 370, UNIVAC 1100, and CDC 6000 series computers.
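Those FORTRAN routines are still the substrate of modern numerical software; SciPy, for instance, exposes them directly (shown here with `dgemm`, the double-precision matrix-matrix multiply):

```python
import numpy as np
from scipy.linalg.blas import dgemm

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
C = dgemm(alpha=1.0, a=A, b=B)  # computes alpha * A @ B via the BLAS routine
print(C)
```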
Nonlinear price impact from linear models
Patzelt, Felix; Bouchaud, Jean-Philippe
2017-12-01
The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales (Patzelt and Bouchaud 2017 arXiv:1706.04163). Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear ‘propagator’ models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator in addition to the order sign history increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Therefore, our results contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.
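A linear propagator model of the kind calibrated in that work can be simulated in a few lines (toy kernel and parameters, not the paper's calibration): the price is a sum of past signed trades weighted by a slowly decaying kernel.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
eps = rng.choice([-1.0, 1.0], size=T)   # trade signs (buy/sell)
G = 0.1 * np.arange(1, T + 1) ** -0.5   # power-law decaying propagator; G[k] is lag k+1

# p_t = sum_{s < t} G(t - s) * eps_s : each past trade leaves a decaying imprint.
p = np.array([np.dot(G[:t][::-1], eps[:t]) for t in range(T)])
print(p[:5])
```

Calibrating G against real order flow is where the spectral estimators mentioned in the abstract come in; this sketch only shows the model's generative structure.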
Non linear system become linear system
Petre Bucur
2007-01-01
The present paper addresses the theory and practice of non-linear systems and their applications. We aim to integrate these systems in order to derive their response, as well as to highlight some of their outstanding features.
Linear motor coil assembly and linear motor
2009-01-01
An ironless linear motor (5) comprising a magnet track (53) and a coil assembly (50) operating in cooperation with said magnet track (53) and having a plurality of concentrated multi-turn coils (31 a-f, 41 a-d, 51 a-k), wherein the end windings (31E) of the coils (31 a-f, 41 a-e) are substantially
Ye, Hao; Luo, Heng; Ng, Hui Wen; Meehan, Joe; Ge, Weigong; Tong, Weida; Hong, Huixiao
2016-01-01
ToxCast data have been used to develop models for predicting in vivo toxicity. To predict the in vivo toxicity of a new chemical using a ToxCast data based model, its ToxCast bioactivity data are needed but not normally available. The capability of predicting ToxCast bioactivity data is necessary to fully utilize ToxCast data in the risk assessment of chemicals. We aimed to understand and elucidate the relationships between the chemicals and bioactivity data of the assays in ToxCast and to develop a network analysis based method for predicting ToxCast bioactivity data. We conducted modularity analysis on a quantitative network constructed from ToxCast data to explore the relationships between the assays and chemicals. We further developed Nebula (neighbor-edges based and unbiased leverage algorithm) for predicting ToxCast bioactivity data. Modularity analysis on the network constructed from ToxCast data yielded seven modules. Assays and chemicals in the seven modules were distinct. Leave-one-out cross-validation yielded a Q² of 0.5416, indicating ToxCast bioactivity data can be predicted by Nebula. Prediction domain analysis showed some types of ToxCast assay data could be more reliably predicted by Nebula than others. Network analysis is a promising approach to understand ToxCast data. Nebula is an effective algorithm for predicting ToxCast bioactivity data, helping fully utilize ToxCast data in the risk assessment of chemicals. Published by Elsevier Ltd.
Elena Hilario
Genotyping by sequencing (GBS) is a restriction-enzyme-based targeted approach developed to reduce genome complexity and discover genetic markers when a priori sequence information is unavailable. Sufficient coverage at each locus is essential to distinguish heterozygous from homozygous sites accurately. The number of GBS samples able to be pooled in one sequencing lane is limited by the number of restriction sites present in the genome and the read depth required at each site per sample for accurate calling of single-nucleotide polymorphisms. Locus bias was observed using a slight modification of the Elshire et al. method: some restriction enzyme sites were represented in higher proportions while others were poorly represented or absent. This bias could be due to the quality of genomic DNA, the endonuclease and ligase reaction efficiency, the distance between restriction sites, the preferential amplification of small library restriction fragments, or bias towards cluster formation of small amplicons during the sequencing process. To overcome these issues, we have developed a GBS method based on randomly tagging genomic DNA (rtGBS). By randomly landing on the genome, we can, with less bias, find restriction sites that are far apart and undetected by the standard GBS (stdGBS) method. The study comprises two types of biological replicates: six different kiwifruit plants and two independent DNA extractions per plant; and three types of technical replicates: four samples of each DNA extraction, stdGBS vs. rtGBS methods, and two independent library amplifications, each sequenced in separate lanes. A statistically significant unbiased distribution of restriction fragment sizes by rtGBS showed that this method targeted 49% (39,145) of the BamHI sites shared with the reference genome, compared to only 14% (11,513) by stdGBS.
Kyle C Wilcox
Despite their value as sources of therapeutic drug targets, membrane proteomes are largely inaccessible to high-throughput screening (HTS) tools designed for soluble proteins. An important example comprises the membrane proteins that bind amyloid β oligomers (AβOs). AβOs are neurotoxic ligands thought to instigate the synapse damage that leads to Alzheimer's dementia. At present, the identities of initial AβO binding sites are highly uncertain, largely because of extensive protein-protein interactions that occur following attachment of AβOs to surface membranes. Here, we show that AβO binding sites can be obtained in a state suitable for unbiased HTS by encapsulating the solubilized synaptic membrane proteome into nanoscale lipid bilayers (Nanodiscs). This method gives a soluble membrane protein library (SMPL), a collection of individualized synaptic proteins in a soluble state. Proteins within SMPL Nanodiscs showed enzymatic and ligand binding activity consistent with conformational integrity. AβOs were found to bind SMPL Nanodiscs with high affinity and specificity, with binding dependent on intact synaptic membrane proteins, and selective for the higher molecular weight oligomers known to accumulate at synapses. Combining SMPL Nanodiscs with a mix-incubate-read chemiluminescence assay provided a solution-based HTS platform to discover antagonists of AβO binding. Screening a library of 2700 drug-like compounds and natural products yielded one compound that potently reduced AβO binding to SMPL Nanodiscs, synaptosomes, and synapses in nerve cell cultures. Although not a therapeutic candidate, this small molecule inhibitor of synaptic AβO binding will provide a useful experimental antagonist for future mechanistic studies of AβOs in Alzheimer's model systems. Overall, results provide proof of concept for using SMPLs in high throughput screening for AβO binding antagonists, and illustrate in general how a SMPL Nanodisc system can
LaFramboise William A
2011-01-01
Background: Genomic instability in cancer leads to abnormal genome copy number alterations (CNA) as a mechanism underlying tumorigenesis. Using microarrays and other technologies, tumor CNA are detected by comparing tumor sample CN to normal reference sample CN. While advances in microarray technology have improved detection of copy number alterations, the increase in the number of measured signals, noise from array probes, variations in signal-to-noise ratio across batches, and disparity across laboratories lead to significant limitations for the accurate identification of CNA regions when comparing tumor and normal samples. Methods: To address these limitations, we designed a novel "Virtual Normal" (VN) algorithm, which allowed construction of an unbiased reference signal directly from test samples within an experiment, using any publicly available normal reference set as a baseline, thus eliminating the need for an in-lab normal reference set. Results: The algorithm was tested using an optimal, paired tumor/normal data set as well as previously uncharacterized pediatric malignant gliomas for which a normal reference set was not available. Using Affymetrix 250K Sty microarrays, we demonstrated improved signal-to-noise ratio and detected significant copy number alterations using the VN algorithm that were validated by independent PCR analysis of the target CNA regions. Conclusions: We developed and validated an algorithm to provide a virtual normal reference signal directly from tumor samples and minimize noise in the derivation of the raw CN signal. The algorithm reduces the variability of assays performed across different reagent and array batches, methods of sample preservation, multiple personnel, and among different laboratories. This approach may be valuable when matched normal samples are unavailable or the paired normal specimens have been subjected to variations in methods of preservation.
PAM50: Unbiased multimodal template of the brainstem and spinal cord aligned with the ICBM152 space.
De Leener, Benjamin; Fonov, Vladimir S; Collins, D Louis; Callot, Virginie; Stikov, Nikola; Cohen-Adad, Julien
2018-01-15
Template-based analysis of multi-parametric MRI data of the spinal cord sets the foundation for standardization and reproducibility, thereby helping the discovery of new biomarkers of spinal-related diseases. While MRI templates of the spinal cord have been recently introduced, none of them cover the entire spinal cord. In this study, we introduced an unbiased multimodal MRI template of the spinal cord and the brainstem, called PAM50, which is anatomically compatible with the ICBM152 brain template and uses the same coordinate system. The PAM50 template is based on 50 healthy subjects, covers the full spinal cord (C1 to L2 vertebral levels) and the brainstem, is available for T1-, T2-, and T2*-weighted MRI contrasts, and includes a probabilistic atlas of the gray matter and white matter tracts. Template creation accuracy was assessed by computing the mean and maximum distance error between each individual spinal cord centerline and the PAM50 centerline, after registration to the template. Results showed high accuracy for both T1- (mean = 0.37 ± 0.06 mm; max = 1.39 ± 0.58 mm) and T2-weighted (mean = 0.11 ± 0.03 mm; max = 0.71 ± 0.27 mm) contrasts. Additionally, the preservation of the spinal cord topology during the template creation process was verified by comparing the cross-sectional area (CSA) profile, averaged over all subjects, and the CSA profile of the PAM50 template. The fusion of the PAM50 and ICBM152 templates will facilitate group and multi-center studies of combined brain and spinal cord MRI, and enable the use of existing atlases of the brainstem compatible with the ICBM space. Copyright © 2017 Elsevier Inc. All rights reserved.
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
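The bias described above is easy to reproduce: in a logistic model, the response at the mean covariate is not the mean response, because the inverse link is nonlinear. A minimal sketch with hypothetical coefficients:

```python
import numpy as np

def expit(z):
    """Inverse logit link."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)
beta0, beta1 = -1.0, 2.0

at_mean_x = expit(beta0 + beta1 * x.mean())   # response at the mean covariate
mean_resp = expit(beta0 + beta1 * x).mean()   # mean response over the population
print(at_mean_x, mean_resp)  # the two differ; only the latter is the group mean
```

Averaging the fitted responses over the observed covariate distribution (the second quantity) is the consistent group-mean estimator; plugging in the mean covariate (the first) is what most software reports by default.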
Facebook Addiction: Onset Predictors.
Biolcati, Roberta; Mancini, Giacomo; Pupi, Virginia; Mugheddu, Valeria
2018-05-23
Worldwide, Facebook is becoming increasingly widespread as a communication platform. Young people especially use this social networking site daily to maintain and establish relationships. Despite the Facebook expansion in the last few years and the widespread acceptance of this social network, research into Facebook Addiction (FA) is still in its infancy. Hence, the potential predictors of Facebook overuse represent an important matter for investigation. This study aimed to deepen the understanding of the relationship between personality traits, social and emotional loneliness, life satisfaction, and Facebook addiction. A total of 755 participants (80.3% female; n = 606) aged between 18 and 40 (mean = 25.17; SD = 4.18) completed the questionnaire packet including the Bergen Facebook Addiction Scale, the Big Five, the short version of Social and Emotional Loneliness Scale for Adults, and the Satisfaction with Life Scale. A regression analysis was used with personality traits, social, family, romantic loneliness, and life satisfaction as independent variables to explain variance in Facebook addiction. The findings showed that Conscientiousness, Extraversion, Neuroticism, and Loneliness (Social, Family, and Romantic) were strong significant predictors of FA. Age, Openness, Agreeableness, and Life Satisfaction, although FA-related variables, were not significant in predicting Facebook overuse. The risk profile of this peculiar behavioral addiction is also discussed.
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center of mass energy above say 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies a number of problems have to be solved. There are two kinds of problems: one which is related to the feasibility of the principle and the other kind of problems is associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype I will in the last chapter describe the SLC project at the Stanford Linear Accelerator Center.
Blyth, T S
2002-01-01
Basic Linear Algebra is a text for first year students, leading from concrete examples to abstract theorems via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms, which provide a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices, leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...
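The blurb's central point, that a normal-form reduction makes linear independence concrete, can be illustrated with a small numerical check (a generic sketch, not the book's own Hermite-normal-form treatment): the rank of a coefficient matrix counts its independent rows.

```python
import numpy as np

# Generic illustration (not the book's machinery): the rank of a matrix
# equals the number of linearly independent rows.
V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row, hence dependent
              [0.0, 1.0, 1.0]])

assert np.linalg.matrix_rank(V) == 2
```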
Matrices and linear transformations
Cullen, Charles G
1990-01-01
"Comprehensive . . . an excellent introduction to the subject." - Electronic Engineer's Design Magazine. This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first
Efficient Non Linear Loudspeakers
Petersen, Bo R.; Agerkvist, Finn T.
2006-01-01
Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating nonlinearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.
Carr, Joseph
1996-01-01
The linear IC market is large and growing, as is the demand for well trained technicians and engineers who understand how these devices work and how to apply them. Linear Integrated Circuits provides in-depth coverage of the devices and their operation, but not at the expense of practical applications in which linear devices figure prominently. This book is written for a wide readership from FE and first degree students, to hobbyists and professionals.Chapter 1 offers a general introduction that will provide students with the foundations of linear IC technology. From chapter 2 onwa
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Superconducting linear accelerator cryostat
Ben-Zvi, I.; Elkonin, B.V.; Sokolowski, J.S.
1984-01-01
A large vertical cryostat for a superconducting linear accelerator using quarter wave resonators has been developed. The essential technical details, operational experience and performance are described. (author)
F. Vergari
2011-05-01
In this work, conditional multivariate analysis was applied to evaluate landslide susceptibility in the Upper Orcia River Basin (Tuscany, Italy), where widespread denudation processes and agricultural practices have a mutual impact. We introduced an unbiased procedure for causal factor selection based on some intuitive statistical indices. This procedure is aimed at detecting, among different potential factors, the most discriminant ones in a given study area. Moreover, this step avoids generating too small and statistically insignificant spatial units by intersecting the factor maps. Finally, a validation procedure was applied based on the partition of the landslide inventory from multi-temporal aerial photo interpretation.
Although encompassing some sources of uncertainty, the applied susceptibility assessment method provided a satisfactory and unbiased prediction for the Upper Orcia Valley. The results confirmed the efficiency of the selection procedure as an unbiased step of the landslide susceptibility evaluation. Furthermore, we achieved the purpose of presenting a conceptually simple but, at the same time, effective statistical procedure for susceptibility analysis, to be used as well by decision makers in land management.
Patten, B.C.
1983-04-01
Two issues concerning linearity or nonlinearity of natural systems are considered. Each is related to one of the two alternative defining properties of linear systems, superposition and decomposition. Superposition exists when a linear combination of inputs to a system results in the same linear combination of outputs that individually correspond to the original inputs. To demonstrate this property it is necessary that all initial states and inputs of the system which impinge on the output in question be included in the linear combination manipulation. As this is difficult or impossible to do with real systems of any complexity, nature appears nonlinear even though it may be linear. A linear system that displays nonlinear behavior for this reason is termed pseudononlinear. The decomposition property exists when the dynamic response of a system can be partitioned into an input-free portion due to state plus a state-free portion due to input. This is a characteristic of all linear systems, but not of nonlinear systems. Without the decomposition property, it is not possible to distinguish which portions of a system's behavior are due to innate characteristics (self) vs. outside conditions (environment), which is an important class of questions in biology and ecology. Some philosophical aspects of these findings are then considered. It is suggested that those ecologists who hold to the view that organisms and their environments are separate entities are in effect embracing a linear view of nature, even though their belief systems and mathematical models tend to be nonlinear. On the other hand, those who consider that the organism-environment complex forms a single inseparable unit are implicitly involved in non-linear thought, which may be in conflict with the linear modes and models that some of them use. The need to rectify these ambivalences on the part of both groups is indicated.
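The decomposition property described in this abstract can be made concrete with a toy linear state-space model (an illustrative sketch; the matrices and the Euler integrator are ours, not the paper's): the response to an initial state plus an input equals the zero-input response plus the zero-state response.

```python
import numpy as np

# Sketch: decomposition property of a linear system x' = A x + B u.
# The matrices and step sizes below are illustrative choices.
def simulate(A, B, x0, u, steps=50, dt=0.01):
    """Euler-integrate x' = A x + B u with constant input u from state x0."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        x = x + dt * (A @ x + B * u)
    return x

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([0.0, 1.0])

x_state = simulate(A, B, np.array([1.0, 0.0]), 0.0)   # input-free part (due to state)
x_input = simulate(A, B, np.array([0.0, 0.0]), 1.0)   # state-free part (due to input)
x_both = simulate(A, B, np.array([1.0, 0.0]), 1.0)    # combined response

# Decomposition: total response = zero-input part + zero-state part
assert np.allclose(x_both, x_state + x_input)
```

For a nonlinear right-hand side the same assertion would fail, which is exactly the distinction the abstract draws.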
Linear colliders - prospects 1985
Rees, J.
1985-06-01
We discuss the scaling laws of linear colliders and their consequences for accelerator design. We then report on the SLAC Linear Collider project and comment on experience gained on that project and its application to future colliders. 9 refs., 2 figs
Richter, B.
1985-01-01
A report is given on the goals and progress of the SLAC Linear Collider. The author discusses the status of the machine and the detectors and gives an overview of the physics which can be done at this new facility. He also gives some ideas on how (and why) large linear colliders of the future should be built.
Rogner, H.H.
1989-01-01
The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They consider a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig
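The geometric fact the SIMPLEX algorithm exploits, that an optimum of a linear program lies at a vertex of the feasible polytope, can be sketched by brute-force vertex enumeration on a tiny example (illustrative problem data, not from the text):

```python
import numpy as np
from itertools import combinations

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0,
# written as A v <= b. Vertex enumeration only works for tiny problems;
# SIMPLEX walks vertices far more efficiently.
A = np.array([[1.0, 1.0], [1.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 6.0, 0.0, 0.0])
c = np.array([3.0, 2.0])

best, best_val = None, -np.inf
for i, j in combinations(range(len(A)), 2):
    try:
        v = np.linalg.solve(A[[i, j]], b[[i, j]])  # intersection of two constraints
    except np.linalg.LinAlgError:
        continue                                   # parallel constraints: no vertex
    if np.all(A @ v <= b + 1e-9):                  # keep feasible vertices only
        if c @ v > best_val:
            best, best_val = v, c @ v

assert np.allclose(best, [4.0, 0.0]) and np.isclose(best_val, 12.0)
```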
Rowe, C.H.; Wilton, M.S. de.
1979-01-01
An improved recirculating electron beam linear accelerator of the racetrack type is described. The system comprises a beam path of four straight legs with four Pretzel bending magnets at the end of each leg to direct the beam into the next leg of the beam path. At least one of the beam path legs includes a linear accelerator. (UK)
Predictors of postpartum depression.
Katon, Wayne; Russo, Joan; Gavin, Amelia
2014-09-01
To examine sociodemographic factors, pregnancy-associated psychosocial stress and depression, health risk behaviors, prepregnancy medical and psychiatric illness, pregnancy-related illnesses, and birth outcomes as risk factors for post-partum depression (PPD). A prospective cohort study screened women at 4 and 8 months of pregnancy and used hierarchical logistic regression analyses to examine predictors of PPD. The study sample included 1,423 pregnant women at a university-based high-risk obstetrics clinic. A score of ≥10 on the Patient Health Questionnaire-9 (PHQ-9) indicated clinically significant depressive symptoms. Compared with women without significant postpartum depressive symptoms, women with PPD were significantly younger (pdepressive symptoms (pdepression case finding for pregnant women.
Semidefinite linear complementarity problems
Eckhardt, U.
1978-04-01
Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and the theory as well as the numerical treatment of them are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.)
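As a minimal illustration of the problem class (not the paper's combined elimination-iteration method), a small linear complementarity problem w = Mz + q, z >= 0, w >= 0, z'w = 0 can be solved by projected Gauss-Seidel, which converges for symmetric positive definite M:

```python
import numpy as np

# Projected Gauss-Seidel for the LCP: find z >= 0 with w = M z + q >= 0
# and z'w = 0. Problem data below are illustrative.
def lcp_pgs(M, q, iters=200):
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            # residual from all other components, then clip at the bound
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
q = np.array([-1.0, 1.0])
z = lcp_pgs(M, q)
w = M @ z + q

# Complementarity conditions hold at the solution
assert np.all(z >= -1e-9) and np.all(w >= -1e-9) and abs(z @ w) < 1e-9
```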
Axler, Sheldon
2015-01-01
This best-selling textbook for a second course in linear algebra is aimed at undergrad math majors and graduate students. The novel approach taken here banishes determinants to the end of the book. The text focuses on the central goal of linear algebra: understanding the structure of linear operators on finite-dimensional vector spaces. The author has taken unusual care to motivate concepts and to simplify proofs. A variety of interesting exercises in each chapter helps students understand and manipulate the objects of linear algebra. The third edition contains major improvements and revisions throughout the book. More than 300 new exercises have been added since the previous edition. Many new examples have been added to illustrate the key ideas of linear algebra. New topics covered in the book include product spaces, quotient spaces, and dual spaces. Beautiful new formatting creates pages with an unusually pleasant appearance in both print and electronic versions. No prerequisites are assumed other than the ...
Handbook on linear motor application
1988-10-01
This book is a guide to the application of linear motors. It covers the classification and special characteristics of linear motors; terms for the linear induction motor and its basic principle; types of single-sided and double-sided linear induction motors; linear DC motors, including the moving-coil, permanent-magnet moving, and non-utility electricity types; linear pulse motors, including the variable and permanent-magnet types; linear vibration actuators, including the moving-coil type; linear synchronous motors; linear electromagnetic motors; linear electromagnetic solenoids; technical organization; and magnetic levitation, linear motors, and sensors.
Oldenburg, J; Aparicio, J; Beyer, J; Cohn-Cedermark, G; Cullen, M; Gilligan, T; De Giorgi, U; De Santis, M; de Wit, R; Fosså, S D; Germà-Lluch, J R; Gillessen, S; Haugnes, H S; Honecker, F; Horwich, A; Lorch, A; Ondruš, D; Rosti, G; Stephenson, A J; Tandstad, T
2015-05-01
Testicular cancer (TC) is the most common neoplasm in males aged 15-40 years. The majority of patients have no evidence of metastases at diagnosis and thus have clinical stage I (CSI) disease [Oldenburg J, Fossa SD, Nuver J et al. Testicular seminoma and non-seminoma: ESMO Clinical Practice Guidelines for diagnosis, treatment and follow-up. Ann Oncol 2013; 24(Suppl 6): vi125-vi132; de Wit R, Fizazi K. Controversies in the management of clinical stage I testis cancer. J Clin Oncol 2006; 24: 5482-5492.]. Management of CSI TC is controversial and options include surveillance and active treatment. Different forms of adjuvant therapy exist, including either one or two cycles of carboplatin chemotherapy or radiotherapy for seminoma and either one or two cycles of cisplatin-based chemotherapy or retroperitoneal lymph node dissection for non-seminoma. Long-term disease-specific survival is ∼99% with any of these approaches, including surveillance. While surveillance allows most patients to avoid additional treatment, adjuvant therapy markedly lowers the relapse rate. Weighing the net benefits of surveillance against those of adjuvant treatment depends on prioritizing competing aims such as avoiding unnecessary treatment, avoiding more burdensome treatment with salvage chemotherapy and minimizing the anxiety, stress and life disruption associated with relapse. Unbiased information about the advantages and disadvantages of surveillance and adjuvant treatment is a prerequisite for informed consent by the patient. In a clinical scenario like CSI TC, where different disease-management options produce indistinguishable long-term survival rates, patient values, priorities and preferences should be taken into account. In this review, we provide an overview about risk factors for relapse, potential benefits and harms of adjuvant chemotherapy and active surveillance and a rationale for involving patients in individualized decision making about their treatment rather than adopting
Linear ubiquitination in immunity.
Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning
2015-07-01
Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.
Voltage splay modes and enhanced phase locking in a modified linear Josephson array
Harris, E. B.; Garland, J. C.
1997-02-01
We analyze a modified linear Josephson-junction array in which additional unbiased junctions are used to greatly enhance phase locking. This geometry exhibits strong correlated behavior, with an external magnetic field tuning the voltage splay angle between adjacent Josephson oscillators. The array displays a coherent in-phase mode for f=, where f is the magnetic frustration, while for 0tolerant of critical current disorder approaching 100%. The stability of the array has also been studied by computing Floquet exponents. These exponents are found to be negative for all array lengths, with a 1/N^2 dependence, N being the number of series-connected junctions.
Krivonos, S.O.; Sorin, A.S.
1994-06-01
We show that Zamolodchikov's and the Polyakov-Bershadsky nonlinear algebras W_3 and W_3^(2) can be embedded as subalgebras into some linear algebras with a finite set of currents. Using these linear algebras we find new field realizations of W_3^(2) and W_3 which could be a starting point for constructing new versions of W-string theories. We also reveal a number of hidden relationships between W_3 and W_3^(2). We conjecture that similar linear algebras can exist for other W-algebras as well. (author). 10 refs
Schneider, Hans
1989-01-01
Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t
Linearity in Process Languages
Nygaard, Mikkel; Winskel, Glynn
2002-01-01
The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.
Amir-Moez, A R; Sneddon, I N
1962-01-01
Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given.Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices a
Weisberg, Sanford
2013-01-01
Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." - International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus
Meng, Yilin; Roux, Benoît
2015-08-11
The weighted histogram analysis method (WHAM) is a standard protocol for postprocessing the information from biased umbrella sampling simulations to construct the potential of mean force with respect to a set of order parameters. By virtue of the WHAM equations, the unbiased density of state is determined by satisfying a self-consistent condition through an iterative procedure. While the method works very effectively when the number of order parameters is small, its computational cost grows rapidly in higher dimension. Here, we present a simple and efficient alternative strategy, which avoids solving the self-consistent WHAM equations iteratively. An efficient multivariate linear regression framework is utilized to link the biased probability densities of individual umbrella windows and yield an unbiased global free energy landscape in the space of order parameters. It is demonstrated with practical examples that free energy landscapes that are comparable in accuracy to WHAM can be generated at a small fraction of the cost.
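The key observation, that log p_i(x) = log p(x) - b_i(x) + c_i is linear in the unknowns, can be sketched on a toy grid where the biased densities are constructed analytically (an illustrative setup of our own, not the authors' implementation):

```python
import numpy as np

# Toy sketch: with umbrella biases b_i(x), the relation
#   log p_i(x) = log p(x) - b_i(x) + c_i
# is linear in the unknowns {log p(x_j)} and the window offsets {c_i},
# so least squares recovers the unbiased landscape without iterating
# the self-consistent WHAM equations.
x = np.linspace(-2.0, 2.0, 41)
logp = -0.5 * x**2                                        # true unbiased log-density
biases = [2.0 * (x - c) ** 2 for c in (-1.0, 0.0, 1.0)]   # harmonic umbrellas

rng = np.random.default_rng(1)
offsets = rng.normal(size=len(biases))                    # unknown per-window constants
logp_biased = [logp - b + c for b, c in zip(biases, offsets)]

# One linear equation per (window i, grid point j): logp_j + c_i = data_ij
n, m = len(x), len(biases)
A = np.zeros((n * m, n + m))
rhs = np.zeros(n * m)
for i in range(m):
    for j in range(n):
        A[i * n + j, j] = 1.0
        A[i * n + j, n + i] = 1.0
        rhs[i * n + j] = logp_biased[i][j] + biases[i][j]

sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)             # min-norm solution fixes the gauge
logp_rec = sol[:n]

# Recovered landscape matches the truth up to an additive constant
assert np.allclose(logp_rec - logp_rec.mean(), logp - logp.mean(), atol=1e-8)
```

In a real application the biased densities come from noisy histograms rather than exact formulas, so the fit is a regression rather than an exact solve; the linear structure is the same.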
Callier, Frank M.; Desoer, Charles A.
1991-01-01
The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.
Post-processing through linear regression
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
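The simplest of the schemes compared above, ordinary least-squares correction of a deterministic forecast, can be sketched with synthetic data (illustrative numbers, not the paper's Lorenz experiments):

```python
import numpy as np

# OLS post-processing sketch: regress observations on raw forecasts,
# then use the fitted line to correct the forecasts.
rng = np.random.default_rng(0)
truth = rng.normal(size=200)
forecast = 0.7 * truth + 0.3 + rng.normal(scale=0.2, size=200)  # damped and biased

# Fit truth ~ a * forecast + b by least squares
X = np.column_stack([forecast, np.ones_like(forecast)])
(a, b), *_ = np.linalg.lstsq(X, truth, rcond=None)

corrected = a * forecast + b
raw_rmse = np.sqrt(np.mean((forecast - truth) ** 2))
cor_rmse = np.sqrt(np.mean((corrected - truth) ** 2))

# On the training sample, OLS cannot do worse than the raw forecast,
# since the identity map is itself one of the candidate linear corrections.
assert cor_rmse <= raw_rmse + 1e-12
```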
Predictors of disability retirement.
Krause, N; Lynch, J; Kaplan, G A; Cohen, R D; Goldberg, D E; Salonen, J T
1997-12-01
Disability retirement may increase as the work force ages, but there is little information on factors associated with retirement because of disability. This is the first prospective population-based study of predictors of disability retirement including information on workplace, socioeconomic, behavioral, and health-related factors. The subjects were 1038 Finnish men who were enrolled in the Kuopio Ischemic Heart Disease Risk Factor Study, who were 42, 48, 54, or 60 years of age at the beginning of the study, and who participated in a 4-year follow-up medical examination. Various job characteristics predicted disability retirement. Heavy work, work in uncomfortable positions, long workhours, noise at work, physical job strain, musculoskeletal strain, repetitive or continuous muscle strain, mental job strain, and job dissatisfaction were all significantly associated with the incidence of disability retirement. The ability to communicate with fellow workers and social support from supervisors tended to reduce the risk of disability retirement. The relationships persisted after control for socioeconomic factors, prevalent disease, and health behavior, which were also associated with disability retirement. The strong associations found between workplace factors and the incidence of disability retirement link the problem of disability retirement to the problem of poor work conditions.
A Comparison of Alternative Estimators of Linearly Aggregated Macro Models
Fikri Akdeniz
2012-07-01
This paper deals with the linear aggregation problem. For the true underlying micro relations, which explain the micro behavior of the individuals, no restrictive rank conditions are assumed. Thus the analysis is presented in a framework utilizing generalized inverses of singular matrices. We investigate several estimators for certain linear transformations of the systematic part of the corresponding macro relations. Homogeneity of micro parameters is discussed. Best linear unbiased estimation for micro parameters is described.
Predictors of Longitudinal Quality of Life in Juvenile Localized Scleroderma.
Ardalan, Kaveh; Zigler, Christina K; Torok, Kathryn S
2017-07-01
Localized scleroderma can negatively affect children's quality of life (QoL), but predictors of impact have not been well described. We sought to identify predictors of QoL impact in juvenile localized scleroderma patients. We analyzed longitudinal data from a single-center cohort of juvenile localized scleroderma patients, using hierarchical generalized linear modeling (HGLM) to identify predictors of QoL impact. HGLM is useful for nested data and allows for evaluation of both time-variant and time-invariant predictors. The number of extracutaneous manifestations (ECMs; e.g., joint contracture and hemifacial atrophy) and female sex predicted negative QoL impact, defined as a Children's Dermatology Life Quality Index score >1 (P = 0.019 for ECMs and P = 0.002 for female sex). As the time since the initial visit increased, the odds of reporting a negative QoL impact decreased (P scleroderma than cutaneous features. Further study is required to determine which ECMs have the most impact on QoL, which factors underlie sex differences in QoL in localized scleroderma, and why increasing the time since the initial visit appears to be protective. An improved understanding of predictors of QoL impact may allow for the identification of patients at risk of poorer outcomes and for the tailoring of treatment and psychosocial support. © 2016, American College of Rheumatology.
Gender and distance influence performance predictors in young swimmers
Mezzaroba, Paulo Victor; Papoti, Marcelo; Machado, Fabiana Andrade
2013-01-01
Predictors of performance in adult swimmers are constantly changing during youth, especially because in this sport the training routine begins even before puberty. Therefore, this study aimed to determine the group of parameters that best predicts short and middle swimming distance performances of young swimmers of both genders. Thirty-three 10- to 16-year-old male and female competitive swimmers participated in the study. Multiple linear regression (MLR) was used considering mean speed of max...
Prevalence and predictors of musculoskeletal pain among Danish fishermen
Berg-Beckhoff, Gabriele; Østergaard, Helle; Jepsen, Jørgen Riis
2016-01-01
at sea, age, BMI and education were used as predictors for the overall musculoskeletal pain score (multiple linear regression) and for each single pain site (multinomial logistic regression). RESULTS: The prevalence of pain was high for all musculoskeletal locations. Overall, more than 80...... demanding and impacting their musculoskeletal pain. Potential explanation for this unexpected result like increased work pressure and reduced financial attractiveness in small scale commercial fishery needs to be confirmed in future research....
Multiple predictor smoothing methods for sensitivity analysis
Helton, Jon Craig; Storlie, Curtis B.
2006-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Blyth, T S
2002-01-01
Most of the introductory courses on linear algebra develop the basic theory of finite dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...
Mamyrin, B.A.; Shmikk, D.V.
1979-01-01
A description and operating principle of a linear mass reflectron with a V-form ion trajectory, a new non-magnetic time-of-flight mass spectrometer with high resolution, are presented. The ion-optical system of the device consists of an ion source with electron-impact ionization, accelerating gaps, reflector gaps, a drift space and an ion detector. Ions move in the linear mass reflectron along trajectories parallel to the axis of the analyzer chamber. The results of investigations into the experimental device are given. With an ion drift length of 0.6 m the device resolution is 1200 with respect to the peak width at half-height. Small-sized mass spectrometric transducers with high resolution and sensitivity may be designed on the basis of the linear mass reflectron principle
Olver, Peter J
2018-01-01
This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...
Banach, S
1987-01-01
This classic work by the late Stefan Banach has been translated into English so as to reach a yet wider audience. It contains the basics of the algebra of operators, concentrating on the study of linear operators, which corresponds to that of the linear forms a1x1 + a2x2 + ... + anxn of algebra. The book gathers results concerning linear operators defined in general spaces of a certain kind, principally in Banach spaces, examples of which are: the space of continuous functions, that of the pth-power-summable functions, Hilbert space, etc. The general theorems are interpreted in various mathematical areas, such as group theory, differential equations, integral equations, equations with infinitely many unknowns, functions of a real variable, summation methods and orthogonal series. A new fifty-page section ("Some Aspects of the Present Theory of Banach Spaces") complements this important monograph.
Høskuldsson, Agnar
1996-01-01
Determination of the proper dimension of a given linear model is one of the most important tasks in applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four of these criteria are widely used ones, while the remaining four are derived from the H-principle of mathematical modeling. We first review the basic problems in determining the dimension of linear models; then each of the eight measures is treated. The results are illustrated by examples.
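The eight criteria themselves are not reproduced in this record, but the task they address can be sketched with one common baseline criterion (cumulative explained variance from an SVD; purely illustrative, not one of the H-principle measures, and the synthetic data are invented):

```python
import numpy as np

def n_components(X, threshold=0.95):
    """Smallest number of components whose cumulative explained
    variance reaches `threshold` (an illustrative criterion only)."""
    Xc = X - X.mean(axis=0)                  # centre the data
    s = np.linalg.svd(Xc, compute_uv=False)  # singular values
    share = s ** 2 / np.sum(s ** 2)          # variance share per component
    return int(np.searchsorted(np.cumsum(share), threshold) + 1)

# Synthetic data with exactly 3 latent factors plus small noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 10))
X += 0.01 * rng.standard_normal((100, 10))
```

On such data the criterion recovers the true dimension of three; the paper's point is that different criteria can disagree on real data, where no clean cut-off exists.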
Linear programming using Matlab
Ploskas, Nikolaos
2017-01-01
This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus. The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...
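A flavour of what such solvers do can be had in a few lines. The sketch below uses Python's scipy.optimize.linprog rather than the book's MATLAB code, and the particular objective and constraints are invented for illustration:

```python
from scipy.optimize import linprog

# Maximise 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# linprog minimises, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)])
best = -res.fun   # optimal value of the original maximisation
```

The optimum lies at the vertex x = 2, y = 2 with objective value 10, exactly the kind of basic-feasible-solution reasoning that simplex-type methods formalise.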
Anon.
1994-01-01
The aim of the TESLA (TeV Superconducting Linear Accelerator) collaboration (at present 19 institutions from seven countries) is to establish the technology for a high energy electron-positron linear collider using superconducting radiofrequency cavities to accelerate its beams. Another basic goal is to demonstrate that such a collider can meet its performance goals in a cost effective manner. For this the TESLA collaboration is preparing a 500 MeV superconducting linear test accelerator at the DESY Laboratory in Hamburg. This TTF (TESLA Test Facility) consists of four cryomodules, each approximately 12 m long and containing eight 9-cell solid niobium cavities operating at a frequency of 1.3 GHz
Linearly Adjustable International Portfolios
Fonseca, R. J.; Kuhn, D.; Rustem, B.
2010-09-01
We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
Barkman, W.E.; Adams, W.Q.; Berrier, B.R.
1978-01-01
A linear induction motor has been operated on a test bed with a feedback pulse resolution of 5 nm (0.2 μin). Slewing tests with this slide drive have shown positioning errors less than or equal to 33 nm (1.3 μin) at feedrates between 0 and 25.4 mm/min (0-1 ipm). A 0.86-m (34-in)-stroke linear motor is being investigated, using the SPACO machine as a test bed. Initial results were encouraging, and work is continuing to optimize the servosystem compensation
Hogben, Leslie
2013-01-01
With a substantial amount of new material, the Handbook of Linear Algebra, Second Edition provides comprehensive coverage of linear algebra concepts, applications, and computational software packages in an easy-to-use format. It guides you from the very elementary aspects of the subject to the frontiers of current research. Along with revisions and updates throughout, the second edition of this bestseller includes 20 new chapters. New to the second edition: separate chapters on Schur complements, additional types of canonical forms, tensors, matrix polynomials, matrix equations, special types of
Linear Algebra Thoroughly Explained
Vujičić, Milan
2008-01-01
Linear Algebra Thoroughly Explained provides a comprehensive introduction to the subject suitable for adoption as a self-contained text for courses at undergraduate and postgraduate level. The clear and comprehensive presentation of the basic theory is illustrated throughout with an abundance of worked examples. The book is written for teachers and students of linear algebra at all levels and across mathematics and the applied sciences, particularly physics and engineering. It will also be an invaluable addition to research libraries as a comprehensive resource book for the subject.
2013-05-10
AMERICA, LINEARLY CYCLICAL
C2C Jessica Adams; Dr. Brissett
...his desires, his failings, and his aspirations follow the same general trend throughout history and throughout cultures. The founding fathers sought
Southworth, B.
1985-01-01
The peak of the construction phase of the Stanford Linear Collider, SLC, to achieve 50 GeV electron-positron collisions has now been passed. The work remains on schedule to attempt colliding beams, initially at comparatively low luminosity, early in 1987. (orig./HSI).
Mafra Neto, F.
1992-01-01
The dose of gamma radiation from a linear source of cesium-137 is obtained, presenting two difficulties: oblique filtration of the radiation as it crosses the platinum wall in different directions, and dose correction due to scattering by the propagation medium. (C.G.C.)
Resistors Improve Ramp Linearity
Kleinberg, L. L.
1982-01-01
Simple modification to bootstrap ramp generator gives more linear output over longer sweep times. New circuit adds just two resistors, one of which is adjustable. Modification cancels nonlinearities due to variations in load on charging capacitor and due to changes in charging current as the voltage across capacitor increases.
LINEAR COLLIDERS: 1992 workshop
Settles, Ron; Coignet, Guy
1992-01-01
As work on designs for future electron-positron linear colliders pushes ahead at major Laboratories throughout the world in a major international collaboration framework, the LC92 workshop held in Garmisch Partenkirchen this summer, attended by 200 machine and particle physicists, provided a timely focus
Brameier, Markus
2007-01-01
Presents a variant of Genetic Programming that evolves imperative computer programs as linear sequences of instructions, in contrast to the more traditional functional expressions or syntax trees. This book serves as a reference for researchers, but also contains sufficient introduction for students and those who are new to the field
Dobbs, David E.
2013-01-01
A direct method is given for solving first-order linear recurrences with constant coefficients. The limiting value of that solution is studied as "n to infinity." This classroom note could serve as enrichment material for the typical introductory course on discrete mathematics that follows a calculus course.
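The direct method and limiting value can be sketched for the generic recurrence x_{k+1} = a·x_k + b with a ≠ 1 (a standard instance; the note's own examples are not reproduced here):

```python
def closed_form(a, b, x0, n):
    """x_n for the recurrence x_{k+1} = a*x_k + b (requires a != 1):
    x_n = a**n * x0 + b * (1 - a**n) / (1 - a)."""
    return a ** n * x0 + b * (1 - a ** n) / (1 - a)

# Cross-check the closed form against direct iteration.
a, b, x0 = 0.5, 3.0, 10.0
x = x0
for _ in range(20):
    x = a * x + b

limit = b / (1 - a)   # for |a| < 1 the sequence tends to this fixed point
```

As n grows with |a| < 1, the a**n term vanishes and x_n approaches b/(1 − a), the limiting value studied in the note.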
Takeda, Seishi
1992-01-01
The status of R and D of future e + e - linear colliders proposed by the institutions throughout the world is described including the JLC, NLC, VLEPP, CLIC, DESY/THD and TESLA projects. The parameters and RF sources are discussed. (G.P.) 36 refs.; 1 tab
Sven Bocklandt
From the moment of conception, we begin to age. A decay of cellular structures, gene regulation, and DNA sequence ages cells and organisms. DNA methylation patterns change with increasing age and contribute to age related disease. Here we identify 88 sites in or near 80 genes for which the degree of cytosine methylation is significantly correlated with age in saliva of 34 male identical twin pairs between 21 and 55 years of age. Furthermore, we validated sites in the promoters of three genes and replicated our results in a general population sample of 31 males and 29 females between 18 and 70 years of age. The methylation of three sites--in the promoters of the EDARADD, TOM1L1, and NPTX2 genes--is linear with age over a range of five decades. Using just two cytosines from these loci, we built a regression model that explained 73% of the variance in age, and is able to predict the age of an individual with an average accuracy of 5.2 years. In forensic science, such a model could estimate the age of a person, based on a biological sample alone. Furthermore, a measurement of relevant sites in the genome could be a tool in routine medical screening to predict the risk of age-related diseases and to tailor interventions based on the epigenetic bio-age instead of the chronological age.
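The shape of such a two-site age regression can be sketched on synthetic data. The methylation values below are invented stand-ins, not the EDARADD/TOM1L1/NPTX2 measurements, so only the form of the calculation carries over:

```python
import numpy as np

rng = np.random.default_rng(2)
age = rng.uniform(18, 70, 60)
# Hypothetical CpG methylation fractions drifting linearly with age.
site1 = 0.80 - 0.005 * age + 0.02 * rng.standard_normal(60)
site2 = 0.20 + 0.004 * age + 0.02 * rng.standard_normal(60)

# Ordinary least squares: age ~ intercept + site1 + site2
X = np.column_stack([np.ones(60), site1, site2])
coef, *_ = np.linalg.lstsq(X, age, rcond=None)
mae = np.abs(X @ coef - age).mean()   # mean absolute error in years
```

With per-site noise of this size, the fitted model predicts age to within a few years on average, comparable in spirit to the 5.2-year accuracy reported in the abstract.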
Finite-dimensional linear algebra
Gockenbach, Mark S
2010-01-01
Contents: Some problems posed on vector spaces (linear equations; best approximation; diagonalization; summary). Fields and vector spaces (fields; vector spaces; subspaces; linear combinations and spanning sets; linear independence; basis and dimension; properties of bases; polynomial interpolation and the Lagrange basis; continuous piecewise polynomial functions). Linear operators (linear operators; more properties of linear operators; isomorphic vector spaces; linear operator equations; existence and uniqueness of solutions; the fundamental theorem; inverse operators). Gaussian elimination. Newton's method. Linear ordinary differential eq...
Distillation Column Flooding Predictor
George E. Dzyacky
2010-11-23
The Flooding Predictor™ is a patented advanced control technology proven in research at the Separations Research Program, University of Texas at Austin, to increase distillation column throughput by over 6%, while also increasing energy efficiency by 10%. The research was conducted under a U. S. Department of Energy Cooperative Agreement awarded to George Dzyacky of 2ndpoint, LLC. The Flooding Predictor™ works by detecting the incipient flood point and controlling the column closer to its actual hydraulic limit than historical practices have allowed. Further, the technology uses existing column instrumentation, meaning no additional refining infrastructure is required. Refiners often push distillation columns to maximize throughput, improve separation, or simply to achieve day-to-day optimization. Attempting to achieve such operating objectives is a tricky undertaking that can result in flooding. Operators and advanced control strategies alike rely on the conventional use of delta-pressure instrumentation to approximate the column’s approach to flood. But column delta-pressure is more an inference of the column’s approach to flood than it is an actual measurement of it. As a consequence, delta pressure limits are established conservatively in order to operate in a regime where the column is never expected to flood. As a result, there is much “left on the table” when operating in such a regime, i.e. the capacity difference between controlling the column to an upper delta-pressure limit and controlling it to the actual hydraulic limit. The Flooding Predictor™, an innovative pattern recognition technology, controls columns at their actual hydraulic limit, which research shows leads to a throughput increase of over 6%. Controlling closer to the hydraulic limit also permits operation in a sweet spot of increased energy-efficiency. In this region of increased column loading, the Flooding Predictor is able to exploit the benefits of higher liquid
Linear Polarimetry with γ→e+e− Conversions
Denis Bernard
2017-11-01
γ-rays are emitted by cosmic sources by non-thermal processes that yield either non-polarized photons, such as those from π0 decay in hadronic interactions, or linearly polarized photons from synchrotron radiation and the inverse-Compton up-shifting of these on high-energy charged particles. Polarimetry in the MeV energy range would provide a powerful tool to discriminate among “leptonic” and “hadronic” emission models of blazars, for example, but no polarimeter sensitive above 1 MeV has ever been flown into space. Low-Z converter telescopes such as silicon detectors are developed to improve the angular resolution and the point-like sensitivity below 100 MeV. We have shown that in the case of a homogeneous, low-density active target such as a gas time-projection chamber (TPC), the single-track angular resolution is even better and is so good that in addition the linear polarimetry of the incoming radiation can be performed. We actually characterized the performance of a prototype of such a telescope on beam. Track momentum measurement in the tracker would enable calorimeter-free, large effective area telescopes on low-mass space missions. An optimal unbiased momentum estimate can be obtained in the tracker alone based on the momentum dependence of multiple scattering, from a Bayesian analysis of the innovations of Kalman filters applied to the tracks.
Linearity and Non-linearity of Photorefractive effect in Materials ...
In this paper we have studied the Linearity and Non-linearity of Photorefractive effect in materials using the band transport model. For low light beam intensities the change in the refractive index is proportional to the electric field for linear optics while for non- linear optics the change in refractive index is directly proportional ...
Linearly Refined Session Types
Pedro Baltazar
2012-11-01
Session types capture precise protocol structure in concurrent programming, but do not specify properties of the exchanged values beyond their basic type. Refinement types are a form of dependent types that can address this limitation, combining types with logical formulae that may refer to program values and can constrain types using arbitrary predicates. We present a pi calculus with assume and assert operations, typed using a session discipline that incorporates refinement formulae written in a fragment of Multiplicative Linear Logic. Our original combination of session and refinement types, together with the well established benefits of linearity, allows very fine-grained specifications of communication protocols in which refinement formulae are treated as logical resources rather than persistent truths.
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes.'
The International Linear Collider
List Benno
2014-04-01
The International Linear Collider (ILC) is a proposed e+e− linear collider with a centre-of-mass energy of 200–500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
Henneaux, Marc; Teitelboim, Claudio
2005-01-01
We show that duality transformations of linearized gravity in four dimensions, i.e., rotations of the linearized Riemann tensor and its dual into each other, can be extended to the dynamical fields of the theory so as to be symmetries of the action and not just symmetries of the equations of motion. Our approach relies on the introduction of two superpotentials, one for the spatial components of the spin-2 field and the other for their canonically conjugate momenta. These superpotentials are two-index, symmetric tensors. They can be taken to be the basic dynamical fields and appear locally in the action. They are simply rotated into each other under duality. In terms of the superpotentials, the canonical generator of duality rotations is found to have a Chern-Simons-like structure, as in the Maxwell case
Phinney, N.
1992-01-01
The SLAC Linear Collider has begun a new era of operation with the SLD detector. During 1991 there was a first engineering run for the SLD in parallel with machine improvements to increase luminosity and reliability. For the 1992 run, a polarized electron source was added and more than 10,000 Zs with an average of 23% polarization have been logged by the SLD. This paper discusses the performance of the SLC in 1991 and 1992 and the technical advances that have produced higher luminosity. Emphasis will be placed on issues relevant to future linear colliders such as producing and maintaining high current, low emittance beams and focusing the beams to the micron scale for collisions. (Author) tab., 2 figs., 18 refs
Linear waves and instabilities
Bers, A.
1975-01-01
The electrodynamic equations for small-amplitude waves and their dispersion relation in a homogeneous plasma are outlined. For such waves, energy and momentum, and their flow and transformation, are described. Perturbation theory of waves is treated and applied to linear coupling of waves, and the resulting instabilities from such interactions between active and passive waves. Linear stability analysis in time and space is described where the time-asymptotic, time-space Green's function for an arbitrary dispersion relation is developed. The perturbation theory of waves is applied to nonlinear coupling, with particular emphasis on pump-driven interactions of waves. Details of the time-space evolution of instabilities due to coupling are given. (U.S.)
Extended linear chain compounds
Linear chain substances span a large cross section of contemporary chemistry, ranging from covalent polymers, to organic charge transfer complexes, to nonstoichiometric transition metal coordination complexes. Their commonality, which coalesced intense interest in the theoretical and experimental solid state physics/chemistry communities, was based on the observation that these inorganic and organic polymeric substrates exhibit striking metal-like electrical and optical properties. Exploitation and extension of these systems has led to the systematic study of both the chemistry and physics of highly and poorly conducting linear chain substances. To gain a salient understanding of these complex materials rich in anomalous anisotropic electrical, optical, magnetic, and mechanical properties, the convergence of diverse skills and talents was required. The constructive blending of traditionally segregated disciplines such as synthetic and physical organic, inorganic, and polymer chemistry, crystallog...
Diamond, Jared M.
1966-01-01
1. The relation between osmotic gradient and rate of osmotic water flow has been measured in rabbit gall-bladder by a gravimetric procedure and by a rapid method based on streaming potentials. Streaming potentials were directly proportional to gravimetrically measured water fluxes. 2. As in many other tissues, water flow was found to vary with gradient in a markedly non-linear fashion. There was no consistent relation between the water permeability and either the direction or the rate of water flow. 3. Water flow in response to a given gradient decreased at higher osmolarities. The resistance to water flow increased linearly with osmolarity over the range 186-825 m-osM. 4. The resistance to water flow was the same when the gall-bladder separated any two bathing solutions with the same average osmolarity, regardless of the magnitude of the gradient. In other words, the rate of water flow is given by the expression (Om − Os)/[Ro′ + ½k′(Om + Os)], where Ro′ and k′ are constants and Om and Os are the bathing solution osmolarities. 5. Of the theories advanced to explain non-linear osmosis in other tissues, flow-induced membrane deformations, unstirred layers, asymmetrical series-membrane effects, and non-osmotic effects of solutes could not explain the results. However, experimental measurements of water permeability as a function of osmolarity permitted quantitative reconstruction of the observed water flow—osmotic gradient curves. Hence non-linear osmosis in rabbit gall-bladder is due to a decrease in water permeability with increasing osmolarity. 6. The results suggest that aqueous channels in the cell membrane behave as osmometers, shrinking in concentrated solutions of impermeant molecules and thereby increasing membrane resistance to water flow. A mathematical formulation of such a membrane structure is offered. PMID:5945254
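The flow expression quoted in point 4 can be put into code directly; the constants below are illustrative, not the fitted Ro′ and k′ from the paper:

```python
def water_flow(om, os_, r0=1.0, k=0.004):
    """Osmotic water flow (Om - Os) / (Ro' + 0.5 * k' * (Om + Os))."""
    return (om - os_) / (r0 + 0.5 * k * (om + os_))

# Same mean osmolarity -> same resistance, so flow scales with gradient:
f1 = water_flow(400, 200)   # mean 300, gradient 200
f2 = water_flow(350, 250)   # mean 300, gradient 100
# Higher mean osmolarity -> higher resistance -> less flow for the
# same gradient:
f3 = water_flow(600, 400)   # gradient 200 at mean 500
```

Here f1 is exactly twice f2, and f3 is smaller than f1, reproducing points 3 and 4 of the abstract.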
Fundamentals of linear algebra
Dash, Rajani Ballav
2008-01-01
FUNDAMENTALS OF LINEAR ALGEBRA is a comprehensive textbook which can be used by students and teachers of all Indian universities. The text has an easy, understandable form and covers all topics of the UGC curriculum. There are many worked-out examples which help students solve problems without anybody's help. The problem sets have been designed keeping in view the questions asked in different examinations.
Sander, K F
1964-01-01
Linear Network Theory covers the significant algebraic aspect of network theory, with minimal reference to practical circuits. The book begins the presentation of network analysis with the exposition of networks containing resistances only, and follows it up with a discussion of networks involving inductance and capacity by way of the differential equations. Classification and description of certain networks, equivalent networks, filter circuits, and network functions are also covered. Electrical engineers, technicians, electronics engineers, electricians, and students learning the intricacies
Non linear viscoelastic models
Agerkvist, Finn T.
2011-01-01
Viscoelastic effects are often present in loudspeaker suspensions; this can be seen in the displacement transfer function, which often shows a frequency-dependent value below the resonance frequency. In this paper nonlinear versions of the standard linear solid model (SLS) are investigated. The simulations show that the nonlinear version of the Maxwell SLS model can result in a time-dependent small-signal stiffness while the Kelvin-Voigt version does not.
Relativistic Linear Restoring Force
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
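A quick numerical look at the first of the two formulations (dp/dt = −kx, with relativistic momentum p = mγv) in illustrative units m = c = k = 1; this sketch is ours, not code from the paper:

```python
import math

def simulate(x0=1.0, steps=20000, dt=1e-3):
    """Semi-implicit Euler for dp/dt = -k*x with relativistic p = m*gamma*v."""
    m = c = k = 1.0
    x, p = x0, 0.0
    xs = []
    for _ in range(steps):
        p += -k * x * dt                           # momentum update
        v = p / math.sqrt(m ** 2 + (p / c) ** 2)   # invert p = m*gamma*v
        x += v * dt
        xs.append(x)
    return xs

xs = simulate()   # bounded oscillation, as a restoring force should give
```

The trajectory oscillates between the turning points at roughly ±x0, as expected for a restoring force, but with a period longer than the non-relativistic 2π/ω because the speed saturates below c.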
Superconducting linear colliders
Anon.
1990-01-01
The advantages of superconducting radiofrequency (SRF) for particle accelerators have been demonstrated by successful operation of systems in the TRISTAN and LEP electron-positron collider rings respectively at the Japanese KEK Laboratory and at CERN. If performance continues to improve and costs can be lowered, this would open an attractive option for a high luminosity TeV (1000 GeV) linear collider
Perturbed asymptotically linear problems
Bartolo, R.; Candela, A. M.; Salvatore, A.
2012-01-01
The aim of this paper is to investigate the existence of solutions of some semilinear elliptic problems on open bounded domains when the nonlinearity is subcritical and asymptotically linear at infinity and there is a perturbation term which is just continuous. Even in the case when the problem does not have a variational structure, suitable procedures and estimates allow us to prove that the number of distinct critical levels of the functional associated to the unperturbed problem is "stable" unde...
Miniature linear cooler development
Pruitt, G.R.
1993-01-01
An overview is presented of the status of a family of miniature linear coolers currently under development by Hughes Aircraft Co. for use in hand-held, volume-limited or power-limited infrared applications. These coolers, representing the latest additions to the Hughes family of TOP™ (twin-opposed piston) linear coolers, have been fabricated and tested in three different configurations. Each configuration is designed to utilize a common compressor assembly, resulting in reduced manufacturing costs. The baseline compressor has been integrated with two different expander configurations and has been operated with two different levels of input power. These various configuration combinations offer a wide range of performance and interface characteristics which may be tailored to applications requiring limited power and size without significantly compromising cooler capacity or cooldown characteristics. Key cooler characteristics and test data are summarized for three combinations of cooler configurations which are representative of the versatility of this linear cooler design. Configurations reviewed include the shortened coldfinger (1.50 to 1.75 inches long) with limited input power (less than 17 Watts) for low power availability applications; the shortened coldfinger with higher input power for lightweight, higher-performance applications; and coldfingers compatible with DoD 0.4 Watt Common Module coolers for wider-range retrofit capability. Typical weight of these miniature linear coolers is less than 500 grams for the compressor, expander and interconnecting transfer line. Cooling capacity at 80 K at room ambient conditions ranges from 400 mW to greater than 550 mW. Steady-state power requirements for maintaining a heat load of 150 mW at 80 K have been shown to be less than 8 Watts. Ongoing reliability growth testing is summarized, including a review of the latest test article results
Avram Mihai; Niţu Constantin; Bucşan Constantin; Grămescu Bogdan
2017-01-01
The paper presents a linear pneumatic actuator with a short working stroke. It consists of a pneumatic motor (a single-acting cylinder or a membrane chamber), two 2/2 pneumatic distributors ("all or nothing", electrically commanded) for controlling the intake/outtake flow to/from the active chamber of the motor, a position transducer and a microcontroller. The theoretical analysis (mathematical modelling and numerical simulation) carried out is also presented.
Scheffel, J.
1984-03-01
The linear Grad-Shafranov equation for a toroidal, axisymmetric plasma is solved analytically. Exact solutions are given in terms of confluent hypergeometric functions. As an alternative, simple and accurate WKBJ solutions are presented. With parabolic pressure profiles, both hollow and peaked toroidal current density profiles are obtained. As an example, the equilibrium of a z-pinch with a square-shaped cross section is derived. (author)
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Springer, T A
1998-01-01
"[The first] ten chapters...are an efficient, accessible, and self-contained introduction to affine algebraic groups over an algebraically closed field. The author includes exercises and the book is certainly usable by graduate students as a text or for self-study...the author [has a] student-friendly style… [The following] seven chapters... would also be a good introduction to rationality issues for algebraic groups. A number of results from the literature…appear for the first time in a text." –Mathematical Reviews (Review of the Second Edition) "This book is a completely new version of the first edition. The aim of the old book was to present the theory of linear algebraic groups over an algebraically closed field. Reading that book, many people entered the research field of linear algebraic groups. The present book has a wider scope. Its aim is to treat the theory of linear algebraic groups over arbitrary fields. Again, the author keeps the treatment of prerequisites self-contained. The material of t...
Parametric Linear Dynamic Logic
Peter Faymonville
2014-08-01
We introduce Parametric Linear Dynamic Logic (PLDL), which extends Linear Dynamic Logic (LDL) by temporal operators equipped with parameters that bound their scope. LDL was proposed as an extension of Linear Temporal Logic (LTL) that is able to express all ω-regular specifications while still maintaining many of LTL's desirable properties, like an intuitive syntax and a translation into non-deterministic Büchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all ω-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic Büchi word automata of exponential size via alternating automata. This yields a PSPACE model checking algorithm and a realizability algorithm with doubly-exponential running time. Furthermore, we give tight upper and lower bounds on optimal parameter values for both problems. These results show that PLDL model checking and realizability are not harder than LTL model checking and realizability.
Quantum linear Boltzmann equation
Vacchini, Bassano; Hornberger, Klaus
2009-01-01
We review the quantum version of the linear Boltzmann equation, which describes in a non-perturbative fashion, by means of scattering theory, how the quantum motion of a single test particle is affected by collisions with an ideal background gas. A heuristic derivation of this Lindblad master equation is presented, based on the requirement of translation-covariance and on the relation to the classical linear Boltzmann equation. After analyzing its general symmetry properties and the associated relaxation dynamics, we discuss a quantum Monte Carlo method for its numerical solution. We then review important limiting forms of the quantum linear Boltzmann equation, such as the case of quantum Brownian motion and pure collisional decoherence, as well as the application to matter wave optics. Finally, we point to the incorporation of quantum degeneracies and self-interactions in the gas by relating the equation to the dynamic structure factor of the ambient medium, and we provide an extension of the equation to include internal degrees of freedom.
Emma, P.
1995-01-01
The Stanford Linear Collider (SLC) is the first and only high-energy e⁺e⁻ linear collider in the world. Its most remarkable features are high-intensity, submicron-sized, polarized (e⁻) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high-intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z⁰ boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low-impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year, with peaks approaching 10³⁰ cm⁻² s⁻¹ and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed
Psychosocial predictors of energy underreporting in a large doubly labeled water study.
Tooze, Janet A; Subar, Amy F; Thompson, Frances E; Troiano, Richard; Schatzkin, Arthur; Kipnis, Victor
2004-05-01
Underreporting of energy intake is associated with self-reported diet measures and appears to be selective according to personal characteristics. Doubly labeled water is an unbiased reference biomarker for energy intake that may be used to assess underreporting. Our objective was to determine which factors are associated with underreporting of energy intake on food-frequency questionnaires (FFQs) and 24-h dietary recalls (24HRs). The study participants were 484 men and women aged 40-69 y who resided in Montgomery County, MD. Using the doubly labeled water method to measure total energy expenditure, we considered numerous psychosocial, lifestyle, and sociodemographic factors in multiple logistic regression models for prediction of the probability of underreporting on the FFQ and 24HR. In the FFQ models, fear of negative evaluation, weight-loss history, and percentage of energy from fat were the best predictors of underreporting in women (R(2) = 0.09); body mass index, comparison of activity level with that of others of the same sex and age, and eating frequency were the best predictors in men (R(2) = 0.10). In the 24HR models, social desirability, fear of negative evaluation, body mass index, percentage of energy from fat, usual activity, and variability in number of meals per day were the best predictors of underreporting in women (R(2) = 0.22); social desirability, dietary restraint, body mass index, eating frequency, dieting history, and education were the best predictors in men (R(2) = 0.25). Although the final models were significantly related to underreporting on both the FFQ and the 24HR, the amount of variation explained by these models was relatively low, especially for the FFQ.
Generalized Linear Models in Vehicle Insurance
Silvie Kafková
2014-01-01
Actuaries in insurance companies try to find the best model for an estimation of insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of the portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that the GLMs are not limited by inflexible preconditions. Our aim is to predict the relation of annual claim frequency to given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposes a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with functions for the estimation of the parameters of a GLM and for the analysis of deviance.
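Claim-frequency models of the kind described above are typically Poisson GLMs with a log link. As a minimal illustration (synthetic data with one made-up risk factor, not the study's dataset or its R code), the iteratively reweighted least squares fit behind a `glm`-style call can be sketched as:

```python
import numpy as np

def poisson_glm_irls(X, y, n_iter=25, tol=1e-8):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        W = mu                      # Poisson variance equals the mean
        z = eta + (y - mu) / mu     # working response
        WX = X * W[:, None]
        beta_new = np.linalg.solve(X.T @ WX, X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

# toy data: intercept plus a single hypothetical risk factor
rng = np.random.default_rng(0)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y = rng.poisson(np.exp(0.2 + 0.5 * x))  # true coefficients (0.2, 0.5)
beta = poisson_glm_irls(X, y)
```

The fitted `beta` should recover the simulated coefficients up to sampling noise; model comparison by deviance or AIC would then proceed on top of fits like this one.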
Sparsity in Linear Predictive Coding of Speech
Giacobello, Daniele
... of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed ... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept ... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given ...
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
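The RBF kernel models compared above are, in essence, kernel ridge regression on marker genotypes (GBLUP corresponds to the linear-kernel special case). A minimal sketch with synthetic 0/1/2 allele-count data follows; the marker counts, bandwidth `gamma`, and shrinkage `lam` are illustrative choices, not the paper's settings:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.01):
    # squared Euclidean distances between all pairs of genotype rows
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def kernel_ridge_fit(K, y, lam=1.0):
    # solve (K + lam*I) alpha = y; predictions are K_new @ alpha
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

rng = np.random.default_rng(1)
G_train = rng.integers(0, 3, size=(100, 50)).astype(float)  # 0/1/2 allele counts
G_test = rng.integers(0, 3, size=(30, 50)).astype(float)
effects = 0.1 * rng.normal(size=50)                         # additive marker effects
y_train = G_train @ effects + rng.normal(scale=0.5, size=100)

alpha = kernel_ridge_fit(rbf_kernel(G_train, G_train), y_train)
pred = rbf_kernel(G_test, G_train) @ alpha                  # genomic predictions
```

Replacing `rbf_kernel` with `G_train @ G_train.T` (a genomic relationship matrix up to scaling) recovers the linear GBLUP-type predictor the non-linear models were benchmarked against.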
Garbet, X.; Mourgues, F.; Samain, A.
1987-01-01
Among the various instabilities which could explain the anomalous electron heat transport observed in tokamaks during additional heating, a microtearing turbulence is a reasonable candidate, since it directly affects the magnetic topology. This turbulence may be described, in a proper frame rotating around the major axis, by a static vector potential. In strong non-linear regimes, the flow of electrons along the stochastic field lines induces a current. The point is to know whether this current can sustain the turbulence. The mechanisms of this self-consistency, involving the combined effects of the thermal diamagnetism and of the electric drift, are presented here
Wangler, Thomas P
2008-01-01
Thomas P. Wangler received his B.S. degree in physics from Michigan State University, and his Ph.D. degree in physics and astronomy from the University of Wisconsin. After postdoctoral appointments at the University of Wisconsin and Brookhaven National Laboratory, he joined the staff of Argonne National Laboratory in 1966, working in the fields of experimental high-energy physics and accelerator physics. He joined the Accelerator Technology Division at Los Alamos National Laboratory in 1979, where he specialized in high-current beam physics and linear accelerator design and technology. In 2007
Richter, B.; Bell, R.A.; Brown, K.L.
1980-06-01
The SLAC Linear Collider is designed to achieve an energy of 100 GeV in the electron-positron center-of-mass system by accelerating intense bunches of particles in the SLAC linac and transporting the electron and positron bunches in a special magnet system to a point where they are focused to a radius of about 2 microns and made to collide head on. The rationale for this new type of colliding beam system is discussed, the project is described, some of the novel accelerator physics issues involved are discussed, and some of the critical technical components are described
Lopez, Cesar
2014-01-01
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Linear Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. In addition to giving an introduction to
Special set linear algebra and special set fuzzy linear algebra
Kandasamy, W. B. Vasantha; Smarandache, Florentin; Ilanthenral, K.
2009-01-01
The authors in this book introduce the notion of special set linear algebra and special set fuzzy linear algebra, which is an extension of the notions of set linear algebra and set fuzzy linear algebra. These concepts are best suited to applications in multi-expert models and cryptology. This book has five chapters. In chapter one, the basic concepts of set linear algebra are given in order to make this book a self-contained one. The notion of special set linear algebra and their fuzzy analog...
Maurice R. Kibler
2010-07-01
We propose a group-theoretical approach to the generalized oscillator algebra Aκ recently investigated in J. Phys. A: Math. Theor. 2010, 43, 115303. The case κ ≥ 0 corresponds to the noncompact group SU(1,1) (as for the harmonic oscillator and the Pöschl-Teller systems), while the case κ < 0 is described by the compact group SU(2) (as for the Morse system). We construct the phase operators and the corresponding temporally stable phase eigenstates for Aκ in this group-theoretical context. The SU(2) case is exploited for deriving families of mutually unbiased bases used in quantum information. Along this vein, we examine some characteristics of a quadratic discrete Fourier transform in connection with generalized quadratic Gauss sums and generalized Hadamard matrices.
Munehiro, H
1980-05-29
When driving the carriage of a printer through a rotating motor, there are problems regarding the limited accuracy of the carriage position due to rotation or contraction and ageing of the cable. In order to solve the problem, a direct drive system was proposed, in which the printer carriage is driven by a linear motor. If one wants to keep the motor circuit of such a motor compact, then the magnetic flux density in the air gap must be reduced or the motor travel must be reduced. It is the purpose of this invention to create an electrodynamic linear motor, which on the one hand is compact and light and on the other hand has a relatively high constant force over a large travel. The invention is characterised by the fact that magnetic fields of alternating polarity are generated at equal intervals in the magnetic field, and that the coil arrangement has 2 adjacent coils, whose size corresponds to half the length of each magnetic pole. A logic circuit is provided to select one of the two coils and to determine the direction of the current depending on the signals of a magnetic field sensor on the coil arrangement.
Kozarov, A.; Petrov, O.; Antonov, J.; Sotirova, S.; Petrova, B.
2006-01-01
The purpose of the linear wind-power generator described in this article is to reduce the following disadvantages of the common wind-powered turbine: 1) large bending and twisting moments on the blades and the shaft, especially when strong winds and turbulence exist; 2) significant values of the natural oscillation period of the construction, resulting in the possibility of destructive resonance oscillations; 3) high velocity of the peripheral parts of the rotor, creating a danger for birds; 4) difficulties connected with installation and operation on mountain ridges and passes, where the wind energy potential is the largest. The working surfaces of the generator in question, driven by the wind, are not connected to a joint shaft but each moves along a railway track with few oscillations. So the sizes of each component are small and their number can be rather large. The mechanical trajectory is not a circle but a closed contour in a vertical plane, which consists of two rectilinear sectors, one above the other, connected at their ends by semi-circumferences. The mechanical energy of each component is converted into electrical energy on the principle of the linear electrical generator. Regulation is provided for when the direction of the wind is perpendicular to the route. A possibility for increased effectiveness is shown through the aiming of additional quantities of air at the movable components by static barriers
Gao, Wen; Yang, Hua; Qi, Lian-Wen; Liu, E-Hu; Ren, Mei-Ting; Yan, Yu-Ting; Chen, Jun; Li, Ping
2012-07-06
Plant-based medicines become increasingly popular over the world. Authentication of herbal raw materials is important to ensure their safety and efficacy. Some herbs belonging to closely related species but differing in medicinal properties are difficult to be identified because of similar morphological and microscopic characteristics. Chromatographic fingerprinting is an alternative method to distinguish them. Existing approaches do not allow a comprehensive analysis for herbal authentication. We have now developed a strategy consisting of (1) full metabolic profiling of herbal medicines by rapid resolution liquid chromatography (RRLC) combined with quadrupole time-of-flight mass spectrometry (QTOF MS), (2) global analysis of non-targeted compounds by molecular feature extraction algorithm, (3) multivariate statistical analysis for classification and prediction, and (4) marker compounds characterization. This approach has provided a fast and unbiased comparative multivariate analysis of the metabolite composition of 33-batch samples covering seven Lonicera species. Individual metabolic profiles are performed at the level of molecular fragments without prior structural assignment. In the entire set, the obtained classifier for seven Lonicera species flower buds showed good prediction performance and a total of 82 statistically different components were rapidly obtained by the strategy. The elemental compositions of discriminative metabolites were characterized by the accurate mass measurement of the pseudomolecular ions and their chemical types were assigned by the MS/MS spectra. The high-resolution, comprehensive and unbiased strategy for metabolite data analysis presented here is powerful and opens the new direction of authentication in herbal analysis. Copyright © 2012 Elsevier B.V. All rights reserved.
Gao, Shuai; Hou, Xinfeng; Jiang, Yonghua; Xu, Zijian; Cai, Tao; Chen, Jiajie; Chang, Gang
2017-01-23
Transcription factor-mediated reprogramming can reset the epigenetics of somatic cells into a pluripotency compatible state. Recent studies show that induced pluripotent stem cells (iPSCs) always inherit starting cell-specific characteristics, called epigenetic memory, which may be advantageous, as directed differentiation into specific cell types is still challenging; however, it also may be unpredictable when uncontrollable differentiation occurs. In consideration of biosafety in disease modeling and personalized medicine, the availability of high-quality iPSCs which lack a biased differentiation capacity and somatic memory could be indispensable. Herein, we evaluate the hematopoietic differentiation capacity and somatic memory state of hematopoietic progenitor and stem cell (HPC/HSC)-derived-iPSCs (HPC/HSC-iPSCs) using a previously established sequential reprogramming system. We found that HPC/HSCs are amenable to being reprogrammed into iPSCs with unbiased differentiation capacity to hematopoietic progenitors and mature hematopoietic cells. Genome-wide analyses revealed that no global epigenetic memory was detectable in HPC/HSC-iPSCs, but only a minor transcriptional memory of HPC/HSCs existed in a specific tetraploid complementation (4 N)-incompetent HPC/HSC-iPSC line. However, the observed minor transcriptional memory had no influence on the hematopoietic differentiation capacity, indicating the reprogramming of the HPC/HSCs was nearly complete. Further analysis revealed the correlation of minor transcriptional memory with the aberrant distribution of H3K27me3. This work provides a comprehensive framework for obtaining high-quality iPSCs from HPC/HSCs with unbiased hematopoietic differentiation capacity and minor transcriptional memory.
Media and Life Dissatisfaction as Predictors of Body Dissatisfaction
Melissa Bittencourt Jaeger
2015-08-01
Body dissatisfaction can contribute to social, occupational and recreational losses, constituting a risk factor to health. This study aimed to evaluate the predictors of body dissatisfaction regarding demographic variables, media and life satisfaction among university students. The sample consisted of 321 participants older than 18 years. Body dissatisfaction, life dissatisfaction and media messages internalization were evaluated by Escala de Silhuetas para Adultos Brasileiros, Subjective Well-Being Scale and Sociocultural Attitudes Towards Appearance Questionnaire-3, respectively. Data were collected by an online survey tool (SurveyMonkey®) and were analyzed using multiple linear regression. It was found that body dissatisfaction was positively related to inaccuracy in the perception of body size, Body Mass Index, life dissatisfaction, media messages internalization and television exposure. These findings evidence the importance of these predictors in the dynamics of body dissatisfaction, which support the development of preventive and treatment interventions.
Topics in computational linear optimization
Hultberg, Tim Helge
2000-01-01
Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems ... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric ... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based ...
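The linear programs studied above take the standard form min cᵀx subject to Ax ≤ b, whose optimum (when it exists) is attained at a vertex of the feasible polyhedron. As a generic illustration of that geometry (a toy two-variable problem, not an example from the dissertation), brute-force vertex enumeration suffices:

```python
import itertools
import numpy as np

# toy LP: maximize x + 2y subject to x + y <= 4, y <= 3, x >= 0, y >= 0,
# written as A @ v <= b with the non-negativity constraints included in A
A = np.array([[1.0, 1.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 0.0, 0.0])
c = np.array([1.0, 2.0])

best_val, best_x = -np.inf, None
# a vertex is the intersection of two non-parallel constraint boundaries
for i, j in itertools.combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue  # parallel boundaries, no intersection point
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v <= b + 1e-9):  # keep only feasible vertices
        val = c @ v
        if val > best_val:
            best_val, best_x = val, v
# optimum at (1, 3) with objective value 7
```

Real solvers replace this exponential enumeration with the simplex method or interior-point methods, and the reduction techniques of topic C shrink A and b before the solver ever runs.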
Linearization of the Lorenz system
Li, Chunbiao, E-mail: goontry@126.com [School of Electronic & Information Engineering, Nanjing University of Information Science & Technology, Nanjing 210044 (China); Engineering Technology Research and Development Center of Jiangsu Circulation Modernization Sensor Network, Jiangsu Institute of Commerce, Nanjing 211168 (China); Sprott, Julien Clinton [Department of Physics, University of Wisconsin–Madison, Madison, WI 53706 (United States); Thio, Wesley [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210 (United States)
2015-05-08
A partial and complete piecewise linearized version of the Lorenz system is proposed. The linearized versions have an independent total amplitude control parameter. Additional further linearization leads naturally to a piecewise linear version of the diffusionless Lorenz system. A chaotic circuit with a single amplitude controller is then implemented using a new switch element, producing a chaotic oscillation that agrees with the numerical calculation for the piecewise linear diffusionless Lorenz system. - Highlights: • A partial and complete piecewise linearized version of the Lorenz system are addressed. • The linearized versions have an independent total amplitude control parameter. • A piecewise linear version of the diffusionless Lorenz system is derived by further linearization. • A corresponding chaotic circuit without any multiplier is implemented for the chaotic oscillation.
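A common route to such piecewise linear variants is to replace the cross terms of the diffusionless Lorenz system (ẋ = y − x, ẏ = −xz, ż = xy − a) with signum and absolute-value terms. The sketch below integrates one such signum substitution alongside the smooth system using forward Euler; the particular substitution, parameter value, and step size are illustrative assumptions, not necessarily the exact circuit equations of the paper:

```python
import numpy as np

def step_smooth(s, a=1.0, dt=0.002):
    # forward-Euler step of the diffusionless Lorenz system
    x, y, z = s
    return np.array([x + dt * (y - x), y + dt * (-x * z), z + dt * (x * y - a)])

def step_pwl(s, a=1.0, dt=0.002):
    # illustrative piecewise linear variant: cross terms -> signum / absolute value
    x, y, z = s
    return np.array([x + dt * (y - x),
                     y + dt * (-np.sign(x) * z),
                     z + dt * (abs(x) - a)])

def simulate(step, s0, n=25000):
    s = np.array(s0, dtype=float)
    traj = np.empty((n, 3))
    for i in range(n):
        s = step(s)
        traj[i] = s
    return traj

traj_smooth = simulate(step_smooth, [0.1, 0.2, 0.3])
traj_pwl = simulate(step_pwl, [0.1, 0.2, 0.3])
```

Within each half-space sgn(x) = ±1 the variant's vector field is affine, which is what makes a multiplier-free circuit implementation with a switch element possible.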
On the linear programming bound for linear Lee codes.
Astola, Helena; Tabus, Ioan
2016-01-01
Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced into the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.
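For reference, the Lee weight of a symbol x in Z_q is min(x, q − x), and the Lee weight of a vector is the sum over its coordinates; roughly speaking, the Lee-compositions above record how many coordinates fall into each class {i, q − i}. The basic quantities can be sketched in a few lines (illustrative only, not the paper's LP formulation):

```python
def lee_weight(v, q):
    """Lee weight of a vector over Z_q: sum of min(x, q - x) over coordinates."""
    return sum(min(x % q, q - x % q) for x in v)

def lee_distance(u, v, q):
    """Lee distance = Lee weight of the coordinate-wise difference mod q."""
    return lee_weight([(a - b) % q for a, b in zip(u, v)], q)

# over Z_5 the symbols 0, 1, 2, 3, 4 have Lee weights 0, 1, 2, 2, 1
w = lee_weight([0, 1, 2, 3, 4], 5)  # -> 6
```

The linear programming bound then constrains the distance distribution of a code with respect to this metric, and the invariance property supplies the extra equalities mentioned in the abstract.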
Measures for Predictors of Innovation Adoption
Chor, Ka Ho Brian; Wisdom, Jennifer P.; Olin, Su-Chin Serene; Hoagwood, Kimberly E.; Horwitz, Sarah M.
2014-01-01
Building on a narrative synthesis of adoption theories by Wisdom et al. (2013), this review identifies 118 measures associated with the 27 adoption predictors in the synthesis. The distribution of measures is uneven across the predictors and predictors vary in modifiability. Multiple dimensions and definitions of predictors further complicate measurement efforts. For state policymakers and researchers, more effective and integrated measurement can advance the adoption of complex innovations such as evidence-based practices. PMID:24740175
Introduction to linear elasticity
Gould, Phillip L
2013-01-01
Introduction to Linear Elasticity, 3rd Edition, provides an applications-oriented grounding in the tensor-based theory of elasticity for students in mechanical, civil, aeronautical, and biomedical engineering, as well as materials and earth science. The book is distinct from the traditional text aimed at graduate students in solid mechanics in that it introduces the subject at a level appropriate for advanced undergraduate and beginning graduate students. The author's presentation allows students to apply the basic notions of stress analysis and move on to advanced work in continuum mechanics, plasticity, plate and shell theory, composite materials, viscoelasticity and finite element analysis. This book also: emphasizes a tensor-based approach while still distilling down to explicit notation; provides an introduction to the theory of plates, theory of shells, wave propagation, viscoelasticity and plasticity accessible to advanced undergraduate students; is appropriate for courses following the emerging trend of teaching solid mechan...
Haniger, L.; Elger, R.; Kocandrle, L.; Zdebor, J.
1986-01-01
A linear step drive, developed in Czechoslovak-Soviet cooperation and intended for driving WWER-1000 control rods, is described. The functional principle of the motor is explained, and the mechanical and electrical parts of the drive, the power control, and the position indicator are described. The motor has latches situated in the reactor at a distance of 3 m from the magnetic armatures, and it has a low structural height above the reactor cover, which suggests its suitability for seismic localities. Its magnetic circuits use counterpoles; the mechanical shocks at the completion of each step are damped using special design features. The position indicator is of a special design and evaluates motor position to within ±1% of total travel. A drive diagram and the flow chart of both the control electronics and the position indicator are presented. (author) 4 figs
Tjutju, R.L.
1977-01-01
The pulse amplifier is a standard and significant part of a spectrometer. Beyond plain amplification, it combines amplification with pulse shaping. Because of its special purpose, the device should fulfill the following requirements: high resolution, to obtain a yield that reflects the actual state of the source; a high signal-to-noise ratio, to enhance resolution; high linearity, to facilitate calibration; and good overload recovery, so that the device is capable of analyzing low-energy radiation that appears jointly with high-energy events. Other expectations of the device are economical and practical use and extensive application. For these reasons it is built on the standard NIM principle, taking the above considerations into account. High-quality component parts are used throughout, while their availability on the domestic market is secured. (author)
1976-01-01
This report covers the activity of the Linear Accelerator Laboratory during the period June 1974-June 1976. The activity of the Laboratory is essentially centered on high energy physics. The main activities were: experiments performed with the colliding rings (ACO), construction of the new colliding rings and beginning of the work at higher energy (DCI), bubble chamber experiments with the CERN PS neutrino beam, counter experiments with CERN's PS and setting-up of equipment for new experiments with CERN's SPS. During this period a project has also been prepared for an experiment with the new PETRA colliding ring at Hamburg. On the other hand, intense collaboration with the LURE Laboratory, using the electron synchrotron radiation emitted by ACO and DCI, has been developed.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one Mev per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten Mev per nucleon.
Tip, A.
1998-06-01
Starting from Maxwell's equations for a linear, nonconducting, absorptive, and dispersive medium, characterized by the constitutive equations D(x,t) = ε₁(x)E(x,t) + ∫_{−∞}^{t} χ(x, t−s)E(x,s) ds and H(x,t) = B(x,t), a unitary time evolution and canonical formalism is obtained. Given the complex, coordinate- and frequency-dependent electric permittivity ε(x,ω), no further assumptions are made. The procedure leads to a proper definition of band gaps in the periodic case and a new continuity equation for energy flow. An S-matrix formalism for scattering from lossy objects is presented in full detail. A quantized version of the formalism is derived and applied to the generation of Čerenkov and transition radiation as well as atomic decay. The last case suggests a useful generalization of the density of states to the absorptive situation.
Computer Program For Linear Algebra
Krogh, F. T.; Hanson, R. J.
1987-01-01
Collection of routines provided for basic vector operations. Basic Linear Algebra Subprograms (BLAS) library is collection of FORTRAN-callable routines employing standard techniques to perform basic operations of numerical linear algebra.
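The level-1 BLAS routines operate on vectors. A minimal plain-Python sketch of two of them, the axpy update (y ← αx + y, the semantics of SAXPY/DAXPY) and the inner product (DDOT semantics); real BLAS implementations are optimized FORTRAN or assembly routines behind this same interface idea:

```python
# Sketch of BLAS level-1 semantics in plain Python, for illustration only.

def axpy(alpha, x, y):
    """Return alpha*x + y element-wise for equal-length vectors (AXPY)."""
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):
    """Inner product of two vectors (DOT semantics)."""
    return sum(xi * yi for xi, yi in zip(x, y))
```

For example, `axpy(2.0, [1.0, 2.0], [3.0, 4.0])` returns `[5.0, 8.0]`.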
Quaternion Linear Canonical Transform Application
Bahri, Mawardi
2015-01-01
Quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of generalized swept-frequency filters.
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. Linear-regression algorithm includes recursive equations for coefficients of model of increased order. Algorithm eliminates duplicative calculations, facilitates search for minimum order of linear-regression model fitting set of data satisfactorily.
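The idea of searching for the minimum satisfactory model order can be sketched by fitting regressions of increasing polynomial order and stopping once the residual falls below a tolerance. The Tech Brief's recursion on coefficients is replaced here by a plain least-squares refit at each order for clarity; this is an assumption about structure, not the original code:

```python
# Sketch: find the minimum order of a polynomial regression whose residual
# sum of squares falls below a tolerance. The original algorithm updates the
# coefficients recursively as the order grows; this illustrative version
# simply refits with ordinary least squares at each candidate order.
import numpy as np

def min_satisfactory_order(t, y, max_order=6, tol=1e-8):
    t, y = np.asarray(t, float), np.asarray(y, float)
    for order in range(max_order + 1):
        V = np.vander(t, order + 1)          # columns [t^order, ..., t, 1]
        coef, *_ = np.linalg.lstsq(V, y, rcond=None)
        rss = float(np.sum((V @ coef - y) ** 2))
        if rss < tol:
            return order, coef
    raise RuntimeError("no satisfactory order up to max_order")

# Noiseless quadratic data: the search should stop at order 2.
t = np.linspace(0.0, 1.0, 20)
y = 3.0 * t**2 - 2.0 * t + 1.0
```

On the quadratic data above the search rejects orders 0 and 1 and returns order 2 with coefficients close to (3, −2, 1).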
Dynamical systems and linear algebra
Colonius, Fritz (Prof.)
2007-01-01
Dynamical systems and linear algebra / F. Colonius, W. Kliemann. - In: Handbook of linear algebra / ed. by Leslie Hogben. - Boca Raton : Chapman & Hall/CRC, 2007. - S. 56,1-56,22. - (Discrete mathematics and its applications)
Linear spaces: history and theory
Albrecht Beutelspracher
1990-01-01
Linear spaces are among the most fundamental geometric and combinatorial structures. In this paper I would like to give an overview of the theory of embedding finite linear spaces in finite projective planes.
Discrimination, acculturation and other predictors of depression among pregnant Hispanic women.
Walker, Janiece L; Ruiz, R Jeanne; Chinn, Juanita J; Marti, Nathan; Ricks, Tiffany N
2012-01-01
The purpose of our study was to examine the effects of socioeconomic status, acculturative stress, discrimination, and marginalization as predictors of depression in pregnant Hispanic women. A prospective observational design was used, set in obstetrical offices in central and Gulf coast areas of Texas. A convenience sample of 515 pregnant, low income, low medical risk, and self-identified Hispanic women who were between 22-24 weeks gestation was used to collect data. The predictor variables were socioeconomic status, discrimination, acculturative stress, and marginalization. The outcome variable was depression. Education, frequency of discrimination, age, and Anglo marginality were significant predictors of depressive symptoms in a linear regression model, F(6, 458) = 8.36. Discrimination was the strongest positive predictor of increased depressive symptoms. It is important that health care providers further understand the impact that age and experiences of discrimination throughout the life course have on depressive symptoms during pregnancy.
Wright, Bruce R; Barbosa-Leiker, Celestina; Hoekstra, T.
Objective: To determine whether law enforcement officer (LEO) status and perceived stress are longitudinal predictors of traditional and inflammatory cardiovascular (CV) risk factors. Method: Linear hierarchical regression was employed to investigate the longitudinal (more than 7 years) relationship
Linear versus non-linear supersymmetry, in general
Ferrara, Sergio [Theoretical Physics Department, CERN, CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati, Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, U.C.L.A., Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University, Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven, Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien, Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)
2016-04-12
We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here; in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized; its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM's: chiral superfields, linear superfields, and general complex superfields, some of which are multiplets with spin.
Monahan, John F
2008-01-01
Preface; Examples of the General Linear Model; Introduction; One-Sample Problem; Simple Linear Regression; Multiple Regression; One-Way ANOVA; First Discussion; The Two-Way Nested Model; Two-Way Crossed Model; Analysis of Covariance; Autoregression; Discussion; The Linear Least Squares Problem; The Normal Equations; The Geometry of Least Squares; Reparameterization; Gram-Schmidt Orthonormalization; Estimability and Least Squares Estimators; Assumptions for the Linear Mean Model; Confounding, Identifiability, and Estimability; Estimability and Least Squares Estimators; F
A Predictor-Corrector Approach for the Numerical Solution of Fractional Differential Equations
Diethelm, Kai; Ford, Neville J.; Freed, Alan D.; Gray, Hugh R. (Technical Monitor)
2002-01-01
We discuss an Adams-type predictor-corrector method for the numerical solution of fractional differential equations. The method may be used both for linear and for nonlinear problems, and it may be extended to multi-term equations (involving more than one differential operator) too.
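For the integer-order case, the Adams-type predictor-corrector reduces to the familiar pattern: predict with an explicit Adams-Bashforth (Euler) step, then correct with the implicit Adams-Moulton (trapezoidal) rule evaluated at the predicted value. A sketch of this one-step integer-order analogue; the fractional method of the paper follows the same predict/correct pattern but with quadrature weights derived from the fractional integral:

```python
# Sketch of a one-step Adams predictor-corrector (P(EC) scheme) for y' = f(t, y):
#   predictor: explicit Euler step (1-step Adams-Bashforth)
#   corrector: trapezoidal rule (1-step Adams-Moulton) at the predicted value.
# This is the integer-order analogue of the fractional method in the paper.

def adams_pc(f, t0, y0, t_end, h):
    t, y = t0, y0
    while t < t_end - 1e-12:
        fp = f(t, y)
        y_pred = y + h * fp                        # predict (Adams-Bashforth)
        y = y + 0.5 * h * (fp + f(t + h, y_pred))  # correct (Adams-Moulton)
        t += h
    return y

# Test problem: y' = y, y(0) = 1, whose exact solution at t = 1 is e.
approx = adams_pc(lambda t, y: y, 0.0, 1.0, 1.0, 0.01)
```

With step size h = 0.01 the second-order scheme reproduces e ≈ 2.71828 to within about 1e-4.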
Predictors of Career Adaptability Skill among Higher Education Students in Nigeria
Ebenehi, Amos Shaibu; Rashid, Abdullah Mat; Bakar, Ab Rahim
2016-01-01
This paper examined predictors of career adaptability skill among higher education students in Nigeria. A sample of 603 higher education students randomly selected from six colleges of education in Nigeria participated in this study. A set of self-reported questionnaire was used for data collection, and multiple linear regression analysis was used…
Templates for Linear Algebra Problems
Bai, Z.; Day, D.; Demmel, J.; Dongarra, J.; Gu, M.; Ruhe, A.; Vorst, H.A. van der
1995-01-01
The increasing availability of advanced-architecture computers is having a very significant effect on all spheres of scientific computation, including algorithm research and software development in numerical linear algebra. Linear algebra, in particular the solution of linear systems of equations and
Linearization of CIF through SOS
Nadales Agut, D.E.; Reniers, M.A.; Luttik, B.; Valencia, F.
2011-01-01
Linearization is the procedure of rewriting a process term into a linear form, which consists only of basic operators of the process language. This procedure is interesting both from a theoretical and a practical point of view. In particular, a linearization algorithm is needed for the Compositional
Engberg, Uffe Henrik; Winskel, Glynn
This article shows how individual Petri nets form models of Girard's intuitionistic linear logic. It explores questions of expressiveness and completeness of linear logic with respect to this interpretation. An aim is to use Petri nets to give an understanding of linear logic and give some apprai...
Optimization of piezoelectric cantilever energy harvesters including non-linear effects
Patel, R; McWilliam, S; Popov, A A
2014-01-01
This paper proposes a versatile non-linear model for predicting piezoelectric energy harvester performance. The presented model includes (i) material non-linearity, for both substrate and piezoelectric layers, and (ii) geometric non-linearity incorporated by assuming inextensibility and accurately representing beam curvature. The addition of a sub-model, which utilizes the transfer matrix method to predict eigenfrequencies and eigenvectors for segmented beams, allows for accurate optimization of piezoelectric layer coverage. A validation of the overall theoretical model is performed through experimental testing on both uniform and non-uniform samples manufactured in-house. For the harvester composition used in this work, the magnitude of material non-linearity exhibited by the piezoelectric layer is 35 times greater than that of the substrate layer. It is also observed that material non-linearity, responsible for reductions in resonant frequency with increases in base acceleration, is dominant over geometric non-linearity for standard piezoelectric harvesting devices. Finally, over the tested range, energy loss due to damping is found to increase in a quasi-linear fashion with base acceleration. During an optimization study on piezoelectric layer coverage, results from the developed model were compared with those from a linear model. Unbiased comparisons between harvesters were realized by using devices with identical natural frequencies—created by adjusting the device substrate thickness. Results from three studies, each with a different assumption on mechanical damping variations, are presented. Findings showed that, depending on damping variation, a non-linear model is essential for such optimization studies with each model predicting vastly differing optimum configurations. (paper)
Richards, J.A.
1977-01-01
A linear particle accelerator which provides a pulsed beam of charged particles of uniform energy is described. The accelerator is in the form of an evacuated dielectric tube, inside of which a particle source is located at one end, with a target or window located at the other end. Along the length of the tube are externally located pairs of metal plates, insulated from each other in an insulating housing. Each plate of a pair is connected to an electrical source of voltage of opposed polarity, with the polarity oriented so that the plate of a pair nearer to the particle source is of polarity opposed to the charge of the particle emitted by the source. Thus, the first plate about the tube, located nearest the particle source, attracts a particle which, as it passes the first plate, is then repelled by the reverse polarity of the second plate of the pair and continues moving toward the target.
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Equipartitioning in linear accelerators
Jameson, R.A.
1982-01-01
Emittance growth has long been a concern in linear accelerators, as has the idea that some kind of energy balance, or equipartitioning, between the degrees of freedom, would ameliorate the growth. M. Prome observed that the average transverse and longitudinal velocity spreads tend to equalize as current in the channel is increased, while the sum of the energy in the system stays nearly constant. However, only recently have we shown that an equipartitioning requirement on a bunched injected beam can indeed produce remarkably small emittance growth. The simple set of equations leading to this condition are outlined. At the same time, Hofmann has investigated collective instabilities in transported beams and has identified thresholds and regions in parameter space where instabilities occur. Evidence is presented that shows transport system boundaries to be quite accurate in computer simulations of accelerating systems. Discussed are preliminary results of efforts to design accelerators that avoid parameter regions where emittance is affected by the instabilities identified by Hofmann. These efforts suggest that other mechanisms are present. The complicated behavior of the RFQ linac in this framework also is shown
Equipartitioning in linear accelerators
Jameson, R.A.
1981-01-01
Emittance growth has long been a concern in linear accelerators, as has the idea that some kind of energy balance, or equipartitioning, between the degrees of freedom, would ameliorate the growth. M. Prome observed that the average transverse and longitudinal velocity spreads tend to equalize as current in the channel is increased, while the sum of the energy in the system stays nearly constant. However, only recently have we shown that an equipartitioning requirement on a bunched injected beam can indeed produce remarkably small emittance growth. The simple set of equations leading to this condition are outlined below. At the same time, Hofmann, using powerful analytical and computational methods, has investigated collective instabilities in transported beams and has identified thresholds and regions in parameter space where instabilities occur. This is an important generalization. Work that he will present at this conference shows that the results are essentially the same in r-z coordinates for transport systems, and evidence is presented that shows transport system boundaries to be quite accurate in computer simulations of accelerating systems also. Discussed are preliminary results of efforts to design accelerators that avoid parameter regions where emittance is affected by the instabilities identified by Hofmann. These efforts suggest that other mechanisms are present. The complicated behavior of the RFQ linac in this framework also is shown
Briggs, R.J.
1986-06-01
The development of linear induction accelerators has been motivated by applications requiring high-pulsed currents of charged particles at voltages exceeding the capability of single-stage, diode-type accelerators and at currents too high for rf accelerators. In principle, one can accelerate charged particles to arbitrarily high voltages using a multi-stage induction machine, but the 50-MeV, 10-kA Advanced Test Accelerator (ATA) at LLNL is the highest voltage machine in existence at this time. The advent of magnetic pulse power systems makes sustained operation at high-repetition rates practical, and this capability for high-average power is very likely to open up many new applications of induction machines in the future. This paper surveys the US induction linac technology with primary emphasis on electron machines. A simplified description of how induction machines couple energy to the electron beam is given, to illustrate many of the general issues that bound the design space of induction linacs
Berkeley Proton Linear Accelerator
Alvarez, L. W.; Bradner, H.; Franck, J.; Gordon, H.; Gow, J. D.; Marshall, L. C.; Oppenheimer, F. F.; Panofsky, W. K. H.; Richman, C.; Woodyard, J. R.
1953-10-13
A linear accelerator, which increases the energy of protons from a 4 Mev Van de Graaff injector, to a final energy of 31.5 Mev, has been constructed. The accelerator consists of a cavity 40 feet long and 39 inches in diameter, excited at resonance in a longitudinal electric mode with a radio-frequency power of about 2.2 x 10{sup 6} watts peak at 202.5 mc. Acceleration is made possible by the introduction of 46 axial "drift tubes" into the cavity, which is designed such that the particles traverse the distance between the centers of successive tubes in one cycle of the r.f. power. The protons are longitudinally stable as in the synchrotron, and are stabilized transversely by the action of converging fields produced by focusing grids. The electrical cavity is constructed like an inverted airplane fuselage and is supported in a vacuum tank. Power is supplied by 9 high powered oscillators fed from a pulse generator of the artificial transmission line type.
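Since a proton must cross one cell (drift-tube center to center) in one RF cycle, the cell length is L = βλ with β the proton velocity ratio and λ = c/f. A back-of-envelope check against the numbers quoted above (4 MeV injection, 31.5 MeV final, 202.5 Mc); the proton rest energy used is the modern value, so the results are approximate:

```python
# Cell length of an Alvarez drift-tube linac operating in "one RF period per
# cell" mode: L = beta * lambda, with lambda = c / f. Energies are the
# injection and final values quoted in the abstract.
C = 299_792_458.0   # speed of light, m/s
F = 202.5e6         # RF frequency, Hz
MP = 938.272        # proton rest energy, MeV (modern value, an assumption)

def beta(T_mev):
    """Velocity ratio v/c of a proton with kinetic energy T (MeV)."""
    gamma = 1.0 + T_mev / MP
    return (1.0 - 1.0 / gamma**2) ** 0.5

def cell_length(T_mev):
    """Drift-tube center-to-center spacing in metres at kinetic energy T."""
    return beta(T_mev) * C / F

L_in, L_out = cell_length(4.0), cell_length(31.5)
```

The cells lengthen from roughly 0.14 m at injection to about 0.37 m at exit, which is consistent with 46 drift tubes fitting in a 40-foot (about 12.2 m) cavity.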
Predictors of restraint use among child occupants.
Benedetti, Marco; Klinich, Kathleen D; Manary, Miriam A; Flannagan, Carol A
2017-11-17
The objective of this study was to identify factors that predict restraint use and optimal restraint use among children aged 0 to 13 years. The data set is a national sample of police-reported crashes for years 2010-2014 in which type of child restraint is recorded. The data set was supplemented with demographic census data linked by driver ZIP code, as well as a score for the state child restraint law during the year of the crash relative to best practice recommendations for protecting child occupants. Analysis used linear regression techniques. The main predictor of unrestrained child occupants was the presence of an unrestrained driver. Among restrained children, children had 1.66 (95% confidence interval, 1.27, 2.17) times higher odds of using the recommended type of restraint system if the state law at the time of the crash included requirements based on best practice recommendations. Children are more likely to ride in the recommended type of child restraint when their state's child restraint law includes wording that follows best practice recommendations for child occupant protection. However, state child restraint law requirements do not influence when caregivers fail to use an occupant restraint for their child passengers.
Random linear codes in steganography
Kamil Kaczyński
2016-12-01
Syndrome coding using linear codes is a technique that allows improvement of the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code; in parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as the basis for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters based on the test images, was made. Keywords: steganography, random linear codes, RLC, LSB
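Syndrome coding embeds a message m as the syndrome H·y of the modified cover y, flipping as few cover bits as possible. A sketch of the idea: the parity-check matrix H below is a small fixed systematic example for an illustrative [8, 2] binary code, not the random code of the paper, and the minimum-weight search is brute force for clarity:

```python
# Sketch of syndrome (matrix) embedding for steganography. A message m is
# embedded by finding a minimum-weight error pattern e such that
# H @ (x ^ e) = m (mod 2); extraction is just the syndrome of y.
# H is a fixed systematic 6x8 parity-check matrix (illustrative, not the
# paper's random code); the systematic form [I6 | A] guarantees every 6-bit
# syndrome is reachable.
from itertools import product

H = [
    [1,0,0,0,0,0, 1,0],
    [0,1,0,0,0,0, 0,1],
    [0,0,1,0,0,0, 1,1],
    [0,0,0,1,0,0, 1,0],
    [0,0,0,0,1,0, 0,1],
    [0,0,0,0,0,1, 1,1],
]

def syndrome(y):
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

def embed(x, m):
    """Return modified cover y with syndrome m, flipping minimally many bits."""
    best = None
    for e in product((0, 1), repeat=len(x)):   # brute force over 2^8 patterns
        y = tuple(xi ^ ei for xi, ei in zip(x, e))
        if syndrome(y) == tuple(m):
            if best is None or sum(e) < best[0]:
                best = (sum(e), y)
    return best[1]

def extract(y):
    return syndrome(y)
```

Six message bits are carried per eight cover bits while typically changing far fewer than six of them, which is the payload/distortion trade-off syndrome coding improves over naive LSB replacement.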
Linear Algebraic Method for Non-Linear Map Analysis
Yu, L.; Nash, B.
2009-01-01
We present a newly developed method to analyze some non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition which is widely used in conventional linear algebra.
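The Hénon map itself is the two-dimensional quadratic map x_{n+1} = 1 − a·x_n² + y_n, y_{n+1} = b·x_n. A minimal sketch with the classical chaotic parameter values a = 1.4, b = 0.3; the paper's Jordan-decomposition analysis is not reproduced here, this only generates the orbit such an analysis would study:

```python
# Sketch: iterate the Henon map at the classical chaotic parameters
# a = 1.4, b = 0.3, starting from the origin (which lies in the basin
# of the attractor).

def henon_orbit(x0=0.0, y0=0.0, a=1.4, b=0.3, n=1000):
    x, y = x0, y0
    orbit = []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        orbit.append((x, y))
    return orbit
```

At these parameters the orbit remains on the well-known bounded strange attractor (|x| stays below about 1.3, |y| below about 0.4).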
Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I
2017-01-01
This paper evaluates an efficient implementation for multiplying the inverse of a numerator relationship matrix for genotyped animals by a vector. The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix including genotyped animals and their ancestors. The elements were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The implementation was a series of sparse matrix-vector multiplications. Diagonal elements, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion with 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 s, 3 min, and 5 min, respectively, for setting up. Only <1 s was required for the multiplication in each PCG iteration for any of the data sets. When the equations in ssGBLUP are solved with the PCG algorithm, this multiplication is no longer a limiting factor in the computations.
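The key point is that PCG never needs the system matrix explicitly, only the product A·v, which can be realized as a series of sparse multiplications. A generic matrix-free sketch with the matrix supplied as a function; the Jacobi (diagonal) preconditioner here stands in for the Monte Carlo-approximated diagonal described in the abstract:

```python
# Sketch of a preconditioned conjugate gradient solver in which the SPD
# system matrix is supplied only as a matvec function, as in the paper's
# implicit matrix-times-vector implementation. The diagonal preconditioner
# plays the role of the Monte Carlo-approximated diagonal elements.
import numpy as np

def pcg(matvec, b, diag, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = r / diag                      # apply diagonal (Jacobi) preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system; the lambda hides the matrix behind a matvec, just as
# the sparse decomposition hides the inverse relationship matrix.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(lambda v: A @ v, b, np.diag(A))
```

Only the cost of one matvec per iteration matters, which is why a fast implicit product removes the bottleneck.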
Modelling of diffuse solar fraction with multiple predictors
Ridley, Barbara; Boland, John [Centre for Industrial and Applied Mathematics, University of South Australia, Mawson Lakes Boulevard, Mawson Lakes, SA 5095 (Australia); Lauret, Philippe [Laboratoire de Physique du Batiment et des Systemes, University of La Reunion, Reunion (France)
2010-02-15
For some locations both global and diffuse solar radiation are measured. However, for many locations, only global radiation is measured, or inferred from satellite data. For modelling solar energy applications, the amount of radiation on a tilted surface is needed. Since only the direct component on a tilted surface can be calculated from direct on some other plane using trigonometry, we need to have diffuse radiation on the horizontal plane available. There are regression relationships for estimating the diffuse on a tilted surface from diffuse on the horizontal. Models for estimating the diffuse on the horizontal from horizontal global that have been developed in Europe or North America have proved to be inadequate for Australia. Boland et al. developed a validated model for Australian conditions. Boland et al. detailed our recent advances in developing the theoretical framework for the use of the logistic function instead of piecewise linear or simple nonlinear functions, which was the first step in identifying the means for developing a generic model for estimating diffuse from global and other predictors. We have developed a multiple predictor model, which is much simpler than previous models, and uses hourly clearness index, daily clearness index, solar altitude, apparent solar time and a measure of persistence of global radiation level as predictors. This model performs marginally better than currently used models for locations in the Northern Hemisphere and substantially better for Southern Hemisphere locations. We suggest it can be used as a universal model. (author)
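The functional form described here models the diffuse fraction d as a logistic function of the listed predictors. A sketch of that form; the coefficients below are illustrative placeholders, not the fitted values reported in the paper:

```python
# Sketch of the logistic diffuse-fraction model form: the diffuse fraction d
# is a logistic function of several predictors. Coefficients are ILLUSTRATIVE
# placeholders only; the fitted values are given in the paper.
import math

def diffuse_fraction(kt, Kt, alt, ast, psi,
                     b=(-5.0, 8.0, 2.0, -0.01, 0.01, 1.0)):
    """d = 1 / (1 + exp(b0 + b1*kt + b2*Kt + b3*alt + b4*ast + b5*psi)).
    kt: hourly clearness index, Kt: daily clearness index,
    alt: solar altitude (deg), ast: apparent solar time (h),
    psi: persistence of the global radiation level."""
    b0, b1, b2, b3, b4, b5 = b
    eta = b0 + b1 * kt + b2 * Kt + b3 * alt + b4 * ast + b5 * psi
    return 1.0 / (1.0 + math.exp(eta))
```

With a positive coefficient on kt, clearer hours give a lower diffuse fraction, matching the physical expectation; the logistic form also keeps d inside (0, 1) by construction, unlike piecewise linear fits.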
Predictors of Immunosuppressive Regulatory T Lymphocytes in Healthy Women
Hampras, S. S.; Nesline, M.; Davis, W.; Moysich, K. B.; Wallace, P. K.; Odunsi, K.; Furlani, N.
2012-01-01
Immunosuppressive regulatory T (Treg) cells play an important role in antitumor immunity, self-tolerance, transplantation tolerance, and attenuation of allergic response. A higher proportion of Treg cells has been observed in peripheral blood of cancer cases compared to controls. Little is known about potential epidemiological predictors of Treg cell levels in healthy individuals. We conducted a cross-sectional study including 75 healthy women, between 20 and 80 years of age, who participated in the Data Bank and Bio Repository (DBBR) program at Roswell Park Cancer Institute (RPCI), Buffalo, NY, USA. Peripheral blood levels of CD4 + CD25 + FOXP3 + Treg cells were measured using flow cytometric analysis. A range of risk factors was evaluated using the Wilcoxon Rank-Sum test, the Kruskal-Wallis test, and linear regression. Age, smoking, medications for treatment of osteoporosis, postmenopausal status, body mass index (BMI), and hormone replacement therapy (HRT) were found to be significant positive predictors of Treg cell levels in peripheral blood (P ≤ 0.05). Higher education, exercise, age at first birth, oral contraceptives, and use of ibuprofen were found to be significant (P < 0.05) negative predictors of Treg levels. Thus, various epidemiological risk factors might explain interindividual variation in immune response to pathological conditions, including cancer.
Childhood Depression: Relation to Adaptive, Clinical and Predictor Variables
Maite Garaigordobil
2017-05-01
The study had two goals: (1) to explore the relations between self-assessed childhood depression and other adaptive and clinical variables, and (2) to identify predictor variables of childhood depression. Participants were 420 students aged 7-10 years old (53.3% boys, 46.7% girls). Results revealed: (1) positive correlations between depression and clinical maladjustment, school maladjustment, emotional symptoms, internalizing and externalizing problems, problem behaviors, emotional reactivity, and childhood stress; and (2) negative correlations between depression and personal adaptation, global self-concept, social skills, and resilience (sense of competence and affiliation). Linear regression analysis including the global dimensions revealed 4 predictors of childhood depression that explained 50.6% of the variance: high clinical maladjustment, low global self-concept, high level of stress, and poor social skills. However, upon introducing the sub-dimensions, 9 predictor variables emerged that explained 56.4% of the variance: many internalizing problems, low family self-concept, high anxiety, low responsibility, low personal self-assessment, high social stress, few aggressive behaviors toward peers, many health/psychosomatic problems, and external locus of control. The discussion addresses the importance of implementing prevention programs for childhood depression at early ages.
Predictors of specific phobia in children with Williams syndrome.
Pitts, C H; Klein-Tasman, B P; Osborne, J W; Mervis, C B
2016-10-01
Specific phobia (SP) is the most common anxiety disorder among children with Williams syndrome (WS); prevalence rates derived from Diagnostic and Statistical Manual of Mental Disorders-based diagnostic interviews range from 37% to 56%. We evaluated the effects of gender, age, intellectual abilities and/or behaviour regulation difficulties on the likelihood that a child with WS would be diagnosed with SP. A total of 194 6-17 year-olds with WS were evaluated. To best characterise the relations between the predictors and the probability of a SP diagnosis, we explored not only possible linear effects but also curvilinear effects. No gender differences were detected. As age increased, the likelihood of receiving a SP diagnosis decreased. As IQ increased, the probability of receiving a SP diagnosis also decreased. Behaviour regulation difficulties were the strongest predictor of a positive diagnosis. A quadratic relation was detected: The probability of receiving a SP diagnosis gradually rose as behaviour regulation difficulties increased. However, once behaviour regulation difficulties approached the clinical range, the probability of receiving a SP diagnosis asymptoted at a high level. Children with behaviour regulation difficulties in or just below the clinical range were at the greatest risk of developing SP. These findings highlight the value of large samples and the importance of evaluating for nonlinear effects to provide accurate model specification when characterising relations among a dependent variable and possible predictors. © 2016 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.
Predictors of pathological gambling severity taking gender differences into account.
González-Ortega, I; Echeburúa, E; Corral, P; Polo-López, R; Alberich, S
2013-01-01
The current study aims to identify predictors of pathological gambling (PG) severity, taking gender differences into account, in an outpatient sample of pathological gamblers seeking treatment. The sample for this study consisted of 103 subjects (51 women and 52 men) meeting current DSM-IV-TR criteria for PG. Linear and logistic regression analyses were used to examine different risk factors (gender, age, impulsivity, sensation seeking, self-esteem) and risk markers (depression, anxiety, gambling-related thoughts, substance abuse) as predictors of PG severity. Impulsivity, maladjustment in everyday life and age at gambling onset were the best predictors in the overall sample. When gender differences were taken into account, duration of gambling disorder in women and depression and impulsivity in men predicted PG severity. In turn, a high degree of severity in the South Oaks Gambling Screen score was related to older age and more family support in women and to low self-esteem and alcohol abuse in men. Female gamblers were older than male gamblers and started gambling later in life, but became dependent on gambling more quickly than men. Further research should examine these data to tailor treatment to specific patients' needs according to sex and individual characteristics. Copyright © 2012 S. Karger AG, Basel.
Deymier, Martin J., E-mail: mdeymie@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Claiborne, Daniel T., E-mail: dclaibo@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Ende, Zachary, E-mail: zende@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Ratner, Hannah K., E-mail: hannah.ratner@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Kilembe, William, E-mail: wkilembe@rzhrg-mail.org [Zambia-Emory HIV Research Project (ZEHRP), B22/737 Mwembelelo, Emmasdale Post Net 412, P/BagE891, Lusaka (Zambia); Allen, Susan, E-mail: sallen5@emory.edu [Zambia-Emory HIV Research Project (ZEHRP), B22/737 Mwembelelo, Emmasdale Post Net 412, P/BagE891, Lusaka (Zambia); Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA (United States); Hunter, Eric, E-mail: eric.hunter2@emory.edu [Emory Vaccine Center, Yerkes National Primate Research Center, 954 Gatewood Road NE, Atlanta, GA 30329 (United States); Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA (United States)
2014-11-15
The high genetic diversity of HIV-1 impedes high throughput, large-scale sequencing and full-length genome cloning by common restriction enzyme based methods. Applying novel methods that employ a high-fidelity polymerase for amplification and an unbiased fusion-based cloning strategy, we have generated several HIV-1 full-length genome infectious molecular clones from an epidemiologically linked transmission pair. These clones represent the transmitted/founder virus and phylogenetically diverse non-transmitted variants from the chronically infected individual's diverse quasispecies near the time of transmission. We demonstrate that, using this approach, PCR-induced mutations in full-length clones derived from their cognate single genome amplicons are rare. Furthermore, all eight non-transmitted genomes tested produced functional virus with a range of infectivities, belying the previous assumption that a majority of circulating viruses in chronic HIV-1 infection are defective. Thus, these methods provide important tools to update protocols in molecular biology that can be universally applied to the study of human viral pathogens. - Highlights: • Our novel methodology demonstrates accurate amplification and cloning of full-length HIV-1 genomes. • A majority of plasma derived HIV variants from a chronically infected individual are infectious. • The transmitted/founder was more infectious than the majority of the variants from the chronically infected donor.
Dilla, Shintia Ulfa; Andriyana, Yudhie; Sudartianto
2017-03-01
Acid rain has many harmful effects. It is formed by two strong acids, sulfuric acid (H2SO4) and nitric acid (HNO3), where sulfuric acid is derived from SO2 and nitric acid from NOx (x = 1, 2). The purpose of the research is to determine the influence of the SO4 and NO3 levels contained in rain on the acidity (pH) of rainwater. The data are incomplete panel data with a two-way error component model. Panel data are a collection of observations made on the same individuals over time; the panel is said to be incomplete if individuals have different numbers of observations. The model used in this research is a random effects model (REM). Minimum variance quadratic unbiased estimation (MIVQUE) is used to estimate the variance components, while maximum likelihood estimation is used to estimate the parameters. As a result, we obtain the following model: Ŷ* = 0.41276446 − 0.00107302X1 + 0.00215470X2.
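The fitted mean function reported in the abstract can be evaluated directly from its coefficients. A minimal sketch, assuming X1 is the SO4 level and X2 the NO3 level (the abstract does not label which regressor is which), and noting that Ŷ* denotes the model's estimated response:

```python
# Fitted mean function reported in the abstract:
#   Y* = 0.41276446 - 0.00107302*X1 + 0.00215470*X2
# Assumption: X1 = SO4 level, X2 = NO3 level (assignment not stated in the abstract).
def predicted_response(x1_so4, x2_no3):
    return 0.41276446 - 0.00107302 * x1_so4 + 0.00215470 * x2_no3
```

With both levels at zero the prediction is simply the intercept, 0.41276446.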
Chapman, Robin S.; Hesketh, Linda J.; Kistler, Doris J.
2002-01-01
Longitudinal change in syntax comprehension and production skill, measured over six years, was modeled in 31 individuals (ages 5-20) with Down syndrome. The best fitting Hierarchical Linear Modeling model of comprehension uses age and visual and auditory short-term memory as predictors of initial status, and age for growth trajectory. (Contains…
Akbulut, Yavuz
2007-01-01
Factors predicting vocabulary learning and reading comprehension of advanced language learners of English in a linear multimedia text were investigated in the current study. Predictor variables of interest were multimedia type, reading proficiency, learning styles, topic interest and background knowledge about the topic. The outcome variables of…
Age is no barrier: predictors of academic success in older learners
Imlach, Abbie-Rose; Ward, David D.; Stuart, Kimberley E.; Summers, Mathew J.; Valenzuela, Michael J.; King, Anna E.; Saunders, Nichole L.; Summers, Jeffrey; Srikanth, Velandai K.; Robinson, Andrew; Vickers, James C.
2017-11-01
Although predictors of academic success have been identified in young adults, such predictors are unlikely to translate directly to an older student population, where such information is scarce. The current study aimed to examine cognitive, psychosocial, lifetime, and genetic predictors of university-level academic performance in older adults (50-79 years old). Participants were mostly female (71%) and had a greater than high school education level (M = 14.06 years, SD = 2.76), on average. Two multiple linear regression analyses were conducted. The first examined all potential predictors of grade point average (GPA) in the subset of participants who had volunteered samples for genetic analysis (N = 181). Significant predictors of GPA were then re-examined in a second multiple linear regression using the full sample (N = 329). Our data show that the cognitive domains of episodic memory and language processing, in conjunction with midlife engagement in cognitively stimulating activities, have a role in predicting academic performance as measured by GPA in the first year of study. In contrast, it was determined that age, IQ, gender, working memory, psychosocial factors, and common brain gene polymorphisms linked to brain function, plasticity and degeneration (APOE, BDNF, COMT, KIBRA, SERT) did not influence academic performance. These findings demonstrate that ageing does not impede academic achievement, and that discrete cognitive skills as well as lifetime engagement in cognitively stimulating activities can promote academic success in older adults.
Linear Programming and Network Flows
Bazaraa, Mokhtar S; Sherali, Hanif D
2011-01-01
The authoritative guide to modeling and solving complex problems with linear programming: extensively revised, expanded, and updated. The only book to treat both linear programming techniques and network flows under one cover, Linear Programming and Network Flows, Fourth Edition has been completely updated with the latest developments on the topic. This new edition continues to successfully emphasize modeling concepts, the design and analysis of algorithms, and implementation strategies for problems in a variety of fields, including industrial engineering, management science, operations research
LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections
2007-01-01
1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need only consider linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table
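The interval-halving step described above can be sketched as follows. This is an illustrative reimplementation for the log-log law only, not the LINEAR source code; the function names are my own:

```python
import math

def loglog_interp(x, x1, y1, x2, y2):
    # Log-log interpolation law: ln(y) is linear in ln(x) between the two points.
    t = (math.log(x) - math.log(x1)) / (math.log(x2) - math.log(x1))
    return math.exp(math.log(y1) + t * (math.log(y2) - math.log(y1)))

def linearize(x1, y1, x2, y2, tol=1e-3):
    """Subdivide [x1, x2] by interval halving until linear-linear interpolation
    matches the log-log law at each midpoint within relative tolerance tol.
    Returns the list of points excluding the right endpoint."""
    xm = 0.5 * (x1 + x2)
    y_true = loglog_interp(xm, x1, y1, x2, y2)
    y_lin = 0.5 * (y1 + y2)  # linear-linear value at the arithmetic midpoint
    if abs(y_lin - y_true) <= tol * abs(y_true):
        return [(x1, y1)]
    return (linearize(x1, y1, xm, y_true, tol)
            + linearize(xm, y_true, x2, y2, tol))

# Example: linearize one log-log segment from (1, 10) to (100, 1).
pts = linearize(1.0, 10.0, 100.0, 1.0) + [(100.0, 1.0)]
```

The thinning pass that LINEAR applies afterwards (removing points that linear-linear interpolation can reproduce within the fractional error) is not shown here.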
Elementary linear programming with applications
Kolman, Bernard
1995-01-01
Linear programming finds the least expensive way to meet given needs with available resources. Its results are used in every area of engineering and commerce: agriculture, oil refining, banking, and air transport. Authors Kolman and Beck present the basic notions of linear programming and illustrate how they are used to solve important common problems. The software on the included disk leads students step-by-step through the calculations. The Second Edition is completely revised and provides additional review material on linear algebra as well as complete coverage of elementary linear programming
Hood, John Linsley
2013-01-01
The Art of Linear Electronics presents the principal aspects of linear electronics and techniques in linear electronic circuit design. The book provides a wide range of information on the elucidation of the methods and techniques in the design of linear electronic circuits. The text discusses such topics as electronic component symbols and circuit drawing; passive and active semiconductor components; DC and low frequency amplifiers; and the basic effects of feedback. Subjects on frequency response modifying circuits and filters; audio amplifiers; low frequency oscillators and waveform generators
Linearity and Non-linearity of Photorefractive effect in Materials ...
Linearity and Non-linearity of Photorefractive effect in Materials using the Band transport ... For low light beam intensities the change in the refractive index is ... field is spatially phase shifted by π/2 relative to the interference fringe pattern, which ...
The linear programming bound for binary linear codes
Brouwer, A.E.
1993-01-01
Combining Delsarte's (1973) linear programming bound with the information that certain weights cannot occur, new upper bounds for d_min(n, k), the maximum possible minimum distance of a binary linear code with given word length n and dimension k, are derived.
Linear operator inequalities for strongly stable weakly regular linear systems
Curtain, RF
2001-01-01
We consider the question of the existence of solutions to certain linear operator inequalities (Lur'e equations) for strongly stable, weakly regular linear systems with generating operators A, B, C, 0. These operator inequalities are related to the spectral factorization of an associated Popov
Zollanvari, Amin; Genton, Marc G.
2013-05-24
We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
LINEAR REGRESSION MODEL ESTIMATION FOR RIGHT CENSORED DATA
Ersin Yılmaz
2016-05-01
In this study, we first define right-censored data: briefly, an observation is right-censored when only a lower bound on its true value is known, for example because values above a detection limit cannot be recorded by the measuring device. The response variable is obtained from right-censored observations, and a linear regression model is then estimated. To account for the censoring, Kaplan-Meier weights are used in the estimation of the model; with these weights the regression estimator is consistent and unbiased. A semiparametric regression method is also available for censored data and likewise gives useful results. This study may be useful for health research, since censored data arise frequently in medical settings.
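As an illustration of the general approach, here is a minimal pure-Python sketch of Kaplan-Meier weights combined with weighted least squares for a single covariate. The weight formula follows the standard jump sizes of the Kaplan-Meier estimator (a Stute-type estimator); it is a sketch of the technique, not the authors' code:

```python
def km_weights(times, events):
    """Kaplan-Meier jump weights for right-censored responses.

    times  : observed (possibly censored) response values
    events : 1 if the value is fully observed, 0 if right-censored
    With no censoring every weight equals 1/n, recovering ordinary least squares.
    """
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    w = [0.0] * n
    survival = 1.0  # running product of Kaplan-Meier survival factors
    for rank, i in enumerate(order, start=1):
        if events[i]:
            w[i] = survival / (n - rank + 1)
        survival *= ((n - rank) / (n - rank + 1)) ** events[i]
    return w

def weighted_line_fit(x, y, w):
    """Weighted least squares for y = b0 + b1*x, returning (b0, b1)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b1 = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
          / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    return ybar - b1 * xbar, b1
```

Censored observations receive zero weight, but they still influence the fit by inflating the weights of the uncensored observations that follow them in the ordering.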
Voltage splay modes and enhanced phase locking in a modified linear Josephson array
Harris, E.B.; Garland, J.C.
1997-01-01
We analyze a modified linear Josephson-junction array in which additional unbiased junctions are used to greatly enhance phase locking. This geometry exhibits strong correlated behavior, with an external magnetic field tuning the voltage splay angle between adjacent Josephson oscillators. The array displays a coherent in-phase mode for f = 1/2, where f is the magnetic frustration, while for 0 < f < 1/2 the splay modes satisfy ω_p(f) = 2aV_dc/Φ_0 (1 − 2f). The locked splay modes are found to be tolerant of critical current disorder approaching 100%. The stability of the array has also been studied by computing Floquet exponents. These exponents are found to be negative for all array lengths, with a 1/N² dependence, N being the number of series-connected junctions. copyright 1996 The American Physical Society
Linear and non-linear optics of condensed matter
McLean, T.P.
1977-01-01
Part I - Linear optics: 1. General introduction. 2. Frequency dependence of ε(ω, k). 3. Wave-vector dependence of ε(ω, k). 4. Tensor character of ε(ω, k). Part II - Non-linear optics: 5. Introduction. 6. A classical theory of non-linear response in one dimension. 7. The generalization to three dimensions. 8. General properties of the polarizability tensors. 9. The phase-matching condition. 10. Propagation in a non-linear dielectric. 11. Second harmonic generation. 12. Coupling of three waves. 13. Materials and their non-linearities. 14. Processes involving energy exchange with the medium. 15. Two-photon absorption. 16. Stimulated Raman effect. 17. Electro-optic effects. 18. Limitations of the approach presented here. (author)
Predictors of recurrence in pheochromocytoma.
Press, Danielle; Akyuz, Muhammet; Dural, Cem; Aliyev, Shamil; Monteiro, Rosebel; Mino, Jeff; Mitchell, Jamie; Hamrahian, Amir; Siperstein, Allan; Berber, Eren
2014-12-01
The recurrence rate of pheochromocytoma after adrenalectomy is 6.5-16.5%. This study aims to identify predictors of recurrence and optimal biochemical testing and imaging for detecting the recurrence of pheochromocytoma. In this retrospective study we reviewed all patients who underwent adrenalectomy for pheochromocytoma during a 14-year period at a single institution. One hundred thirty-five patients had adrenalectomy for pheochromocytoma. Eight patients (6%) developed recurrent disease. The median time from initial operation to diagnosis of recurrence was 35 months. On multivariate analysis, tumor size >5 cm was an independent predictor of recurrence. One patient with recurrence died, 4 had stable disease, 2 had progression of disease, and 1 was cured. Recurrence was diagnosed by increases in plasma and/or urinary metanephrines and positive imaging in 6 patients (75%), and by positive imaging and normal biochemical levels in 2 patients (25%). Patients with large tumors (>5 cm) should be followed vigilantly for recurrence. Because 25% of patients with recurrence had normal biochemical levels, we recommend routine imaging and testing of plasma or urinary metanephrines for prompt diagnosis of recurrence. Copyright © 2014 Elsevier Inc. All rights reserved.
Generalized linear mixed models modern concepts, methods and applications
Stroup, Walter W
2012-01-01
PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor. Distribution Matters: More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data
Predictors of outcomes in outpatients with anorexia nervosa - Results from the ANTOP study.
Wild, Beate; Friederich, Hans-Christoph; Zipfel, Stephan; Resmark, Gaby; Giel, Katrin; Teufel, Martin; Schellberg, Dieter; Löwe, Bernd; de Zwaan, Martina; Zeeck, Almut; Herpertz, Stephan; Burgmer, Markus; von Wietersheim, Jörn; Tagay, Sefik; Dinkel, Andreas; Herzog, Wolfgang
2016-10-30
This study aimed to determine predictors of BMI and recovery for outpatients with anorexia nervosa (AN). Patients were participants of the ANTOP (Anorexia Nervosa Treatment of Out-Patients) trial and randomized to focal psychodynamic therapy (FPT), enhanced cognitive behavior therapy (CBT-E), or optimized treatment as usual (TAU-O). N=169 patients participated in the one-year follow-up (T4). Outcomes were the BMI and global outcome (recovery/partial syndrome/full syndrome) at T4. We examined the following baseline variables as possible predictors: age, BMI, duration of illness, subtype of AN, various axis I diagnoses, quality of life, self-esteem, and psychological characteristics relevant to AN. Linear and logistic regression analyses were conducted to identify the predictors of the BMI and global outcome. The strongest positive predictor for BMI and recovery at T4 was a higher baseline BMI of the patients. Negative predictors for BMI and recovery were a duration of illness >6 years and a lifetime depression diagnosis at baseline. Additionally, higher bodily pain was significantly associated with a lower BMI and self-esteem was a positive predictor for recovery at T4. A higher baseline BMI and shorter illness duration led to a better outcome. Further research is necessary to investigate whether or not AN patients with lifetime depression, higher bodily pain, and lower self-esteem may benefit from specific treatment approaches. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Crasmareanu Mircea
2017-12-01
We consider the paracomplex version of the notion of mixed linear spaces introduced by M. Jurchescu in [4], obtained by replacing the complex unit i with the paracomplex unit j, j² = 1. The linear algebra of these spaces is studied with a special view towards their morphisms.
Linear Algebra and Image Processing
Allali, Mohamed
2010-01-01
We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)
Efficient Searching with Linear Constraints
Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff
2000-01-01
We show how to preprocess a set S of points in ℝ^d into an external memory data structure that efficiently supports linear-constraint queries. Each query is in the form of a linear constraint x_d ≤ a_0 + ∑_{i=1}^{d−1} a_i x_i; the data structure must report all the points of S that satisfy the constraint. This pr...
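The query itself is easy to state. A brute-force in-memory baseline, which the paper's external-memory structures are designed to beat, might look like the following sketch (names are my own):

```python
def linear_constraint_query(points, a):
    """Report points p = (x_1, ..., x_d) with x_d <= a_0 + sum_{i=1..d-1} a_i * x_i.

    points : list of d-tuples
    a      : constraint coefficients (a_0, a_1, ..., a_{d-1}), one per dimension
    """
    d = len(a)  # points live in R^d; the constraint has d coefficients
    return [p for p in points
            if p[d - 1] <= a[0] + sum(a[i] * p[i - 1] for i in range(1, d))]
```

In two dimensions with a = (1, 2) this reports every point below or on the line y = 1 + 2x; the point of the paper is to answer such halfspace-reporting queries with far fewer disk accesses than this linear scan.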
Johnson, Bruce G.; Gerver, Michael J.; Hawkey, Timothy J.; Fenn, Ralph C.
1993-01-01
Improved linear actuator comprises air slide and linear electric motor. Unit exhibits low friction, low backlash, and more nearly even acceleration. Used in machinery in which positions, velocities, and accelerations must be carefully controlled and/or vibrations must be suppressed.
Linear morphoea follows Blaschko's lines.
Weibel, L; Harper, J I
2008-07-01
The aetiology of morphoea (or localized scleroderma) remains unknown. It has previously been suggested that lesions of linear morphoea may follow Blaschko's lines and thus reflect an embryological development. However, the distribution of linear morphoea has never been accurately evaluated. We aimed to identify common patterns of clinical presentation in children with linear morphoea and to establish whether linear morphoea follows the lines of Blaschko. A retrospective chart review of 65 children with linear morphoea was performed. According to clinical photographs the skin lesions of these patients were plotted on to standardized head and body charts. With the aid of Adobe Illustrator a final figure was produced including an overlay of all individual lesions which was used for comparison with the published lines of Blaschko. Thirty-four (53%) patients had the en coup de sabre subtype, 27 (41%) presented with linear morphoea on the trunk and/or limbs and four (6%) children had a combination of the two. In 55 (85%) children the skin lesions were confined to one side of the body, showing no preference for either left or right side. On comparing the overlays of all body and head lesions with the original lines of Blaschko there was an excellent correlation. Our data indicate that linear morphoea follows the lines of Blaschko. We hypothesize that in patients with linear morphoea susceptible cells are present in a mosaic state and that exposure to some trigger factor may result in the development of this condition.
Campagnoli, Patrizia; Petris, Giovanni
2009-01-01
State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an R package.
Linear Programming across the Curriculum
Yoder, S. Elizabeth; Kurz, M. Elizabeth
2015-01-01
Linear programming (LP) is taught in different departments across college campuses with engineering and management curricula. Modeling an LP problem is taught in every linear programming class. As faculty teaching in Engineering and Management departments, the depth to which teachers should expect students to master this particular type of…
Introduction to RF linear accelerators
Weiss, M.
1994-01-01
The basic features of RF linear accelerators are described. The concept of the 'loaded cavity', essential for wave-particle synchronism, is introduced, and formulae describing the action of electromagnetic fields on the beam are given. The treatment of intense beams is mentioned, and various existing linear accelerators are presented as examples. (orig.)
Spatial Processes in Linear Ordering
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-01-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far has not provided positive evidence for spatial…
Andersen, O. Krogh
1975-01-01
of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, the latter are canonical in the sense that they neither depend on the energy nor the cell volume...
Rapakoulia, Trisevgeni
2017-08-09
Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential of increasing the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations. Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single-molecule CAGE technology, is accurately described by a linear combination of the responses of the individual drugs at a genome-wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome-wide way, which may minimize the need for exhaustive combinatorial screens.
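The additive model described above can be illustrated with a small least-squares sketch: given per-promoter responses to each single drug, fit coefficients so that their linear combination best approximates the combination-treatment response. This is an illustration of the modeling idea only, not the study's analysis pipeline:

```python
def fit_additive(r1, r2, combo):
    """Least-squares fit of combo ~ a*r1 + b*r2 across promoters,
    solved via the 2x2 normal equations. Returns (a, b)."""
    s11 = sum(u * u for u in r1)
    s12 = sum(u * v for u, v in zip(r1, r2))
    s22 = sum(v * v for v in r2)
    t1 = sum(u * c for u, c in zip(r1, combo))
    t2 = sum(v * c for v, c in zip(r2, combo))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det
```

If the combination response truly is additive, the fitted coefficients recover the mixing weights exactly; in practice one would inspect the residuals per promoter to flag non-additive (synergistic or antagonistic) loci.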
Morocco: an unbiased energy transition
Lavergne, Richard
2015-01-01
In October 2014, the International Energy Agency presented its first in-depth report on Morocco's energy policy, 'Morocco 2014 Energy Policy Review', an evaluation and recommendations, with the shared goals of the Agency as reference. Emphasis was placed on renewable energies, energy efficiency and climate change. The 'Moroccan way' of energy transition merits the attention of energy economists and of the negotiators involved in COP21. (authors)
Introduction to generalized linear models
Dobson, Annette J
2008-01-01
Contents: Introduction (Background; Scope; Notation; Distributions Related to the Normal Distribution; Quadratic Forms; Estimation); Model Fitting (Introduction; Examples; Some Principles of Statistical Modeling; Notation and Coding for Explanatory Variables); Exponential Family and Generalized Linear Models (Introduction; Exponential Family of Distributions; Properties of Distributions in the Exponential Family; Generalized Linear Models; Examples); Estimation (Introduction; Example: Failure Times for Pressure Vessels; Maximum Likelihood Estimation; Poisson Regression Example); Inference (Introduction; Sampling Distribution for Score Statistics; Taylor Series Approximations; Sampling Distribution for MLEs; Log-Likelihood Ratio Statistic; Sampling Distribution for the Deviance; Hypothesis Testing); Normal Linear Models (Introduction; Basic Results; Multiple Linear Regression; Analysis of Variance; Analysis of Covariance; General Linear Models); Binary Variables and Logistic Regression (Probability Distributions ...)
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-01-01
This paper describes acoustic emission linear pulse holography, which produces a chronological linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. A thirty-two-point sampling array is used to construct phase-only linear holograms of simulated acoustic emission sources on large metal plates. The concept behind AE linear pulse holography is illustrated, and a block diagram of a data acquisition system to implement the concept is given. Array element spacing, synthetic frequency criteria, and lateral depth resolution are specified. A reference timing transducer, positioned between the array and the inspection zone, which initiates the time-of-flight measurements, is described. The results graphically illustrate the technique using a one-dimensional FFT computer algorithm (i.e. linear backward wave) for AE image reconstruction.
Linear and Generalized Linear Mixed Models and Their Applications
Jiang, Jiming
2007-01-01
This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and presents an up-to-date account of theory and methods for the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested
Perley, D. A. [Department of Astronomy, California Institute of Technology, MC 249-17, 1200 East California Boulevard, Pasadena, CA 91125 (United States); Perley, R. A. [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States); Hjorth, J.; Malesani, D. [Dark Cosmology Centre, Niels Bohr Institute, DK-2100 Copenhagen (Denmark); Michałowski, M. J. [Scottish Universities Physics Alliance, Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh, EH9 3HJ (United Kingdom); Cenko, S. B. [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Jakobsson, P. [Centre for Astrophysics and Cosmology, Science Institute, University of Iceland, Dunhagi 5, 107 Reykjavík (Iceland); Krühler, T. [European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago 19 (Chile); Levan, A. J. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Tanvir, N. R., E-mail: dperley@astro.caltech.edu [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom)
2015-03-10
Luminous infrared galaxies and submillimeter galaxies contribute significantly to stellar mass assembly and provide an important test of the connection between the gamma-ray burst (GRB) rate and that of overall cosmic star formation. We present sensitive 3 GHz radio observations using the Karl G. Jansky Very Large Array of 32 uniformly selected GRB host galaxies spanning the redshift range 0 < z < 2.5, providing the first fully dust- and sample-unbiased measurement of the fraction of GRBs originating from the universe's most bolometrically luminous galaxies. Four galaxies are detected, with inferred radio star formation rates (SFRs) ranging between 50 and 300 M_☉ yr^-1. Three of the four detections correspond to events consistent with being optically obscured 'dark' bursts. Our overall detection fraction implies that between 9% and 23% of GRBs at 0.5 < z < 2.5 occur in galaxies with S_3GHz > 10 μJy, corresponding to SFR > 50 M_☉ yr^-1 at z ∼ 1 or >250 M_☉ yr^-1 at z ∼ 2. Similar galaxies contribute approximately 10%-30% of all cosmic star formation, so our results are consistent with a GRB rate that is not strongly biased with respect to the total SFR of a galaxy. However, all four radio-detected hosts have stellar masses significantly lower than IR/submillimeter-selected field galaxies of similar luminosities. We suggest that the GRB rate may be suppressed in metal-rich environments but independently enhanced in intense starbursts, producing a strong efficiency dependence on mass but little net dependence on bulk galaxy SFR.
Jeong, Hyunsuk; Jo, Sun-Jin; Lee, Seung-Yup; Kim, Eunjin; Son, Hye Jung; Han, Hyun-ho; Lee, Hae Kook; Kweon, Yong-Sil; Bhang, Soo-young; Choi, Jung-Seok; Kim, Bung-Nyun; Gentile, Douglas A; Potenza, Marc N
2017-01-01
Introduction In 2013, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) proposed nine internet gaming disorder (IGD) diagnostic criteria as a condition warranting further empirical and clinical research. The aim of this study is to clarify the natural and clinical courses of IGD as proposed in the DSM-5 in adolescents and to evaluate its risk and protective factors. Methods and analysis The Internet user Cohort for Unbiased Recognition of gaming disorder in Early Adolescence (iCURE) study is an ongoing multidisciplinary, prospective, longitudinal cohort study conducted in 21 schools in Korea. Participant recruitment commenced in March 2015 with the goal of registering 3000 adolescents. The baseline assessment included surveys on emotional, social and environmental characteristics. A parent or guardian completed questionnaires and a structured psychiatric comorbidity diagnostic interview regarding their children. Adolescents with Internet Game Use-Elicited Symptom Screen total scores of 6 or higher were asked to participate in the clinical diagnostic interview. Two subcohorts of adolescents were constructed: a representative subcohort and a clinical evaluation subcohort. The representative subcohort comprises a randomly selected 10% of the iCURE to investigate the clinical course of IGD based on clinical diagnosis and to estimate the false negative rate. The clinical evaluation subcohort comprises participants meeting three or more of the nine IGD criteria, determined by clinical diagnostic interview, to show the clinical course of IGD. Follow-up data will be collected annually for the 3 years following the baseline assessments. The primary endpoint is the 2-year incidence, remission and recurrence rates of IGD. Cross-sectional and longitudinal associations between exposures and outcomes as well as mediation factors will be evaluated. Ethics and dissemination This study is approved by the Institutional Review Board of the Catholic University
Predictors of Transience among Homeless Emerging Adults
Ferguson, Kristin M.; Bender, Kimberly; Thompson, Sanna J.
2014-01-01
This study identified predictors of transience among homeless emerging adults in three cities. A total of 601 homeless emerging adults from Los Angeles, Austin, and Denver were recruited using purposive sampling. Ordinary least squares regression results revealed that significant predictors of greater transience include White ethnicity, high…
Electrical Signs predictors of malignant ventricular arrhythmias
Aleman Fernandez, Ailema Amelia; Dorantes Sanchez, Margarita
2012-01-01
Recurrence of malignant ventricular arrhythmias is frequent in patients with cardioverter-defibrillators. Risk stratification is difficult: there are numerous electrocardiographic predictors, but their sensitivity and specificity are not absolute. The limit between normal and pathological is not well defined, in addition to the complexity of ventricular arrhythmias. We present different electrocardiographic predictors that can help to improve individual risk stratification
Predictors of treatment failure among pulmonary tuberculosis ...
Introduction: Early identification of Tuberculosis (TB) treatment failure using cost effective means is urgently needed in developing nations. The study set out to describe affordable predictors of TB treatment failure in an African setting. Objective: To determine the predictors of treatment failure among patients with sputum ...
A smart predictor for material property testing
Wang, Wilson; Kanneg, Derek
2008-01-01
A reliable predictor is very useful for real-world industrial applications to forecast the future behavior of dynamic systems. A smart predictor, based on a novel recurrent neural fuzzy (RNF) scheme, is developed in this paper for multi-step-ahead prediction of material properties. A systematic investigation based on two benchmark data sets is conducted in terms of performance and efficiency. Analysis results reveal that, of the data-driven forecasting schemes, predictors based on step input patterns outperform those based on sequential input patterns; the RNF predictor outperforms those based on recurrent neural networks and ANFIS schemes in multi-step-ahead prediction of nonlinear time series. An adaptive Levenberg–Marquardt training technique is adopted to improve the robustness and convergence of the RNF predictor. Furthermore, the proposed smart predictor is implemented for material property testing. Investigation results show that the developed RNF predictor is a reliable forecasting tool for material property testing; it can capture and track the system's dynamic characteristics quickly and accurately. It is also a robust predictor to accommodate different system conditions
Incidence and predictors of coronary stent thrombosis
D'Ascenzo, Fabrizio; Bollati, Mario; Clementi, Fabrizio
2013-01-01
Stent thrombosis remains among the most feared complications of percutaneous coronary intervention (PCI) with stenting. However, data on its incidence and predictors are sparse and conflicting. We thus aimed to perform a collaborative systematic review on incidence and predictors of stent...
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
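For the binary-program case mentioned above, a naive stand-in for implicit enumeration is to check every 0/1 assignment (true implicit enumeration gains efficiency by pruning partial assignments, which this sketch omits). The problem data below are made up; this is not ALPS code.

```python
from itertools import product

def solve_binary_program(c, A, b):
    """Maximize c.x subject to A.x <= b (componentwise), x in {0,1}^n,
    by exhaustively enumerating all 2^n assignments."""
    best_x, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(c)):
        feasible = all(
            sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
            for row, b_i in zip(A, b)
        )
        if feasible:
            val = sum(c_j * x_j for c_j, x_j in zip(c, x))
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Tiny 0/1 knapsack: values (6, 5, 4), weights (5, 4, 3), capacity 8.
x, v = solve_binary_program([6, 5, 4], [[5, 4, 3]], [8])  # x = (1, 0, 1), v = 10
```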
Linear and quasi-linear equations of parabolic type
Ladyženskaja, O A; Ural′ceva, N N; Uralceva, N N
1968-01-01
Equations of parabolic type are encountered in many areas of mathematics and mathematical physics, and those encountered most frequently are linear and quasi-linear parabolic equations of the second order. In this volume, boundary value problems for such equations are studied from two points of view: solvability, unique or otherwise, and the effect of smoothness properties of the functions entering the initial and boundary conditions on the smoothness of the solutions.
The Theory of Linear Prediction
Vaidyanathan, PP
2007-01-01
Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vecto
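The workhorse computation of linear prediction is solving the Yule-Walker (normal) equations for the optimal predictor coefficients, usually via the Levinson-Durbin recursion. A minimal sketch, using the convention that the prediction-error filter is 1 + a[1]z^-1 + ... + a[p]z^-p; the autocorrelation values in the example are a made-up AR(1) case, not taken from the book:

```python
def levinson_durbin(r, order):
    """Solve the Yule-Walker equations by the Levinson-Durbin recursion.

    r: autocorrelation sequence r[0..order]
    Returns (a, err): prediction-error filter coefficients a[0..order]
    (with a[0] == 1) and the final mean-square prediction error.
    """
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        # reflection (PARCOR) coefficient for stage i
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a, err

# An AR(1) process with coefficient 0.5 has autocorrelation r[k] = 0.5**k;
# the order-2 predictor correctly puts all weight on lag 1:
a, err = levinson_durbin([1.0, 0.5, 0.25], 2)  # a[1] == -0.5, a[2] == 0.0, err == 0.75
```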
Saravanan, R
2018-01-01
Non-linear optical materials have widespread and promising applications, but efforts to understand their local structure, electron density distribution and bonding are still lacking. The present work explores the structural details, the electron density distribution and the local bond length distribution of some non-linear optical materials. It also estimates the optical band gap, particle size, crystallite size and elemental composition of these materials from UV-Visible analysis, SEM, XRD and EDS, respectively.
Optimal control linear quadratic methods
Anderson, Brian D O
2007-01-01
This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material. The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the
Cellular Automata Rules and Linear Numbers
Nayak, Birendra Kumar; Sahoo, Sudhakar; Biswal, Sagarika
2012-01-01
In this paper, linear Cellular Automata (CA) rules are recursively generated using a binary tree rooted at "0". Some mathematical results on linear as well as non-linear CA rules are derived. Integers associated with linear CA rules are defined as linear numbers, and the properties of these linear numbers are studied.
Feedback systems for linear colliders
Hendrickson, L; Himel, Thomas M; Minty, Michiko G; Phinney, N; Raimondi, Pantaleo; Raubenheimer, T O; Shoaee, H; Tenenbaum, P G
1999-01-01
Feedback systems are essential for stable operation of a linear collider, providing a cost-effective method for relaxing tight tolerances. In the Stanford Linear Collider (SLC), feedback controls beam parameters such as trajectory, energy, and intensity throughout the accelerator. A novel dithering optimization system which adjusts final focus parameters to maximize luminosity contributed to achieving record performance in the 1997-98 run. Performance limitations of the steering feedback have been investigated, and improvements have been made. For the Next Linear Collider (NLC), extensive feedback systems are planned as an integral part of the design. Feedback requirements for JLC (the Japanese Linear Collider) are essentially identical to NLC; some of the TESLA requirements are similar but there are significant differences. For NLC, algorithms which incorporate improvements upon the SLC implementation are being prototyped. Specialized systems for the damping rings, rf and interaction point will operate at hi...
An introduction to linear algebra
Mirsky, L
2003-01-01
Rigorous, self-contained coverage of determinants, vectors, matrices and linear equations, quadratic forms, more. Elementary, easily readable account with numerous examples and problems at the end of each chapter.
CLIC: developing a linear collider
Laurent Guiraud
1999-01-01
Compact Linear Collider (CLIC) is a CERN project to provide high-energy electron-positron collisions. Instead of conventional radio-frequency klystrons, CLIC will use a low-energy, high-intensity primary beam to produce acceleration.
1988 linear accelerator conference proceedings
1989-06-01
This report contains papers presented at the 1988 Linear Accelerator Conference. A few topics covered are beam dynamics; beam transport; superconducting components; free electron lasers; ion sources; and klystron research
CERN balances linear collider studies
ILC Newsline
2011-01-01
The forces behind the two most mature proposals for a next-generation collider, the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) study, have been steadily coming together, with scientists from both communities sharing ideas and information across the technology divide. In support of cooperation between the two, CERN in Switzerland, where most CLIC research takes place, recently converted the project-specific position of CLIC Study Leader to the concept-based Linear Collider Study Leader. The scientist who now holds this position, Steinar Stapnes, is charged with making the linear collider a viable option for CERN's future, one that could include either CLIC or the ILC. The transition to greater involvement with the ILC must be gradual, he said, and the redefinition of his post is a good start. Though not very much involved with superconducting radiofrequency (SRF) technology, where ILC researchers have made significant advances, CERN participates in many aspect...
Linear Methods for Image Interpolation
Pascal Getreuer
2011-01-01
We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
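As an illustration of the separable idea, bilinear interpolation reduces to two 1-D linear interpolations along x followed by one along y. A minimal sketch; the grid, coordinate convention, and edge clamping are my own choices, not the paper's notation:

```python
def bilinear(img, x, y):
    """Bilinearly interpolate a 2-D grid img[row][col] at real coordinates
    (x, y), where x indexes columns and y indexes rows; samples sit at
    integer coordinates and queries are clamped at the grid edge."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    # interpolate along x on the two bracketing rows, then along y
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

value = bilinear([[0.0, 1.0], [2.0, 3.0]], 0.5, 0.5)  # center of a 2x2 grid -> 1.5
```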
Klumpp, A. R.; Lawson, C. L.
1988-01-01
Routines provided for common scalar, vector, matrix, and quaternion operations. Computer program extends Ada programming language to include linear-algebra capabilities similar to those of the HAL/S programming language. Designed for such avionics applications as software for Space Station.
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
Fowler, Stephanie M; Schmidt, Heinar; van de Ven, Remy; Wynn, Peter; Hopkins, David L
2014-12-01
A Raman spectroscopic hand-held device was used to predict shear force (SF) of 80 fresh lamb m. longissimus lumborum (LL) at 1 and 5 days post mortem (PM). Traditional predictors of SF, including sarcomere length (SL), particle size (PS), cooking loss (CL), percentage myofibrillar breaks and pH, were also measured. SF values were regressed against Raman spectra using partial least squares regression and against the traditional predictors using linear regression. The best prediction of shear force used spectra at 1 day PM to predict shear force at 1 day, which gave a root mean square error of prediction (RMSEP) of 13.6 (Null = 14.0); the R² between observed and cross-validated predicted values was 0.06 (R²cv). Overall, for fresh LL, the predictability of SF by either the Raman hand-held probe or traditional predictors was low. Copyright © 2014 Elsevier Ltd. All rights reserved.
Functionalized linear and cyclic polyolefins
Tuba, Robert; Grubbs, Robert H.
2018-02-13
This invention relates to methods and compositions for preparing linear and cyclic polyolefins. More particularly, the invention relates to methods and compositions for preparing functionalized linear and cyclic polyolefins via olefin metathesis reactions. Polymer products produced via the olefin metathesis reactions of the invention may be utilized for a wide range of materials applications. The invention has utility in the fields of polymer and materials chemistry and manufacture.
Rumen Daskalov
2017-07-01
Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
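For a linear code, the minimum distance d equals the minimum Hamming weight over the nonzero codewords, so for small k it can be verified by enumerating all q^k codewords from a generator matrix. A brute-force sketch; the generator matrix in the example is the classical [4,2,3] tetracode over GF(3), not one of the paper's new codes:

```python
from itertools import product

def min_distance(G, q):
    """Minimum Hamming distance of the linear code with generator matrix G
    (a list of k rows of length n) over GF(q), q prime, by enumerating all
    q**k codewords. For a linear code, d = minimum nonzero-codeword weight."""
    n = len(G[0])
    best = n
    for msg in product(range(q), repeat=len(G)):
        if not any(msg):
            continue  # skip the zero codeword
        codeword = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        best = min(best, sum(1 for c in codeword if c != 0))
    return best

# The tetracode, a well-known [4,2,3] code over GF(3):
d = min_distance([[1, 0, 1, 1], [0, 1, 1, 2]], 3)  # d = 3
```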
Explorative methods in linear models
Høskuldsson, Agnar
2004-01-01
The author has developed the H-method of mathematical modeling, which builds up the model by parts, where each part is optimized with respect to prediction. Besides providing better predictions than traditional methods, these methods provide graphic procedures for analyzing different features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression.
Polarized Electrons for Linear Colliders
Clendenin, J.
2004-01-01
Future electron-positron linear colliders require a highly polarized electron beam with a pulse structure that depends primarily on whether the acceleration utilizes warm or superconducting rf structures. The International Linear Collider (ILC) will use cold structures for the main linac. It is shown that a dc-biased polarized photoelectron source such as the one successfully used for the SLC can meet the charge requirements for the ILC micropulse with a polarization approaching 90%
Effect Displays in R for Generalised Linear Models
John Fox
2003-07-01
This paper describes the implementation in R of a method for tabular or graphical display of terms in a complex generalised linear model. By complex, I mean a model that contains terms related by marginality or hierarchy, such as polynomial terms, or main effects and interactions. I call these tables or graphs effect displays. Effect displays are constructed by identifying high-order terms in a generalised linear model. Fitted values under the model are computed for each such term. The lower-order "relatives" of a high-order term (e.g., main effects marginal to an interaction) are absorbed into the term, allowing the predictors appearing in the high-order term to range over their values. The values of other predictors are fixed at typical values: for example, a covariate could be fixed at its mean or median, a factor at its proportional distribution in the data, or at equal proportions in its several levels. Variations of effect displays are also described, including representation of terms higher-order to any appearing in the model.
Primordial black holes in linear and non-linear regimes
Allahyari, Alireza; Abolhasani, Ali Akbar [Department of Physics, Sharif University of Technology, Tehran (Iran, Islamic Republic of); Firouzjaee, Javad T., E-mail: allahyari@physics.sharif.edu, E-mail: j.taghizadeh.f@ipm.ir [School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)
2017-06-01
We revisit the formation of primordial black holes (PBHs) in the radiation-dominated era for both linear and non-linear regimes, elaborating on the concept of an apparent horizon. Contrary to the expectation from vacuum models, we argue that in a cosmological setting a density fluctuation with a high density does not always collapse to a black hole. To this end, we first elaborate on the perturbation theory for spherically symmetric spacetimes in the linear regime. Thereby, we introduce two gauges. This allows us to introduce a well-defined gauge-invariant quantity for the expansion of null geodesics. Using this quantity, we argue that PBHs do not form in the linear regime irrespective of the density of the background. Finally, we consider the formation of PBHs in non-linear regimes, adopting the spherical collapse picture. In this picture, over-densities are modeled by closed FRW models in the radiation-dominated era. The difference of our approach is that we start by finding an exact solution for a closed radiation-dominated universe. This yields exact results for the turn-around time and radius. It is important that we take the initial conditions from the linear perturbation theory. Additionally, instead of using the uniform Hubble gauge condition, both density and velocity perturbations are admitted in this approach. Thereby, the matching condition will impose an important constraint on the initial velocity perturbations, δ^h_0 = −δ_0/2. This can be extended to higher orders. Using this constraint, we find that the apparent horizon of a PBH forms when δ > 3 at turn-around time. The corrections also appear from the third order. Moreover, a PBH forms when its apparent horizon is outside the sound horizon at the re-entry time. Applying this condition, we infer that the threshold value of the density perturbations at horizon re-entry should satisfy δ_th > 0.7.
Jeong, Hyunsuk; Yim, Hyeon Woo; Jo, Sun-Jin; Lee, Seung-Yup; Kim, Eunjin; Son, Hye Jung; Han, Hyun-Ho; Lee, Hae Kook; Kweon, Yong-Sil; Bhang, Soo-Young; Choi, Jung-Seok; Kim, Bung-Nyun; Gentile, Douglas A; Potenza, Marc N
2017-10-05
In 2013, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) proposed nine internet gaming disorder (IGD) diagnostic criteria as a condition warranting further empirical and clinical research. The aim of this study is to clarify the natural and clinical courses of IGD as proposed in the DSM-5 in adolescents and to evaluate its risk and protective factors. The Internet user Cohort for Unbiased Recognition of gaming disorder in Early Adolescence (iCURE) study is an ongoing multidisciplinary, prospective, longitudinal cohort study conducted in 21 schools in Korea. Participant recruitment commenced in March 2015 with the goal of registering 3000 adolescents. The baseline assessment included surveys on emotional, social and environmental characteristics. A parent or guardian completed questionnaires and a structured psychiatric comorbidity diagnostic interview regarding their children. Adolescents with Internet Game Use-Elicited Symptom Screen total scores of 6 or higher were asked to participate in the clinical diagnostic interview. Two subcohorts of adolescents were constructed: a representative subcohort and a clinical evaluation subcohort. The representative subcohort comprises a randomly selected 10% of the iCURE to investigate the clinical course of IGD based on clinical diagnosis and to estimate the false negative rate. The clinical evaluation subcohort comprises participants meeting three or more of the nine IGD criteria, determined by clinical diagnostic interview, to show the clinical course of IGD. Follow-up data will be collected annually for the 3 years following the baseline assessments. The primary endpoint is the 2-year incidence, remission and recurrence rates of IGD. Cross-sectional and longitudinal associations between exposures and outcomes as well as mediation factors will be evaluated. This study is approved by the Institutional Review Board of the Catholic University of Korea. Results will be published in peer
A predictor-corrector algorithm to estimate the fractional flow in oil-water models
Savioli, Gabriela B; Berdaguer, Elena M Fernandez
2008-01-01
We introduce a predictor-corrector algorithm to estimate parameters in a nonlinear hyperbolic problem. It can be used to estimate the oil-fractional flow function from the Buckley-Leverett equation. The forward model is non-linear: the sought-for parameter is a function of the solution of the equation. Traditionally, the estimation of functions requires the selection of a fitting parametric model. The algorithm that we develop does not require a predetermined parametric model. Therefore, the estimation problem is carried out over a set of parameters which are functions. The algorithm is based on the linearization of the parameter-to-output mapping. This technique is new in the field of nonlinear estimation. It has the advantage of laying aside parametric models. The algorithm is iterative and is of predictor-corrector type. We present theoretical results on the inverse problem. We use synthetic data to test the new algorithm.
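The predictor-corrector structure the abstract describes (predict with the current linearization, then correct) follows the same pattern as classical predictor-corrector integrators. As a much simpler stand-in for the authors' estimation algorithm, here is one step of Heun's method: an Euler predictor followed by a trapezoidal corrector. This toy example is mine, not the paper's method:

```python
def heun_step(f, t, y, h):
    """One predictor-corrector step for y' = f(t, y):
    Euler predictor, then trapezoidal-rule corrector."""
    y_pred = y + h * f(t, y)                           # predictor
    return y + 0.5 * h * (f(t, y) + f(t + h, y_pred))  # corrector

def integrate(f, t0, y0, h, steps):
    """Integrate y' = f(t, y) from t0 with fixed step h."""
    t, y = t0, y0
    for _ in range(steps):
        y = heun_step(f, t, y, h)
        t += h
    return y

# y' = -y, y(0) = 1: the numerical solution at t = 1 approaches exp(-1).
y1 = integrate(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```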
Predictors of Dietary Energy Density among Preschool Aged Children
Nilmani N.T. Fernando
2018-02-01
Childhood obesity is a global problem with many contributing factors, including dietary energy density (DED). This paper aims to investigate potential predictors of DED among preschool-aged children in Victoria, Australia. Secondary analysis of longitudinal data for 209 mother–child pairs from the Melbourne Infant Feeding, Activity and Nutrition Trial was conducted. Data for predictors (maternal child feeding and nutrition knowledge, maternal dietary intake, home food availability, socioeconomic status) were obtained through questionnaires completed by first-time mothers when children were aged 4 or 18 months. Three 24-h dietary recalls were completed when children were aged ~3.5 years. DED was calculated utilizing three methods: “food only”, “food and dairy beverages”, and “food and all beverages”. Linear regression analyses were conducted to identify associations between predictors and these three measures of children’s DED. Home availability of fruits (β: −0.82; 95% CI: −1.35, −0.29; p = 0.002 for DEDfood; β: −0.42; 95% CI: −0.82, −0.02; p = 0.041 for DEDfood+dairy beverages) and non-core snacks (β: 0.11; 95% CI: 0.02, 0.20; p = 0.016 for DEDfood; β: 0.09; 95% CI: 0.02, 0.15; p = 0.010 for DEDfood+dairy beverages) were significantly associated with two of the three DED measures. Providing fruit at home early in a child’s life may encourage the establishment of healthful eating behaviors that could promote a diet that is lower in energy density later in life. Home availability of non-core snacks is likely to increase the energy density of preschool children’s diets, supporting the proposition that non-core snack availability at home should be limited.
Serum Predictors of Percent Lean Mass in Young Adults.
Lustgarten, Michael S; Price, Lori L; Phillips, Edward M; Kirn, Dylan R; Mills, John; Fielding, Roger A
2016-08-01
Lustgarten, MS, Price, LL, Phillips, EM, Kirn, DR, Mills, J, and Fielding, RA. Serum predictors of percent lean mass in young adults. J Strength Cond Res 30(8): 2194-2201, 2016-Elevated lean (skeletal muscle) mass is associated with increased muscle strength and anaerobic exercise performance, whereas low levels of lean mass are associated with insulin resistance and sarcopenia. Therefore, studies aimed at obtaining an improved understanding of mechanisms related to the quantity of lean mass are of interest. Percent lean mass (total lean mass/body weight × 100) in 77 young subjects (18-35 years) was measured with dual-energy x-ray absorptiometry. Twenty analytes and 296 metabolites were evaluated with the use of the standard chemistry screen and mass spectrometry-based metabolomic profiling, respectively. Sex-adjusted multivariable linear regression was used to determine serum analytes and metabolites significantly (p ≤ 0.05 and q ≤ 0.30) associated with the percent lean mass. Two enzymes (alkaline phosphatase and serum glutamate oxaloacetate aminotransferase) and 29 metabolites were found to be significantly associated with the percent lean mass, including metabolites related to microbial metabolism, uremia, inflammation, oxidative stress, branched-chain amino acid metabolism, insulin sensitivity, glycerolipid metabolism, and xenobiotics. Use of sex-adjusted stepwise regression to obtain a final covariate predictor model identified the combination of 5 analytes and metabolites as overall predictors of the percent lean mass (model R = 82.5%). Collectively, these data suggest that a complex interplay of various metabolic processes underlies the maintenance of lean mass in young healthy adults.
The linear-non-linear frontier for the Goldstone Higgs
Gavela, M.B.; Saa, S.; Kanshin, K.; Machado, P.A.N.
2016-01-01
The minimal SO(5)/SO(4) σ-model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone-boson ancestry. Varying the σ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry-breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy-fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators. (orig.)
The linear-non-linear frontier for the Goldstone Higgs
Gavela, M.B.; Saa, S. [IFT-UAM/CSIC, Universidad Autonoma de Madrid, Departamento de Fisica Teorica y Instituto de Fisica Teorica, Madrid (Spain); Kanshin, K. [Universita di Padova, Dipartimento di Fisica e Astronomia ' G. Galilei' , Padua (Italy); INFN, Padova (Italy); Machado, P.A.N. [IFT-UAM/CSIC, Universidad Autonoma de Madrid, Departamento de Fisica Teorica y Instituto de Fisica Teorica, Madrid (Spain); Fermi National Accelerator Laboratory, Theoretical Physics Department, Batavia, IL (United States)
2016-12-15
The minimal SO(5)/SO(4) σ-model is used as a template for the ultraviolet completion of scenarios in which the Higgs particle is a low-energy remnant of some high-energy dynamics, enjoying a (pseudo) Nambu-Goldstone-boson ancestry. Varying the σ mass allows one to sweep from the perturbative regime to the customary non-linear implementations. The low-energy benchmark effective non-linear Lagrangian for bosons and fermions is obtained, determining as well the operator coefficients including linear corrections. At first order in the latter, three effective bosonic operators emerge which are independent of the explicit soft breaking assumed. The Higgs couplings to vector bosons and fermions turn out to be quite universal: the linear corrections are proportional to the explicit symmetry-breaking parameters. Furthermore, we define an effective Yukawa operator which allows a simple parametrization and comparison of different heavy-fermion ultraviolet completions. In addition, one particular fermionic completion is explored in detail, obtaining the corresponding leading low-energy fermionic operators. (orig.)
Linearization: Geometric, Complex, and Conditional
Asghar Qadir
2012-01-01
Lie symmetry analysis provides a systematic method of obtaining exact solutions of nonlinear (systems of) differential equations, whether partial or ordinary. Of special interest is the procedure that Lie developed to transform scalar nonlinear second-order ordinary differential equations to linear form. Not much work was done in this direction to start with, but recently there have been various developments. Here, first the original work of Lie (and the early developments on it), and then more recent developments based on geometry and complex analysis, apart from Lie’s own method of algebra (namely, Lie group theory), are reviewed. It is relevant to mention that much of the work is not linearization but uses the base of linearization.
Window observers for linear systems
Utkin Vadim
2000-01-01
Given a linear system ẋ = Ax + Bu with output y = Cx and a window function ω(t), i.e., ∀t, ω(t) ∈ {0, 1}, and assuming that the window function is Lebesgue measurable, we refer to the following observer, dx̂/dt = Ax̂ + Bu + ω(t)LC(x − x̂), as a window observer. The stability issue is treated in this paper. It is proven that for linear time-invariant systems, the window observer can be stabilized by an appropriate design under a very mild condition on the window functions, whereas for linear time-varying systems, some regularity of the window functions is required to achieve observer designs with asymptotic stability. The corresponding design methods are developed. An example is included to illustrate the possible applications.
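A minimal numeric sketch of the window-observer idea: the output-injection term is simply switched off whenever ω(t) = 0. The system matrices, gain L, integration step, and 50%-duty window below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Window observer sketch: Euler integration of the plant and observer.
# A, B, C, L, and the window pattern are hypothetical choices.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[3.0], [2.0]])              # observer gain (illustrative)

dt, T = 0.001, 10.0
x = np.array([[1.0], [0.0]])              # true state
xh = np.array([[0.0], [0.0]])             # observer estimate

for k in range(int(T / dt)):
    t = k * dt
    u = np.array([[np.sin(t)]])
    w = 1.0 if (t % 1.0) < 0.5 else 0.0   # window: output available half the time
    y = C @ x
    xdot = A @ x + B @ u
    xhdot = A @ xh + B @ u + w * (L @ (y - C @ xh))   # correction gated by w
    x = x + dt * xdot
    xh = xh + dt * xhdot

err = float(np.linalg.norm(x - xh))
print(f"final estimation error = {err:.2e}")
```

Because both A and A − LC are Hurwitz here, the estimation error decays even though the correction acts only half the time.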
Topics in quaternion linear algebra
Rodman, Leiba
2014-01-01
Quaternions are a number system that has become increasingly useful for representing the rotations of objects in three-dimensional space and has important applications in theoretical and applied mathematics, physics, computer science, and engineering. This is the first book to provide a systematic, accessible, and self-contained exposition of quaternion linear algebra. It features previously unpublished research results with complete proofs and many open problems at various levels, as well as more than 200 exercises to facilitate use by students and instructors. Applications presented in the book include numerical ranges, invariant semidefinite subspaces, differential equations with symmetries, and matrix equations. Designed for researchers and students across a variety of disciplines, the book can be read by anyone with a background in linear algebra, rudimentary complex analysis, and some multivariable calculus. Instructors will find it useful as a complementary text for undergraduate linear algebra courses...
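Since the book assumes only a background in linear algebra, a concrete entry point is the Hamilton product and the standard rotation formula v ↦ q v q*. The sketch below uses these textbook definitions; it is not code from the book.

```python
import numpy as np

# Quaternions as (w, x, y, z) arrays: Hamilton product and 3-vector rotation.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(v, axis, angle):
    axis = axis / np.linalg.norm(axis)
    q = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    qc = q * np.array([1, -1, -1, -1])    # conjugate = inverse for a unit quaternion
    return qmul(qmul(q, np.concatenate([[0.0], v])), qc)[1:]

# Rotating the x-axis by 90 degrees about z should give the y-axis.
v = rotate(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(v)   # approximately (0, 1, 0)
```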
Towards the International Linear Collider
Lopez-Fernandez, Ricardo
2006-01-01
The broad physics potential of e+e- linear colliders was recognized by the high energy physics community right after the end of LEP in 2000. In 2007, the Large Hadron Collider (LHC), now under construction at CERN, will obtain its first collisions. The LHC, colliding protons with protons at 14 TeV, will discover a standard model Higgs boson over the full potential mass range, and should be sensitive to new physics into the several-TeV range. The program for the Linear Collider (LC) will be set in the context of the discoveries made at the LHC. All the proposals for a Linear Collider will extend the discoveries and provide a wealth of measurements that are essential for a deeper understanding of their meaning, pointing the way to further evolution of particle physics in the future. For the Mexican groups, it is the right time to join such an effort.
Linear Synchronous Motor Repeatability Tests
Ward, C.R.
2002-01-01
A cart system using linear synchronous motors was being considered for the Plutonium Immobilization Plant (PIP). One of the applications in the PIP was the movement of a stack of furnace trays, filled with the waste form (pucks) from a stacking/unstacking station to several bottom loaded furnaces. A system was ordered to perform this function in the PIP Ceramic Prototype Test Facility (CPTF). This system was installed and started up in SRTC prior to being installed in the CPTF. The PIP was suspended and then canceled after the linear synchronous motor system was started up. This system was used to determine repeatability of a linear synchronous motor cart system for the Modern Pit Facility
Linearly polarized photons at ELSA
Eberhardt, Holger [Physikalisches Institut, Universitaet Bonn (Germany)
2009-07-01
To investigate the nucleon resonance regime in meson photoproduction, double polarization experiments are currently performed at the electron accelerator ELSA in Bonn. The experiments make use of a polarized target and circularly or linearly polarized photon beams. Linearly polarized photons are produced by coherent bremsstrahlung from an accurately aligned diamond crystal. The orientation of the crystal with respect to the electron beam is measured using the Stonehenge technique. Both the energy of maximum polarization and the plane of polarization can be chosen deliberately for the experiment. The linearly polarized beam provides the basis for the measurement of azimuthal beam asymmetries, such as Σ (unpolarized target) and G (polarized target). These observables are extracted in various single and multiple meson photoproduction channels.
Linear programming foundations and extensions
Vanderbei, Robert J
2001-01-01
Linear Programming: Foundations and Extensions is an introduction to the field of optimization. The book emphasizes constrained optimization, beginning with a substantial treatment of linear programming, and proceeding to convex analysis, network flows, integer programming, quadratic programming, and convex optimization. The book is carefully written. Specific examples and concrete algorithms precede more abstract topics. Topics are clearly developed with a large number of numerical examples worked out in detail. Moreover, Linear Programming: Foundations and Extensions underscores the purpose of optimization: to solve practical problems on a computer. Accordingly, the book is coordinated with free efficient C programs that implement the major algorithms studied: -The two-phase simplex method; -The primal-dual simplex method; -The path-following interior-point method; -The homogeneous self-dual methods. In addition, there are online JAVA applets that illustrate various pivot rules and variants of the simplex m...
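As a toy illustration of the geometry the simplex method exploits (an optimum of a bounded feasible LP is attained at a vertex of the polytope), a tiny LP can be solved by enumerating all constraint intersections. This is a hypothetical example, not one of the book's C implementations.

```python
import numpy as np
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x >= 0,  y >= 0.
c = np.array([3.0, 2.0])
# All constraints in the form a.z <= b (sign constraints included).
Acon = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
bcon = np.array([4.0, 2.0, 0.0, 0.0])

best, best_z = None, -np.inf
for i, j in combinations(range(len(Acon)), 2):
    M = Acon[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                              # parallel constraints: no vertex
    z = np.linalg.solve(M, bcon[[i, j]])      # intersection of two constraint lines
    if np.all(Acon @ z <= bcon + 1e-9):       # keep only feasible vertices
        if c @ z > best_z:
            best, best_z = z, c @ z
print(best, best_z)   # optimum at x=2, y=2 with objective value 10
```

The simplex method reaches the same vertex without exhaustive enumeration, by moving along edges of the polytope in an improving direction.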
Uniqueness theorems in linear elasticity
Knops, Robin John
1971-01-01
The classical result for uniqueness in elasticity theory is due to Kirchhoff. It states that the standard mixed boundary value problem for a homogeneous isotropic linear elastic material in equilibrium and occupying a bounded three-dimensional region of space possesses at most one solution in the classical sense, provided the Lamé and shear moduli, λ and μ respectively, obey the inequalities (3λ + 2μ) > 0 and μ > 0. In linear elastodynamics the analogous result, due to Neumann, is that the initial-mixed boundary value problem possesses at most one solution provided the elastic moduli satisfy the same set of inequalities as in Kirchhoff's theorem. Most standard textbooks on the linear theory of elasticity mention only these two classical criteria for uniqueness and neglect altogether the abundant literature which has appeared since the original publications of Kirchhoff. To remedy this deficiency it seems appropriate to attempt a coherent description of the various contributions made to the study of uniquenes...
Bayes linear statistics, theory & methods
Goldstein, Michael
2007-01-01
Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers: the importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...
Scalar-tensor linear inflation
Artymowski, Michał [Institute of Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków (Poland); Racioppi, Antonio, E-mail: Michal.Artymowski@uj.edu.pl, E-mail: Antonio.Racioppi@kbfi.ee [National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn (Estonia)
2017-04-01
We investigate two approaches to non-minimally coupled gravity theories which present linear inflation as an attractor solution: a) the scalar-tensor theory approach, where we look for a scalar-tensor theory that would restore the results of linear inflation in the strong coupling limit for a non-minimal coupling to gravity of the form f(φ)R/2; b) the particle physics approach, where we motivate the form of the Jordan frame potential by loop corrections to the inflaton field. In both cases the Jordan frame potentials are modifications of the induced gravity inflationary scenario, but instead of the Starobinsky attractor they lead to linear inflation in the strong coupling limit.
Predictors of psychological resilience amongst medical students following major earthquakes.
Carter, Frances; Bell, Caroline; Ali, Anthony; McKenzie, Janice; Boden, Joseph M; Wilkinson, Timothy
2016-05-06
To identify predictors of self-reported psychological resilience amongst medical students following major earthquakes in Canterbury in 2010 and 2011. Two hundred and fifty-three medical students from the Christchurch campus, University of Otago, were invited to participate in an electronic survey seven months following the most severe earthquake. Students completed the Connor-Davidson Resilience Scale, the Depression, Anxiety and Stress Scale, the Post-traumatic Stress Disorder Checklist, the Work and Adjustment Scale, and the Eysenck Personality Questionnaire. Likert scales and other questions were also used to assess a range of variables including demographic and historical variables (eg, self-rated resilience prior to the earthquakes), plus the impacts of the earthquakes. The response rate was 78%. Univariate analyses identified multiple variables that were significantly associated with higher resilience. Multiple linear regression analyses produced a fitted model that was able to explain 35% of the variance in resilience scores. The best predictors of higher resilience were: retrospectively-rated personality prior to the earthquakes (higher extroversion and lower neuroticism); higher self-rated resilience prior to the earthquakes; not being exposed to the most severe earthquake; and less psychological distress following the earthquakes. Psychological resilience amongst medical students following major earthquakes was able to be predicted to a moderate extent.
Permafrost Hazards and Linear Infrastructure
Stanilovskaya, Julia; Sergeev, Dmitry
2014-05-01
The international experience of linear infrastructure planning, construction and exploitation in the permafrost zone is directly tied to permafrost hazard assessment. That procedure should also consider the factors of climate impact and infrastructure protection. The global climate change hotspots are currently polar and mountain areas: temperatures rise, precipitation and land ice conditions change, and early springs occur more often. Large linear infrastructure objects cross territories with different permafrost conditions, which are sensitive to changes in air temperature, hydrology, and snow accumulation connected to climatic dynamics. Among the most extensive linear structures built on permafrost worldwide are the Trans-Alaska Pipeline (USA), the Alaska Highway (Canada), the Qinghai-Xizang Railway (China) and the Eastern Siberia - Pacific Ocean Oil Pipeline (Russia). These are currently being influenced by regional climate change and permafrost impacts which may act differently from place to place. Thermokarst is deemed to be the most dangerous process for linear engineering structures. Its formation and development depend on the linear structure type: road or pipeline, elevated or buried. Zonal climate and geocryological conditions are also of determining importance here. The projects are of different ages and some of them were implemented under different climatic conditions. The effects of permafrost thawing have been recorded every year since then. The exploration and transportation companies from different countries protect their linear infrastructure from permafrost degradation in different ways. The highways in Alaska are in good condition due to governmental expenses on annual reconstructions. The Chara-China Railroad in Russia is in non-standard condition due to intensive permafrost response. Standards for engineering and construction should be reviewed and updated to account for permafrost hazards caused by the
A Linear Electromagnetic Piston Pump
Hogan, Paul H.
Advancements in mobile hydraulics for human-scale applications have increased demand for a compact hydraulic power supply. Conventional designs couple a rotating electric motor to a hydraulic pump, which increases the package volume and requires several energy conversions. This thesis investigates the use of a free piston as the moving element in a linear motor to eliminate multiple energy conversions and decrease the overall package volume. A coupled model used a quasi-static magnetic equivalent circuit to calculate the motor inductance and the electromagnetic force acting on the piston. The force was an input to a time domain model to evaluate the mechanical and pressure dynamics. The magnetic circuit model was validated with finite element analysis and an experimental prototype linear motor. The coupled model was optimized using a multi-objective genetic algorithm to explore the parameter space and maximize power density and efficiency. An experimental prototype linear pump coupled pistons to an off-the-shelf linear motor to validate the mechanical and pressure dynamics models. The magnetic circuit force calculation agreed within 3% of finite element analysis, and within 8% of experimental data from the unoptimized prototype linear motor. The optimized motor geometry also had good agreement with FEA; at zero piston displacement, the magnetic circuit calculates optimized motor force within 10% of FEA in less than 1/1000 the computational time. This makes it well suited to genetic optimization algorithms. The mechanical model agrees very well with the experimental piston pump position data when tuned for additional unmodeled mechanical friction. Optimized results suggest that an improvement of 400% of the state of the art power density is attainable with as high as 85% net efficiency. This demonstrates that a linear electromagnetic piston pump has potential to serve as a more compact and efficient supply of fluid power for the human scale.
Multiple predictor smoothing methods for sensitivity analysis: Description of techniques
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. Then, in the second and concluding part of this presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
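Of the four smoothers listed, locally weighted regression is the simplest to sketch. The toy implementation below uses the standard LOESS recipe (tricube weights over a nearest-neighbour span with a local linear fit); it is illustrative, not the authors' code.

```python
import numpy as np

# Minimal LOESS-style smoother: local weighted linear fit at each point.
def loess(x, y, span=0.5):
    n = len(x)
    k = max(2, int(span * n))                 # number of neighbours in the window
    yhat = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]               # k nearest neighbours
        h = d[idx].max()                      # window half-width
        w = (1 - (d[idx] / h) ** 3) ** 3      # tricube weights
        sw = np.sqrt(w)                       # weighted least squares via sqrt weights
        X = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.lstsq(X * sw[:, None], y[idx] * sw, rcond=None)[0]
        yhat[i] = beta[0] + beta[1] * x[i]
    return yhat

x = np.linspace(0, 1, 100)
y = np.sin(2 * np.pi * x) + np.random.default_rng(1).normal(0, 0.1, 100)
smooth = loess(x, y, span=0.3)
```

Unlike a single global linear or quadratic regression, the local fits track the nonlinear input-output relationship, which is the point made in the abstract above.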
Multiple predictor smoothing methods for sensitivity analysis: Example results
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
Stability analysis of embedded nonlinear predictor neural generalized predictive controller
Hesham F. Abdel Ghaffar
2014-03-01
Nonlinear Predictor-Neural Generalized Predictive Controller (NGPC) is one of the most advanced control techniques used with severely nonlinear processes. In this paper, a hybrid solution combining NGPC and the Internal Model Principle (IMP) is implemented to stabilize nonlinear, non-minimum-phase, variable-dead-time processes under high disturbance values over a wide range of operation. The superiority of NGPC over linear predictive controllers, like GPC, is also demonstrated for severely nonlinear processes over a wide range of operation. The conditions necessary to stabilize NGPC are derived using Lyapunov stability analysis for nonlinear processes. The NGPC stability conditions and the improvement in disturbance suppression are verified both in simulation using Duffing's nonlinear equation and in real time using a continuous stirred tank reactor. To our knowledge, this paper offers the first hardware-embedded neural GPC, which has been utilized to verify the NGPC-IMP improvement in real time.
Sparse Linear Identifiable Multivariate Modeling
Henao, Ricardo; Winther, Ole
2011-01-01
In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully... and bench-marked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable...
Quantized, piecewise linear filter network
Sørensen, John Aasted
1993-01-01
A quantization based piecewise linear filter network is defined. A method for the training of this network based on local approximation in the input space is devised. The training is carried out by repeatedly alternating between vector quantization of the training set into quantization classes...... and equalization of the quantization classes linear filter mean square training errors. The equalization of the mean square training errors is carried out by adapting the boundaries between neighbor quantization classes such that the differences in mean square training errors are reduced...
Correct Linearization of Einstein's Equations
Rabounski D.
2006-06-01
Regularly Einstein's equations can be reduced to a wave form (linearly dependent on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives, even if gravitation, the space rotation and Christoffel's symbols are non-zero. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
Basic linear partial differential equations
Treves, Francois
1975-01-01
Focusing on the archetypes of linear partial differential equations, this text for upper-level undergraduates and graduate students features most of the basic classical results. The methods, however, are decidedly nontraditional: in practically every instance, they tend toward a high level of abstraction. This approach recalls classical material to contemporary analysts in a language they can understand, as well as exploiting the field's wealth of examples as an introduction to modern theories.The four-part treatment covers the basic examples of linear partial differential equations and their
Introduction to computational linear algebra
Nassif, Nabil; Erhel, Jocelyne
2015-01-01
Introduction to Computational Linear Algebra introduces the reader with a background in basic mathematics and computer programming to the fundamentals of dense and sparse matrix computations with illustrating examples. The textbook is a synthesis of conceptual and practical topics in ""Matrix Computations."" The book's learning outcomes are twofold: to understand state-of-the-art computational tools to solve matrix computations problems (BLAS primitives, MATLAB® programming) as well as essential mathematical concepts needed to master the topics of numerical linear algebra. It is suitable for s
Linear feedback controls the essentials
Haidekker, Mark A
2013-01-01
The design of control systems is at the very core of engineering. Feedback controls are ubiquitous, ranging from simple room thermostats to airplane engine control. Helping to make sense of this wide-ranging field, this book provides a new approach by keeping a tight focus on the essentials with a limited, yet consistent set of examples. Analysis and design methods are explained in terms of theory and practice. The book covers classical, linear feedback controls, and linear approximations are used when needed. In parallel, the book covers time-discrete (digital) control systems and juxtapos
Passive longitudinal phase space linearizer
P. Craievich
2010-03-01
We report on the possibility to passively linearize the bunch compression process in electron linacs for the next generation x-ray free electron lasers. This can be done by using the monopole wakefields in a dielectric-lined waveguide. The optimum longitudinal voltage loss over the length of the bunch is calculated in order to compensate both the second-order rf time curvature and the second-order momentum compaction terms. Thus, the longitudinal phase space after the compression process is linearized up to a fourth-order term introduced by the convolution between the bunch and the monopole wake function.
Generalized, Linear, and Mixed Models
McCulloch, Charles E; Neuhaus, John M
2011-01-01
An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m
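A core mixed-model idea, shared with the G-BLUP literature cited at the head of this listing, is that a group's random effect is predicted by shrinking its observed mean toward the grand mean. A minimal sketch with known variance components follows (an assumption for clarity; real software estimates them, e.g. by REML).

```python
import numpy as np

# Random-intercept model sketch: y_gi = b_g + e_gi, with the BLUP of b_g
# obtained by shrinking the group mean toward the grand mean.
rng = np.random.default_rng(3)
sigma_b, sigma_e, n_per, n_groups = 1.0, 2.0, 20, 50
b = rng.normal(0, sigma_b, n_groups)                     # true random intercepts
y = b[:, None] + rng.normal(0, sigma_e, (n_groups, n_per))

mu = y.mean()                                            # grand mean
shrink = sigma_b**2 / (sigma_b**2 + sigma_e**2 / n_per)  # BLUP shrinkage factor
b_hat = shrink * (y.mean(axis=1) - mu)                   # predicted group effects
print(f"shrinkage factor = {shrink:.3f}")
```

The shrinkage factor lies strictly between 0 and 1: noisier group means (small n_per or large residual variance) are pulled more strongly toward the grand mean.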
Emittance control in linear colliders
Ruth, R.D.
1991-01-01
Before completing a realistic design of a next-generation linear collider, the authors must first learn the lessons taught by the first generation, the SLC. Given that, they must make designs fault tolerant by including correction and compensation in the basic design. They must also try to eliminate these faults by improved alignment and stability of components. When these two efforts cross, they have a realistic design. The techniques of generation and control of emittance reviewed here provide a foundation for a design which can obtain the necessary luminosity in a next-generation linear collider
Linear contextual modal type theory
Schack-Nielsen, Anders; Schürmann, Carsten
When one implements a logical framework based on linear type theory, for example the Celf system, one is immediately confronted with questions about its equational theory and how to deal with logic variables. In this paper, we propose a linear contextual modal type theory that gives... a mathematical account of the nature of logic variables. Our type theory is conservative over the intuitionistic contextual modal type theory proposed by Nanevski, Pfenning, and Pientka. Our main contributions include a mechanically checked proof of soundness and a working implementation.
Predictors of intractable childhood epilepsy
Malik, M.A.; Ahmed, T.M.
2008-01-01
To determine the prognosis of seizures in epileptic children and identify early predictors of intractable childhood epilepsy. All children (aged 1 month to 16 years) with idiopathic or cryptogenic epilepsy who were treated and followed at the centre during the study period were included. The patients who had marked seizures even after two years of adequate treatment were labeled as intractable epileptics (cases). Children who had had no seizure for more than one year at the last follow-up visit were the controls. Adequate treatment was defined as using at least three anti-epileptic agents, either alone or in combination, with proper compliance and dosage. Records of these patients were reviewed to identify the variables that may be associated with seizure intractability. Of 442 epileptic children, 325 (74%) intractable and 117 (26%) control epileptics were included in the study. Male gender (OR=3.92), seizure onset in infancy, >10 seizures before starting treatment (OR=3.76), myoclonic seizures (OR=1.37), neonatal seizures (OR=3.69), abnormal EEG (OR=7.28), cryptogenic epilepsy (OR=9.69) and head trauma (OR=4.07) were the factors associated with intractable epilepsy. Seizure onset between 5-7 years of age, idiopathic epilepsy, and absence seizures were associated with a favourable prognosis in childhood epilepsy. Intractable childhood epilepsy is expected if certain risk factors such as type, age of onset, gender and cause of epilepsy are found. Early referral of such patients to specialized centres is recommended for prompt and optimal management. (author)
Predictors of Ramadan fasting during pregnancy
Lily A. van Bilsen
2016-12-01
Although the health effects of Ramadan fasting during pregnancy are still unclear, it is important to identify the predictors and motivational factors involved in women’s decision to observe the fast. We investigated these factors in a cross-sectional study of 187 pregnant Muslim women who attended antenatal care visits in the Budi Kemuliaan Hospital, Jakarta, Indonesia. The odds of adherence to fasting were reduced by 4% for every week increase in gestational age during Ramadan [odds ratio (OR) 0.96; 95% confidence interval (CI) 0.92, 1.00; p = 0.06] and increased by 10% for every one-unit increase in women’s prepregnancy body mass index (BMI) (OR 1.10; 95% CI 0.99, 1.23; p = 0.08). Nonparticipation was associated with opposition from husbands (OR 0.34; 95% CI 0.14, 0.82; p = 0.02) and with women’s fear of possible adverse effects of fasting on their own or the baby’s health (OR 0.47; 95% CI 0.22, 1.01; p = 0.05 and OR 0.43; 95% CI 0.21, 0.89; p = 0.02, respectively), although these associations were attenuated in multivariable analysis. Neither age, income, education, employment, parity, experience of morning sickness, nor fasting during pregnancy outside of Ramadan determined fasting during pregnancy. Linear regression analysis within women who fasted showed that the number of days fasted was inversely associated with women’s gestational age, fear of possible adverse effects of fasting on their own or the fetal health, and opposition from husbands. In conclusion, earlier gestational age during Ramadan, husband’s opinion and possibly higher prepregnancy BMI influence women’s adherence to Ramadan fasting during pregnancy. Fear of adverse health effects of Ramadan fasting is common in both fasting and non-fasting pregnant women.
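Odds ratios like those reported above are typically obtained by fitting a logistic regression and exponentiating the coefficient. A sketch with synthetic data follows; the predictor name and effect size are illustrative, not the study's.

```python
import numpy as np

# Logistic regression by Newton-Raphson; OR per unit predictor = exp(beta).
# Synthetic data -- names and effect sizes are hypothetical.
rng = np.random.default_rng(4)
n = 500
gest_week = rng.uniform(0, 10, n)            # gestational age proxy (synthetic)
logit = 0.5 - 0.04 * gest_week               # true log-odds of fasting (assumed)
fasted = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([np.ones(n), gest_week])
beta = np.zeros(2)
for _ in range(25):                          # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (fasted - p)                # score
    H = X.T @ (X * (p * (1 - p))[:, None])   # observed information
    beta = beta + np.linalg.solve(H, grad)

or_week = np.exp(beta[1])
print(f"OR per week = {or_week:.3f}")
```

An OR below 1 here would correspond to the paper's finding that the odds of fasting fall with advancing gestational age.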
Childhood temperament predictors of adolescent physical activity
James A Janssen
2017-01-01
Background: Physical inactivity is a leading cause of mortality worldwide. Many patterns of physical activity (PA) involvement are established early in life. To date, the role of easily identifiable early-life individual predictors of PA, such as childhood temperament, remains relatively unexplored. Here, we tested whether childhood temperamental activity level, high-intensity pleasure, low-intensity pleasure, and surgency predicted engagement in PA patterns 11 years later in adolescence. Methods: Data came from a longitudinal community study (N = 206 participants, 53% female, 70% Caucasian). Parents reported their children's temperamental characteristics using the Child Behavior Questionnaire (CBQ) when children were 4 and 5 years old. Approximately 11 years later, adolescents completed self-reports of PA using the Godin Leisure Time Exercise Questionnaire and the Youth Risk Behavior Survey. Ordered logistic regression, ordinary least squares linear regression, and zero-inflated Poisson regression models were used to predict adolescent PA from childhood temperament. Race, socioeconomic status, and adolescent body mass index were used as covariates. Results: Males with greater childhood temperamental activity level engaged in greater adolescent PA volume (B = .42, SE = .13), and a 1-SD difference in childhood temperamental activity level predicted 29.7% more strenuous adolescent PA per week. Males' high-intensity pleasure predicted higher adolescent PA volume (B = .28, SE = .12). Males' surgency positively predicted more frequent PA (B = .47, SE = .23, OR = 1.61, 95% CI: 1.02, 2.54) and PA volume (B = .31, SE = .12). No predictions from females' childhood temperament to later PA engagement were identified. Conclusions: Childhood temperament may influence the formation of later PA habits, particularly in males. Boys with high temperamental activity level, high intensity
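Zero-inflated Poisson regression, one of the models named in this abstract, mixes a point mass at zero (structural non-participants) with an ordinary Poisson count process. A minimal sketch of the ZIP probability mass function, with hypothetical parameter values chosen for illustration only:

```python
import math

def zip_pmf(k: int, lam: float, pi: float) -> float:
    """P(Y = k) under a zero-inflated Poisson: with probability pi the count
    is a structural zero (e.g., an adolescent who never exercises); otherwise
    the count follows a Poisson(lam) distribution."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson

# Hypothetical: 30% structural zeros, mean of 2 PA sessions/week otherwise.
# The zero probability exceeds the plain Poisson value exp(-2) ~= 0.135.
p0 = zip_pmf(0, lam=2.0, pi=0.3)
print(round(p0, 3))  # 0.395
```

Fitting such a model jointly estimates the inflation probability and the count rate, which is why it suits PA data with many adolescents reporting zero activity.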
Predictors of thallium exposure and its relation with preterm birth.
Jiang, Yangqian; Xia, Wei; Zhang, Bin; Pan, Xinyun; Liu, Wenyu; Jin, Shuna; Huo, Wenqian; Liu, Hongxiu; Peng, Yang; Sun, Xiaojie; Zhang, Hongling; Zhou, Aifen; Xu, Shunqing; Li, Yuanyuan
2018-02-01
Thallium (Tl) is a well-recognized hazardous heavy metal that has been reported to have embryotoxicity and fetotoxicity. However, little is known about its association with preterm birth (PTB) in humans. We aimed to evaluate the predictors of Tl exposure and to assess its relation with PTB. The study population included 7173 mother-infant pairs from a birth cohort in Wuhan, China. Predictors of Tl concentrations were explored using linear regression analyses, and associations of Tl exposure with risk of PTB or gestational age at birth were estimated using logistic regression or generalized linear models. The geometric mean and median of urinary Tl concentrations were 0.28 μg/L (0.55 μg/g creatinine) and 0.29 μg/L (0.53 μg/g creatinine), respectively. We found that maternal urinary Tl concentrations varied by gestational weight gain, educational attainment, and multivitamin and iron supplementation. Women with Tl concentrations higher than 0.80 μg/g creatinine were at higher risk of giving birth prematurely than those with Tl concentrations lower than 0.36 μg/g creatinine [adjusted odds ratio (95% confidence interval (CI)): 1.55 (1.05, 2.27)], and the association was more pronounced in PTB with premature rupture of membranes (PROM) than in PTB without PROM. A 3-fold increase in creatinine-corrected Tl concentrations was associated with a 0.99-day decrease in gestational length (95% CI: -1.36, -0.63). This is the first report on the association between maternal Tl exposure and the risk of PTB. Copyright © 2017 Elsevier Ltd. All rights reserved.
Li, Yanming; Nan, Bin; Zhu, Ji
2015-06-01
We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biological studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in biological functional groups such as genes, pathways, or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large-p-small-n problems, and is flexible in handling complex group structures such as overlapping, nested, or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.
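The sparse group lasso penalty described above combines an elementwise l1 term (removing individual coefficients) with a groupwise l2 term (removing whole groups). Its proximal operator, the building block of the usual fitting algorithms, is elementwise soft-thresholding followed by group shrinkage. A minimal sketch for a single coefficient group, with hypothetical penalty weights; this illustrates the penalty's behavior, not the authors' estimation algorithm:

```python
import numpy as np

def prox_sparse_group_lasso(beta: np.ndarray, lam1: float, lam2: float) -> np.ndarray:
    """Proximal operator of lam1*||b||_1 + lam2*||b||_2 for one group:
    soft-threshold each entry, then shrink the whole group toward zero."""
    soft = np.sign(beta) * np.maximum(np.abs(beta) - lam1, 0.0)  # elementwise l1
    norm = np.linalg.norm(soft)
    if norm <= lam2:
        return np.zeros_like(beta)       # entire group removed
    return (1.0 - lam2 / norm) * soft    # groupwise l2 shrinkage

# A weak group is zeroed out entirely; a strong group is kept but shrunk.
weak = prox_sparse_group_lasso(np.array([0.1, -0.05]), lam1=0.05, lam2=0.2)
strong = prox_sparse_group_lasso(np.array([2.0, -1.5]), lam1=0.05, lam2=0.2)
print(weak)  # [0. 0.]
```

The two-stage structure is what yields sparsity both between and within groups, matching the abstract's claim of removing unimportant groups as well as unimportant coefficients inside important groups.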
Vanilla Technicolor at Linear Colliders
T. Frandsen, Mads; Jarvinen, Matti; Sannino, Francesco
2011-01-01
We analyze the reach of Linear Colliders (LCs) for models of dynamical electroweak symmetry breaking. We show that LCs can efficiently test the compositeness scale, identified with the mass of the new spin-one resonances, up to the maximum center-of-mass energy of the colliding leptons. In ...
Variational linear algebraic equations method
Moiseiwitsch, B.L.
1982-01-01
A modification of the linear algebraic equations method is described which ensures a variational bound on the phase shifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)
Feedback Systems for Linear Colliders
1999-01-01
Feedback systems are essential for stable operation of a linear collider, providing a cost-effective method for relaxing tight tolerances. In the Stanford Linear Collider (SLC), feedback controls beam parameters such as trajectory, energy, and intensity throughout the accelerator. A novel dithering optimization system which adjusts final focus parameters to maximize luminosity contributed to achieving record performance in the 1997-98 run. Performance limitations of the steering feedback have been investigated, and improvements have been made. For the Next Linear Collider (NLC), extensive feedback systems are planned as an integral part of the design. Feedback requirements for JLC (the Japanese Linear Collider) are essentially identical to NLC; some of the TESLA requirements are similar but there are significant differences. For NLC, algorithms which incorporate improvements upon the SLC implementation are being prototyped. Specialized systems for the damping rings, rf and interaction point will operate at high bandwidth and fast response. To correct for the motion of individual bunches within a train, both feedforward and feedback systems are planned. SLC experience has shown that feedback systems are an invaluable operational tool for decoupling systems, allowing precision tuning, and providing pulse-to-pulse diagnostics. Feedback systems for the NLC will incorporate the key SLC features and the benefits of advancing technologies