WorldWideScience

Sample records for linear factor analysis

  1. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
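
    The attenuation effect described above is easy to reproduce in a few lines. The sketch below is not from the paper; it simulates two latent variables with a true correlation of 0.6, adds random measurement error of increasing size (the reliability values are made up), and compares the observed correlation with the classical attenuation prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True scores: two latent variables with a true correlation of 0.6.
true_x = rng.normal(size=n)
true_y = 0.6 * true_x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

# Observed scores = true score + random measurement error.
# Larger error_sd means lower reliability of the measurement.
for error_sd in (0.0, 0.5, 1.0):
    obs_x = true_x + error_sd * rng.normal(size=n)
    obs_y = true_y + error_sd * rng.normal(size=n)
    r_obs = np.corrcoef(obs_x, obs_y)[0, 1]
    # Classical attenuation formula: r_obs ~ r_true * sqrt(rel_x * rel_y).
    reliability = 1.0 / (1.0 + error_sd**2)
    print(f"error_sd={error_sd:.1f}  observed r={r_obs:.3f}  "
          f"attenuation prediction={0.6 * reliability:.3f}")
```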

  2. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi...

  3. Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors

    Science.gov (United States)

    Chen, Liangyuan

    2018-03-01

    The analysis of dynamic characteristics of the mechatronics system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influencing factors, taking into account the nonlinear gas force load, were analyzed. Then, a simple linearized model was set up to analyze the influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness, and driving force as examples of design parameter variation. The simulation results show that the stroke can be set by adjusting the excitation amplitude and frequency, that the equilibrium position can be adjusted through the DC input, and that, for the most efficient operation, the operating frequency must always equal the resonance frequency.

  4. Identification of noise in linear data sets by factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, Ph.K.

    1982-01-01

    A technique which has the ability to identify bad data points, after the data have been generated, is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it in order to confidently isolate errors. (author)

  5. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  7. Linear Accelerator Stereotactic Radiosurgery of Central Nervous System Arteriovenous Malformations: A 15-Year Analysis of Outcome-Related Factors in a Single Tertiary Center.

    Science.gov (United States)

    Thenier-Villa, José Luis; Galárraga-Campoverde, Raúl Alejandro; Martínez Rolán, Rosa María; De La Lama Zaragoza, Adolfo Ramón; Martínez Cueto, Pedro; Muñoz Garzón, Víctor; Salgado Fernández, Manuel; Conde Alonso, Cesáreo

    2017-07-01

    Linear accelerator stereotactic radiosurgery is one of the modalities available for the treatment of central nervous system arteriovenous malformations (AVMs). The aim of this study was to describe our 15-year experience with this technique in a single tertiary center and to analyze outcome-related factors. From 1998 to 2013, 195 patients were treated with linear accelerator-based radiosurgery; we conducted a retrospective study collecting patient- and AVM-related variables. Treatment outcomes were obliteration, posttreatment hemorrhage, symptomatic radiation-induced changes, and 3-year neurologic status. We also analyzed prognostic factors for each outcome and the predictive ability of 5 scales: Spetzler-Martin grade, Lawton-Young supplementary and Lawton combined scores, radiosurgery-based AVM score, Virginia Radiosurgery AVM Scale, and Heidelberg score. Overall obliteration rate was 81%. Nidus diameter and venous drainage were significant predictors of obliteration. Linear accelerator-based radiosurgery is a useful, valid, effective, and safe modality for the treatment of brain AVMs. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Linear Algebraic Method for Non-Linear Map Analysis

    International Nuclear Information System (INIS)

    Yu, L.; Nash, B.

    2009-01-01

    We present a newly developed method to analyze some non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition which is widely used in conventional linear algebra.
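
    As a minimal illustration of the matrix viewpoint described above (not the authors' code), the sketch below linearizes the Hénon map at its fixed point and examines the eigenvalues of the Jacobian; for this non-defective 2x2 case the eigen-decomposition coincides with the Jordan decomposition. The parameter values a = 1.4, b = 0.3 are the standard ones, assumed here for illustration.

```python
import numpy as np

a, b = 1.4, 0.3  # standard Henon map parameters, assumed for illustration

# Fixed point of the map x' = 1 - a*x^2 + y, y' = b*x.
x_fp = (-(1 - b) + np.sqrt((1 - b)**2 + 4 * a)) / (2 * a)
y_fp = b * x_fp

# Jacobian of the map evaluated at the fixed point.
J = np.array([[-2 * a * x_fp, 1.0],
              [b, 0.0]])

# Eigen-decomposition (the Jordan decomposition of this non-defective
# 2x2 matrix); |lambda| > 1 marks the locally unstable direction.
eigvals, eigvecs = np.linalg.eig(J)
print("fixed point :", (round(x_fp, 4), round(y_fp, 4)))
print("eigenvalues :", eigvals)
print("moduli      :", np.abs(eigvals))
```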

  9. Linear model analysis of the influencing factors of boar longevity in Southern China.

    Science.gov (United States)

    Wang, Chao; Li, Jia-Lian; Wei, Hong-Kui; Zhou, Yuan-Fei; Jiang, Si-Wen; Peng, Jian

    2017-04-15

    This study aimed to investigate the factors influencing the boar herd life month (BHLM) in Southern China. A total of 1630 records of culled boars from nine artificial insemination centers were collected from January 2013 to May 2016. A logistic regression model and two linear models were used to analyze the effects of breed, housing type, age at herd entry, and seed stock herd on boar removal reason and BHLM, respectively. Boar breed and age at herd entry had significant effects on the removal reasons. The two linear models (with and without removal reason included) showed that boars raised individually in stalls exhibited a shorter BHLM than those raised in pens, and that BHLM was also related to age at introduction. Copyright © 2017. Published by Elsevier Inc.

  10. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    Science.gov (United States)

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
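
    The abstract does not give the updating algorithm itself; as a point of reference, the sketch below solves the same equality-constrained least squares problem (minimize ||Ax - b|| subject to Bx = d) with the standard null-space method built on a single QR factorization. The matrices are random stand-ins.

```python
import numpy as np

def lse_nullspace(A, b, B, d):
    """Solve min ||A x - b||_2 subject to B x = d by the null-space method."""
    p, n = B.shape
    # Full QR of B^T: Q splits into the range (Q1) and null space (Q2) of B.
    Q, R = np.linalg.qr(B.T, mode="complete")
    Q1, Q2 = Q[:, :p], Q[:, p:]
    # Component of x that satisfies the constraints exactly.
    x_particular = Q1 @ np.linalg.solve(R[:p, :].T, d)
    # Unconstrained least squares for the remaining null-space component.
    y2, *_ = np.linalg.lstsq(A @ Q2, b - A @ x_particular, rcond=None)
    return x_particular + Q2 @ y2

# Random stand-in problem; check only that the constraints hold.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5)); b = rng.normal(size=20)
B = rng.normal(size=(2, 5));  d = rng.normal(size=2)
x = lse_nullspace(A, b, B, d)
print("constraint residual ||Bx - d|| =", np.linalg.norm(B @ x - d))
```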

  11. A comparison between linear and non-linear analysis of flexible pavements

    Energy Technology Data Exchange (ETDEWEB)

    Soleymani, H.R.; Berthelot, C.F.; Bergan, A.T. [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Mechanical Engineering

    1995-12-31

    Computer pavement analysis programs, which are based on mathematical simulation models, were compared. The programs included in the study were ELSYM5, an Elastic Linear (EL) pavement analysis program, and MICH-PAVE, a Finite Element Non-Linear (FENL) and Finite Element Linear (FEL) pavement analysis program. To perform the analysis, different tire pressures, pavement material properties and asphalt layer thicknesses were selected. Evaluation criteria used in the analysis were the tensile strain at the bottom of the asphalt layer, the vertical compressive strain at the top of the subgrade, and the surface displacement. Results showed that the FENL method predicted more strain and surface deflection than the FEL and EL analysis methods. Analyzing pavements with FEL does not offer many advantages over the EL method. Differences in predicted strains between the three methods of analysis were in some cases found to be close to 100%. It was suggested that these programs require more calibration and validation, both theoretically and empirically, to accurately correlate with field observations. 19 refs., 4 tabs., 9 figs.

  12. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    Science.gov (United States)

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by using an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method, described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method, described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). In the H-group, 47% had pulmonary hypertension that was completely reversible once euthyroidism was obtained. The factors causing pulmonary hypertension were identified: previously known factors - level of free thyroxine, pulmonary vascular resistance, cardiac output; new factors identified in this study - pretreatment period, age, systolic blood pressure. According to the four criteria and to clinical judgment, we consider that the polynomial model (graphically, parabola-type) is better than the linear one. The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is given by a polynomial equation of second degree.
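
    The model-comparison criteria mentioned above (coefficient of determination and AIC) can be illustrated on synthetic data; the patient data are not reproduced here, so the sketch simply fits first- and second-degree polynomials to a curved response and reports both criteria.

```python
import numpy as np

# Synthetic stand-in data; the study's patient measurements are not reproduced.
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)                              # hypothetical predictor
y = 0.4 * (x - 5)**2 + 20 + rng.normal(0, 1.5, x.size)  # curved response

def fit_poly(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = float(resid @ resid)
    n, k = y.size, degree + 1                 # k estimated coefficients
    r2 = 1.0 - rss / float(((y - y.mean())**2).sum())
    aic = n * np.log(rss / n) + 2 * k         # Gaussian log-likelihood form
    return r2, aic

for degree, label in ((1, "linear model"), (2, "polynomial model (degree 2)")):
    r2, aic = fit_poly(x, y, degree)
    print(f"{label:28s}  R^2 = {r2:.3f}   AIC = {aic:.1f}")
# Higher R^2 (criterion 1) and lower AIC (criterion 3) favour the better model.
```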

  13. CFORM- LINEAR CONTROL SYSTEM DESIGN AND ANALYSIS: CLOSED FORM SOLUTION AND TRANSIENT RESPONSE OF THE LINEAR DIFFERENTIAL EQUATION

    Science.gov (United States)

    Jamison, J. W.

    1994-01-01

    CFORM was developed by the Kennedy Space Center Robotics Lab to assist in linear control system design and analysis using closed form and transient response mechanisms. The program computes the closed form solution and transient response of a linear (constant coefficient) differential equation. CFORM allows a choice of three input functions: the Unit Step (a unit change in displacement); the Ramp function (step velocity); and the Parabolic function (step acceleration). It is only accurate in cases where the differential equation has distinct roots, and does not handle the case for roots at the origin (s=0). Initial conditions must be zero. Differential equations may be input to CFORM in two forms - polynomial and product of factors. In some linear control analyses, it may be more appropriate to use a related program, Linear Control System Design and Analysis (KSC-11376), which uses root locus and frequency response methods. CFORM was written in VAX FORTRAN for a VAX 11/780 under VAX VMS 4.7. It has a central memory requirement of 30K. CFORM was developed in 1987.

  14. [Multiple linear regression and ROC curve analysis of the factors of lumbar spine bone mineral density].

    Science.gov (United States)

    Zhang, Xiaodong; Zhao, Yinxia; Hu, Shaoyong; Hao, Shuai; Yan, Jiewen; Zhang, Lingyan; Zhao, Jing; Li, Shaolin

    2015-09-01

    To investigate the correlation between lumbar vertebra bone mineral density (BMD) and age, gender, height, weight, body mass index, waistline, hipline, bone marrow and abdominal fat, and to explore the key factor affecting BMD. A total of 72 cases were randomly recruited. All the subjects underwent a spectroscopic examination of the third lumbar vertebra with the single-voxel method in a 1.5T MR scanner. Lipid fractions (FF%) were measured. Quantitative CT was also performed to obtain the BMD of L3 and the corresponding abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). The statistical analyses were performed with SPSS 19.0. Multiple linear regression showed that, apart from age and FF%, none of the factors had a significant effect on BMD (P>0.05). The correlation of age and FF% with BMD was statistically significant and negative (r=-0.830, -0.521, P<0.05). The ROC curve analysis showed that the sensitivity and specificity of predicting osteoporosis were 81.8% and 86.9%, with a threshold of 58.5 years of age, and that the sensitivity and specificity of predicting osteoporosis were 90.9% and 55.7%, with a threshold of 52.8% for FF%. The lumbar vertebra BMD was significantly and negatively correlated with age and bone marrow FF%, but it was not significantly correlated with gender, height, weight, BMI, waistline, hipline, SAT or VAT. Age was the critical factor.

  15. Advanced analysis technique for the evaluation of linear alternators and linear motors

    Science.gov (United States)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  16. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    Science.gov (United States)

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.

  17. Domination spaces and factorization of linear and multilinear ...

    African Journals Online (AJOL)

    It is well known that not every summability property for multilinear operators leads to a factorization theorem. In this paper we undertake a detailed study of factorization schemes for summing linear and nonlinear operators. Our aim is to integrate under the same theory a wide family of classes of mappings for which a Pietsch ...

  18. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...

  19. Analysis of γ spectra in airborne radioactivity measurements using multiple linear regressions

    International Nuclear Information System (INIS)

    Bao Min; Shi Quanlin; Zhang Jiamei

    2004-01-01

    This paper describes the calculation of the net peak counts of the nuclide 137Cs at 662 keV in γ spectra from airborne radioactivity measurements using multiple linear regression. A mathematical model is constructed by analyzing every factor that contributes to the Cs peak counts in the spectra, and a multiple linear regression function is established. The calculation uses stepwise regression, and non-significant factors are eliminated by an F-test. The regression results and their uncertainty are calculated using least squares estimation, from which the net counts of the Cs peak and their uncertainty are obtained. Analysis results for an experimental spectrum are presented. The influence of energy shift and energy resolution on the result is discussed. In comparison with the spectrum-stripping method, the multiple linear regression method needs no stripping ratios, the result depends only on the counts in the Cs peak, and the calculated uncertainty is reduced. (authors)
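
    A generic sketch of the least-squares step is given below. It is not the authors' spectrum model: the design matrix columns (constant background, linear background slope, a Gaussian Cs-137 peak template) and all numbers are assumptions chosen only to show how the net peak counts and their uncertainty fall out of the regression.

```python
import numpy as np

# Hypothetical design matrix for channel counts around 662 keV:
# column 0 = constant background, column 1 = background slope,
# column 2 = Cs-137 peak template (all shapes assumed, for illustration only).
channels = np.arange(30, dtype=float)
peak_template = np.exp(-0.5 * ((channels - 15.0) / 3.0) ** 2)
X = np.column_stack([np.ones_like(channels), channels, peak_template])

true_beta = np.array([50.0, 0.5, 200.0])
rng = np.random.default_rng(3)
y = rng.poisson(X @ true_beta).astype(float)      # simulated spectrum counts

# Least-squares estimate and its covariance sigma^2 (X^T X)^{-1}.
beta, rss, rank, sv = np.linalg.lstsq(X, y, rcond=None)
sigma2 = rss[0] / (y.size - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

net_counts = beta[2] * peak_template.sum()        # net counts under the peak
net_unc = np.sqrt(cov[2, 2]) * peak_template.sum()
print(f"net Cs-137 peak counts = {net_counts:.0f} +/- {net_unc:.0f}")
```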

  20. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    Science.gov (United States)

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-09-27

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to deal with this hierarchically structured data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalences in the four residence groups - urban, second city, suburban, and town/rural - were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed a significant residence effect. Regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions.

  1. Non-linear time series analysis on flow instability of natural circulation under rolling motion condition

    International Nuclear Information System (INIS)

    Zhang, Wenchao; Tan, Sichao; Gao, Puzhen; Wang, Zhanwei; Zhang, Liansheng; Zhang, Hong

    2014-01-01

    Highlights: • Natural circulation flow instabilities in rolling motion are studied. • The method of non-linear time series analysis is used. • The non-linear evolution characteristics of flow instability are analyzed. • Irregular complex flow oscillations are chaotic oscillations. • The effect of the rolling parameters on the threshold of chaotic oscillation is studied. - Abstract: Non-linear characteristics of natural circulation flow instabilities under rolling motion conditions were studied by the method of non-linear time series analysis. Experimental flow time series for different dimensionless powers and rolling parameters were analyzed based on phase space reconstruction theory. Attractors were reconstructed in phase space, and the geometric invariants, including correlation dimension, Kolmogorov entropy and largest Lyapunov exponent, were determined. The non-linear characteristics of natural circulation flow instabilities under rolling motion were then assessed from the geometric invariant analysis. The results indicated that the values of the geometric invariants first increase and then decrease as the dimensionless power increases, which indicates that the non-linear characteristics of the system first strengthen and then weaken. The irregular complex flow oscillation is a typical chaotic oscillation, at which the geometric invariants reach their maxima. The threshold of chaotic oscillation becomes larger as the rolling frequency or rolling amplitude increases. The main factors that influence the non-linear characteristics of the natural circulation system under rolling motion are the thermal driving force, the flow resistance and the additional forces caused by rolling motion. The non-linear characteristics of the system change with the dimensionless power and rolling parameters because the feedback and the degree of coupling among these influencing factors change.
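
    Phase space reconstruction and the correlation dimension mentioned above can be sketched in a few lines. The example below is not the authors' analysis; it uses a logistic-map series as a stand-in signal, builds a time-delay embedding, and estimates the slope of the Grassberger-Procaccia correlation sum.

```python
import numpy as np

# Stand-in chaotic signal (logistic map); the experimental flow series
# from the paper are not reproduced here.
N = 600
x = np.empty(N); x[0] = 0.4
for i in range(N - 1):
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

def delay_embed(series, dim, tau):
    """Phase-space reconstruction by time-delay embedding."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(points, r):
    """Grassberger-Procaccia correlation sum C(r), excluding self-pairs."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    return (np.sum(d < r) - n) / (n * (n - 1))

Y = delay_embed(x, dim=3, tau=1)
radii = np.logspace(-2, -0.5, 6)
C = np.array([correlation_sum(Y, r) for r in radii])
# The slope of log C(r) versus log r estimates the correlation dimension.
slope = np.polyfit(np.log(radii), np.log(C), 1)[0]
print("estimated correlation dimension ~", round(slope, 2))
```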

  2. Perturbation analysis of linear control problems

    International Nuclear Information System (INIS)

    Petkov, Petko; Konstantinov, Mihail

    2017-01-01

    The paper presents a brief overview of the technique of splitting operators, proposed by the authors and intended for perturbation analysis of control problems involving unitary and orthogonal matrices. Combined with the technique of Lyapunov majorants and the implementation of the Banach and Schauder fixed point principles, it allows one to obtain rigorous non-local perturbation bounds for a set of sensitivity analysis problems. Among them are the reduction of linear systems into orthogonal canonical forms, the feedback synthesis problem and the pole assignment problem in particular, as well as other important problems in control theory and linear algebra. Key words: perturbation analysis, canonical forms, feedback synthesis

  3. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    Science.gov (United States)

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.
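
    A rough sketch of the linear-versus-nonlinear comparison workflow is shown below. It is not the authors' model: the descriptor matrix is synthetic, projection pursuit regression is omitted (it has no standard scikit-learn implementation), and only MLR and a small MLP are compared by cross-validated R².

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the descriptor matrix (the 107-pesticide BCF data
# set and DRAGON descriptors are not reproduced here).
X, y = make_regression(n_samples=107, n_features=10, noise=5.0, random_state=0)

models = {
    "MLR": LinearRegression(),
    "MLP NN": make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,),
                                         max_iter=5000, random_state=0)),
}
for name, model in models.items():
    q2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:7s} cross-validated R^2 = {q2.mean():.3f} +/- {q2.std():.3f}")
```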

  4. Seismic analysis of equipment system with non-linearities such as gap and friction using equivalent linearization method

    International Nuclear Information System (INIS)

    Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.

    1989-01-01

    Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately for the evaluation of the aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques to evaluate non-linear responses, provided that errors to a certain extent are tolerated, because it offers greater simplicity in analysis and economy in computing time than non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluate the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities

  5. Common pitfalls in statistical analysis: Linear regression analysis

    Directory of Open Access Journals (Sweden)

    Rakesh Aggarwal

    2017-01-01

    In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis.

  6. Orthogonal sparse linear discriminant analysis

    Science.gov (United States)

    Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun

    2018-03-01

    Linear discriminant analysis (LDA) is a linear feature extraction approach, and it has received much attention. On the basis of LDA, researchers have done a lot of research work on it, and many variant versions of LDA were proposed. However, the inherent problem of LDA cannot be solved very well by the variant methods. The major disadvantages of the classical LDA are as follows. First, it is sensitive to outliers and noises. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k nearest neighbour graph is first constructed to preserve the locality discriminant information of sample points. Then, L2,1-norm constraint on the projection matrix is used to act as loss function, which can make the proposed method robust to outliers in data points. Extensive experiments have been performed on several standard public image databases, and the experiment results demonstrate the performance of the proposed OSLDA algorithm.

  7. Calculation of elastic-plastic strain ranges for fatigue analysis based on linear elastic stresses

    International Nuclear Information System (INIS)

    Sauer, G.

    1998-01-01

    Fatigue analysis requires that the maximum strain ranges be known. These strain ranges are generally computed from linear elastic analysis. The elastic strain ranges are enhanced by a factor Ke to obtain the total elastic-plastic strain range. The reliability of the fatigue analysis depends on the quality of this factor. Formulae for calculating the Ke factor are proposed. A beam is introduced as a computational model for determining the elastic-plastic strains. The beam is loaded by the elastic stresses of the real structure. The elastic-plastic strains of the beam are compared with the beam's elastic strains. This comparison furnishes explicit expressions for the Ke factor. The Ke factor is tested by means of seven examples. (orig.)

  8. Factors affecting the HIV/AIDS epidemic: An ecological analysis of ...

    African Journals Online (AJOL)

    Factors affecting the HIV/AIDS epidemic: An ecological analysis of global data. ... Backward multiple linear regression analysis identified the proportion of Muslims, physician density, and adolescent fertility rate as the three most prominent factors linked with the national HIV epidemic. Conclusions: The findings support ...

  9. Log Linear Models for Religious and Social Factors affecting the practice of Family Planning Methods in Lahore, Pakistan

    Directory of Open Access Journals (Sweden)

    Farooq Ahmad

    2006-01-01

    This is a cross-sectional study based on 304 households (couples with wives aged less than 48 years) chosen from an urban locality (Lahore city). Fourteen religious, demographic and socio-economic factors of a categorical nature, such as husband's education, wife's education, husband's monthly income, occupation of husband, household size, husband-wife discussion, number of living children, desire for more children, duration of marriage, present age of wife, age of wife at marriage, offering of prayers, political view, and religious decision-making, were taken to understand acceptance of family planning. Multivariate log-linear analysis was applied to identify association patterns and interrelationships among the factors. A logit model was applied to explore the relationship between the predictor factors and the dependent factor, and to identify the factors upon which acceptance of family planning most strongly depends. The log-linear analysis demonstrated that preference for contraceptive use was consistently associated with husband-wife discussion, desire for more children, number of children, political view and duration of married life, while husband's monthly income, occupation of husband, age of wife at marriage and offering of prayers provided no statistical explanation of the adoption of family planning methods.

  10. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to

  11. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an

  12. An introduction to linear ordinary differential equations using the impulsive response method and factorization

    CERN Document Server

    Camporesi, Roberto

    2016-01-01

    This book presents a method for solving linear ordinary differential equations based on the factorization of the differential operator. The approach for the case of constant coefficients is elementary, and only requires a basic knowledge of calculus and linear algebra. In particular, the book avoids the use of distribution theory, as well as the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The case of variable coefficients is addressed using Mammana’s result for the factorization of a real linear ordinary differential operator into a product of first-order (complex) factors, as well as a recent generalization of this result to the case of complex-valued coefficients.
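
    A short worked example of the factorization idea, assuming the operator D² - 3D + 2 = (D - 1)(D - 2) and the right-hand side exp(3x) purely for illustration: the second-order equation is solved as a chain of two first-order equations and cross-checked against a direct solution with SymPy.

```python
import sympy as sp

x = sp.symbols("x")
y, u = sp.Function("y"), sp.Function("u")
f = sp.exp(3 * x)                      # assumed right-hand side

# Step 1: solve (D - 2) u = f, a first-order linear equation.
u_sol = sp.dsolve(sp.Eq(u(x).diff(x) - 2 * u(x), f), u(x)).rhs
u_sol = u_sol.subs(sp.Symbol("C1"), sp.Symbol("K1"))   # rename its constant

# Step 2: solve (D - 1) y = u, another first-order linear equation.
y_sol = sp.simplify(sp.dsolve(sp.Eq(y(x).diff(x) - y(x), u_sol), y(x)).rhs)
print("factorized solution:", y_sol)

# Cross-check against solving y'' - 3 y' + 2 y = f directly.
direct = sp.dsolve(sp.Eq(y(x).diff(x, 2) - 3 * y(x).diff(x) + 2 * y(x), f), y(x))
print("direct solution:    ", direct.rhs)
```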

  13. Applied linear algebra and matrix analysis

    CERN Document Server

    Shores, Thomas S

    2018-01-01

    In its second edition, this textbook offers a fresh approach to matrix and linear algebra. Its blend of theory, computational exercises, and analytical writing projects is designed to highlight the interplay between these aspects of an application. This approach places special emphasis on linear algebra as an experimental science that provides tools for solving concrete problems. The second edition’s revised text discusses applications of linear algebra like graph theory and network modeling methods used in Google’s PageRank algorithm. Other new materials include modeling examples of diffusive processes, linear programming, image processing, digital signal processing, and Fourier analysis. These topics are woven into the core material of Gaussian elimination and other matrix operations; eigenvalues, eigenvectors, and discrete dynamical systems; and the geometrical aspects of vector spaces. Intended for a one-semester undergraduate course without a strict calculus prerequisite, Applied Linear Algebra and M...
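
    As a hedged illustration of the PageRank application mentioned above (not taken from the book), the sketch below runs the power iteration on a tiny made-up link graph with the usual damping factor of 0.85.

```python
import numpy as np

# Tiny made-up link graph: A[i, j] = 1 if page j links to page i.
A = np.array([[0, 0, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 1, 0, 0]], dtype=float)

M = A / A.sum(axis=0)          # column-stochastic transition matrix
d = 0.85                       # damping factor
n = M.shape[0]
rank = np.full(n, 1.0 / n)     # uniform starting vector

# Power iteration on the Google matrix G = d*M + (1 - d)/n * ones.
for _ in range(100):
    rank = d * (M @ rank) + (1.0 - d) / n

print("PageRank scores:", np.round(rank, 3), " sum =", round(rank.sum(), 3))
```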

  14. MDCT linear and volumetric analysis of adrenal glands: Normative data and multiparametric assessment

    International Nuclear Information System (INIS)

    Carsin-Vu, Aline; Mule, Sebastien; Janvier, Annaelle; Hoeffel, Christine; Oubaya, Nadia; Delemer, Brigitte; Soyer, Philippe

    2016-01-01

    To study linear and volumetric adrenal measurements, their reproducibility, and correlations between total adrenal volume (TAV) and adrenal micronodularity, age, gender, body mass index (BMI), visceral (VAAT) and subcutaneous adipose tissue volume (SAAT), presence of diabetes, chronic alcoholic abuse and chronic inflammatory disease (CID). We included 154 patients (M/F, 65/89; mean age, 57 years) undergoing abdominal multidetector row computed tomography (MDCT). Two radiologists prospectively independently performed adrenal linear and volumetric measurements with semi-automatic software. Inter-observer reliability was studied using inter-observer correlation coefficient (ICC). Relationships between TAV and associated factors were studied using bivariate and multivariable analysis. Mean TAV was 8.4 ± 2.7 cm³ (3.3-18.7 cm³). ICC was excellent for TAV (0.97; 95 % CI: 0.96-0.98) and moderate to good for linear measurements. TAV was significantly greater in men (p < 0.0001), alcoholics (p = 0.04), diabetics (p = 0.0003) and those with micronodular glands (p = 0.001). TAV was lower in CID patients (p = 0.0001). TAV correlated positively with VAAT (r = 0.53, p < 0.0001), BMI (r = 0.42, p < 0.0001), SAAT (r = 0.29, p = 0.0003) and age (r = 0.23, p = 0.005). Multivariable analysis revealed gender, micronodularity, diabetes, age and BMI as independent factors influencing TAV. Adrenal gland MDCT-based volumetric measurements are more reproducible than linear measurements. Gender, micronodularity, age, BMI and diabetes independently influence TAV. (orig.)

  15. Non-linear seismic analysis of structures coupled with fluid

    International Nuclear Information System (INIS)

    Descleve, P.; Derom, P.; Dubois, J.

    1983-01-01

    This paper presents a method to calculate non-linear structure behaviour under horizontal and vertical seismic excitation, making possible the full non-linear seismic analysis of a reactor vessel. A pseudo forces method is used to introduce non linear effects and the problem is solved by superposition. Two steps are used in the method: - Linear calculation of the complete model. - Non linear analysis of thin shell elements and calculation of seismic induced pressure originating from linear and non linear effects, including permanent loads and thermal stresses. Basic aspects of the mathematical formulation are developed. It has been applied to axi-symmetric shell element using a Fourier series solution. For the fluid interaction effect, a comparison is made with a dynamic test. In an example of application, the displacement and pressure time history are given. (orig./GL)

  16. Linear discriminant analysis for welding fault detection

    International Nuclear Information System (INIS)

    Li, X.; Simpson, S.W.

    2010-01-01

    This work presents a new method for real time welding fault detection in industry based on Linear Discriminant Analysis (LDA). A set of parameters was calculated from one second blocks of electrical data recorded during welding and based on control data from reference welds under good conditions, as well as faulty welds. Optimised linear combinations of the parameters were determined with LDA and tested with independent data. Short arc welds in overlap joints were studied with various power sources, shielding gases, wire diameters, and process geometries. Out-of-position faults were investigated. Application of LDA fault detection to a broad range of welding procedures was investigated using a similarity measure based on Principal Component Analysis. The measure determines which reference data are most similar to a given industrial procedure and the appropriate LDA weights are then employed. Overall, results show that Linear Discriminant Analysis gives an effective and consistent performance in real-time welding fault detection.
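
    The welding data are not available here, so the sketch below only illustrates the workflow described above on synthetic per-block features: fit an LDA discriminant on reference "good" and "faulty" blocks, then classify a new one-second block. Feature values and class locations are invented.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-in features (e.g. per-second summaries of welding voltage
# and current blocks); real reference-weld data are not reproduced here.
rng = np.random.default_rng(4)
good = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(200, 3))
fault = rng.normal(loc=[1.5, -1.0, 0.8], scale=1.2, size=(60, 3))

X = np.vstack([good, fault])
y = np.array([0] * len(good) + [1] * len(fault))   # 0 = good, 1 = faulty

lda = LinearDiscriminantAnalysis().fit(X, y)

# Classify a new one-second block of features.
new_block = np.array([[1.2, -0.7, 0.5]])
print("predicted class   :", lda.predict(new_block)[0])
print("fault probability :", round(lda.predict_proba(new_block)[0, 1], 3))
print("discriminant weights:", np.round(lda.coef_[0], 2))
```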

  17. Incomplete factorization technique for positive definite linear systems

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1980-01-01

    This paper describes a technique for solving the large sparse symmetric linear systems that arise from the application of finite element methods. The technique combines an incomplete factorization method called the shifted incomplete Cholesky factorization with the method of generalized conjugate gradients. The shifted incomplete Cholesky factorization produces a splitting of the matrix A that is dependent upon a parameter α. It is shown that if A is positive definite, then there is some α for which this splitting is possible and that this splitting is at least as good as the Jacobi splitting. The method is shown to be more efficient on a set of test problems than either direct methods or explicit iteration schemes
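
    SciPy does not ship an incomplete Cholesky factorization, so the sketch below substitutes an incomplete LU (spilu) preconditioner, which plays the analogous role for this illustration: the same conjugate gradient solver is run with and without the preconditioner on a sparse symmetric positive definite Laplacian.

```python
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

# Sparse SPD test matrix: 2-D Laplacian on a 50 x 50 grid.
n = 50
I = sparse.identity(n)
T = sparse.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sparse.kron(I, T) + sparse.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization used as a preconditioner (standing in for the
# incomplete Cholesky splitting discussed in the paper).
ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

iters = {"plain": 0, "preconditioned": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x0, _ = spla.cg(A, b, callback=counter("plain"))
x1, _ = spla.cg(A, b, M=M, callback=counter("preconditioned"))
print("iterations:", iters)
print("residual norms:", np.linalg.norm(b - A @ x0), np.linalg.norm(b - A @ x1))
```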

  18. COMPARATIVE STUDY OF THREE LINEAR SYSTEM SOLVER APPLIED TO FAST DECOUPLED LOAD FLOW METHOD FOR CONTINGENCY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Syafii

    2017-03-01

    This paper presents an assessment of fast decoupled load flow computation using three linear system solver schemes. The full matrix version of the fast decoupled load flow based on the XB method is used in this study. The numerical investigations are carried out on small and large test systems. The execution times for small systems such as IEEE 14, 30, and 57 are very short, so the computation times cannot be meaningfully compared for these cases. The other cases, IEEE 118, 300 and TNB 664, produced significant execution speedup. The SuperLU factorization sparse matrix solver has the best performance and speedup of the load flow solution as well as in contingency analysis. The inverse full matrix solver could solve only the IEEE 118 bus test system, in 3.715 seconds, and took too long for the other cases, whereas the SuperLU factorization linear solver solved all test systems, requiring 7.832 seconds for the largest one. Therefore the SuperLU factorization linear solver is a viable alternative for contingency analysis.
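
    SciPy exposes SuperLU through scipy.sparse.linalg.splu; the sketch below (with a made-up matrix, not the paper's load-flow code) shows the factor-once, solve-many usage pattern that makes LU factorization attractive when many right-hand sides must be processed, as in contingency analysis.

```python
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

# Small sparse system standing in for the B'/B'' matrices of a fast
# decoupled load flow (all values are made up for illustration).
A = sparse.csc_matrix(np.array([[ 4.0, -1.0,  0.0, -1.0],
                                [-1.0,  4.0, -1.0,  0.0],
                                [ 0.0, -1.0,  4.0, -1.0],
                                [-1.0,  0.0, -1.0,  4.0]]))

lu = spla.splu(A)              # SuperLU factorization, computed once

# In contingency analysis the same factorization is reused for many
# right-hand sides (mismatch vectors); only the cheap solves are repeated.
for k, rhs in enumerate([np.array([1.0, 0.0, 0.0, 0.0]),
                         np.array([0.0, 2.0, 0.0, -1.0])]):
    x = lu.solve(rhs)
    print(f"case {k}: x = {np.round(x, 4)}")
```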

  19. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Renfu Jia

    2016-01-01

    This paper introduces an integrated approach to identify the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, a classical linear regression model often involves too many explanatory variables, and the linear correlation among variables cannot be eliminated. Linearly correlated variables invalidate the factor analysis results. To overcome this issue and reduce the number of variables, the PCA technique is used in combination with MSR. On this basis, the irrigation water use status in China was analyzed to identify the five major factors that have significant impacts on irrigation water use efficiency. To illustrate the performance of the proposed approach, calculations based on real data were conducted and the results are reported in this paper.
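
    The irrigation data are not reproduced here; the sketch below only illustrates the core idea of combining PCA with a regression step on synthetic collinear predictors, keeping enough components to explain 95% of the variance before the linear fit.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 8 candidate driving factors, several strongly collinear
# (the real irrigation data are not reproduced here).
rng = np.random.default_rng(5)
base = rng.normal(size=(120, 3))
X = np.column_stack([base,
                     0.9 * base[:, 0] + rng.normal(0, 0.1, 120),
                     1.1 * base[:, 1] + rng.normal(0, 0.1, 120),
                     rng.normal(size=(120, 3))])
y = 2.0 * base[:, 0] - 1.0 * base[:, 2] + rng.normal(0, 0.5, 120)

# Principal components remove the collinearity before the regression step.
model = make_pipeline(StandardScaler(), PCA(n_components=0.95),
                      LinearRegression())
model.fit(X, y)
pca = model.named_steps["pca"]
print("components kept:", pca.n_components_)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("in-sample R^2:", round(model.score(X, y), 3))
```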

  20. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    Science.gov (United States)

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
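
    The two recommended schemes can be written down directly. The sketch below uses made-up per-study summaries; the effective sample size follows the usual 4/(1/Ncases + 1/Ncontrols) convention, and the conversion of linear-model effects to the log-odds scale uses the common first-order approximation beta/(phi(1-phi)), where phi is the case fraction. Both are stated here as assumptions rather than as the paper's exact formulas.

```python
import numpy as np

# Per-study summary statistics (all numbers made up for illustration).
z        = np.array([2.1, 1.4, 3.0])           # linear-model association Z-scores
beta_lin = np.array([0.012, 0.008, 0.020])     # linear-model allelic effects
se_lin   = np.array([0.0057, 0.0057, 0.0067])  # their standard errors
n_cases  = np.array([500.0, 1500.0, 800.0])
n_ctrls  = np.array([4500.0, 2000.0, 7200.0])

# Scheme (i): effective-sample-size weighted Z-score meta-analysis.
n_eff = 4.0 / (1.0 / n_cases + 1.0 / n_ctrls)
w = np.sqrt(n_eff)
z_meta = (w * z).sum() / np.sqrt((w**2).sum())

# Scheme (ii): convert linear-model effects to the log-odds scale with the
# first-order approximation beta / (phi * (1 - phi)), phi = case fraction,
# then combine with inverse-variance weights.
phi = n_cases / (n_cases + n_ctrls)
beta_lo, se_lo = beta_lin / (phi * (1 - phi)), se_lin / (phi * (1 - phi))
w_iv = 1.0 / se_lo**2
beta_meta = (w_iv * beta_lo).sum() / w_iv.sum()
se_meta = np.sqrt(1.0 / w_iv.sum())

print(f"sample-size weighted meta Z = {z_meta:.2f}")
print(f"inverse-variance meta log-OR = {beta_meta:.3f} (SE {se_meta:.3f}), "
      f"Z = {beta_meta / se_meta:.2f}")
```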

  1. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.
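
    The linear-quadratic algorithm of the paper is not reproduced here; the sketch below shows only the basic linear NMF unmixing step on synthetic data, factoring a pixels-by-bands matrix into abundances and endmember spectra.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic stand-in for one small zone: 100 pixels, 50 bands, mixed from
# 3 nonnegative endmember spectra (linear mixing only; the linear-quadratic
# case in the paper needs a dedicated NMF variant).
rng = np.random.default_rng(6)
endmembers = rng.random((3, 50))             # endmember spectra (3 x bands)
abundances = rng.dirichlet(np.ones(3), 100)  # per-pixel abundances (100 x 3)
X = abundances @ endmembers + 0.01 * rng.random((100, 50))

# Linear spectral unmixing as a nonnegative factorization X ~ W H.
model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(X)                   # estimated abundances (up to scale)
H = model.components_                        # estimated endmember spectra
print("reconstruction error:", round(model.reconstruction_err_, 4))
print("W shape (pixels x endmembers):", W.shape)
print("H shape (endmembers x bands) :", H.shape)
```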

  2. Analysis of linear energy transfers and quality factors of charged particles produced by spontaneous fission neutrons from 252Cf and 244Pu in the human body

    International Nuclear Information System (INIS)

    Endo, A.; Sato, T.

    2013-01-01

    Absorbed doses, linear energy transfers (LETs) and quality factors of secondary charged particles in organs and tissues, generated via the interactions of the spontaneous fission neutrons from 252Cf and 244Pu within the human body, were studied using the Particle and Heavy Ion Transport Code System (PHITS) coupled with the ICRP Reference Phantom. Both the absorbed doses and the quality factors in target organs generally decrease with increasing distance from the source organ. The analysis of LET distributions of secondary charged particles led to the identification of the relationship between LET spectra and target-source organ locations. A comparison between human body-averaged mean quality factors and fluence-averaged radiation weighting factors showed that the current numerical conventions for the radiation weighting factors of neutrons, updated in ICRP103, and the quality factors for internal exposure are valid. (authors)

  3. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
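
    A minimal mixed-model sketch, assuming synthetic per-subject effect estimates rather than FMRI data and using statsmodels rather than the authors' implementation: a random intercept per subject absorbs within-subject correlation while the condition effect is estimated as a fixed effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in: one effect estimate per subject, run and condition
# (real FMRI beta estimates are not reproduced here).
rng = np.random.default_rng(7)
subjects = np.repeat(np.arange(20), 6)              # 20 subjects x 6 estimates
condition = np.tile([0, 1], 60)                     # within-subject factor
subj_intercept = rng.normal(0, 0.5, 20)[subjects]   # random subject effect
y = 1.0 + 0.3 * condition + subj_intercept + rng.normal(0, 0.4, subjects.size)

df = pd.DataFrame({"y": y, "condition": condition, "subject": subjects})

# A random intercept per subject absorbs within-subject correlation across
# runs/conditions; 'condition' is the fixed effect of interest.
model = smf.mixedlm("y ~ condition", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```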

  4. Lattice Boltzmann methods for global linear instability analysis

    Science.gov (United States)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearization of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point potential limitations particular to the LBM approach. The known issue of appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvements in order to make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  5. Validation of head scatter factor for an Elekta synergy platform linear accelerator

    International Nuclear Information System (INIS)

    Johannes, N.B.

    2013-07-01

    A semi-empirical method has been proposed and developed to model and compute head or collimator scatter factors for 6 and 15 MV photon beams from the Elekta Synergy platform linear accelerator at the radiation oncology centre of 'Sweden-Ghana Medical Centre Limited', East Legon Hills in Accra. The proposed model was based on a two-dimensional Gaussian distribution, whose output was compared to measured head scatter factor data for the linear accelerator obtained during commissioning of the teletherapy machine. The two-dimensional Gaussian distribution model used the physical specifications and configuration of the head unit (collimator system) of the linear accelerator, which were obtained from the user manual provided by the manufacturer of the linear accelerator. The algorithm for the model was implemented using Matlab software in the Microsoft Windows environment. The model was developed for both square and rectangular fields, and the output compared with corresponding measured data. The comparisons for the square fields were used to establish an error term in the Gaussian distribution function. The error term was determined by plotting the difference between the output factors from Matlab and the corresponding measured data as a function of one side of a square field (equivalent square field). The correlation equation of the curve obtained was chosen as the error term, which was incorporated into the Gaussian distribution function. This was repeated for the two photon beam energies (6 and 15 MV). The refined Gaussian distributions were then used to determine head scatter factors for square and rectangular fields. For the rectangular fields, Sterling's formula was used to find the equivalent square fields, which were then used in the error terms of the proposed and developed model. The output of the 2D Gaussian distribution without
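
    The fitted error terms and source parameters are not given in this record, so the sketch below is only a toy version of the two generic ingredients: Sterling's equivalent-square formula and a 2D Gaussian extra-focal source integrated over a rectangular opening, normalized to a 10 cm x 10 cm reference field. The primary fraction and source width are invented numbers.

```python
from math import erf, sqrt

def equivalent_square(x_cm, y_cm):
    """Sterling's formula: side of the square field equivalent to X x Y."""
    return 2.0 * x_cm * y_cm / (x_cm + y_cm)

def gaussian_source_fraction(x_cm, y_cm, sigma_cm=1.5):
    """Fraction of a 2-D Gaussian extra-focal source 'seen' through an
    X x Y opening (sigma_cm is an invented, not measured, source width)."""
    fx = erf(x_cm / (2.0 * sqrt(2.0) * sigma_cm))
    fy = erf(y_cm / (2.0 * sqrt(2.0) * sigma_cm))
    return fx * fy

def head_scatter_factor(x_cm, y_cm, primary=0.92, sigma_cm=1.5):
    """Toy head scatter factor: primary plus Gaussian extra-focal component,
    normalized to the 10 cm x 10 cm reference field."""
    raw = primary + (1.0 - primary) * gaussian_source_fraction(x_cm, y_cm, sigma_cm)
    ref = primary + (1.0 - primary) * gaussian_source_fraction(10.0, 10.0, sigma_cm)
    return raw / ref

for field in [(5, 5), (10, 10), (20, 20), (30, 10)]:
    print(f"{field[0]:>2d} x {field[1]:>2d} cm  "
          f"equivalent square = {equivalent_square(*field):5.2f} cm  "
          f"Sc = {head_scatter_factor(*field):.3f}")
```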

  6. Signals and transforms in linear systems analysis

    CERN Document Server

    Wasylkiwskyj, Wasyl

    2013-01-01

    Signals and Transforms in Linear Systems Analysis covers the subject of signals and transforms, particularly in the context of linear systems theory. Chapter 2 provides the theoretical background for the remainder of the text. Chapter 3 treats Fourier series and integrals. Particular attention is paid to convergence properties at step discontinuities. This includes the Gibbs phenomenon and its amelioration via the Fejer summation techniques. Special topics include modulation and analytic signal representation, Fourier transforms and analytic function theory, time-frequency analysis and frequency dispersion. Fundamentals of linear system theory for LTI analogue systems, with a brief account of time-varying systems, are covered in Chapter 4. Discrete systems are covered in Chapters 6 and 7. The Laplace transform treatment in Chapter 5 relies heavily on analytic function theory as does Chapter 8 on Z-transforms. The necessary background on complex variables is provided in Appendix A. This book is intended to...

  7. Design and Analysis of MEMS Linear Phased Array

    Directory of Open Access Journals (Sweden)

    Guoxiang Fan

    2016-01-01

    A micro-electro-mechanical system (MEMS) linear phased array based on a "multi-cell" element is designed to increase the radiation sound pressure of a transducer working in bending vibration mode at high frequency. In order to predict the resonant frequency of an element more accurately, theoretical analysis of the dynamic equation of a fixed rectangular composite plate and finite element method simulation are adopted. The effects of the parameters in both the lateral and elevation directions on the three-dimensional beam directivity characteristics are comprehensively analyzed. The key parameters in the analysis include the number of "cells" per element, the "cell" size, the "inter-cell" spacing, the number of elements, and the element width. The simulation results show that optimizing the linear array parameters in both the lateral and elevation directions can greatly improve the three-dimensional beam focusing of the MEMS linear phased array, which is clearly different from a traditional linear array.

  8. Non linear stability analysis of parallel channels with natural circulation

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Ashish Mani; Singh, Suneet, E-mail: suneet.singh@iitb.ac.in

    2016-12-01

    Highlights: • Nonlinear instabilities in a natural circulation loop are studied. • Generalized Hopf points, sub- and supercritical Hopf bifurcations are identified. • A Bogdanov–Takens point (BT point) is observed by nonlinear stability analysis. • The effect of parameters on the stability of the system is studied. - Abstract: Linear stability analysis of two-phase flow in a natural circulation loop has been quite extensively studied by many researchers in the past few years. It can be noted that linear stability analysis is limited to small perturbations only. It is pointed out that such systems typically undergo a Hopf bifurcation. If the Hopf bifurcation is subcritical, then for relatively large perturbations the system has unstable limit cycles in the (linearly) stable region of the parameter space. Hence, linear stability analysis capturing only infinitesimally small perturbations is not sufficient. In this paper, bifurcation analysis is carried out to capture the non-linear instability of the dynamical system, and both subcritical and supercritical bifurcations are observed. The regions in the parameter space for which subcritical and supercritical bifurcations exist are identified. These regions are verified by numerical simulation of the time-dependent, nonlinear ODEs for selected points in the operating parameter space using the MATLAB ODE solver.

  9. Form factors in the projected linear chiral sigma model

    International Nuclear Information System (INIS)

    Alberto, P.; Coimbra Univ.; Bochum Univ.; Ruiz Arriola, E.; Fiolhais, M.; Urbano, J.N.; Coimbra Univ.; Goeke, K.; Gruemmer, F.; Bochum Univ.

    1990-01-01

    Several nucleon form factors are computed within the framework of the linear chiral soliton model. To this end variational means and projection techniques applied to generalized hedgehog quark-boson Fock states are used. In this procedure the Goldberger-Treiman relation and a virial theorem for the pion-nucleon form factor are well fulfilled, demonstrating the consistency of the treatment. Both proton and neutron charge form factors are correctly reproduced, as well as the proton magnetic one. The shapes of the neutron magnetic and of the axial form factors are good but their absolute values at the origin are too large. The slopes of all the form factors at zero momentum transfer are in good agreement with the experimental data. The pion-nucleon form factor exhibits to a great extent a monopole shape with a cut-off mass of Λ=690 MeV. Electromagnetic form factors for the vertex γNΔ and the nucleon spin distribution are also evaluated and discussed. (orig.)

  10. Non-linear finite element analysis in structural mechanics

    CERN Document Server

    Rust, Wilhelm

    2015-01-01

    This monograph describes the numerical analysis of non-linearities in structural mechanics, i.e. large rotations, large strain (geometric non-linearities), non-linear material behaviour, in particular elasto-plasticity as well as time-dependent behaviour, and contact. Based on that, the book treats stability problems and limit-load analyses, as well as non-linear equations of a large number of variables. Moreover, the author presents a wide range of problem sets and their solutions. The target audience primarily comprises advanced undergraduate and graduate students of mechanical and civil engineering, but the book may also be beneficial for practising engineers in industry.

  11. Analysis of Linear Hybrid Systems in CLP

    DEFF Research Database (Denmark)

    Banda, Gourinath; Gallagher, John Patrick

    2009-01-01

    In this paper we present a procedure for representing the semantics of linear hybrid automata (LHAs) as constraint logic programs (CLP); flexible and accurate analysis and verification of LHAs can then be performed using generic CLP analysis and transformation tools. LHAs provide an expressive...

  12. MTF measurement and analysis of linear array HgCdTe infrared detectors

    Science.gov (United States)

    Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi

    2018-01-01

    The slanted-edge technique is the main method for measuring detector MTF; however, this method is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF value of such an infrared detector. This paper presents an ion implantation guard-ring structure which was designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of the linear array detectors with and without a guard-ring were carried out. The experimental results indicated that the ion implantation guard-ring structure effectively suppresses crosstalk and increases the MTF value.
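    For readers unfamiliar with the underlying chain, a minimal Python sketch of the standard slanted-edge computation (edge spread function differentiated to a line spread function, then Fourier transformed) is shown below; it is not the authors' modified method for linear arrays, and the names are illustrative.

        import numpy as np

        def slanted_edge_mtf(esf, pixel_pitch_um):
            """Sketch of the basic slanted-edge chain: ESF -> LSF -> MTF.

            esf            : 1-D oversampled edge spread function (projected and
                             binned along the slanted edge)
            pixel_pitch_um : sampling pitch of the oversampled ESF in micrometres
            """
            esf = np.asarray(esf, dtype=float)
            lsf = np.gradient(esf)                      # line spread function
            lsf = lsf * np.hanning(lsf.size)            # window to reduce noise leakage
            mtf = np.abs(np.fft.rfft(lsf))
            mtf = mtf / mtf[0]                          # normalise to DC
            freq = np.fft.rfftfreq(lsf.size, d=pixel_pitch_um * 1e-3)  # cycles/mm
            return freq, mtf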

  13. Slope Safety Factor Calculations With Non-Linear Yield Criterion Using Finite Elements

    DEFF Research Database (Denmark)

    Clausen, Johan; Damkilde, Lars

    2006-01-01

    The factor of safety for a slope is calculated with the finite element method using a non-linear yield criterion of the Hoek-Brown type. The parameters of the Hoek-Brown criterion are found from triaxial test data. Parameters of the linear Mohr-Coulomb criterion are calibrated to the same triaxial...... are carried out at much higher stress levels than present in a slope failure, this leads to the conclusion that the use of the non-linear criterion leads to a safer slope design...

  14. Linear and non-linear amplification of high-mode perturbations at the ablation front in HiPER targets

    Energy Technology Data Exchange (ETDEWEB)

    Olazabal-Loume, M; Breil, J; Hallo, L; Ribeyre, X [CELIA, UMR 5107 Universite Bordeaux 1-CNRS-CEA, 351 cours de la Liberation, 33405 Talence (France); Sanz, J, E-mail: olazabal@celia.u-bordeaux1.f [ETSI Aeronauticos, Universidad Politecnica de Madrid, Madrid 28040 (Spain)

    2011-01-15

    The linear and non-linear sensitivity of the 180 kJ baseline HiPER target to high-mode perturbations, i.e. surface roughness, is addressed using two-dimensional simulations and a complementary analysis by linear and non-linear ablative Rayleigh-Taylor models. Simulations provide an assessment of an early non-linear stage leading to a significant deformation of the ablation surface for modes of maximum linear growth factor. A design using a picket prepulse evidences an improvement in the target stability inducing a delay of the non-linear behavior. Perturbation evolution and shape, evidenced by simulations of the non-linear stage, are analyzed with existing self-consistent non-linear theory.

  15. Evaluation of beach cleanup effects using linear system analysis.

    Science.gov (United States)

    Kataoka, Tomoya; Hinata, Hirofumi

    2015-02-15

    We established a method for evaluating beach cleanup effects (BCEs) based on a linear system analysis, and investigated factors determining BCEs. Here we focus on two BCEs: decreasing the total mass of toxic metals that could leach into a beach from marine plastics and preventing the fragmentation of marine plastics on the beach. Both BCEs depend strongly on the average residence time of marine plastics on the beach (τ(r)) and the period of temporal variability of the input flux of marine plastics (T). Cleanups on the beach where τ(r) is longer than T are more effective than those where τ(r) is shorter than T. In addition, both BCEs are the highest near the time when the remnants of plastics reach the local maximum (peak time). Therefore, it is crucial to understand the following three factors for effective cleanups: the average residence time, the plastic input period and the peak time. Copyright © 2014 Elsevier Ltd. All rights reserved.
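    As an illustration of the linear-system view described above (and only an illustration, not the paper's actual formulation), the beach stock of plastics can be sketched as a periodic input flux convolved with an exponential impulse response whose e-folding time is the residence time τ(r); the parameter values below are arbitrary.

        import numpy as np

        def beach_stock(times_yr, tau_r_yr, period_yr):
            """First-order linear-system sketch of the beach plastic stock: an
            input flux with period T convolved with an exponential impulse
            response whose e-folding time is the average residence time tau_r."""
            dt = times_yr[1] - times_yr[0]
            flux = 1.0 + np.sin(2.0 * np.pi * times_yr / period_yr)   # assumed input flux
            impulse = np.exp(-times_yr / tau_r_yr)                    # linear-system response
            return np.convolve(flux, impulse)[:times_yr.size] * dt

        t = np.arange(0.0, 20.0, 0.01)
        slow = beach_stock(t, tau_r_yr=5.0, period_yr=1.0)   # tau_r > T: cleanup more effective
        fast = beach_stock(t, tau_r_yr=0.2, period_yr=1.0)   # tau_r < T: cleanup less effective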

  16. The analysis and design of linear circuits

    CERN Document Server

    Thomas, Roland E; Toussaint, Gregory J

    2009-01-01

    The Analysis and Design of Linear Circuits, 6e gives the reader the opportunity to not only analyze, but also design and evaluate linear circuits as early as possible. The text's abundance of problems, applications, pedagogical tools, and realistic examples helps engineers develop the skills needed to solve problems, design practical alternatives, and choose the best design from several competing solutions. Engineers searching for an accessible introduction to resistance circuits will benefit from this book that emphasizes the early development of engineering judgment.

  17. CFD analysis of linear compressors considering load conditions

    Science.gov (United States)

    Bae, Sanghyun; Oh, Wonsik

    2017-08-01

    This paper is a study on computational fluid dynamics (CFD) analysis of a linear compressor considering load conditions. In the conventional CFD analysis of the linear compressor, the load condition was not considered in the behaviour of the piston. In some papers, the behaviour of the piston is assumed to be a sinusoidal motion provided by a user defined function (UDF). In a reciprocating type compressor, the stroke of the piston is restrained by the rod, while the stroke of the linear compressor is not restrained, and the stroke changes depending on the load condition. The greater the pressure difference between the discharge refrigerant and the suction refrigerant, the more the centre point of the stroke is pushed backward, and the behaviour of the piston is not a complete sine wave. For this reason, when the load condition changes in the CFD analysis of the linear compressor, it may happen that the ANSYS code has to be changed or, unfortunately, the modelling has to be changed. In addition, a separate analysis or calculation is required to find a stroke that meets the load condition, which may contain errors. In this study, the coupled mechanical and electrical equations are solved using the UDF, and the behaviour of the piston is solved considering the pressure difference across the piston. Using the above method, the stroke of the piston with respect to the motor specification of the analytical model can be calculated according to the input voltage, and the piston behaviour can be realized considering the thrust due to the pressure difference.

  18. On the null distribution of Bayes factors in linear regression

    Science.gov (United States)

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...

  19. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  20. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton’s cycle for the design of a GCFBR. • Performed detailed sensitivity analysis of a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in a nuclear reactor design. The analysis helps to determine the parameters on which a LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal-hydraulics calculations. A design of a gas cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis and other multivariate variance and collinearity analyses of the data.
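    A generic version of the regression-based sensitivity step described above can be sketched in a few lines; this is an illustration of the idea (standardized coefficients of a linear surrogate fitted to trial data), not the paper's MCNP6/R/Java workflow.

        import numpy as np

        def standardized_sensitivities(X, y):
            """Fit a linear surrogate y ~ X by least squares and return standardized
            regression coefficients as a simple sensitivity ranking of the inputs."""
            X = np.asarray(X, dtype=float)
            y = np.asarray(y, dtype=float)
            Xs = (X - X.mean(axis=0)) / X.std(axis=0)
            ys = (y - y.mean()) / y.std()
            coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(ys)), Xs], ys, rcond=None)
            return coef[1:]   # one standardized coefficient per design parameter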

  1. Comparison of equivalent linear and non linear methods on ground response analysis: case study at West Bangka site

    International Nuclear Information System (INIS)

    Eko Rudi Iswanto; Eric Yee

    2016-01-01

    Within the framework of identifying NPP sites, site surveys are performed in West Bangka (WB), Bangka-Belitung Island Province. Ground response analysis of a potential site has been carried out using peak strain profiles and peak ground acceleration. The objective of this research is to compare the Equivalent Linear (EQL) and Non Linear (NL) methods of ground response analysis on the selected NPP site (West Bangka) using the Deep Soil software. The equivalent linear method is widely used because it requires soil data in a simple form and a short computational time. On the other hand, the non linear method is capable of representing the actual soil behaviour by considering non linear soil parameters. The results showed that the EQL method has trends similar to the NL method. At the surface layer, the acceleration values for the EQL and NL methods are 0.425 g and 0.375 g, respectively. The NL method is more reliable in capturing higher frequencies of spectral acceleration compared to the EQL method. (author)

  2. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    Science.gov (United States)

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32 D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
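    The tutorial demonstrates the approach with SAS; a rough Python analogue using a mixed effects model with a random intercept per subject (one common way to account for inter-eye correlation) might look like the sketch below. The file name and column names are hypothetical.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical long-format data: one row per eye, two rows per subject.
        df = pd.read_csv("refraction_eyes.csv")   # columns: subject_id, cnv_eye, refraction, age, sex

        # A random intercept per subject accounts for inter-eye correlation,
        # analogous to the mixed effects approach described in the tutorial.
        model = smf.mixedlm("refraction ~ cnv_eye + age + sex", data=df,
                            groups=df["subject_id"])
        result = model.fit()
        print(result.summary())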

  3. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, the error analysis has been done for the linear approximate transformation between two tangent planes in the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, so the ...

  4. Controllability analysis of decentralised linear controllers for polymeric fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Maria; Aguado, Joaquin; Ansede, Xavier; Riera, Jordi [Institut de Robotica i Informatica Industrial, Universitat Politecnica de Catalunya - Consejo Superior de Investigaciones Cientificas, C. Llorens i Artigas 4, 08028 Barcelona (Spain)

    2005-10-10

    This work deals with the control of polymeric fuel cells. It includes a linear analysis of the system at different operating points, the comparison and selection of different control structures, and the validation of the controlled system by simulation. The work is based on a complex non linear model which has been linearised at several operating points. The linear analysis tools used are the Morari resiliency index, the condition number, and the relative gain array. These techniques are employed to compare the controllability of the system with different control structures and at different operating conditions. According to the results, the most promising control structures are selected and their performance with PI based diagonal controllers is evaluated through simulations with the complete non linear model. The range of operability of the examined control structures is compared. Conclusions indicate good performance of several diagonal linear controllers. However, very few have a wide operability range. (author)

  5. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. However, their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, named a revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen by using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its end condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. The results of the algorithm analysis and example study show that a proper safety factor parameter, accuracy parameter, and initial interior point of this method may reduce iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
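    As a small illustration of the kind of linear ranking function mentioned above (the paper's own choice may differ), a Yager-type index for a trapezoidal fuzzy number simply averages its four defining points, so that fuzzy comparisons reduce to crisp ones:

        def rank_trapezoidal(a, b, c, d):
            """Linear (Yager-type) ranking index of a trapezoidal fuzzy number
            (a, b, c, d): the larger the index, the 'larger' the fuzzy number."""
            return (a + b + c + d) / 4.0

        def fuzzy_leq(u, v):
            """Compare two trapezoidal fuzzy numbers via the linear ranking function."""
            return rank_trapezoidal(*u) <= rank_trapezoidal(*v)

        # Example: ordering two fuzzy objective coefficients.
        print(fuzzy_leq((1, 2, 3, 4), (2, 3, 4, 5)))   # True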

  6. Linear elastic obstacles: analysis of experimental results in the case of stress dependent pre-exponentials

    International Nuclear Information System (INIS)

    Surek, T.; Kuon, L.G.; Luton, M.J.; Jones, J.J.

    1975-01-01

    For the case of linear elastic obstacles, the analysis of experimental plastic flow data is shown to have a particularly simple form when the pre-exponential factor is a single-valued function of the modulus-reduced stress. The analysis permits the separation of the stress and temperature dependence of the strain rate into those of the pre-exponential factor and the activation free energy. As a consequence, the true values of the activation enthalpy, volume and entropy also are obtained. The approach is applied to four sets of experimental data, including Zr, and the results for the pre-exponential term are examined for self-consistency in view of the assumed functional dependence

  7. Linearized spectrum correlation analysis for line emission measurements.

    Science.gov (United States)

    Nishizawa, T; Nornberg, M D; Den Hartog, D J; Sarff, J S

    2017-08-01

    A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.

  8. A parametric FE modeling of brake for non-linear analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Ibrahim; Fatouh, Yasser [Automotive and Tractors Technology Department, Faculty of Industrial Education, Helwan University, Cairo (Egypt); Aly, Wael [Refrigeration and Air-Conditioning Technology Department, Faculty of Industrial Education, Helwan University, Cairo (Egypt)

    2013-07-01

    A parametric modeling of a drum brake based on 3-D Finite Element Methods (FEM) for non-contact analysis is presented. Many parameters are examined during this study, such as the effect of drum-lining interface stiffness, coefficient of friction, and line pressure on the interface contact. Firstly, the modal analysis of the drum brake is also studied to obtain the natural frequency and instability of the drum, in order to facilitate transforming the modal elements to non-contact elements. It is shown that the Unsymmetric solver of the modal analysis is efficient enough to solve this linear problem after transforming the non-linear behavior of the contact between the drum and the lining into a linear behavior. SOLID45, which is a linear element, is used in the modal analysis and then transferred to non-linear elements, Targe170 and Conta173, which represent the drum and lining for the contact analysis study. The contact analysis problems are highly non-linear and require significant computer resources to solve; moreover, the contact problem presents two significant difficulties. Firstly, the region of contact is not known, as it depends on the boundary conditions such as line pressure and the drum and friction material specifications. Secondly, these contact problems need to take friction into consideration. Finally, the analysis showed a good distribution of the nodal reaction forces on the slotted lining contact surface, and the existence of the slot in the middle of the lining can help in wear removal due to the friction between the lining and the drum. An accurate contact stiffness can give a good representation of the pressure distribution between the lining and the drum. However, full contact of the front part of the slotted lining could occur in cases of 20, 40, 60 and 80 bar of piston pressure, and partial contact between the drum and lining can occur in the rear part of the slotted lining.

  9. Factorization of a class of almost linear second-order differential equations

    International Nuclear Information System (INIS)

    Estevez, P G; Kuru, S; Negro, J; Nieto, L M

    2007-01-01

    A general type of almost linear second-order differential equations, which are directly related to several interesting physical problems, is characterized. The solutions of these equations are obtained using the factorization technique, and their non-autonomous invariants are also found by means of scale transformations

  10. The Linear Time Frequency Analysis Toolbox

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel; Torrésani, Bruno; Balazs, Peter

    2011-01-01

    The Linear Time Frequency Analysis Toolbox is a Matlab/Octave toolbox for computational time-frequency analysis. It is intended both as an educational and computational tool. The toolbox provides the basic Gabor, Wilson and MDCT transform along with routines for constructing windows (filter prototypes) and routines for manipulating coefficients. It also provides a bunch of demo scripts devoted either to demonstrating the main functions of the toolbox, or to exemplify their use in specific signal processing applications. In this paper we describe the used algorithms, their mathematical background...
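    The toolbox itself is Matlab/Octave; purely as an illustration of what a basic Gabor (short-time Fourier) analysis computes, and not of the LTFAT API, a minimal Python sketch follows.

        import numpy as np

        def gabor_coefficients(signal, window, hop):
            """Sketch of a discrete Gabor (short-time Fourier) analysis: slide a
            window across the signal with step 'hop' and take an FFT of each frame."""
            frames = []
            for start in range(0, len(signal) - len(window) + 1, hop):
                frame = signal[start:start + len(window)] * window
                frames.append(np.fft.fft(frame))
            return np.array(frames)   # shape: (time frames, frequency channels)

        # 1 s test tone at an assumed 1 kHz sampling rate.
        x = np.sin(2 * np.pi * 50 * np.arange(0, 1, 1 / 1000.0))
        C = gabor_coefficients(x, np.hanning(128), hop=32)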

  11. Linear stability analysis of flow instabilities with a nodalized reduced order model in heated channel

    International Nuclear Information System (INIS)

    Paul, Subhanker; Singh, Suneet

    2015-01-01

    The prime objective of the presented work is to develop a Nodalized Reduced Order Model (NROM) to carry out linear stability analysis of flow instabilities in a two-phase flow system. The model is developed by dividing the single phase and two-phase regions of a uniformly heated channel into N number of nodes, followed by time dependent spatial linear approximations for single phase enthalpy and two-phase quality between the consecutive nodes. A moving boundary scheme has been adopted in the model, where all the node boundaries vary with time due to the variation of the boiling boundary inside the heated channel. Using a state space approach, the instability thresholds are delineated by stability maps plotted in parameter planes of phase change number (N_pch) and subcooling number (N_sub). The prime feature of the present model is that, though the model equations are simpler due to the presence of linear-linear approximations for single phase enthalpy and two-phase quality, the results are in good agreement with the existing models (Karve [33]; Dokhane [34]), where the model equations run for several pages, and with experimental data (Solberg [41]). Unlike the existing ROMs, different two-phase friction factor multiplier correlations have been incorporated in the model. The applicability of various two-phase friction factor multipliers and their effects on stability behaviour have been depicted by carrying out a comparative study. It is also observed that the Friedel model for friction factor calculations produces the most accurate results with respect to the available experimental data. (authors)

  12. Application of linearized model to the stability analysis of the pressurized water reactor

    International Nuclear Information System (INIS)

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated through the linearization of the nonlinear model. The model simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's First Method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that the linearization methodology is convenient and feasible for stability analysis. (authors)
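    The essential step of Lyapunov's first method can be illustrated with a short sketch: numerically linearize the nonlinear state equations around an equilibrium and inspect the eigenvalues of the resulting Jacobian. The example system is a toy damped oscillator, not the pressurized water reactor model of the paper.

        import numpy as np

        def jacobian(f, x_eq, eps=1e-6):
            """Numerically linearize dx/dt = f(x) around an equilibrium x_eq."""
            n = len(x_eq)
            A = np.zeros((n, n))
            f0 = np.asarray(f(x_eq))
            for j in range(n):
                xp = np.array(x_eq, dtype=float)
                xp[j] += eps
                A[:, j] = (np.asarray(f(xp)) - f0) / eps
            return A

        def stable_by_first_method(f, x_eq):
            """Lyapunov's first (indirect) method: the equilibrium is asymptotically
            stable if all eigenvalues of the linearization have negative real parts."""
            eigvals = np.linalg.eigvals(jacobian(f, x_eq))
            return np.all(eigvals.real < 0)

        # Toy example: damped oscillator x'' + 0.5 x' + x = 0.
        f = lambda s: np.array([s[1], -s[0] - 0.5 * s[1]])
        print(stable_by_first_method(f, np.array([0.0, 0.0])))   # True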

  13. Comparison of modal spectral and non-linear time history analysis of a piping system

    International Nuclear Information System (INIS)

    Gerard, R.; Aelbrecht, D.; Lafaille, J.P.

    1987-01-01

    A typical piping system of the discharge line of the chemical and volumetric control system, outside the containment, between the penetration and the heat exchanger, of an operating power plant was analyzed using four different methods: modal spectral analysis with 2% constant damping, modal spectral analysis using ASME Code Case N411 (PVRC damping), linear time history analysis, and non-linear time history analysis. This paper presents an estimation of the conservatism of the linear methods compared to the non-linear analysis. (orig./HP)

  14. Time-dependent tumour repopulation factors in linear-quadratic equations

    International Nuclear Information System (INIS)

    Dale, R.G.

    1989-01-01

    Tumour proliferation effects can be tentatively quantified in the linear-quadratic (LQ) method by the incorporation of a time-dependent factor, the magnitude of which is related both to the value of α in the tumour α/β ratio and to the tumour doubling time. The method, the principle of which has been suggested by a number of other workers for use in fractionated therapy, is here applied to both fractionated and protracted radiotherapy treatments, and examples of its use are given. By assuming that repopulation of late-responding tissues is significant during normal treatment strategies, the consequences are examined in terms of the behaviour of the Extrapolated Response Dose (ERD). Although the numerical credibility of the analysis used here depends on the reliability of the LQ model, and on the assumption that the rate of repopulation is constant throughout treatment, the predictions are consistent with other lines of reasoning which point to the advantages of accelerated hyperfractionation. In particular, it is demonstrated that accelerated fractionation represents a relatively 'forgiving' treatment which enables tumours of a variety of sensitivities and clonogenic growth rates to be treated moderately successfully, even though the critical cellular parameters may not be known in individual cases. The analysis also suggests that tumours which combine low intrinsic sensitivity with a very short doubling time might be better controlled by low dose-rate continuous therapy than by almost any form of accelerated hyperfractionation. (author). 24 refs.; 5 figs
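    A commonly quoted form of this type of time-dependent correction (a standard textbook expression, not necessarily the exact formulation used in the paper) subtracts a repopulation term from the extrapolated response dose, with n fractions of dose d delivered in overall time T, repopulation starting at time T_k and proceeding with effective doubling time T_pot:

        \mathrm{ERD} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right) \;-\; \frac{\ln 2}{\alpha}\,\frac{T - T_k}{T_{\mathrm{pot}}}

    Consistent with the abstract, the size of the subtracted term scales with 1/α and with the reciprocal of the tumour doubling time.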

  15. Spatial Analysis of Linear Structures in the Exploration of Groundwater

    Directory of Open Access Journals (Sweden)

    Abdramane Dembele

    2017-11-01

    Full Text Available The analysis of linear structures on major geological formations plays a crucial role in resource exploration in the Inner Niger Delta. Highlighting and mapping of the large lithological units were carried out using image fusion, spectral bands (RGB coding), Principal Component Analysis (PCA), and band ratio methods. The automatic extraction method of linear structures permitted obtaining a structural map with 82,659 linear structures, distributed on different stratigraphic stages. The intensity study shows an accentuation in density over 12.52% of the total area, containing 22.02% of the linear structures. The density and nodes (intersections of fractures) formed by the linear structures on the different lithologies allowed observation of the behavior of the region’s aquifers in the exploration of subsoil resources. The central density, in relation to the hydrographic network of the lowlands, shows the conditioning of the flow and retention of groundwater in the region, and of fluids at depth. The node areas and high-density linear structures have shown an ability to have rejections at depth (pores) that favor the formation of structural traps for oil resources.

  16. Generalized Linear Covariance Analysis

    Science.gov (United States)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  17. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    Science.gov (United States)

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

  18. A new detrended semipartial cross-correlation analysis: Assessing the important meteorological factors affecting API

    International Nuclear Information System (INIS)

    Shen, Chen-Hua

    2015-01-01

    To analyze the unique contribution of meteorological factors to the air pollution index (API), a new method, the detrended semipartial cross-correlation analysis (DSPCCA), is proposed. Based on both a detrended cross-correlation analysis and a DFA-based multivariate-linear-regression (DMLR), this method is improved by including a semipartial correlation technique, which is used to indicate the unique contribution of an explanatory variable to multiple correlation coefficients. The advantages of this method in handling nonstationary time series are illustrated by numerical tests. To further demonstrate the utility of this method in environmental systems, new evidence of the primary contribution of meteorological factors to API is provided through DMLR. Results show that the most important meteorological factors affecting API are wind speed and diurnal temperature range, and the explanatory ability of meteorological factors to API gradually strengthens with increasing time scales. The results suggest that DSPCCA is a useful method for addressing environmental systems. - Highlights: • A detrended multiple linear regression is shown. • A detrended semipartial cross correlation analysis is proposed. • The important meteorological factors affecting API are assessed. • The explanatory ability of meteorological factors to API gradually strengthens with increasing time scales.

  19. A new detrended semipartial cross-correlation analysis: Assessing the important meteorological factors affecting API

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Chen-Hua, E-mail: shenandchen01@163.com [College of Geographical Science, Nanjing Normal University, Nanjing 210046 (China); Jiangsu Center for Collaborative Innovation in Geographical Information Resource, Nanjing 210046 (China); Key Laboratory of Virtual Geographic Environment of Ministry of Education, Nanjing 210046 (China)

    2015-12-04

    To analyze the unique contribution of meteorological factors to the air pollution index (API), a new method, the detrended semipartial cross-correlation analysis (DSPCCA), is proposed. Based on both a detrended cross-correlation analysis and a DFA-based multivariate-linear-regression (DMLR), this method is improved by including a semipartial correlation technique, which is used to indicate the unique contribution of an explanatory variable to multiple correlation coefficients. The advantages of this method in handling nonstationary time series are illustrated by numerical tests. To further demonstrate the utility of this method in environmental systems, new evidence of the primary contribution of meteorological factors to API is provided through DMLR. Results show that the most important meteorological factors affecting API are wind speed and diurnal temperature range, and the explanatory ability of meteorological factors to API gradually strengthens with increasing time scales. The results suggest that DSPCCA is a useful method for addressing environmental systems. - Highlights: • A detrended multiple linear regression is shown. • A detrended semipartial cross correlation analysis is proposed. • The important meteorological factors affecting API are assessed. • The explanatory ability of meteorological factors to API gradually strengthens with increasing time scales.

  20. Correction of X-ray diffraction profiles in linear-type PSPC by position factor

    International Nuclear Information System (INIS)

    Takahashi, Toshio

    1992-01-01

    PSPC (Position Sensitive Proportional Counter) makes it possible to obtain one-dimensional diffraction profiles without mechanical scanning. In a linear-type PSPC, the obtained profiles need correcting, because the position factor influences the intensity of the diffracted X-ray beam and the counting rate at each position on the PSPC. The distances from the specimen are not the same at the center and at the edge of the detector, and the intensity decreases at the edge because of radiation and absorption. The counting rate varies with the incident angle of the diffracted beam at each position on the PSPC. The position factor f_i at channel i of the multichannel analyser is given by f_i = cos^4(α_i)·exp{-μR(1/cos α_i - 1)}, where R is the distance between the specimen and the center of the PSPC, μ is the linear absorption coefficient and α_i is the incident angle of the diffracted beam at channel i. The background profiles of silica gel powder were measured with CrKα and CuKα. The parameters of the model function were fitted to the profiles by the non-linear least squares method. The agreement between these parameters and the calculated values shows that the position factor can correct the measured profiles properly. (author)
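    Applying the stated correction across all channels of the multichannel analyser is straightforward; the following Python sketch (illustrative geometry and parameter values, not from the paper) divides the measured counts by the position factor.

        import numpy as np

        def position_factor(channel_offsets_mm, R_mm, mu_per_mm):
            """Position-factor correction for a linear PSPC:
            f_i = cos^4(alpha_i) * exp(-mu * R * (1/cos(alpha_i) - 1)),
            where alpha_i is the incidence angle at channel i, R the specimen-to-
            detector-centre distance and mu the linear absorption coefficient."""
            alpha = np.arctan(channel_offsets_mm / R_mm)
            return np.cos(alpha) ** 4 * np.exp(-mu_per_mm * R_mm * (1.0 / np.cos(alpha) - 1.0))

        # Corrected profile = measured counts divided by the position factor.
        offsets = np.linspace(-50, 50, 1024)          # channel positions along the PSPC (mm)
        corrected = lambda counts: counts / position_factor(offsets, R_mm=200.0, mu_per_mm=0.001)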

  1. Confirmatory Factor Analysis and Multiple Linear Regression of the Neck Disability Index: Assessment If Subscales Are Equally Relevant in Whiplash and Nonspecific Neck Pain.

    Science.gov (United States)

    Croft, Arthur C; Milam, Bryce; Meylor, Jade; Manning, Richard

    2016-06-01

    Because of previously published recommendations to modify the Neck Disability Index (NDI), we evaluated the responsiveness and dimensionality of the NDI within a population of adult whiplash-injured subjects. The purpose of the present study was to evaluate the responsiveness and dimensionality of the NDI within a population of adult whiplash-injured subjects. Subjects who had sustained whiplash injuries of grade 2 or higher completed an NDI questionnaire. There were 123 subjects (55% female), of whom 36% had recovered and 64% had chronic symptoms. NDI subscales were analyzed using confirmatory factor analysis, considering only the subscales and, secondly, using sex as an 11th variable. The subscales were also tested with multiple linear regression modeling using the total score as a target variable. When considering only the 10 NDI subscales, only a single factor emerged, with an eigenvalue of 5.4, explaining 53.7% of the total variance. Strong correlation (> .55) (P factor model of the NDI is not justified based on our results, and in this population of whiplash subjects, the NDI was unidimensional, demonstrating high internal consistency and supporting the original validation study of Vernon and Mior.

  2. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  3. Non linear seismic analysis of charge/discharge machine

    International Nuclear Information System (INIS)

    Dostal, M.; Trbojevic, V.M.; Nobile, M.

    1987-01-01

    The main conclusions of the seismic analysis of the Latina CDM are: i. The charge machine has been demonstrated to be capable of withstanding the effects of a 0.1 g earthquake. Stresses and displacements were all within allowable limits and the stability criteria were fully satisfied for all positions of the cross-travel bogie on the gantry. ii. Movements due to loss of friction between the cross-travel bogie wheels and the rail were found to be small, i.e. less than 2 mm for all cases considered. The modes of rocking of the fixed and hinged legs preclude any possibility of excessive movement between the long travel bogie wheels and the rail. iii. The non-linear analysis incorporating contact and friction has given more realistic results than any of the linear verification analyses. The method of analysis indicates that even the larger structures can be efficiently solved on a mini computer for a long forcing input (16 s). (orig.)

  4. ANALYSIS OF FACTORS WHICH AFFECTING THE ECONOMIC GROWTH

    Directory of Open Access Journals (Sweden)

    Suparna Wijaya

    2017-03-01

    Full Text Available High economic growth and a sustainable process are the main conditions for the sustainability of a country's economic development. They also serve as measures of the success of the country's economy. The factors tested in this study are economic and non-economic factors affecting economic development. This study aims to explain the factors that influence the Indonesian macroeconomy. It used a linear regression modeling approach. The analysis results show that Tax Amnesty, Exchange Rate, Inflation, and the interest rate jointly have an effect amounting to 77.6% on economic growth, whereas the remaining 22.4% is influenced by other variables not observed in this study. Keywords: tax amnesty, exchange rates, inflation, SBI and economic growth

  5. Worry About Caregiving Performance: A Confirmatory Factor Analysis

    Directory of Open Access Journals (Sweden)

    Ruijie Li

    2018-03-01

    Full Text Available Recent studies on the Zarit Burden Interview (ZBI) support the existence of a unique factor, worry about caregiving performance (WaP), beyond role and personal strain. Our current study aims to confirm the existence of WaP within the multidimensionality of the ZBI and to determine if predictors of WaP differ from those of role and personal strain. We performed confirmatory factor analysis (CFA) on 466 caregiver-patient dyads to compare one-factor (total score), two-factor (role/personal strain), three-factor (role/personal strain and WaP), and four-factor models (role strain split into two factors). We conducted linear regression analyses to explore the relationships of different ZBI factors with socio-demographic and disease characteristics, and investigated the stage-dependent differences between WaP and role and personal strain by dyadic relationship. The four-factor structure that incorporated WaP and split role strain into two factors yielded the best fit. Linear regression analyses reveal that different variables significantly predict WaP (adult child caregiver and Neuropsychiatric Inventory Questionnaire (NPI-Q) severity) from role/personal strain (adult child caregiver, instrumental activities of daily living, and NPI-Q distress). Unlike other factors, WaP was significantly endorsed in early cognitive impairment. Among spouses, WaP remained low across Clinical Dementia Rating (CDR) stages until a sharp rise in CDR 3; adult child and sibling caregivers experience a gradual rise throughout the stages. Our results affirm the existence of WaP as a unique factor. Future research should explore the potential of WaP as a possible intervention target to improve self-efficacy in the milder stages of burden.

  6. Analysis of Factors Affecting Inflation in Indonesia: an Islamic Perspective

    Directory of Open Access Journals (Sweden)

    Elis Ratna Wulan

    2015-04-01

    Full Text Available This study aims to determine the factors affecting inflation. The research is descriptive quantitative in nature. The data used are reported exchange rates, interest rates, money supply and inflation during 2008-2012. The research data were analyzed using multiple linear regression analysis. The results showed that in the years 2008-2012 the condition of each variable was: (1) the rate of inflation had a negative trend, (2) the interest rate had a negative trend, (3) the money supply had a positive trend, (4) the exchange rate of the rupiah had a positive trend. The test results using multiple linear regression analysis show that the interest rate, the money supply and the exchange rate of the rupiah have a significant effect on the rate of inflation.

  7. Theoretical analysis of balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2012-01-01

    In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using L2 gain, and (2) we provide a system theoretic interpretation of grammians and their singular values. […] for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.

  8. Analysis of the efficiency of the linearization techniques for solving multi-objective linear fractional programming problems by goal programming

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2017-01-01

    Full Text Available This paper presents and analyzes the applicability of three linearization techniques used for solving multi-objective linear fractional programming problems using the goal programming method. The three linearization techniques are: (1) Taylor’s polynomial linearization approximation, (2) the method of variable change, and (3) a modification of the method of variable change proposed in [20]. All three linearization techniques are presented and analyzed in two variants: (a) using the optimal value of the objective functions as the decision makers’ aspirations, and (b) the decision makers’ aspirations are given by the decision makers. As the criteria for the analysis we use the efficiency of the obtained solutions and the difficulties the analyst comes upon in preparing the linearization models. To analyze the applicability of the linearization techniques incorporated in the linear goal programming method we use an example of a financial structure optimization problem.
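    For reference, the first of the three techniques amounts to a first-order Taylor expansion of each linear fractional objective about a chosen point x*, which is a standard construction (shown here in generic notation, not the paper's exact symbols):

        z(x) = \frac{c^{T}x + c_{0}}{d^{T}x + d_{0}}
        \;\approx\;
        z(x^{*}) + \nabla z(x^{*})^{T}(x - x^{*}),
        \qquad
        \frac{\partial z}{\partial x_{j}}\bigg|_{x^{*}}
        = \frac{c_{j}\,(d^{T}x^{*} + d_{0}) - d_{j}\,(c^{T}x^{*} + c_{0})}{(d^{T}x^{*} + d_{0})^{2}}

    The linearized objectives can then be handled by ordinary linear goal programming.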

  9. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…

  10. Linear and nonlinear stability analysis, associated to experimental fast reactors

    International Nuclear Information System (INIS)

    Amorim, E.S. do; Moura Neto, C. de; Rosa, M.A.P.

    1980-07-01

    Phenomena associated with the physics of fast neutrons were analysed by linear and nonlinear kinetics with arbitrary feedback. The theoretical foundations of linear kinetics and transfer functions, aiming at the analysis of fast reactor stability, are established. These stability conditions were analytically proposed and investigated by digital and analog programs. (E.G.) [pt

  11. Classification of acute stress using linear and non-linear heart rate variability analysis derived from sternal ECG

    DEFF Research Database (Denmark)

    Tanev, George; Saadi, Dorthe Bodholt; Hoppe, Karsten

    2014-01-01

    Chronic stress detection is an important factor in predicting and reducing the risk of cardiovascular disease. This work is a pilot study with a focus on developing a method for detecting short-term psychophysiological changes through heart rate variability (HRV) features. The purpose of this pilot study is to establish and to gain insight on a set of features that could be used to detect psychophysiological changes that occur during chronic stress. This study elicited four different types of arousal by images, sounds, mental tasks and rest, and classified them using linear and non-linear HRV
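    Two of the standard linear time-domain HRV features that such a study typically computes (shown only as an illustration; the record does not list the study's exact feature set) can be obtained directly from the RR-interval series:

        import numpy as np

        def hrv_time_domain(rr_ms):
            """Two standard linear time-domain HRV features from RR intervals (ms):
            SDNN (overall variability) and RMSSD (short-term, vagally mediated)."""
            rr = np.asarray(rr_ms, dtype=float)
            sdnn = rr.std(ddof=1)
            rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
            return {"SDNN": sdnn, "RMSSD": rmssd}

        print(hrv_time_domain([812, 790, 805, 830, 795, 810]))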

  12. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    Science.gov (United States)

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  13. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence on the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements.

  14. Non-linear triangle-based polynomial expansion nodal method for hexagonal core analysis

    International Nuclear Information System (INIS)

    Cho, Jin Young; Cho, Byung Oh; Joo, Han Gyu; Zee, Sung Qunn; Park, Sang Yong

    2000-09-01

    This report describes the implementation of the triangle-based polynomial expansion nodal (TPEN) method in the MASTER code in conjunction with the coarse mesh finite difference (CMFD) framework for hexagonal core design and analysis. The TPEN method is a variation of the higher order polynomial expansion nodal (HOPEN) method that solves the multi-group neutron diffusion equation in hexagonal-z geometry. In contrast with the HOPEN method, only a two-dimensional intranodal expansion is considered in the TPEN method for a triangular domain. The axial dependence of the intranodal flux is incorporated separately and is determined by the nodal expansion method (NEM) for a hexagonal node. For consistency with the node geometry of the MASTER code, which is based on hexagons, the TPEN solver is coded to solve one hexagonal node, composed of 6 triangular nodes, directly with a Gauss elimination scheme. To solve the CMFD linear system efficiently, a stabilized bi-conjugate gradient (BiCG) algorithm and the Wielandt eigenvalue shift method are adopted. For the construction of an efficient preconditioner for the BiCG algorithm, the incomplete LU (ILU) factorization scheme, which has been widely used in two-dimensional problems, is used. To apply the ILU factorization scheme to three-dimensional problems, a symmetric Gauss-Seidel factorization scheme is used. In order to examine the accuracy of the TPEN solution, several eigenvalue benchmark problems and two transient problems, i.e., realistic VVER1000 and VVER440 rod ejection benchmark problems, were solved and compared with respective references. The results of the eigenvalue benchmark problems indicate that the non-linear TPEN method is very accurate, showing less than 15 pcm of eigenvalue error and 1% of maximum power error, and fast enough to solve the three-dimensional VVER-440 problem within 5 seconds on a 733 MHz Pentium-III. In the case of the transient problems, the non-linear TPEN method also shows good results within a few minutes of

  15. Development of non-linear vibration analysis code for CANDU fuelling machine

    International Nuclear Information System (INIS)

    Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.

    1988-01-01

    This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gap with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment for the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, the simulation analyses for the full-scale F-M vibration testing are carried out, and the applicability of the code to such multi-degree of freedom systems as F-M is demonstrated. (author)

  16. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
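
    The general idea, comparing ordinary least squares with a robust M-estimator when errors are heavy-tailed, can be sketched with statsmodels as below. The variable names are hypothetical stand-ins and the exact robust specification used in the paper is not reproduced.

        # Sketch only: OLS vs a robust M-estimator under heavy-tailed noise.
        # Variable names are hypothetical; this is not the paper's exact model.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        genotype = rng.integers(0, 3, size=n)          # allelic dosage 0/1/2
        noise = rng.standard_t(df=2, size=n)           # heavy-tailed errors
        expression = 0.3 * genotype + noise

        X = sm.add_constant(genotype.astype(float))
        ols = sm.OLS(expression, X).fit()
        rlm = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()
        print("OLS beta:", ols.params[1], "robust beta:", rlm.params[1])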

  17. Linear and Nonlinear Analysis of Brain Dynamics in Children with Cerebral Palsy

    Science.gov (United States)

    Sajedi, Firoozeh; Ahmadlou, Mehran; Vameghi, Roshanak; Gharib, Masoud; Hemmati, Sahel

    2013-01-01

    This study was carried out to determine linear and nonlinear changes of brain dynamics and their relationships with the motor dysfunctions in CP children. For this purpose power of EEG frequency bands (as a linear analysis) and EEG fractality (as a nonlinear analysis) were computed in eyes-closed resting state and statistically compared between 26…

  18. A STATISTICAL ANALYSIS OF GDP AND FINAL CONSUMPTION USING SIMPLE LINEAR REGRESSION. THE CASE OF ROMANIA 1990–2010

    OpenAIRE

    Aniela Balacescu; Marian Zaharia

    2011-01-01

    This paper aims to examine the causal relationship between GDP and final consumption. The authors used a linear regression model in which GDP is treated as the outcome (dependent) variable and final consumption as the explanatory factor. In drafting the article we used the Excel software application, a modern tool for computing and statistical data analysis.

  19. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  20. Possible factors determining the non-linearity in the VO2-power output relationship in humans: theoretical studies.

    Science.gov (United States)

    Korzeniewski, Bernard; Zoladz, Jerzy A

    2003-08-01

    At low power output exercise (below lactate threshold), the oxygen uptake increases linearly with power output, but at high power output exercise (above lactate threshold) some additional oxygen consumption causes a non-linearity in the overall VO(2) (oxygen uptake rate)-power output relationship. The functional significance of this phenomenon for human exercise tolerance is very important, but the mechanisms underlying it remain unknown. In the present work, a computer model of oxidative phosphorylation in intact skeletal muscle developed previously is used to examine the background of this relationship in different modes of exercise. Our simulations demonstrate that the non-linearity in the VO(2)-power output relationship and the difference in the magnitude of this non-linearity between incremental exercise mode and square-wave exercise mode (constant power output exercise) can be generated by introducing into the model some hypothetical factor F (group of associated factors) that accumulate(s) in time during exercise. The performed computer simulations, based on this assumption, give proper time courses of changes in VO(2) and [PCr] after an onset of work of different intensities, including the slow component in VO(2), well matching the experimental results. Moreover, if it is assumed that the exercise terminates because of fatigue when the amount/intensity of F exceed some threshold value, the model allows the generation of a proper shape of the well-known power-duration curve. This fact suggests that the phenomenon of the non-linearity of the VO(2)-power output relationship and the magnitude of this non-linearity in different modes of exercise is determined by some factor(s) responsible for muscle fatigue.

  1. An analysis of the electromagnetic field in multi-polar linear induction system

    International Nuclear Information System (INIS)

    Chervenkova, Todorka; Chervenkov, Atanas

    2002-01-01

    In this paper a new method for the determination of the electromagnetic field vectors in a multi-polar linear induction system (LIS) is described. The analysis of the electromagnetic field is carried out using four-dimensional electromagnetic potentials in conjunction with the theory of magnetic loops. The electromagnetic field vectors are determined in Minkowski space as elements of the Maxwell tensor. The results obtained are compared with those obtained from an analysis by the finite element method (FEM). With the method presented in this paper one can determine the electromagnetic field vectors in a multi-polar linear induction system using a four-dimensional potential. An advantage of this method is that it yields analytical results for the electromagnetic field vectors. These results are also valid for linear media, and the dependencies remain valid at high speeds of movement. The results for the investigated linear induction system are comparable to those obtained by the finite element method. The investigation may be continued with the determination of other characteristics such as drag force, levitation force, etc. The method proposed in this paper for the analysis of a linear induction system can be used for optimization calculations. (Author)

  2. Sparse Linear Identifiable Multivariate Modeling

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2011-01-01

    In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully ... and benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable ...

  3. Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2017-04-01

    This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
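
    A minimal sketch of the linear NMF unmixing step that these methods build on is given below using scikit-learn. The linear-quadratic multiplicative algorithm proposed in the paper is not available there, and the data are synthetic stand-ins for hyperspectral pixels.

        # Sketch of the linear NMF unmixing step only; the paper's linear-quadratic
        # multiplicative algorithm is not implemented in scikit-learn. Matrix sizes
        # are illustrative (pixels x bands).
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(2)
        endmembers = rng.random((4, 50))               # 4 spectra, 50 bands
        abundances = rng.dirichlet(np.ones(4), size=300)
        hyperspectral = abundances @ endmembers        # 300 pixels, linear mixing

        model = NMF(n_components=4, init="nndsvda", max_iter=500)
        A_est = model.fit_transform(hyperspectral)     # estimated abundances
        S_est = model.components_                      # estimated endmember spectra
        print(A_est.shape, S_est.shape)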

  4. Stability Analysis for Multi-Parameter Linear Periodic Systems

    DEFF Research Database (Denmark)

    Seyranian, A.P.; Solem, Frederik; Pedersen, Pauli

    1999-01-01

    This paper is devoted to stability analysis of general linear periodic systems depending on real parameters. The Floquet method and perturbation technique are the basis of the development. We start out with the first and higher-order derivatives of the Floquet matrix with respect to problem...

  5. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Science.gov (United States)

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

    The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explicative and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, support and extend these analyses.

  6. Influence factors analysis of water environmental quality of main rivers in Tianjin

    Science.gov (United States)

    Li, Ran; Bao, Jingling; Zou, Di; Shi, Fang

    2018-01-01

    Based on the evaluation results of the water environment quality of the main rivers in Tianjin in 1986-2015, this paper retrospectively analyzed the current state of the water environmental quality of the main rivers in Tianjin, established an index system, and carried out a multiple-factor analysis, selecting factors influencing the water environmental quality of the main rivers from the economic, industrial and natural aspects and combining principal component analysis with linear regression. The results showed that water consumption, sewage discharge and water resources were the main factors influencing the pollution of the main rivers. Therefore, optimizing the utilization of water resources, improving utilization efficiency and reducing effluent discharge are important measures to reduce the pollution of the surface water environment.
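
    A minimal principal-component-regression sketch in the spirit of the combined method described above is given below. The column names and data are hypothetical stand-ins for the economic, industrial and natural indicators.

        # Minimal principal-component-regression sketch: PCA on standardized
        # indicators, then a linear regression of the quality index on the scores.
        # All data and names are hypothetical stand-ins.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(3)
        X = rng.normal(size=(30, 6))            # 30 years x 6 candidate indicators
        y = X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=30)   # water quality index

        Z = StandardScaler().fit_transform(X)
        pca = PCA(n_components=3).fit(Z)
        scores = pca.transform(Z)
        reg = LinearRegression().fit(scores, y)
        print("explained variance:", pca.explained_variance_ratio_)
        print("regression coefficients on components:", reg.coef_)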

  7. Linear stability analysis in a solid-propellant rocket motor

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.M.; Kang, K.T.; Yoon, J.K. [Agency for Defense Development, Taejon (Korea, Republic of)

    1995-10-01

    Combustion instability in solid-propellant rocket motors depends on the balance between acoustic energy gains and losses of the system. The objective of this paper is to demonstrate the capability of a program which predicts the standard longitudinal stability using acoustic modes, based on linear stability analysis and T-burner test results of propellants. The commercial ANSYS 5.0A program can be used to calculate the acoustic characteristics of a rocket motor. The linear stability prediction was compared with the static firing test results of rocket motors. (author). 11 refs., 17 figs.

  8. Linear stability analysis of collective neutrino oscillations without spurious modes

    Science.gov (United States)

    Morinaga, Taiki; Yamada, Shoichi

    2018-01-01

    Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.

  9. Use of linear discriminant function analysis in seed morphotype ...

    African Journals Online (AJOL)

    Use of linear discriminant function analysis in seed morphotype relationship study in 31 ... Data were collected on 100-seed weight, seed length and seed width. ... to the Mesoamerican gene pool, comprising the cultigroups Sieva-Big Lima, ...

  10. Linear functional analysis for scientists and engineers

    CERN Document Server

    Limaye, Balmohan V

    2016-01-01

    This book provides a concise and meticulous introduction to functional analysis. Since the topic draws heavily on the interplay between the algebraic structure of a linear space and the distance structure of a metric space, functional analysis is increasingly gaining the attention of not only mathematicians but also scientists and engineers. The purpose of the text is to present the basic aspects of functional analysis to this varied audience, keeping in mind the considerations of applicability. A novelty of this book is the inclusion of a result by Zabreiko, which states that every countably subadditive seminorm on a Banach space is continuous. Several major theorems in functional analysis are easy consequences of this result. The entire book can be used as a textbook for an introductory course in functional analysis without having to make any specific selection from the topics presented here. Basic notions in the setting of a metric space are defined in terms of sequences. These include total boundedness, c...

  11. Non-linear analysis of skew thin plate by finite difference method

    International Nuclear Information System (INIS)

    Kim, Chi Kyung; Hwang, Myung Hwan

    2012-01-01

    This paper deals with a discrete analysis capability for predicting the geometrically nonlinear behavior of a skew thin plate subjected to uniform pressure. The differential equations are discretized by means of the finite difference method and are used to determine the deflections and the in-plane stress functions of the plate, being reduced to several sets of linear algebraic simultaneous equations. For the geometrically non-linear, large-deflection behavior of the plate, non-linear plate theory is used for the analysis. An iterative scheme is employed to solve these quasi-linear algebraic equations. Several problems are solved which illustrate the potential of the method for predicting the finite deflection and stress. For increasing lateral pressures, the maximum principal tensile stress occurs at the center of the plate and migrates toward the corners as the load increases. It was deemed important to describe the locations of the maximum principal tensile stress as it occurs. The load-deflection relations and the maximum bending and membrane stresses for each case are presented and discussed

  12. On the efficacy of linear system analysis of renal autoregulation in rats

    DEFF Research Database (Denmark)

    Chon, K H; Chen, Y M; Holstein-Rathlou, N H

    1993-01-01

    In order to assess the linearity of the mechanisms subserving renal blood flow autoregulation, broad-band arterial pressure fluctuations at three different power levels were induced experimentally and the resulting renal blood flow responses were recorded. Linear system analysis methods were...

  13. Linear and nonlinear analysis of high-power rf amplifiers

    International Nuclear Information System (INIS)

    Puglisi, M.

    1983-01-01

    After a survey of the state variable analysis method, the final amplifier for the CBA is analyzed taking into account the real beam waveshape. An empirical method for checking the stability of a non-linear system is also considered

  14. TENSOR DECOMPOSITIONS AND SPARSE LOG-LINEAR MODELS

    Science.gov (United States)

    Johndrow, James E.; Bhattacharya, Anirban; Dunson, David B.

    2017-01-01

    Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. We derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions. PMID:29332971

  15. Plastic limit analysis with non linear kinematic strain hardening for metalworking processes applications

    International Nuclear Information System (INIS)

    Chaaba, Ali; Aboussaleh, Mohamed; Bousshine, Lahbib; Boudaia, El Hassan

    2011-01-01

    Limit analysis approaches are widely used to deal with the analysis of metalworking processes; however, they have been applied only to perfectly plastic materials and, recently, to isotropic hardening ones, excluding any kind of kinematic hardening. In the present work, using the Implicit Standard Materials concept, the sequential limit analysis approach and the finite element method, our objective is to extend the application of limit analysis to include linear and non-linear kinematic strain hardening. Because this plastic flow rule is non-associative, the Implicit Standard Materials concept is adopted as a framework for non-standard plasticity modeling. The sequential limit analysis procedure, which treats the plastic behavior with non-linear kinematic strain hardening as a succession of perfectly plastic behaviors with yield surfaces updated after each sequence of limit analysis and geometry updating, is applied. The standard kinematic finite element method together with a regularization approach is used to analyse two large-compression cases (cold forging) under plane strain and axisymmetric conditions

  16. Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Hauberg, Søren

    2010-01-01

    ..., we present a comparison between the non-linear analog of Principal Component Analysis, Principal Geodesic Analysis, in its linearized form and its exact counterpart that uses true intrinsic distances. We give examples of datasets for which the linearized version provides good approximations ... and for which it does not. Indicators for the differences between the two versions are then developed and applied to two examples of manifold valued data: outlines of vertebrae from a study of vertebral fractures and spatial coordinates of human skeleton end-effectors acquired using a stereo camera and tracking ...

  17. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study the recurrent neural network for solving linear programming problems. To achieve optimality in accuracy and also in computational effort, an algorithm is presented. We investigate the sensitivity analysis of linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.

  18. Robust linear discriminant analysis with distance based estimators

    Science.gov (United States)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and real data study were conducted to examine the performance of the proposed RLDR measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with the existing robust LDR.
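
    The MVV estimator named above is not part of standard libraries; the sketch below follows the same idea by plugging scikit-learn's MinCovDet robust location and scatter estimates into a two-group linear discriminant rule, on simulated data with a few outliers.

        # Sketch of a robust linear discriminant rule: robust location/scatter
        # (MinCovDet) replace the classical mean and pooled covariance. The MVV
        # estimator of the paper itself is not implemented here.
        import numpy as np
        from sklearn.covariance import MinCovDet

        rng = np.random.default_rng(4)
        g1 = rng.normal(loc=0.0, size=(100, 3))
        g2 = rng.normal(loc=1.5, size=(100, 3))
        g1[:5] += 10.0                                  # a few outliers in group 1

        mcd1, mcd2 = MinCovDet().fit(g1), MinCovDet().fit(g2)
        pooled = (mcd1.covariance_ + mcd2.covariance_) / 2.0
        w = np.linalg.solve(pooled, mcd1.location_ - mcd2.location_)   # discriminant direction
        cut = w @ (mcd1.location_ + mcd2.location_) / 2.0

        x_new = np.array([0.2, -0.1, 0.3])
        print("assign to group 1" if w @ x_new > cut else "assign to group 2")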

  19. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    Science.gov (United States)

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on an observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow their output to be estimated. In this work a methodology to test the observability of discrete-time piecewise linear systems is proposed. An academic example is presented to show the obtained results. PMID:23447007

  20. Treating experimental data of inverse kinetic method by unitary linear regression analysis

    International Nuclear Information System (INIS)

    Zhao Yusen; Chen Xiaoliang

    2009-01-01

    The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity, but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. The data from the zero power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method using unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The results also show that the effect on reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)

  1. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
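
    A short statsmodels sketch of a multiple linear regression, including variance inflation factors as a simple check for the multicollinearity discussed above, is given below; the data are simulated and the variable names are only illustrative.

        # Multiple linear regression plus a multicollinearity check via variance
        # inflation factors (VIF). Simulated data; variable names are illustrative.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.outliers_influence import variance_inflation_factor

        rng = np.random.default_rng(5)
        n = 200
        age = rng.normal(50, 10, n)
        weight = rng.normal(80, 12, n)
        bmi = weight / 2.9 + rng.normal(0, 1, n)        # deliberately correlated with weight
        outcome = 0.5 * age + 0.2 * weight + rng.normal(0, 5, n)

        X = sm.add_constant(np.column_stack([age, weight, bmi]))
        fit = sm.OLS(outcome, X).fit()
        print(fit.params)                               # intercept and three slopes
        print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])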

  2. Non-linear analytic and coanalytic problems (Lp-theory, Clifford analysis, examples)

    International Nuclear Information System (INIS)

    Dubinskii, Yu A; Osipenko, A S

    2000-01-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the 'orthogonal' sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented

  3. Analysis Of Factors Causing Delays On Harun Nafsi - Hm Rifadin Street In Samarinda East Kalimantan Maintenance Project

    Directory of Open Access Journals (Sweden)

    Fadli

    2017-12-01

    Full Text Available This study aims to identify, analyze and describe the factors that affect delays in the maintenance project on Harun Nafsi - HM. Rifadin Street in Samarinda, East Kalimantan. This research uses a qualitative research method utilizing questionnaires. The 30 participating respondents consist of 14 project implementers and 16 field implementers. The data are analyzed with descriptive statistical techniques, factor analysis and linear regression analysis. The results show that the factors influencing the delay of the maintenance project on Harun Nafsi - HM Rifadin Street include (1) the time factor and workmanship factor, (2) human resources and natural factors, (3) geographical conditions, late approval of plan changes and labor strikes, and (4) non-optimal working levels and changes in the scope of the project while the work is still ongoing. Based on multiple linear regression analysis, a coefficient of determination of 0.824 is obtained. This means that the four factors studied account for 82.4% of project delays, and the remaining 17.6% is influenced by other variables outside this study. The results of this study also indicate that the dominant factor in road maintenance project delays is the fourth of the factors mentioned. The effort that the contractor needs to undertake is not to expand the employment contract if the project is underway or the contractor does not have the capability to complete another project.

  4. Describing three-class task performance: three-class linear discriminant analysis and three-class ROC analysis

    Science.gov (United States)

    He, Xin; Frey, Eric C.

    2007-03-01

    Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal to noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision theoretic point of view.

  5. On form factors of the conjugated field in the non-linear Schroedinger model

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.K.

    2011-05-15

    Izergin-Korepin's lattice discretization of the non-linear Schroedinger model, along with Oota's inverse problem, provides one with determinant representations for the form factors of the lattice-discretized conjugated field operator. We prove that these form factors converge, in the zero lattice spacing limit, to those of the conjugated field operator in the continuous model. We also compute the large-volume asymptotic behavior of such form factors in the continuous model. These are in particular characterized by Fredholm determinants of operators acting on closed contours. We provide a way of defining these Fredholm determinants in the case of generic parameters. (orig.)

  6. Performance of an Axisymmetric Rocket Based Combined Cycle Engine During Rocket Only Operation Using Linear Regression Analysis

    Science.gov (United States)

    Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.

    1998-01-01

    The all rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations increasing mixer-ejector area ratio and rocket area ratio increase performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decrease performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.

  7. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…
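
    A two-level model of this kind (students nested in schools, with a random intercept per school) can be sketched with statsmodels' MixedLM as below. The variable names are hypothetical stand-ins for the PCAP variables, which are not reproduced here.

        # Sketch of a two-level model (students nested in schools) with a random
        # intercept, in the spirit of HLM. All data and names are hypothetical
        # stand-ins for the PCAP data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n_schools, n_per = 40, 25
        school = np.repeat(np.arange(n_schools), n_per)
        school_effect = rng.normal(0, 5, n_schools)[school]
        ell = rng.integers(0, 2, n_schools * n_per)               # ESL/ELL/EAL status (0/1)
        reading = 500 + school_effect - 12 * ell + rng.normal(0, 20, n_schools * n_per)

        df = pd.DataFrame({"reading": reading, "ell": ell, "school": school})
        model = smf.mixedlm("reading ~ ell", df, groups=df["school"]).fit()
        print(model.summary())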

  8. Absorption correction factor in X-ray fluorescent quantitative analysis

    International Nuclear Information System (INIS)

    Pimjun, S.

    1994-01-01

    An experiment on the absorption correction factor in X-ray fluorescent quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe2O3 and tapioca flour at various concentrations of Fe2O3 ranging from 5% to 25%. Unknown samples were kaolin containing 3.5% to 50% Fe2O3. Kaolin samples were diluted with tapioca flour in order to reduce the absorption of FeKα and make them easier to prepare. Pressed samples of 0.150 /cm² and 2.76 cm in diameter were used in the experiment. The absorption correction factor is related to the total mass absorption coefficient (χ), which varies with sample composition. In a known sample, χ can be calculated conveniently by formula. In an unknown sample, however, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the content of Fe2O3 in these samples was linear. This result indicates that this correction factor can be used to adjust the accuracy of the X-ray intensity. Therefore, this correction factor is essential in the quantitative analysis of the elements in any sample by the X-ray fluorescent technique

  9. Linear Covariance Analysis for a Lunar Lander

    Science.gov (United States)

    Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael

    2017-01-01

    A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.

  10. Applied Research of Enterprise Cost Control Based on Linear Programming

    Directory of Open Access Journals (Sweden)

    Yu Shuo

    2015-01-01

    This paper studies enterprise cost control through a linear programming model, and analyzes the restricting factors of enterprise production (labor, raw materials, processing equipment, sales price) and other factors affecting enterprise income, so as to obtain an enterprise cost control model based on linear programming. This model can determine a rational production plan in the case of limited resources and achieve optimal enterprise income. The production guidance program and scheduling arrangements of the enterprise can be obtained from the calculation results, so as to provide scientific and effective guidance for enterprise production. This paper adds a sensitivity analysis to the linear programming model, so as to assess the stability of the linear-programming-based cost control model, verify the rationality of the model, and indicate the direction for enterprise cost control. The calculation results of the model can provide a reference for enterprise planning in a market economy environment, and they have strong reference value and practical significance for enterprise cost control.
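
    A small production-planning linear program in the spirit described above can be solved with SciPy, and a crude sensitivity check can be obtained by perturbing one resource limit; all coefficients below are hypothetical.

        # Small production-planning LP solved with SciPy, plus a crude sensitivity
        # check from perturbing one resource limit. All coefficients are hypothetical.
        import numpy as np
        from scipy.optimize import linprog

        profit = np.array([40.0, 30.0])                 # profit per unit of product A, B
        c = -profit                                     # linprog minimizes, so negate
        A_ub = np.array([[2.0, 1.0],                    # labor hours per unit
                         [1.0, 3.0]])                   # raw material per unit
        b_ub = np.array([100.0, 90.0])                  # available labor, material

        base = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
        more_labor = linprog(c, A_ub=A_ub, b_ub=b_ub + np.array([1.0, 0.0]),
                             bounds=[(0, None), (0, None)], method="highs")
        print("plan:", base.x, "profit:", -base.fun)
        print("marginal value of one extra labor hour:", -(more_labor.fun - base.fun))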

  11. Optimal choice of basis functions in the linear regression analysis

    International Nuclear Information System (INIS)

    Khotinskij, A.M.

    1988-01-01

    The problem of the optimal choice of basis functions in linear regression analysis is investigated. A stepwise algorithm, together with an estimate of its efficiency that holds for a finite number of measurements, is suggested. Conditions providing a probability of correct choice close to 1 are formulated. The application of the stepwise algorithm to the analysis of decay curves is substantiated. 8 refs

  12. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

    A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴) whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the solution method of linear multiple balance is effective and sufficiently efficient

  13. Standardizing effect size from linear regression models with log-transformed variables for meta-analysis.

    Science.gov (United States)

    Rodríguez-Barranco, Miguel; Tobías, Aurelio; Redondo, Daniel; Molina-Portillo, Elena; Sánchez, María José

    2017-03-17

    Meta-analysis is very useful for summarizing the effect of a treatment or a risk factor for a given disease. Studies often report results based on log-transformed variables in order to achieve the principal assumptions of a linear regression model. If this is the case for some, but not all, studies, the effects need to be homogenized. We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow all results to be included in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables, either normally or asymmetrically distributed. In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that differ significantly from the normal distribution. We illustrate the procedure with an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. The proposed procedure has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.
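
    The paper's derived formulae are not reproduced here, but the standard interpretations of regression coefficients under log transforms, on which such homogenization relies, can be written as two small helper functions (natural logarithms assumed):

        # Helpers illustrating the standard interpretations of coefficients under
        # log transforms (natural logs assumed); these are not the paper's exact
        # conversion formulae.
        import math

        def relative_change_log_y(beta, delta_x):
            """Linear-log outcome: log(y) = a + beta*x. Percent change in y for an
            absolute change delta_x in x."""
            return (math.exp(beta * delta_x) - 1.0) * 100.0

        def relative_change_log_log(beta, pct_change_x):
            """Log-log model: log(y) = a + beta*log(x). Percent change in y for a
            pct_change_x percent change in x."""
            return ((1.0 + pct_change_x / 100.0) ** beta - 1.0) * 100.0

        print(relative_change_log_y(0.05, 1.0))        # about a 5.1% increase in y per unit of x
        print(relative_change_log_log(0.3, 10.0))      # about a 2.9% increase in y per +10% in x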

  14. Linearly Polarized IR Spectroscopy Theory and Applications for Structural Analysis

    CERN Document Server

    Kolev, Tsonko

    2011-01-01

    A technique that is useful in the study of pharmaceutical products and biological molecules, polarization IR spectroscopy has undergone continuous development since it first emerged almost 100 years ago. Capturing the state of the science as it exists today, "Linearly Polarized IR Spectroscopy: Theory and Applications for Structural Analysis" demonstrates how the technique can be properly utilized to obtain important information about the structure and spectral properties of oriented compounds. The book starts with the theoretical basis of linear-dichroic infrared (IR-LD) spectroscopy ...

  15. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  16. Algorithm for Non-proportional Loading in Sequentially Linear Analysis

    NARCIS (Netherlands)

    Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.

    2016-01-01

    Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It

  17. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

    OpenAIRE

    劉, 麗清

    2015-01-01

    Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well suited for speech analysis due to its ability to model the speech production process approximately. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonance frequencies, and the fundamental frequency, called pitch) and so on. However, the performance of the co...

  18. Non-linear analysis of solid propellant burning rate behavior

    Energy Technology Data Exchange (ETDEWEB)

    Junye Wang [Zhejiang Univ. of Technology, College of Mechanical and Electrical Engineering, Hanzhou (China)

    2000-07-01

    The parametric analysis of the thermal wave model of the non-steady combustion of solid propellants is carried out under a sudden compression. First, to observe non-linear effects, solutions are obtained using a computer under prescribed pressure variations. Then, the effects of rearranging the spatial mesh, additional points, and the time step on the numerical solutions are evaluated. Finally, the behaviour of the thermal wave combustion model is examined under large heat release (H) and a dynamic factor (β). The numerical predictions show that (1) the effect of the dynamic factor (β), related to the magnitude of dp/dt, on the peak burning rate increases as the value of β increases. However, unsteady burning rate 'runaway' does not appear, and the burning rate returns asymptotically to ap^n when β≥10.0. The burning rate 'runaway' is a numerical difficulty, not a solution of the models. (2) At constant β and m, the amplitude of the burning rate increases with increasing H. However, the increase in the burning rate amplitude is stepwise, and there is no apparent intrinsic instability limit. A damped oscillation of the burning rate occurs when the value of H is small. However, when H>1.0, the state of an intrinsically unstable model is composed of repeated amplitude spikes, i.e. an undamped oscillation occurs. (3) The effect of the time step on the peak burning rate increases as H increases. (Author)

  19. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry I.

    2017-12-08

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
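
    The extraction step of the method, obtaining the stability spectrum from a sequence of snapshots via dynamic mode decomposition, can be sketched in a few lines of NumPy. The shock-fitting integration of the linearized reactive Euler equations is not reproduced; the snapshots below are a synthetic two-mode toy signal.

        # Bare-bones dynamic mode decomposition of a snapshot sequence, illustrating
        # how growth rates and frequencies are extracted; the shock-fitting Euler
        # integration of the paper is not reproduced. The data are a synthetic toy.
        import numpy as np

        def dmd_eigenvalues(snapshots, rank):
            """snapshots: columns are the state at successive, equally spaced times."""
            X, Y = snapshots[:, :-1], snapshots[:, 1:]
            U, s, Vh = np.linalg.svd(X, full_matrices=False)
            U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
            A_tilde = U.conj().T @ Y @ V / s          # low-rank approximation of the propagator
            return np.linalg.eigvals(A_tilde)

        # toy data: one slowly growing and one decaying oscillatory mode
        t = np.linspace(0.0, 10.0, 201)
        x = np.linspace(0.0, 1.0, 64)[:, None]
        data = (np.exp((0.05 + 2.0j) * t) * np.sin(np.pi * x)
                + np.exp((-0.2 + 5.0j) * t) * np.sin(2 * np.pi * x)).real
        dt = t[1] - t[0]
        lam = dmd_eigenvalues(data, rank=4)
        print(np.log(lam.astype(complex)) / dt)       # continuous-time growth rates + frequencies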

  20. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry; Kasimov, Aslan R.

    2018-01-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  1. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry

    2018-03-20

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  2. A Homotopy-Perturbation analysis of the non-linear contaminant ...

    African Journals Online (AJOL)

    In this research work, a Homotopy-perturbation analysis of a non-linear contaminant flow equation with an initial continuous point source is provided. The equation is characterized by advection, diffusion and adsorption. We assume that the adsorption term is modeled by a Freundlich isotherm. We provide an approximation of ...

  3. Linear discriminant analysis of structure within African eggplant 'Shum'

    African Journals Online (AJOL)

    A MANOVA preceded the linear discriminant analysis, to model each of 61 variables as predicted by clusters and experiment, in order to filter out non-significant traits. Four distinct clusters emerged, with a cophenetic correlation coefficient of 0.87 (P<0.01). Canonical variates that best predicted the observed clusters include petiole length, ...

  4. The Analysis of Factors Influencing Effectivenes of Property Taxes in Karanganyar Regency

    Directory of Open Access Journals (Sweden)

    Endang Brotojoyo

    2018-03-01

    Full Text Available The purpose of this study was to empirically test the effect of compensation, motivation and external factors on the performance of property tax collection officers and on the effectiveness of property tax collection in Matesih District, Karanganyar Regency. The analysis techniques used were validity and reliability tests, a linearity test, regression analysis, path analysis, t-tests, F-tests, the coefficient of determination and correlation analysis. The hypothesis test results: compensation significantly influences the effectiveness of tax collection; motivation significantly influences the effectiveness of tax collection; external factors have no significant effect on the effectiveness of tax collection; compensation has a significant effect on officer performance; motivation has a significant effect on the performance of the property tax collection officers; external factors have no significant effect on officer performance; and the effectiveness of tax collection has a significant effect on officer performance. From the F-test results it can be concluded that the variables compensation, motivation and external factors jointly affect the effectiveness of tax collection and performance. The total R² of 0.974 means that the performance of the property tax collection officers in Matesih District, Karanganyar, is explained by the variables compensation, motivation, external factors and the effectiveness of tax collection to the extent of 97.4%. The results of the path analysis showed that compensation and motivation are effective through the direct path, while external factors are not effective through either the direct or indirect paths.

  5. On the analysis of clonogenic survival data: Statistical alternatives to the linear-quadratic model

    International Nuclear Information System (INIS)

    Unkel, Steffen; Belka, Claus; Lauber, Kirsten

    2016-01-01

    The most frequently used method to quantitatively describe the response to ionizing irradiation in terms of clonogenic survival is the linear-quadratic (LQ) model. In the LQ model, the logarithm of the surviving fraction is regressed linearly on the radiation dose by means of a second-degree polynomial. The ratio of the estimated parameters for the linear and quadratic term, respectively, represents the dose at which both terms have the same weight in the abrogation of clonogenic survival. This ratio is known as the α/β ratio. However, there are plausible scenarios in which the α/β ratio fails to sufficiently reflect differences between dose-response curves, for example when curves with similar α/β ratio but different overall steepness are being compared. In such situations, the interpretation of the LQ model is severely limited. Colony formation assays were performed in order to measure the clonogenic survival of nine human pancreatic cancer cell lines and immortalized human pancreatic ductal epithelial cells upon irradiation at 0-10 Gy. The resulting dataset was subjected to LQ regression and non-linear log-logistic regression. Dimensionality reduction of the data was performed by cluster analysis and principal component analysis. Both the LQ model and the non-linear log-logistic regression model resulted in accurate approximations of the observed dose-response relationships in the dataset of clonogenic survival. However, in contrast to the LQ model the non-linear regression model allowed the discrimination of curves with different overall steepness but similar α/β ratio and revealed an improved goodness-of-fit. Additionally, the estimated parameters in the non-linear model exhibit a more direct interpretation than the α/β ratio. Dimensionality reduction of clonogenic survival data by means of cluster analysis was shown to be a useful tool for classifying radioresistant and sensitive cell lines. More quantitatively, principal component analysis allowed
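
    Fitting the LQ model and reporting the α/β ratio can be sketched with SciPy's curve_fit as below; the survival data are simulated, and the log-logistic alternative discussed in the paper is only mentioned, not fitted.

        # Sketch of fitting the linear-quadratic model to simulated clonogenic
        # survival data and reporting the alpha/beta ratio; the paper's log-logistic
        # alternative is not fitted here.
        import numpy as np
        from scipy.optimize import curve_fit

        def lq_log_sf(dose, alpha, beta):
            # LQ model: ln(SF) = -(alpha*D + beta*D^2)
            return -(alpha * dose + beta * dose**2)

        dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
        rng = np.random.default_rng(7)
        log_sf = lq_log_sf(dose, 0.2, 0.03) + rng.normal(0, 0.05, dose.size)  # simulated data

        (alpha, beta), _ = curve_fit(lq_log_sf, dose, log_sf, p0=(0.1, 0.01))
        print("alpha =", alpha, "beta =", beta, "alpha/beta =", alpha / beta)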

  6. Comparative analysis of linear motor geometries for Stirling coolers

    Science.gov (United States)

    R, Rajesh V.; Kuzhiveli, Biju T.

    2017-12-01

    Compared to rotary motor driven Stirling coolers, linear motor coolers are characterized by small volume and long life, making them more suitable for space and military applications. The motor design and operational characteristics have a direct effect on the operation of the cooler. In this perspective, ample scope exists in understanding the behavioural description of linear motor systems. In the present work, the authors compare and analyze different moving magnet linear motor geometries to finalize the most favourable one for Stirling coolers. The required axial force in the linear motors is generated by the interaction of magnetic fields of a current carrying coil and that of a permanent magnet. The compact size, commercial availability of permanent magnets and low weight requirement of the system are quite a few constraints for the design. The finite element analysis performed using Maxwell software serves as the basic tool to analyze the magnet movement, flux distribution in the air gap and the magnetic saturation levels on the core. A number of material combinations are investigated for core before finalizing the design. The effect of varying the core geometry on the flux produced in the air gap is also analyzed. The electromagnetic analysis of the motor indicates that the permanent magnet height ought to be taken in such a way that it is under the influence of electromagnetic field of current carrying coil as well as the outer core in the balanced position. This is necessary so that sufficient amount of thrust force is developed by efficient utilisation of the air gap flux density. Also, the outer core ends need to be designed to facilitate enough room for the magnet movement under the operating conditions.

  7. Problems with the factor analysis of items: Solutions based on item response theory and item parcelling

    Directory of Open Access Journals (Sweden)

    Gideon P. De Bruin

    2004-10-01

    The factor analysis of items often produces spurious results in the sense that unidimensional scales appear multidimensional. This may be ascribed to failure in meeting the assumptions of linearity and normality on which factor analysis is based. Item response theory is explicitly designed for the modelling of the non-linear relations between ordinal variables and provides a strong alternative to the factor analysis of items. Items may also be combined in parcels that are more likely to satisfy the assumptions of factor analysis than do the items. The use of the Rasch rating scale model and the factor analysis of parcels is illustrated with data obtained with the Locus of Control Inventory. The results of these analyses are compared with the results obtained through the factor analysis of items. It is shown that the Rasch rating scale model and the factoring of parcels produce superior results to the factor analysis of items. Recommendations for the analysis of scales are made. Summary: The factor analysis of items often yields misleading results, especially in the sense that unidimensional scales appear to be multidimensional. These results can often be attributed to the failure to meet the assumptions of linearity and normality on which factor analysis rests. Item response theory, which is explicitly designed for modelling the non-linear relationships between ordinal items, offers an attractive alternative to the factor analysis of items. Items can also be grouped into parcels that are more likely than individual items to satisfy the assumptions of factor analysis. The use of the Rasch rating scale model and the factor analysis of parcels is demonstrated using data obtained with the Locus of Control Inventory. The results of these analyses are compared with the results obtained through a factor analysis of the individual items. The results indicate that the Rasch

  8. Modeling and analysis of linearized wheel-rail contact dynamics

    International Nuclear Information System (INIS)

    Soomro, Z.

    2014-01-01

    The dynamics of railway vehicles are nonlinear and depend upon several factors, including vehicle speed, normal load and adhesion level. The presence of contaminants on the railway track makes them unpredictable too. Therefore, in order to develop an effective control strategy, it is important to analyze the effect of each factor on the dynamic response thoroughly. In this paper a linearized model of a railway wheel-set is developed and is later analyzed by varying the speed and adhesion level while keeping the normal load constant. A wheel-set is the wheel-axle assembly of a railroad car. The wheel-rail contact patch is analyzed using contact mechanics, the study of the deformation of solids that touch each other at one or more points. (author)

  9. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    Science.gov (United States)

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend of the incidence rate of acute myocardial infarction (AMI) in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and the results were compared. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value < linear regression P value). The statistical power of the CAT test decreased, while the result of linear regression analysis remained the same, when the population size was reduced by 100 times and the AMI incidence rate remained unchanged. The two statistical methods have their advantages and disadvantages. It is necessary to choose the statistical method according to how well each method fits the data, or to comprehensively analyze the results of the two methods.
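
    The Cochran-Armitage trend statistic can be computed directly from the case counts and population sizes of the ordered groups; the sketch below uses hypothetical yearly counts rather than the Tianjin data and compares the result with a simple linear regression on the crude rates.

```python
import numpy as np
from scipy import stats

# Hypothetical ordered groups (e.g., calendar years): cases and population at risk.
cases = np.array([120, 135, 150, 170, 190])
population = np.array([100_000, 101_000, 102_500, 103_000, 104_000])
scores = np.arange(len(cases), dtype=float)          # equally spaced trend scores

# Cochran-Armitage trend test (two-sided, normal approximation).
p_bar = cases.sum() / population.sum()
t_stat = np.sum(scores * (cases - population * p_bar))
var_t = p_bar * (1 - p_bar) * (
    np.sum(population * scores**2) - np.sum(population * scores) ** 2 / population.sum()
)
z = t_stat / np.sqrt(var_t)
p_cat = 2 * stats.norm.sf(abs(z))

# Linear regression of the crude rate on the score, for comparison.
rates = cases / population
slope, intercept, r, p_lin, se = stats.linregress(scores, rates)
print(f"CAT z = {z:.2f}, p = {p_cat:.2e}; linear regression p = {p_lin:.3f}")
```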

  10. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis. Written by a highly experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  11. Linear Covariance Analysis and Epoch State Estimators

    Science.gov (United States)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  12. Linear discriminant analysis of character sequences using occurrences of words

    KAUST Repository

    Dutta, Subhajit; Chaudhuri, Probal; Ghosh, Anil

    2014-01-01

    Classification of character sequences, where the characters come from a finite set, arises in disciplines such as molecular biology and computer science. For discriminant analysis of such character sequences, the Bayes classifier based on Markov models turns out to have class boundaries defined by linear functions of occurrences of words in the sequences. It is shown that for such classifiers based on Markov models with unknown orders, if the orders are estimated from the data using cross-validation, the resulting classifier has Bayes risk consistency under suitable conditions. Even when Markov models are not valid for the data, we develop methods for constructing classifiers based on linear functions of occurrences of words, where the word length is chosen by cross-validation. Such linear classifiers are constructed using ideas of support vector machines, regression depth, and distance weighted discrimination. We show that classifiers with linear class boundaries have certain optimal properties in terms of their asymptotic misclassification probabilities. The performance of these classifiers is demonstrated in various simulated and benchmark data sets.
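
    A linear class boundary defined by occurrences of words, as described above, can be sketched with a support-vector classifier on k-mer counts; the toy sequences and the fixed word length k below are purely illustrative, whereas in practice k would be chosen by cross-validation as the abstract notes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy DNA-like sequences and labels; a real application would use cross-validation
# to choose the word length k.
sequences = ["ACGTACGTAC", "ACGTTTACGA", "GGGCCCGGGC", "GGCCGGCCGG"]
labels = [0, 0, 1, 1]

k = 3  # word (k-mer) length, assumed fixed here for illustration
vectorizer = CountVectorizer(analyzer="char", ngram_range=(k, k), lowercase=False)
X = vectorizer.fit_transform(sequences)          # occurrences of words of length k

clf = LinearSVC()                                # linear class boundary in word-count space
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["ACGTACTTAC", "GGGCCGGGCC"])))
```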

  14. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
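
    A random-intercept linear mixed model of the kind compared above can be sketched with statsmodels; the data, the variable names, and the use of a simple per-subject random intercept (rather than a full GRAMMAR-style kinship structure) are all assumptions made for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical longitudinal blood-pressure data: repeated measures per subject,
# one SNP coded 0/1/2, age as a fixed-effect covariate.
rng = np.random.default_rng(1)
n_subj, n_visits = 200, 3
subj = np.repeat(np.arange(n_subj), n_visits)
snp = np.repeat(rng.integers(0, 3, n_subj), n_visits)
age = 40 + rng.normal(0, 10, n_subj * n_visits)
u = np.repeat(rng.normal(0, 5, n_subj), n_visits)          # subject random intercept
dbp = 75 + 1.5 * snp + 0.2 * age + u + rng.normal(0, 4, n_subj * n_visits)
df = pd.DataFrame({"dbp": dbp, "snp": snp, "age": age, "subject": subj})

# Random-intercept linear mixed model: fixed effects for the SNP and age,
# random intercept per subject to absorb the repeated-measures correlation.
model = smf.mixedlm("dbp ~ snp + age", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```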

  15. Mathematical Methods in Wave Propagation: Part 2--Non-Linear Wave Front Analysis

    Science.gov (United States)

    Jeffrey, Alan

    1971-01-01

    The paper presents applications and methods of analysis for non-linear hyperbolic partial differential equations. The paper is concluded by an account of wave front analysis as applied to the piston problem of gas dynamics. (JG)

  16. Linear Stability Analysis of an Acoustically Vaporized Droplet

    Science.gov (United States)

    Siddiqui, Junaid; Qamar, Adnan; Samtaney, Ravi

    2015-11-01

    Acoustic droplet vaporization (ADV) is the phase-transition phenomenon in which a superheated liquid (Dodecafluoropentane, C5F12) droplet converts to a gaseous bubble, instigated by a high-intensity acoustic pulse. This approach was first studied in imaging applications, and is applicable in several therapeutic areas such as gas embolotherapy, thrombus dissolution, and drug delivery. High-speed imaging and theoretical modeling of ADV have elucidated several physical aspects, ranging from bubble nucleation to its subsequent growth. Surface instabilities are known to exist and are considered responsible for evolving bubble shapes (non-spherical growth, bubble splitting and bubble droplet encapsulation). We present a linear stability analysis of the dynamically evolving interfaces of an acoustically vaporized micro-droplet (liquid A) in an infinite pool of a second liquid (liquid B). We propose a thermal ADV model for the base state. The linear analysis utilizes spherical harmonics (Ynm, of degree m and order n) and under various physical assumptions results in a time-dependent ODE for the perturbed interface amplitudes (one at the vapor/liquid A interface and the other at the liquid A/liquid B interface). The perturbation amplitudes are found to grow exponentially and do not depend on m. Supported by KAUST Baseline Research Funds.

  17. A fresh look at linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    Science.gov (United States)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary; we assume only a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.
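
    The impulsive response idea can be illustrated numerically: factor the characteristic polynomial, assemble the impulse response g(t) from its (assumed distinct) roots, and obtain a particular solution by convolving g with the forcing term. This is a numerical sketch of the general idea, not the authors' elementary derivation; the example ODE and forcing are arbitrary.

```python
import numpy as np

# Example ODE: y'' + 3 y' + 2 y = f(t), with f(t) = sin(t).
# Characteristic polynomial p(r) = r^2 + 3r + 2 has roots -1 and -2.
coeffs = [1.0, 3.0, 2.0]
roots = np.roots(coeffs)
p_prime = np.polyder(coeffs)

def impulse_response(t):
    # For distinct roots, g(t) = sum_j exp(r_j * t) / p'(r_j) (partial-fraction form).
    return sum(np.exp(r * t) / np.polyval(p_prime, r) for r in roots).real

def particular_solution(t, forcing, n=2000):
    # y_p(t) = integral_0^t g(t - s) f(s) ds, evaluated with the trapezoidal rule.
    s = np.linspace(0.0, t, n)
    vals = impulse_response(t - s) * forcing(s)
    return float(np.sum((vals[:-1] + vals[1:]) * np.diff(s) / 2.0))

print(particular_solution(5.0, np.sin))   # particular solution at t = 5 (zero initial data)
```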

  18. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    Science.gov (United States)

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  20. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    Science.gov (United States)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  1. Stability analysis and stabilization strategies for linear supply chains

    Science.gov (United States)

    Nagatani, Takashi; Helbing, Dirk

    2004-04-01

    Due to delays in the adaptation of production or delivery rates, supply chains can be dynamically unstable with respect to perturbations in the consumption rate, which is known as “bull-whip effect”. Here, we study several conceivable production strategies to stabilize supply chains, which is expressed by different specifications of the management function controlling the production speed in dependence of the stock levels. In particular, we will investigate, whether the reaction to stock levels of other producers or suppliers has a stabilizing effect. We will also demonstrate that the anticipation of future stock levels can stabilize the supply system, given the forecast horizon τ is long enough. To show this, we derive linear stability conditions and carry out simulations for different control strategies. The results indicate that the linear stability analysis is a helpful tool for the judgement of the stabilization effect, although unexpected deviations can occur in the non-linear regime. There are also signs of phase transitions and chaotic behavior, but this remains to be investigated more thoroughly in the future.
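
    The effect of the adaptation time on stability can be illustrated with a discrete-time sketch of a single producer whose production rate relaxes toward a stock-dependent target with delay T; all parameter values below are arbitrary and chosen only to show how a longer adaptation time amplifies the response to a small demand perturbation.

```python
import numpy as np

def simulate(adaptation_time, steps=400, dt=0.1):
    """Single-stage supply chain: stock N(t) and production rate P(t).

    dN/dt = P - C (consumption), dP/dt = (W(N) - P) / T,
    with a linear management function W(N) = P0 + k * (N_target - N).
    """
    consumption, p0, k, n_target = 1.0, 1.0, 2.0, 10.0
    stock, prod = n_target, p0
    history = []
    for step in range(steps):
        c = consumption + (0.2 if step == 50 else 0.0)   # brief perturbation in demand
        target = p0 + k * (n_target - stock)
        stock += dt * (prod - c)
        prod += dt * (target - prod) / adaptation_time
        history.append(stock)
    return np.array(history)

for T in (0.2, 2.0, 5.0):
    trace = simulate(T)
    print(f"T = {T}: stock std after perturbation = {trace[60:].std():.3f}")
```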

  2. Linear and nonlinear analysis of fluid slosh dampers

    Science.gov (United States)

    Sayar, B. A.; Baumgarten, J. R.

    1982-11-01

    A vibrating structure and a container partially filled with fluid are considered coupled in a free vibration mode. To simplify the mathematical analysis, a pendulum model to duplicate the fluid motion and a mass-spring dashpot representing the vibrating structure are used. The equations of motion are derived by Lagrange's energy approach and expressed in parametric form. For a wide range of parametric values the logarithmic decrements of the main system are calculated from theoretical and experimental response curves in the linear analysis. However, for the nonlinear analysis the theoretical and experimental response curves of the main system are compared. Theoretical predictions are justified by experimental observations with excellent agreement. It is concluded finally that for a proper selection of design parameters, containers partially filled with viscous fluids serve as good vibration dampers.

  3. Application of the weak-field asymptotic theory to the analysis of tunneling ionization of linear molecules

    DEFF Research Database (Denmark)

    Madsen, Lars Bojer; Tolstikhin, Oleg I.; Morishita, Toru

    2012-01-01

    The recently developed weak-field asymptotic theory [Phys. Rev. A 84, 053423 (2011)] is applied to the analysis of tunneling ionization of a molecular ion (H2+), several homonuclear (H2, N2, O2) and heteronuclear (CO, HF) diatomic molecules, and a linear triatomic molecule (CO2) in a static electric field. The dependence of the ionization rate on the angle between the molecular axis and the field is determined by a structure factor for the highest occupied molecular orbital. This factor is calculated using a virtually exact discrete variable representation wave function for H2+, very accurate Hartree-Fock wave functions for the diatomics, and a Hartree-Fock quantum chemistry wave function for CO2. The structure factors are expanded in terms of standard functions and the associated structure coefficients, allowing the determination of the ionization rate for any orientation of the molecule

  4. Linear operator inequalities for strongly stable weakly regular linear systems

    NARCIS (Netherlands)

    Curtain, RF

    2001-01-01

    We consider the question of the existence of solutions to certain linear operator inequalities (Lur'e equations) for strongly stable, weakly regular linear systems with generating operators A, B, C, 0. These operator inequalities are related to the spectral factorization of an associated Popov

  5. Non-linear elastic thermal stress analysis with phase changes

    International Nuclear Information System (INIS)

    Amada, S.; Yang, W.H.

    1978-01-01

    The non-linear elastic, thermal stress analysis with temperature induced phase changes in the materials is presented. An infinite plate (or body) with a circular hole (or tunnel) is subjected to a thermal loading on its inner surface. The peak temperature around the hole reaches beyond the melting point of the material. The non-linear diffusion equation is solved numerically using the finite difference method. The material properties change rapidly at temperatures where the change of crystal structures and solid-liquid transition occur. The elastic stresses induced by the transient non-homogeneous temperature distribution are calculated. The stresses change remarkably when the phase changes occur and there are residual stresses remaining in the plate after one cycle of thermal loading. (Auth.)

  6. Preoperative factors affecting cost and length of stay for isolated off-pump coronary artery bypass grafting: hierarchical linear model analysis.

    Science.gov (United States)

    Shinjo, Daisuke; Fushimi, Kiyohide

    2015-11-17

    To determine the effect of preoperative patient and hospital factors on resource use, cost and length of stay (LOS) among patients undergoing off-pump coronary artery bypass grafting (OPCAB). Observational retrospective study. Data from the Japanese Administrative Database. Patients who underwent isolated, elective OPCAB between April 2011 and March 2012. The primary outcomes of this study were inpatient cost and LOS associated with OPCAB. A two-level hierarchical linear model was used to examine the effects of patient and hospital characteristics on inpatient costs and LOS. The independent variables were patient and hospital factors. We identified 2491 patients who underwent OPCAB at 268 hospitals. The mean cost of OPCAB was $40 665 ±7774, and the mean LOS was 23.4±8.2 days. The study found that select patient factors and certain comorbidities were associated with a high cost and long LOS. A high hospital OPCAB volume was associated with a low cost (-6.6%; p=0.024) as well as a short LOS (-17.6%; p<0.001). Preoperative patient and hospital factors thus affect both the cost and LOS of OPCAB. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
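
    The two-level structure (patients nested in hospitals) can be sketched as a random-intercept model on log-transformed cost, with hospital volume entering as a hospital-level fixed effect; the data and variable names below are hypothetical, and the exponentiated coefficients are read as approximate percentage changes in cost.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: patients nested within hospitals, with a patient-level
# covariate (age) and a hospital-level covariate (annual OPCAB volume).
rng = np.random.default_rng(2)
n_hosp, n_per_hosp = 50, 20
hospital = np.repeat(np.arange(n_hosp), n_per_hosp)
volume = np.repeat(rng.integers(5, 80, n_hosp), n_per_hosp)
age = rng.normal(68, 8, n_hosp * n_per_hosp)
u_hosp = np.repeat(rng.normal(0, 0.08, n_hosp), n_per_hosp)   # hospital random intercept
log_cost = 10.6 + 0.004 * age - 0.002 * volume + u_hosp + rng.normal(0, 0.1, n_hosp * n_per_hosp)
df = pd.DataFrame({"log_cost": log_cost, "age": age, "volume": volume, "hospital": hospital})

# Two-level hierarchical linear model: fixed effects for age and hospital volume,
# random intercept for hospital.
fit = smf.mixedlm("log_cost ~ age + volume", df, groups=df["hospital"]).fit()
print(fit.summary())
# On the log scale, 100 * (exp(beta) - 1) approximates the percent change in cost.
print((np.exp(fit.fe_params) - 1).round(4))
```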

  7. Non-Linear Multi-Physics Analysis and Multi-Objective Optimization in Electroheating Applications

    Czech Academy of Sciences Publication Activity Database

    di Barba, P.; Doležel, Ivo; Mognaschi, M. E.; Savini, A.; Karban, P.

    2014-01-01

    Vol. 50, No. 2 (2014), p. 7016604. ISSN 0018-9464. Institutional support: RVO:61388998. Keywords: coupled multi-physics problems * finite element method * non-linear equations. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 1.386, year: 2014

  8. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    International Nuclear Information System (INIS)

    Xie, J; Xiao, Y; Wang, J; Peng, J; Lu, S; Hu, W

    2014-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA) on routine monthly Quality Assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for the monthly QA procedures. A detailed process tree of monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk priority number (RPN) was calculated from the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). The RPN scores are in a range of 1 to 1000, with higher scores indicating stronger correlation to a given influencing factor of a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S and D values. Results: 15 possible failure modes were identified; the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and a checklist of FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with an RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the QA efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in future
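
    The RPN bookkeeping itself is straightforward; the sketch below uses invented failure modes and O/S/D scores to show how influencing factors are ranked and flagged against the RPN > 50 threshold mentioned above.

```python
# Failure mode and effect analysis bookkeeping: RPN = O * S * D.
# The failure modes and scores below are invented for illustration.
influencing_factors = [
    # (failure mode / influencing factor, occurrence O, severity S, detectability D)
    ("Output drift not caught by daily check", 4, 6, 5),
    ("Laser misalignment",                     3, 4, 2),
    ("ODI out of tolerance",                   2, 3, 3),
    ("Jaw/light-field mismatch",               2, 5, 4),
]

scored = [(name, o * s * d) for name, o, s, d in influencing_factors]
for name, rpn in sorted(scored, key=lambda x: x[1], reverse=True):
    flag = "HIGH" if rpn > 50 else "ok"
    print(f"{rpn:4d}  {flag:4s}  {name}")
```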

  9. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  10. Feature-space-based FMRI analysis using the optimal linear transformation.

    Science.gov (United States)

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), a feature-space image analysis technique, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.

  11. The Langley Stability and Transition Analysis Code (LASTRAC) : LST, Linear and Nonlinear PSE for 2-D, Axisymmetric, and Infinite Swept Wing Boundary Layers

    Science.gov (United States)

    Chang, Chau-Lyan

    2003-01-01

    During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing in large part to NASA program support such as the National Aerospace Plane (NASP), High-speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST). Experimental, theoretical, as well as computational efforts on various issues such as receptivity and the linear and nonlinear evolution of instability waves have contributed to broadening our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition-related issues. LASTRAC was written from scratch based on state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances up to the rise in skin friction. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, non-linear breakdown simulations, and controls of stationary crossflow instability in supersonic swept wing boundary

  12. Flutter analysis of an airfoil with nonlinear damping using equivalent linearization

    Directory of Open Access Journals (Sweden)

    Chen Feixin

    2014-02-01

    The equivalent linearization method (ELM) is modified to investigate the nonlinear flutter system of an airfoil with a cubic damping. After obtaining the linearization quantity of the cubic nonlinearity by the ELM, an equivalent system can be deduced and then investigated by linear flutter analysis methods. Different from the routine procedures of the ELM, the frequency rather than the amplitude of limit cycle oscillation (LCO) is chosen as an active increment to produce bifurcation charts. Numerical examples show that this modification makes the ELM much more efficient. Meanwhile, the LCOs obtained by the ELM are in good agreement with numerical solutions. The nonlinear damping can delay the occurrence of secondary bifurcation. On the other hand, it has marginal influence on bifurcation characteristics or LCOs.

  13. Linear and nonlinear subspace analysis of hand movements during grasping.

    Science.gov (United States)

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the dimensional reductions that resulted by assessing the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of this data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Component Analysis, could perform better than any of the nonlinear techniques we applied.
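
    The algorithm-agnostic comparison described above can be approximated with scikit-learn's neighbourhood-preservation (trustworthiness) score; the synthetic data below merely stands in for motion-capture joint angles, and the choice of Isomap as the nonlinear method is an assumption made for the sketch.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, trustworthiness

# Synthetic stand-in for hand kinematics: 500 samples of 20 correlated "joint angles".
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
X = np.tanh(latent @ mixing) + 0.05 * rng.normal(size=(500, 20))  # mildly nonlinear data

for name, model in [("PCA", PCA(n_components=2)),
                    ("Isomap", Isomap(n_components=2, n_neighbors=10))]:
    Y = model.fit_transform(X)
    score = trustworthiness(X, Y, n_neighbors=10)   # local-neighbourhood preservation
    print(f"{name}: trustworthiness = {score:.3f}")
```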

  14. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aimed to compare the reliability and accuracy of stature estimation and to demonstrate the variability between estimated and actual stature using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for the estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The stature estimated from the multiplication factors and from regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in the estimation of stature from the regression analysis method is less than that of the multiplication factor method, thus confirming that the regression analysis method is better than multiplication factor analysis in stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
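
    The two estimation strategies can be compared directly: the multiplication factor is the mean ratio of stature to the body measurement, while regression fits an intercept and a slope; the sketch below uses simulated hand-length data with made-up coefficients, not the North Indian sample.

```python
import numpy as np
from scipy import stats

# Simulated stature (cm) and hand length (cm); values are illustrative only.
rng = np.random.default_rng(4)
hand_length = rng.normal(18.5, 1.0, 120)
stature = 2.0 + 9.0 * hand_length + rng.normal(0, 4.0, 120)

# Multiplication factor method: stature_hat = mean(stature / hand_length) * hand_length.
mf = np.mean(stature / hand_length)
err_mf = stature - mf * hand_length

# Linear regression method: stature_hat = intercept + slope * hand_length.
slope, intercept, *_ = stats.linregress(hand_length, stature)
err_reg = stature - (intercept + slope * hand_length)

print(f"MF method:  SEE = {err_mf.std(ddof=1):.2f} cm")
print(f"Regression: SEE = {err_reg.std(ddof=1):.2f} cm")
```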

  15. Under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear?

    Science.gov (United States)

    Ye, Jian-Sheng; Pei, Jiu-Ying; Fang, Chao

    2018-03-01

    Understanding under which climate and soil conditions the plant productivity-precipitation relationship is linear or nonlinear is useful for accurately predicting the response of ecosystem function to global environmental change. Using long-term (2000-2016) net primary productivity (NPP)-precipitation datasets derived from satellite observations, we identify >5600 pixels in the Northern Hemisphere landmass that fit either linear or nonlinear temporal NPP-precipitation relationships. Differences in climate (precipitation, radiation, ratio of actual to potential evapotranspiration, temperature) and soil factors (nitrogen, phosphorus, organic carbon, field capacity) between the linear and nonlinear types are evaluated. Our analysis shows that both linear and nonlinear types exhibit similar interannual precipitation variabilities and occurrences of extreme precipitation. Permutational multivariate analysis of variance suggests that the linear and nonlinear types differ significantly with regard to radiation, the ratio of actual to potential evapotranspiration, and soil factors. The nonlinear type possesses lower radiation and/or fewer soil nutrients than the linear type, thereby suggesting that the nonlinear type features a higher degree of limitation by resources other than precipitation. This study suggests several factors limiting the responses of plant productivity to changes in precipitation, thus causing a nonlinear NPP-precipitation pattern. Precipitation manipulation and modeling experiments should be combined with changes in other climate and soil factors to better predict the response of plant productivity under future climate. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Design and analysis approach for linear aerospike nozzle

    International Nuclear Information System (INIS)

    Khan, S.U.; Khan, A.A.; Munir, A.

    2014-01-01

    The paper presents an aerodynamic design of a simplified linear aerospike nozzle and its detailed exhaust flow analysis with no spike truncation. An analytical method with isentropic planar flow was used to generate the nozzle contour in MATLAB. The developed code produces a number of outputs comprising the nozzle wall profile, flow properties along the nozzle wall, thrust coefficient, thrust, as well as the amount of nozzle truncation. Results acquired from the design code and the numerical analyses are compared to observe differences. The numerical analysis adopted an inviscid model and was carried out with commercially available and reliable computational fluid dynamics (CFD) software. Use of the developed code would assist readers in performing quick analyses of different aerodynamic design parameters for the aerospike nozzle, which has tremendous scope for application in future launch vehicles. Keywords: Rocket propulsion, Aerospike Nozzle, Control Design, Computational Fluid Dynamics. (author)

  17. PWR control system design using advanced linear and non-linear methodologies

    International Nuclear Information System (INIS)

    Rabindran, N.; Whitmarsh-Everiss, M.J.

    2004-01-01

    Consideration is here given to the methodology deployed for non-linear heuristic analysis in the time domain supported by multi-variable linear control system design methods for the purposes of operational dynamics and control system analysis. This methodology is illustrated by the application of structural singular value μ analysis to Pressurised Water Reactor control system design. (author)

  18. A solution approach for non-linear analysis of concrete members

    International Nuclear Information System (INIS)

    Hadi, N. M.; Das, S.

    1999-01-01

    The non-linear solution of reinforced concrete structural members at and beyond their maximum strength poses complex numerical problems. This is due to the fact that concrete exhibits strain-softening behaviour once it reaches its maximum strength. This paper introduces an improved non-linear solution capable of overcoming the numerical problems efficiently. The paper also presents a new concept of modeling discrete cracks in concrete members by using gap elements. Gap elements are placed between two adjacent concrete elements in the tensile zone. The magnitude of elongation of the gap elements, which represents the width of the crack in the concrete, increases with the increase of tensile stress in those elements. As a result, the transfer of load from one concrete element to adjacent elements is reduced. Results of non-linear finite element analysis of three concrete beams using this new solution strategy are compared with those obtained by other researchers, and good agreement is achieved. (authors). 13 refs., 9 figs.

  19. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere.

    Science.gov (United States)

    Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K

    2013-12-01

    Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.

  20. Comparative study between output factors obtained in a linear accelerator used for radiosurgery treatments

    International Nuclear Information System (INIS)

    Velázquez Trejo, J.J.; Olive, K.I.; Gutiérrez Castillo, J.G.; Hardy Pérez, A.E.

    2017-01-01

    Purpose: To compare the output factors obtained in a linear accelerator with conical collimators using five detector models, through three different methods: the ratio of detector readings, the “daisy chain” technique (for diodes), and application of the k_{Qclin,Qmsr}^{fclin,fmsr} factors based on the formalism proposed by the IAEA (this last method was applied only to three detectors). Methods: A Varian iX linear accelerator was employed with BrainLab conical collimators (30 mm to 7.5 mm); the detectors used were: PTW PinPoint 31016 (×2), PTW type E 60017 (×2), PTW microLion 31018 (×2), EDGE (Sun Nuclear), and PTW Semiflex 31010. For the first three models, two detectors with different serial numbers were analyzed. The measurements were carried out in water at a depth of 1.5 cm and a source-to-surface distance of 98.5 cm. Results: With the readings-ratio method, all detectors showed differences from 3.5% to more than 15% for the smallest field sizes; for the diodes, the “daisy chain” method did not provide significant corrections. Applying the k_{Qclin,Qmsr}^{fclin,fmsr} factors, the detectors PTW 60017, PTW 31018 and EDGE showed differences of less than 3%. Conclusions: In small fields the readings-ratio method can introduce significant errors in the output factor determination. Applying the k_{Qclin,Qmsr}^{fclin,fmsr} factors proved to be a viable option.

  1. A multiple linear regression analysis of factors affecting the simulated Basic Life Support (BLS) performance with Automated External Defibrillator (AED) in Flemish lifeguards.

    Science.gov (United States)

    Iserbyt, Peter; Schouppe, Gilles; Charlier, Nathalie

    2015-04-01

    Research investigating lifeguards' performance of Basic Life Support (BLS) with Automated External Defibrillator (AED) is limited. Assessing simulated BLS/AED performance in Flemish lifeguards and identifying factors affecting this performance. Six hundred and sixteen (217 female and 399 male) certified Flemish lifeguards (aged 16-71 years) performed BLS with an AED on a Laerdal ResusciAnne manikin simulating an adult victim of drowning. Stepwise multiple linear regression analysis was conducted with BLS/AED performance as outcome variable and demographic data as explanatory variables. Mean BLS/AED performance for all lifeguards was 66.5%. Compression rate and depth adhered closely to ERC 2010 guidelines. Ventilation volume and flow rate exceeded the guidelines. A significant regression model, F(6, 415)=25.61, p<.001, ES=.38, explained 27% of the variance in BLS performance (R2=.27). Significant predictors were age (beta=-.31, p<.001), years of certification (beta=-.41, p<.001), time on duty per year (beta=-.25, p<.001), practising BLS skills (beta=.11, p=.011), and being a professional lifeguard (beta=-.13, p=.029). 71% of lifeguards reported not practising BLS/AED. Being young, recently certified, few days of employment per year, practising BLS skills and not being a professional lifeguard are factors associated with higher BLS/AED performance. Measures should be taken to prevent BLS/AED performances from decaying with age and longer certification. Refresher courses could include a formal skills test and lifeguards should be encouraged to practise their BLS/AED skills. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
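
    The reported model boils down to a multiple linear regression with standardized predictors, whose coefficients are the beta weights quoted above; the sketch below uses hypothetical lifeguard data with invented effect sizes to show how such beta weights and the model R² would be obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical lifeguard data; predictor effects are invented for the sketch.
rng = np.random.default_rng(5)
n = 600
df = pd.DataFrame({
    "age": rng.normal(30, 10, n),
    "years_certified": rng.normal(8, 5, n),
    "duty_days_per_year": rng.normal(40, 20, n),
    "practises_bls": rng.integers(0, 2, n),
})
df["bls_score"] = (70 - 0.3 * df["age"] - 0.8 * df["years_certified"]
                   - 0.1 * df["duty_days_per_year"] + 4 * df["practises_bls"]
                   + rng.normal(0, 8, n))

# Standardize everything so the fitted coefficients are beta weights.
z = (df - df.mean()) / df.std(ddof=0)
X = sm.add_constant(z.drop(columns="bls_score"))
fit = sm.OLS(z["bls_score"], X).fit()
print(fit.params.round(3))
print(f"R^2 = {fit.rsquared:.2f}")
```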

  2. Adjustment of Adaptive Gain with Bounded Linear Stability Analysis to Improve Time-Delay Margin for Metrics-Driven Adaptive Control

    Science.gov (United States)

    Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas

    2009-01-01

    This paper presents the application of Bounded Linear Stability Analysis (BLSA) method for metrics driven adaptive control. The bounded linear stability analysis method is used for analyzing stability of adaptive control models, without linearizing the adaptive laws. Metrics-driven adaptive control introduces a notion that adaptation should be driven by some stability metrics to achieve robustness. By the application of bounded linear stability analysis method the adaptive gain is adjusted during the adaptation in order to meet certain phase margin requirements. Analysis of metrics-driven adaptive control is evaluated for a linear damaged twin-engine generic transport model of aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.

  3. Three dimensional non-linear cracking analysis of prestressed concrete containment vessel

    International Nuclear Information System (INIS)

    Al-Obaid, Y.F.

    2001-01-01

    The paper gives a full development of three-dimensional cracking matrices. These matrices are simulated in a three-dimensional non-linear finite element analysis adopted for concrete containment vessels. The analysis includes a combination of conventional steel, the steel liner and prestressing tendons, and the anisotropic stress relations for concrete and concrete aggregate interlocking. The analysis is then extended and is linked to cracking analysis within the global finite element program OBAID. The analytical results compare well with those available from a model test. (author)

  4. Edmonton obesity staging system among pediatric patients: a validation and obesogenic risk factor analysis.

    Science.gov (United States)

    Grammatikopoulou, M G; Chourdakis, M; Gkiouras, K; Roumeli, P; Poulimeneas, D; Apostolidou, E; Chountalas, I; Tirodimos, I; Filippou, O; Papadakou-Lagogianni, S; Dardavessis, T

    2018-01-08

    The Edmonton Obesity Staging System for Pediatrics (EOSS-P) is a useful tool, delineating different obesity severity tiers associated with distinct treatment barriers. The aim of the study was to apply the EOSS-P to a Greek pediatric cohort and assess risk factors associated with each stage, compared to normal weight controls. A total of 361 children (2-14 years old), outpatients of an Athenian hospital, participated in this case-control study by forming two groups: the obese (n = 203) and the normoweight controls (n = 158). Anthropometry, blood pressure, blood and biochemical markers, comorbidities and obesogenic lifestyle parameters were recorded and the EOSS-P was applied. Validation of the EOSS-P stages was conducted by juxtaposing them with IOTF-defined weight status. Obesogenic risk factor analysis was conducted by constructing gender-and-age-adjusted (GA) and multivariate logistic models. The majority of obese children were stratified at stage 1 (46.0%), 17.0% were at stage 0, and 37.0% at stage 2. The validation analysis revealed that EOSS-P stages greater than 0 were associated with diastolic blood pressure and levels of glucose, cholesterol, LDL and ALT. Reduced obesity odds were observed among children playing outdoors and increased odds for every hour of screen time, both in the GA and in the multivariate analyses (all P < 0.05). Although engaging in sports more than 2 times/week was associated with reduced obesity odds in the GA analysis (OR = 0.57, 95% CI = 0.33-0.98, P linear = 0.047), it lost its significance in the multivariate analysis (P linear = 0.145). Analogous results were recorded in the analyses of the abovementioned physical activity risk factors for the EOSS-P stages. Linear relationships were observed for fast-food consumption and IOTF-defined obesity and higher-than-0 EOSS-P stages. Parental obesity status was associated with all EOSS-P stages and IOTF-defined obesity status. Few outpatients were healthy obese (stage 0), while the majority exhibited several comorbidities
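
    The gender-and-age-adjusted odds ratios above correspond to a logistic regression with obesity status as the outcome; the sketch below uses hypothetical data and invented effect sizes to show how an OR and its 95% CI for a lifestyle factor such as screen time would be computed.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-control style data (obese = 1, normoweight control = 0).
rng = np.random.default_rng(6)
n = 360
df = pd.DataFrame({
    "age": rng.uniform(2, 14, n),
    "male": rng.integers(0, 2, n),
    "screen_hours": rng.uniform(0, 6, n),
    "plays_outdoors": rng.integers(0, 2, n),
})
logit_p = -1.0 + 0.35 * df["screen_hours"] - 0.6 * df["plays_outdoors"] + 0.02 * df["age"]
df["obese"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Gender-and-age-adjusted logistic model; exponentiated coefficients are odds ratios.
fit = smf.logit("obese ~ screen_hours + plays_outdoors + age + male", df).fit(disp=0)
or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci.round(2))
```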

  5. Association between parental socio-demographic factors and declined linear growth of young children in Jakarta

    Directory of Open Access Journals (Sweden)

    Hartono Gunardi

    2018-02-01

    Background: In Indonesia, approximately 35.5% of children under five years old were stunted. Stunting is related to shorter adult stature, poor cognition and educational performance, low adult wages, lost productivity, and a higher risk of nutrition-related chronic disease. The aim of this study was to identify parental socio-demographic risk factors for declined linear growth in children younger than 2 years old. Methods: This was a prospective cohort study between August 2012 and May 2014 at three primary community health care centers (Puskesmas) in Jakarta, Indonesia, namely Puskesmas Jatinegara, Mampang, and Tebet. Subjects were healthy children under 2 years old whose weight and height were measured serially (at 6–11 weeks old and at 18–24 months old). The length-for-age based on those data was used to determine stature status. The serial measurement was done to detect the growth pattern. Parental socio-demographic data were obtained from questionnaires. Results: Of the total of 160 subjects, 14 (8.7%) showed a declined growth pattern from normal to stunted and 10 (6.2%) from normal to severely stunted. As many as 134 (83.8%) subjects showed a consistently normal growth pattern. Only 2 (1.2%) showed improvement in linear growth. Maternal education duration of less than 9 years (RR=2.60, 95% CI=1.23–5.46; p=0.02) showed a statistically significant association with declined linear growth in children. Conclusion: Maternal education duration of less than 9 years was the determining socio-demographic risk factor that contributed to declined linear growth in children less than 2 years of age.

  6. Application of perturbation theory to the non-linear vibration analysis of a string including the bending moment effects

    International Nuclear Information System (INIS)

    Esmaeilzadeh Khadem, S.; Rezaee, M.

    2001-01-01

    In this paper the large-amplitude, non-linear vibration of a string is considered. The initial tension, lateral vibration amplitude, diameter and the modulus of elasticity of the string have the main effects on its natural frequencies. Increasing the lateral vibration amplitude makes the assumption of constant initial tension invalid. In this case, therefore, it is impossible to use the classical equation of the string with the small-amplitude transverse motion assumption. On the other hand, by increasing the string diameter, the bending moment effect will increase dramatically, and acts as an impressive restoring moment. Considering the effects of the bending moments, the nonlinear equation governing the large-amplitude transverse vibration of a string is derived. The time-dependent portion of the governing equation, which has the form of the Duffing equation, is solved using perturbation theory. The results of the analysis are shown in appropriate graphs, and the natural frequencies of the string due to the non-linear factors are compared with the natural frequencies of the linear vibration of a string without bending moment effects
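
    The Duffing-type behaviour can be illustrated numerically: integrating x'' + w0^2 x + eps x^3 = 0 for increasing initial amplitude shows the amplitude-dependent frequency that first-order perturbation theory predicts, w ≈ w0 (1 + 3 eps A^2 / (8 w0^2)). The parameter values below are arbitrary and are not taken from the string model in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Duffing-type equation for the time-dependent amplitude: x'' + w0^2 x + eps x^3 = 0.
w0, eps = 2.0, 0.5

def rhs(t, y):
    x, v = y
    return [v, -(w0**2) * x - eps * x**3]

for amplitude in (0.1, 0.5, 1.0):
    sol = solve_ivp(rhs, (0, 50), [amplitude, 0.0], max_step=0.01)
    x = sol.y[0]
    # Estimate the oscillation frequency from rising zero crossings.
    crossings = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    period = np.mean(np.diff(sol.t[crossings]))
    omega_num = 2 * np.pi / period
    omega_pert = w0 * (1 + 3 * eps * amplitude**2 / (8 * w0**2))
    print(f"A = {amplitude}: numeric w = {omega_num:.4f}, perturbation w = {omega_pert:.4f}")
```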

  7. Design Analysis of Taper Width Variations in Magnetless Linear Machine for Traction Applications

    Directory of Open Access Journals (Sweden)

    Saadha Aminath

    2018-01-01

    Linear motors are used in a variety of applications and are hugely popular in the transport industry. With the invention of maglev trains and other high-speed trains, linear motors are being used for the translation and braking applications of these systems. However, major drawbacks of the linear motor design are the cogging force, low thrust values, and voltage ripples. This paper aims to study the force as a function of changes in the taper/teeth width of the motor stator and mover, in order to understand the best teeth ratio for obtaining a high flux density and a high thrust. The analysis is conducted through the JMAG software and it is found that the optimum teeth ratio for both the stator and mover gives an increase of 94.4% compared to the 0.5 mm stator and mover tooth width.

  8. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    O' Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    2017-05-01

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.

  9. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance.

    Science.gov (United States)

    O'Daniel, Jennifer C; Yin, Fang-Fang

    2017-05-01

    To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Near-infrared reflectance analysis by Gauss-Jordan linear algebra

    International Nuclear Information System (INIS)

    Honigs, D.E.; Freelin, J.M.; Hieftje, G.M.; Hirschfeld, T.B.

    1983-01-01

    Near-infrared reflectance analysis is an analytical technique that uses the near-infrared diffuse reflectance of a sample at several discrete wavelengths to predict the concentration of one or more of the chemical species in that sample. However, because near-infrared bands from solid samples are both abundant and broad, the reflectance at a given wavelength usually contains contributions from several sample components, requiring extensive calculations on overlapped bands. In the present study, these calculations have been performed using an approach similar to that employed in multi-component spectrophotometry, but with Gauss-Jordan linear algebra serving as the computational vehicle. Using this approach, correlations for percent protein in wheat flour and percent benzene in hydrocarbons have been obtained and are evaluated. The advantages of a linear-algebra approach over the common one employing stepwise regression are explored
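
    The calibration step amounts to solving a small linear system relating the reflectance-derived readings at discrete wavelengths to the component concentrations; the sketch below implements a literal Gauss-Jordan elimination on an invented three-wavelength, three-component calibration.

```python
import numpy as np

def gauss_jordan_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))   # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]
        M[col] = M[col] / M[col, col]
        for row in range(n):
            if row != col:
                M[row] = M[row] - M[row, col] * M[col]
    return M[:, -1]

# Invented calibration matrix: rows = wavelengths, columns = components; entry (i, j)
# is the response of component j at wavelength i.
K = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.60]])
baseline = np.array([0.02, 0.03, 0.01])
measured = np.array([0.45, 0.38, 0.29])   # readings at the three wavelengths for one sample

concentrations = gauss_jordan_solve(K, measured - baseline)
print(concentrations.round(3))
```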

  11. Hybrid System Modeling and Full Cycle Operation Analysis of a Two-Stroke Free-Piston Linear Generator

    Directory of Open Access Journals (Sweden)

    Peng Sun

    2017-02-01

    Full Text Available Free-piston linear generators (FPLGs) have attractive application prospects for hybrid electric vehicles (HEVs) owing to their high-efficiency, low-emissions and multi-fuel flexibility. In order to achieve long-term stable operation, the hybrid system design and full-cycle operation strategy are essential factors that should be considered. A 25 kW FPLG consisting of an internal combustion engine (ICE), a linear electric machine (LEM) and a gas spring (GS) is designed. To improve the power density and generating efficiency, the LEM is assembled with two modular flat-type double-sided PM LEM units, which sandwich a common moving-magnet plate supported by a middle keel beam and bilateral slide guide rails to enhance the stiffness of the moving plate. For the convenience of operation processes analysis, the coupling hybrid system is modeled mathematically and a full cycle simulation model is established. Top-level systemic control strategies including the starting, stable operating, fault recovering and stopping strategies are analyzed and discussed. The analysis results validate that the system can run stably and robustly with the proposed full cycle operation strategy. The effective electric output power can reach 26.36 kW with an overall system efficiency of 36.32%.

  12. Linear and non-linear energy barriers in systems of interacting single-domain ferromagnetic particles

    International Nuclear Information System (INIS)

    Petrila, Iulian; Bodale, Ilie; Rotarescu, Cristian; Stancu, Alexandru

    2011-01-01

    A comparative analysis between linear and non-linear energy barriers used for modeling statistical thermally-excited ferromagnetic systems is presented. The linear energy barrier is obtained from new symmetry considerations about the anisotropy energy, and its link with the non-linear energy barrier is also presented. For a relevant analysis we compare the effects of linear and non-linear energy barriers implemented in two different models: Preisach-Neel and Ising-Metropolis. The differences between the energy barriers, which are reflected in different temperature dependences of the coercive field, are also presented. -- Highlights: → The linear energy barrier is obtained from symmetry considerations. → The linear and non-linear energy barriers are calibrated and implemented in Preisach-Neel and Ising-Metropolis models. → The temperature and time effects of the linear and non-linear energy barriers are analyzed.

  13. Airfoil stall interpreted through linear stability analysis

    Science.gov (United States)

    Busquet, Denis; Juniper, Matthew; Richez, Francois; Marquet, Olivier; Sipp, Denis

    2017-11-01

    Although airfoil stall has been widely investigated, the origin of this phenomenon, which manifests as a sudden drop of lift, is still not clearly understood. In the specific case of static stall, multiple steady solutions have been identified experimentally and numerically around the stall angle. We are interested here in investigating the stability of these steady solutions so as to first model and then control the dynamics. The study is performed on a 2D helicopter blade airfoil OA209 at low Mach number, M = 0.2, and high Reynolds number, Re = 1.8 × 10^6. Steady RANS computation using a Spalart-Allmaras model is coupled with continuation methods (pseudo-arclength and Newton's method) to obtain steady states for several angles of incidence. The results show one upper branch (high lift) and one lower branch (low lift) connected by a middle branch, characterizing a hysteresis phenomenon. A linear stability analysis performed around these equilibrium states highlights a mode responsible for stall, which starts with a low-frequency oscillation. A bifurcation scenario is deduced from the behaviour of this mode. To shed light on the nonlinear behavior, a low-order nonlinear model is created with the same linear stability behavior as that observed for the airfoil.
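
    A toy example may help convey the linear-stability idea without the RANS machinery: the scalar model dx/dt = mu + x - x^3 has an S-shaped equilibrium branch with hysteresis, and the sign of the eigenvalue of its linearization decides stability, in rough analogy with the upper, middle and lower lift branches described above. This is an illustrative analogy only, not the authors' model.

        import numpy as np

        # Toy illustration of linear stability analysis on a hysteretic system:
        # dx/dt = mu + x - x**3 has up to three equilibria over a range of mu.
        # Stability follows from the sign of f'(x*) = 1 - 3 x*^2 (the eigenvalue
        # of the one-dimensional linearization).
        def equilibria(mu):
            roots = np.roots([-1.0, 0.0, 1.0, mu])        # -x^3 + x + mu = 0
            return sorted(r.real for r in roots if abs(r.imag) < 1e-6)

        for mu in (-0.3, 0.0, 0.3):
            for x_star in equilibria(mu):
                growth_rate = 1.0 - 3.0 * x_star**2        # eigenvalue of the linearization
                label = "stable" if growth_rate < 0 else "unstable"
                print(f"mu={mu:+.1f}  x*={x_star:+.3f}  eigenvalue={growth_rate:+.3f}  ({label})")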

  14. Mathematical modelling and linear stability analysis of laser fusion cutting

    International Nuclear Information System (INIS)

    Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg; Thombansen, Ulrich

    2016-01-01

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the amount of dynamic behavior of the process.

  15. Application of range-test in multiple linear regression analysis in ...

    African Journals Online (AJOL)

    Application of range-test in multiple linear regression analysis in the presence of outliers is studied in this paper. First, the plot of the explanatory variables (i.e. Administration, Social/Commercial, Economic services and Transfer) on the dependent variable (i.e. GDP) was done to identify the statistical trend over the years.

  16. Mathematical modelling and linear stability analysis of laser fusion cutting

    Energy Technology Data Exchange (ETDEWEB)

    Hermanns, Torsten; Schulz, Wolfgang [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Vossen, Georg [Niederrhein University of Applied Sciences, Chair for Applied Mathematics and Numerical Simulations, Reinarzstr. 49, 47805 Krefeld (Germany); Thombansen, Ulrich [RWTH Aachen University, Chair for Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)

    2016-06-08

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so-called stability function that describes the correlation between the setting values of the process and the amount of dynamic behavior of the process.

  17. Analysis of an inventory model for both linearly decreasing demand and holding cost

    Science.gov (United States)

    Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.

    2016-03-01

    This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform over time because of variations in the time value of money; here we consider a holding cost that decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example, and a sensitivity analysis is also included.

  18. A SOCIOLOGICAL ANALYSIS OF THE CHILDBEARING COEFFICIENT IN THE ALTAI REGION BASED ON METHOD OF FUZZY LINEAR REGRESSION

    Directory of Open Access Journals (Sweden)

    Sergei Vladimirovich Varaksin

    2017-06-01

    Full Text Available Purpose. Construction of a mathematical model of the dynamics of childbearing change in the Altai region in 2000–2016, and analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of the dynamics of childbearing using the fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were determined using an algorithm built in MatLab. The method of fuzzy linear regression has not yet been used in sociological research. Results. Conclusions are drawn about the socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the method of fuzzy linear regression for sociological analysis.

  19. Stability analysis of linear switching systems with time delays

    International Nuclear Information System (INIS)

    Li Ping; Zhong Shouming; Cui Jinzhong

    2009-01-01

    The issue of stability analysis of linear switching systems with discrete and distributed time delays is studied in this paper. An appropriate switching rule is applied to guarantee the stability of the whole switching system. The results use a Riccati-type Lyapunov functional under a condition on the time delay, so switching systems with mixed delays are covered. A numerical example is given to illustrate the effectiveness of the results.

  20. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  1. Weighted functional linear regression models for gene-based association analysis.

    Science.gov (United States)

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
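
    The weighting idea can be sketched briefly: each variant receives a weight computed from its minor allele frequency through a beta density, so that rarer variants contribute more. The Beta(1, 25) parameters below are a common default borrowed from kernel-based tests and are only an assumption here, not necessarily the choice made in FREGAT.

        import numpy as np
        from scipy.stats import beta

        # Sketch of allele-frequency-based variant weights via the beta distribution,
        # used to up-weight rare (putatively more informative) variants before a
        # weighted gene-based test. Beta(1, 25) is an assumed default, and the MAFs
        # below are illustrative values only.
        maf = np.array([0.001, 0.005, 0.02, 0.10, 0.30])   # minor allele frequencies
        weights = beta.pdf(maf, 1, 25)                      # larger weight for rarer variants

        for f, w in zip(maf, weights):
            print(f"MAF={f:.3f}  weight={w:6.2f}")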

  2. A linear programming manual

    Science.gov (United States)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
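
    A small worked example of the kind of problem such a manual covers is sketched below: a two-variable profit maximization solved with SciPy. The dual values (shadow prices) reported by the solver connect to the reduced cost analysis mentioned above; the ineqlin.marginals attribute assumes a recent SciPy release with the default HiGHS backend.

        from scipy.optimize import linprog

        # Maximize 3x + 5y subject to resource constraints (linprog minimizes, so the
        # objective is negated). Data are a textbook-style illustration.
        c = [-3.0, -5.0]                 # negated profit coefficients
        A_ub = [[1.0, 0.0],              # resource 1: x        <= 4
                [0.0, 2.0],              # resource 2: 2y       <= 12
                [3.0, 2.0]]              # resource 3: 3x + 2y  <= 18
        b_ub = [4.0, 12.0, 18.0]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print("optimal x, y :", res.x)                  # expected (2, 6) for this data
        print("max profit   :", -res.fun)               # 36
        print("shadow prices:", res.ineqlin.marginals)  # dual values (assumes HiGHS backend)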

  3. Analysis of factors important for the occurrence of Campylobacter in Danish broiler flocks

    DEFF Research Database (Denmark)

    Sommer, Helle Mølgaard; Heuer, Ole Eske; Sørensen, Anna Irene Vedel

    2013-01-01

    a multivariate analysis including all 43 variables. A multivariate analysis was conducted using a generalized linear model, and the correlations between the houses from the same farms were accounted for by adding a variance structure to the model. The procedures for analyses included backward elimination...... of positive flocks/total number of flocks delivered over the 2-year period).The following factors were found to be significantly associated with the occurrence of Campylobacter in the broiler flocks: old broiler houses, late introduction of whole wheat in the feed, relatively high broiler age at slaughter...

  4. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Science.gov (United States)

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
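
    For readers who want the arithmetic behind ordinary least products (Model II) regression, a minimal sketch follows: the slope is sign(r) multiplied by the ratio of the standard deviations, and the intercept follows from the means. Bootstrap confidence intervals, as provided by smatr, are omitted; the data are illustrative only.

        import numpy as np

        # Minimal sketch of ordinary least products (Model II) regression.
        def olp_regression(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            r = np.corrcoef(x, y)[0, 1]
            slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)   # sign(r) * sd(y)/sd(x)
            intercept = y.mean() - slope * x.mean()
            return slope, intercept

        x = [1.0, 2.1, 3.0, 3.9, 5.2, 6.1]      # both variables measured with error
        y = [2.3, 4.1, 5.8, 8.4, 10.1, 12.6]

        b, a = olp_regression(x, y)
        print(f"OLP slope = {b:.3f}, intercept = {a:.3f}")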

  5. On macroeconomic values investigation using fuzzy linear regression analysis

    Directory of Open Access Journals (Sweden)

    Richard Pospíšil

    2017-06-01

    Full Text Available The theoretical background for abstract formalization of the vague phenomenon of complex systems is the fuzzy set theory. In the paper, vague data is defined as specialized fuzzy sets - fuzzy numbers and there is described a fuzzy linear regression model as a fuzzy function with fuzzy numbers as vague parameters. To identify the fuzzy coefficients of the model, the genetic algorithm is used. The linear approximation of the vague function together with its possibility area is analytically and graphically expressed. A suitable application is performed in the tasks of the time series fuzzy regression analysis. The time-trend and seasonal cycles including their possibility areas are calculated and expressed. The examples are presented from the economy field, namely the time-development of unemployment, agricultural production and construction respectively between 2009 and 2011 in the Czech Republic. The results are shown in the form of the fuzzy regression models of variables of time series. For the period 2009-2011, the analysis assumptions about seasonal behaviour of variables and the relationship between them were confirmed; in 2010, the system behaved fuzzier and the relationships between the variables were vaguer, that has a lot of causes, from the different elasticity of demand, through state interventions to globalization and transnational impacts.

  6. Cryptanalysis of DES with a reduced number of rounds: Sequences of linear factors in block ciphers

    NARCIS (Netherlands)

    D. Chaum (David); J.-H. Evertse (Jan-Hendrik)

    1985-01-01

    A blockcipher is said to have a linear factor if, for all plaintexts and keys, there is a fixed non-empty set of key bits whose simultaneous complementation leaves the exclusive-or sum of a fixed non-empty set of ciphertext bits unchanged.

  7. Estimation of the behavior factor of existing RC-MRF buildings

    Science.gov (United States)

    Vona, Marco; Mastroberti, Monica

    2018-01-01

    In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, regarding existing buildings, it should be highlighted that due to the low knowledge level, the linear elastic analysis is the only analysis method allowed. The same codes (such as NTC2008, EC8) consider the linear dynamic analysis with behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor or q factor in some codes) is used to reduce the elastic spectrum ordinate or the forces obtained from a linear analysis in order to take into account the non-linear structural capacities. The behavior factors should be defined based on several parameters that influence the seismic nonlinear capacity, such as mechanical materials characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, some investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with force based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.

  8. Noise analysis of fluid-valve system in a linear compressor using CAE

    International Nuclear Information System (INIS)

    Lee, Jun Ho; Jeong, Weui Bong; Kim, Dang Ju

    2009-01-01

    A linear compressor in a refrigerator uses piston motion to transfer refrigerant, so its efficiency is higher than that of a conventional reciprocating compressor. Because of the interaction between the refrigerant and the valve system in the linear compressor, however, noise has been a main issue. Despite many experimental studies, there has been no way to reliably predict this noise. To overcome this limitation, CAE analysis is applied, and to lend credibility to the computational results, all of the data are validated experimentally.

  9. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still the question of how these methods perform in within-subjects P-technique factor analysis. A…
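
    One widely used criterion for the number of factors, Horn's parallel analysis, can be sketched compactly: retain factors whose eigenvalues exceed those obtained from random data of the same dimensions. The snippet below uses random data purely for illustration; in a P-technique setting the rows would be repeated measurement occasions from a single subject.

        import numpy as np

        # Sketch of Horn's parallel analysis as one criterion for the number of factors.
        rng = np.random.default_rng(0)

        def eigenvalues(data):
            # Eigenvalues of the correlation matrix, sorted in decreasing order.
            return np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

        def parallel_analysis(data, n_sim=200):
            n, p = data.shape
            sim = np.array([eigenvalues(rng.standard_normal((n, p))) for _ in range(n_sim)])
            return eigenvalues(data), sim.mean(axis=0)

        data = rng.standard_normal((120, 8))              # 120 occasions, 8 variables (illustrative)
        observed, reference = parallel_analysis(data)
        n_factors = int(np.sum(observed > reference))     # keep factors beating the random baseline
        print("suggested number of factors:", n_factors)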

  10. Factor Analysis and Modelling for Rapid Quality Assessment of Croatian Wheat Cultivars with Different Gluten Characteristics

    Directory of Open Access Journals (Sweden)

    Želimir Kurtanjek

    2008-01-01

    Full Text Available Factor analysis and multivariate chemometric modelling for rapid assessment of baking quality of wheat cultivars from Slavonia region, Croatia, have been applied. The cultivars Žitarka, Kata, Monika, Ana, Demetra, Divana and Sana were grown under controlled conditions at the experimental field of Agricultural Institute Osijek during three years (2000–2002). Their quality properties were evaluated by 45 different chemical, physical and biochemical variables. The measured variables were grouped as: indirect quality parameters (6), farinographic parameters (7), extensographic parameters (5), baking test parameters (2) and reversed phase-high performance liquid chromatography (RP-HPLC) of gluten proteins (25). The aim of this study is to establish a minimal number (three, i.e. principal factors) among the 45 variables and to derive multivariate linear regression models for their use in simple and fast prediction of wheat properties. Selection of the principal factors based on the principal component analysis (PCA) has been applied. The first three main factors of the analysis include: total glutenins (TGT), total ω-gliadins (Tω-) and the ratio of dough resistance/extensibility (R/Ext). These factors account for 76.45 % of the total variance. Linear regression models gave average regression coefficients (R) evaluated for the parameter groups: indirect quality R=0.91, baking test R=0.63, farinographic R=0.78, extensographic R=0.95 and RP-HPLC of gluten data R=0.90. Errors in the model predictions were evaluated by the 95 % significance intervals of the calibration lines. Practical applications of the models for rapid quality assessment and laboratory experiment planning were emphasized.

  11. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    Full Text Available The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.

  12. Rotordynamic analysis for stepped-labyrinth gas seals using Moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

    The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high performance compressors, gas turbines, and steam turbines. The bulk-flow is assumed for a single cavity control volume set up in a stepped labyrinth cavity and the flow is assumed to be completely turbulent in the circumferential direction. The Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows a good qualitative agreement of leakage characteristics with Scharrer's analysis, but underpredicts by about 20 %. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values compared with Scharrer's analysis
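
    For orientation, the explicit Moody approximation to the Darcy friction factor, the kind of wall-friction relation referenced above, can be written and evaluated as follows; the Reynolds numbers and relative roughness are illustrative values, not the seal geometry of the study.

        # Sketch of Moody's explicit friction-factor approximation:
        #   f = 0.0055 * [1 + (2e4 * (e/D) + 1e6 / Re) ** (1/3)]
        # (valid roughly for 4e3 < Re < 1e8 and relative roughness e/D < 0.01).
        def moody_friction_factor(reynolds, rel_roughness):
            return 0.0055 * (1.0 + (2.0e4 * rel_roughness + 1.0e6 / reynolds) ** (1.0 / 3.0))

        for re in (1.0e4, 1.0e5, 1.0e6):
            print(f"Re = {re:.0e}  f = {moody_friction_factor(re, 1e-4):.4f}")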

  13. Effect of Genetic and Environmental Factors on Linear Udder ...

    African Journals Online (AJOL)

    The effects of evaluators, sex of calf, breed, sire, parity, month of calving and season of lactation on linear udder conformation traits and milk yield was investigated in the dairy herd of the National Animal Production Research Institute, Shika, Zaria, Nigeria. Seven linear udder conformation traits coupled with milk yield of 25 ...

  14. Fourier two-level analysis for discontinuous Galerkin discretization with linear elements

    NARCIS (Netherlands)

    P.W. Hemker (Piet); W. Hoffmann; M.H. van Raalte (Marc)

    2002-01-01

    In this paper we study the convergence of a multigrid method for the solution of a linear second order elliptic equation, discretized by discontinuous Galerkin (DG) methods, and we give a detailed analysis of the convergence for different block-relaxation strategies. In addition to an

  15. Analysis about factors affecting the degree of damage of buildings in earthquake

    International Nuclear Information System (INIS)

    Jia, Jing; Yan, Jinghong

    2015-01-01

    Earthquakes have affected human safety throughout history. Previous studies on earthquakes mostly focused on the performance of buildings or on evaluating damage. This paper, however, compares the different factors that influence building damage using a case study of the Wenchuan earthquake and multiple linear regression methodology, so as to identify to what extent these factors influence building damage and then rank the factors by importance. In this process, the authors take the type of structure as a dummy variable to compare the degree of damage caused by different structure types, which has barely been studied before. In addition, Factor Analysis Methodology (FA) is adopted to classify the factors, the results of which simplify the later analysis. The outcome of this study could make a substantial difference in optimizing seismic design and improving residential seismic quality. (paper)

  16. Econometrics analysis of consumer behaviour: a linear expenditure system applied to energy

    International Nuclear Information System (INIS)

    Giansante, C.; Ferrari, V.

    1996-12-01

    In the economics literature the expenditure system specification is a well-known subject. The problem is to define a coherent representation of consumer behaviour through functional forms that are easy to calculate. In this work the Stone-Geary Linear Expenditure System and its multi-level decision process version are used. The Linear Expenditure System is characterized by an easily computed estimation procedure, and its multi-level specification allows substitution and complementarity relations between goods. Moreover, the separability condition on the utility function, on which the Utility Tree Approach is based, justifies the use of an estimation procedure in two or more steps. This allows a higher degree of disaggregation of expenditure categories than is possible with the basic Linear Expenditure System. The analysis is applied to energy sectors
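
    For reference, the Stone-Geary system underlying such an analysis is usually written as follows (standard textbook form and notation, not reproduced from the paper itself):

        U(x) = \sum_i \beta_i \ln(x_i - \gamma_i), \qquad \sum_i \beta_i = 1, \quad x_i > \gamma_i,

        p_i x_i = p_i \gamma_i + \beta_i \Big( y - \sum_j p_j \gamma_j \Big),

    where \gamma_i is the committed (subsistence) quantity of good i, \beta_i its marginal budget share and y total expenditure; expenditure on each good is linear in income and prices, which is what keeps estimation simple and allows the multi-level (utility tree) extension.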

  17. Linear stability analysis of the gas injection augmented natural circulation of STAR-LM

    International Nuclear Information System (INIS)

    Yeon-Jong Yoo; Qiao Wu; James J Sienicki

    2005-01-01

    Full text of publication follows: A linear stability analysis has been performed for the gas injection augmented natural circulation of the Secure Transportable Autonomous Reactor - Liquid Metal (STAR-LM). Natural circulation is of great interest for the development of Generation-IV nuclear energy systems due to its vital role in the area of passive safety and reliability. One of such systems is STAR-LM under development by Argonne National Laboratory. STAR-LM is a 400 MWt class modular, proliferation-resistant, and passively safe liquid metal-cooled fast reactor system that uses inert lead (Pb) coolant and the advanced power conversion system that consists of a gas turbine Brayton cycle utilizing supercritical carbon dioxide (CO 2 ) to obtain higher plant efficiency. The primary loop of STAR-LM relies only on the natural circulation to eliminate the use of circulation pumps for passive safety consideration. To enhance the natural circulation of the primary coolant, STAR-LM optionally incorporates the additional driving force provided by the injection of noncondensable gas into the primary coolant above the reactor core, which is effective in removing heat from the core and transferring it to the secondary working fluid without the attainment of excessive coolant temperature at nominal operating power. Therefore, it naturally raises the concern about the natural circulation instability due to the relatively high temperature change in the core and the two-phase flow condition in the hot leg above the core. For the ease of analysis, the flow path of the loop was partitioned into five thermal-hydraulically distinct sections, i.e., heated core, unheated core, hot leg, heat exchanger, and cold leg. The one-dimensional single-phase flow field equations governing the natural circulation, i.e., continuity, momentum, and energy equations, were used for each section except the hot leg. For the hot leg, the one-dimensional homogeneous equilibrium two-phase flow field

  18. Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group

    2004-07-01

    Due to the occurrence of large earthquakes such as the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Problems exist in using nonlinear analysis to evaluate the safety of dams: the assumed material properties strongly influence the results, and the results differ greatly depending on the damage estimation models or analysis programs used. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of crack progression in concrete gravity dams of different shapes obtained with a nonlinear dynamic analysis method. The study concludes that if a simple linear dynamic analysis is appropriately conducted to estimate the tensile stress at potential crack initiation locations, the damage due to cracks can be predicted approximately. 4 refs., 1 tab., 13 figs.

  19. Some mathematical problems in non-linear Physics

    International Nuclear Information System (INIS)

    1983-01-01

    The main results contained in this report are the following: I) A general analysis of non-autonomous conserved densities for simple linear evolution systems. II) Partial differential systems within a wide class are converted into Lagrangian form. III) Rigorous criteria for the existence of integrating factor matrices. IV) Isolation of all third-order evolution equations with high order symmetries and conservation laws. (Author) 3 refs

  20. Micosoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation

    Science.gov (United States)

    Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...

  1. Painlevé analysis and integrability of two-coupled non-linear ...

    Indian Academy of Sciences (India)

    the Painlevé property. In this case the system is expected to be integrable. In recent years more attention is paid to the study of coupled non-linear oscilla- ... Painlevé analysis. To be self-contained, in §2 we briefly outline the salient features.

  2. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    Science.gov (United States)

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.

  3. Z-score linear discriminant analysis for EEG based brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    Full Text Available Linear discriminant analysis (LDA) is one of the most popular classification algorithms for brain-computer interfaces (BCI). LDA assumes a Gaussian distribution of the data, with equal covariance matrices for the classes concerned; however, this assumption does not usually hold in actual BCI applications, where heteroscedastic class distributions are usually observed. This paper proposes an enhanced version of LDA, namely z-score linear discriminant analysis (Z-LDA), which introduces a new decision boundary definition strategy to handle heteroscedastic class distributions. Z-LDA defines the decision boundary through a z-score utilizing both the mean and the standard deviation of the projected data, which can adaptively adjust the decision boundary to fit heteroscedastic distributions. Results derived from both a simulation dataset and two actual BCI datasets consistently show that Z-LDA achieves significantly higher average classification accuracies than conventional LDA, indicating the superiority of the newly proposed decision boundary definition strategy.
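
    The decision rule described above can be sketched in a few lines: project the data onto a Fisher discriminant direction, then assign a sample to the class whose projected distribution yields the smallest absolute z-score, so that class-specific standard deviations shift the boundary under heteroscedasticity. This is a schematic reconstruction of the idea on synthetic data, not the authors' implementation.

        import numpy as np

        # Sketch of a z-score LDA decision rule on synthetic, heteroscedastic classes.
        rng = np.random.default_rng(1)
        X0 = rng.normal(0.0, 1.0, size=(200, 2))          # class 0: small variance
        X1 = rng.normal(2.5, 3.0, size=(200, 2))          # class 1: large variance

        # Fisher direction from the pooled within-class covariance.
        Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

        stats = []
        for Xc in (X0, X1):
            proj = Xc @ w
            stats.append((proj.mean(), proj.std(ddof=1)))  # class-wise mean and std

        def z_lda_predict(x):
            z = [abs((x @ w - m) / s) for m, s in stats]
            return int(np.argmin(z))                       # class with the smallest |z|

        print(z_lda_predict(np.array([0.5, 0.5])))   # expected 0
        print(z_lda_predict(np.array([4.0, 4.0])))   # expected 1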

  4. [Relations between biomedical variables: mathematical analysis or linear algebra?].

    Science.gov (United States)

    Hucher, M; Berlie, J; Brunet, M

    1977-01-01

    After a short reminder of the structure of a model, the authors stress the two possible approaches to the relations linking the variables of such a model: the use of functions, which falls within the sphere of mathematical analysis, and the use of linear algebra, which benefits from the development and automation of matrix computation. They specify the respective advantages of these methods, their limits and the requirements for their use, according to the kind of variables and data and the objective of the work: understanding phenomena or supporting decisions.

  5. Study on non-linear bistable dynamics model based EEG signal discrimination analysis method.

    Science.gov (United States)

    Ying, Xiaoguo; Lin, Han; Hui, Guohua

    2015-01-01

    Electroencephalogram (EEG) is the recording of electrical activity along the scalp. EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. The EEG signal is regarded as one of the most important signals to be studied over the next 20 years. In this paper, EEG signal discrimination based on a non-linear bistable dynamical model is proposed. EEG signals were processed by the non-linear bistable dynamical model, and their features were characterized by a coherence index. Experimental results showed that the proposed method could properly extract the features of different EEG signals.

  6. Linear transform of the multi-target survival curve

    Energy Technology Data Exchange (ETDEWEB)

    Watson, J V [Cambridge Univ. (UK). Dept. of Clinical Oncology and Radiotherapeutics

    1978-07-01

    A completely linear transform of the multi-target survival curve is presented. This enables all data, including those on the shoulder region of the curve, to be analysed. The necessity to make a subjective assessment about which data points to exclude for conventional methods of analysis is, therefore, removed. The analysis has also been adapted to include a 'Pike-Alper' method of assessing dose modification factors. For the data cited this predicts compatibility with the hypothesis of a true oxygen 'dose-modification' whereas the conventional Pike-Alper analysis does not.
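
    Assuming the standard multi-target model, the linearization referred to above follows algebraically (the paper's exact transform may differ in detail):

        S(D) = 1 - \left(1 - e^{-D/D_0}\right)^{n}
        \;\;\Longrightarrow\;\;
        -\ln\!\left[1 - \left(1 - S\right)^{1/n}\right] = \frac{D}{D_0},

    so that plotting the left-hand side against dose D gives a straight line through the origin with slope 1/D_0, and shoulder-region points can be included in the fit rather than discarded.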

  7. Simulation and sensitivity analysis for heavy linear paraffins production in LAB production Plant

    Directory of Open Access Journals (Sweden)

    Karimi Hajir

    2014-12-01

    Full Text Available Linear alkyl benzene (LAB) is vastly utilized for the production of biodegradable detergents and emulsifiers. The predistillation unit is the part of the LAB production plant that produces heavy linear paraffins (nC10–nC13). In this study, a mathematical model has been developed for heavy linear paraffins production in distillation columns and solved using a commercial code. The models have been validated against actual plant data. The effects of process parameters such as reflux rate and reflux temperature have been investigated using the Gradient Search technique. The sensitivity analysis shows how the optimum reflux in the columns is achieved.

  8. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a

  9. Influence of plant root morphology and tissue composition on phenanthrene uptake: Stepwise multiple linear regression analysis

    International Nuclear Information System (INIS)

    Zhan, Xinhua; Liang, Xiao; Xu, Guohua; Zhou, Lixiang

    2013-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are contaminants that reside mainly in surface soils. Dietary intake of plant-based foods can make a major contribution to total PAH exposure. Little information is available on the relationship between root morphology and plant uptake of PAHs. An understanding of plant root morphologic and compositional factors that affect root uptake of contaminants is important and can inform both agricultural (chemical contamination of crops) and engineering (phytoremediation) applications. Five crop plant species are grown hydroponically in solutions containing the PAH phenanthrene. Measurements are taken for 1) phenanthrene uptake, 2) root morphology – specific surface area, volume, surface area, tip number and total root length and 3) root tissue composition – water, lipid, protein and carbohydrate content. These factors are compared through Pearson's correlation and multiple linear regression analysis. The major factors which promote phenanthrene uptake are specific surface area and lipid content. -- Highlights: •There is no correlation between phenanthrene uptake and total root length, and water. •Specific surface area and lipid are the most crucial factors for phenanthrene uptake. •The contribution of specific surface area is greater than that of lipid. -- The contribution of specific surface area is greater than that of lipid in the two most important root morphological and compositional factors affecting phenanthrene uptake

  10. Quantitative analysis of results of quality control tests in linear accelerators used in radiotherapy; Analise quantitativa dos resultados de testes de controle de qualidade em aceleradores lineares usados em radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Passaro, Bruno M.; Rodrigues, Laura N. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Videira, Heber S., E-mail: bruno.passaro@gmail.com [Universidade de Sao Paulo (HCFMRP/USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Hospital das Clinicas

    2013-04-15

    The aim of this study is to assess and analyze the stability of the calibration factor of three linear accelerators, as well as the other dosimetric parameters normally included in a program of quality control in radiotherapy. The average calibration factors over a period of approximately four years were (0.998±0.012) for the Clinac 600C and (0.996±0.014) for the Clinac 6EX. For the Clinac 2100CD, they were (1.008±0.009) at 6 MV and (1.006±0.010) at 15 MV over a similar four-year period. The calibration factor data were divided into four subgroups for a more detailed analysis of behavior over the years. Through statistical analysis of the calibration factors, we found that for the Clinac 600C and Clinac 2100CD the values are expected to lie within the acceptable ranges of TG-142 in more than 90% of cases, while for the Clinac 6EX about 85% is expected, since this accelerator had several component exchanges. The values of TPR20,10 of the three accelerators are practically constant and within acceptable limits according to TG-142. It can be concluded that a detailed quantitative study of the accelerator calibration factors and TPR20,10 is extremely useful in a quality assurance program. (author)

  11. PATH ANALYSIS OF RECORDING SYSTEM INNOVATION FACTORS AFFECTING ADOPTION OF GOAT FARMERS

    Directory of Open Access Journals (Sweden)

    S. Okkyla

    2014-09-01

    Full Text Available The objective of this study was to evaluate the path analysis of recording system innovation factors affecting adoption by goat farmers. The study was conducted from January to February 2014 in Pringapus District, Semarang Regency, using a survey method. The location was determined by purposive sampling, and the number of respondents was determined by quota sampling. A total of 146 farmers were randomly chosen as respondents. The data were analyzed descriptively and quantitatively using path analysis in the Statistical Package for the Social Sciences (SPSS 16). The independent variables in this study were internal factors, motivation, innovation characteristics and information source, and the dependent variable was adoption. Linear regression analysis showed no significant effect of internal factors on adoption, so it was important to use the trimming method in the path analysis. The path analysis showed that the influences of motivation, innovation characteristics and information source on adoption were 0.168, 0.720 and 0.09, respectively. Innovation characteristics had the greatest effect on adoption. In conclusion, improving the innovation characteristics of respondents through motivation and information sources may significantly increase the adoption of recording systems by goat farmers.

  12. Multivariate factor analysis of Girgentana goat milk composition

    Directory of Open Access Journals (Sweden)

    Pietro Giaccone

    2010-01-01

    Full Text Available The interpretation of the several variables that contribute to defining milk quality is difficult due to the high degree of correlation among them. In this case, one of the best methods of statistical processing is factor analysis, which belongs to the group of multivariate methods; this particular statistical approach was employed for our study. A total of 1485 individual goat milk samples from 117 Girgentana goats were collected fortnightly from January to July and analysed for physical and chemical composition and clotting properties. Milk pH and titratable acidity were within the normal range for fresh goat milk. Morning milk yield was 704 ± 323 g, with 3.93 ± 1.23% and 3.48 ± 0.38% for fat and protein percentages, respectively. The milk urea content was 43.70 ± 8.28 mg/dl. The clotting ability of Girgentana milk was quite good, with a renneting time equal to 16.96 ± 3.08 minutes, a rate of curd formation of 2.01 ± 1.63 minutes and a curd firmness of 25.08 ± 7.67 millimetres. Factor analysis was performed by applying axis orthogonal rotation (rotation type VARIMAX); the analysis grouped the milk components into three latent or common factors. The first, which explained 51.2% of the total covariance, was defined as “slow milks”, because it was linked to r and pH. The second latent factor, which explained 36.2% of the total covariance, was defined as “milk yield”, because it is positively correlated with the morning milk yield and the urea content, whilst negatively correlated with the fat percentage. The third latent factor, which explained 12.6% of the total covariance, was defined as “curd firmness”, because it is linked to the protein percentage, a30 and titratable acidity. With the aim of evaluating the influence of environmental effects (stage of kidding, parity and type of kidding), factor scores were analysed with the mixed linear model. Results showed significant effects of the season of

  13. On the dynamic analysis of piecewise-linear networks

    OpenAIRE

    Heemels, W.P.M.H.; Camlibel, M.K.; Schumacher, J.M.

    2002-01-01

    Piecewise-linear (PL) modeling is often used to approximate the behavior of nonlinear circuits. One of the possible PL modeling methodologies is based on the linear complementarity problem, and this approach has already been used extensively in the circuits and systems community for static networks. In this paper, the object of study will be dynamic electrical circuits that can be recast as linear complementarity systems, i.e., as interconnections of linear time-invariant differential equatio...

  14. Commissioning measurements for photon beam data on three TrueBeam linear accelerators, and comparison with Trilogy and Clinac 2100 linear accelerators

    Science.gov (United States)

    2013-01-01

    This study presents the beam data measurement results from the commissioning of three TrueBeam linear accelerators. An additional evaluation of the measured beam data within the TrueBeam linear accelerators contrasted with two other linear accelerators from the same manufacturer (i.e., Clinac and Trilogy) was performed to identify and evaluate any differences in the beam characteristics between the machines and to evaluate the possibility of beam matching for standard photon energies. We performed a comparison of commissioned photon beam data for two standard photon energies (6 MV and 15 MV) and one flattening filter‐free (“FFF”) photon energy (10 FFF) between three different TrueBeam linear accelerators. An analysis of the beam data was then performed to evaluate the reproducibility of the results and the possibility of “beam matching” between the TrueBeam linear accelerators. Additionally, the data from the TrueBeam linear accelerator was compared with comparable data obtained from one Clinac and one Trilogy linear accelerator models produced by the same manufacturer to evaluate the possibility of “beam matching” between the TrueBeam linear accelerators and the previous models. The energies evaluated between the linear accelerator models are the 6 MV for low energy and the 15 MV for high energy. PDD and output factor data showed less than 1% variation and profile data showed variations within 1% or 2 mm between the three TrueBeam linear accelerators. PDD and profile data between the TrueBeam, the Clinac, and Trilogy linear accelerators were almost identical (less than 1% variation). Small variations were observed in the shape of the profile for 15 MV at shallow depths (linear accelerators; the TrueBeam data resulted in a slightly greater penumbra width. The diagonal scans demonstrated significant differences in the profile shapes at a distance greater than 20 cm from the central axis, and this was more notable for the 15 MV energy. Output factor

  15. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    Science.gov (United States)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower dimensional equations for the stability and performance analysis of this type of systems than what is currently available. In addition, new closed form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.

  16. MetabR: an R script for linear model analysis of quantitative metabolomic data

    Directory of Open Access Journals (Sweden)

    Ernest Ben

    2012-10-01

    Full Text Available Abstract Background Metabolomics is an emerging high-throughput approach to systems biology, but data analysis tools are lacking compared to other systems level disciplines such as transcriptomics and proteomics. Metabolomic data analysis requires a normalization step to remove systematic effects of confounding variables on metabolite measurements. Current tools may not correctly normalize every metabolite when the relationships between each metabolite quantity and fixed-effect confounding variables are different, or for the effects of random-effect confounding variables. Linear mixed models, an established methodology in the microarray literature, offer a standardized and flexible approach for removing the effects of fixed- and random-effect confounding variables from metabolomic data. Findings Here we present a simple menu-driven program, “MetabR”, designed to aid researchers with no programming background in statistical analysis of metabolomic data. Written in the open-source statistical programming language R, MetabR implements linear mixed models to normalize metabolomic data and analysis of variance (ANOVA to test treatment differences. MetabR exports normalized data, checks statistical model assumptions, identifies differentially abundant metabolites, and produces output files to help with data interpretation. Example data are provided to illustrate normalization for common confounding variables and to demonstrate the utility of the MetabR program. Conclusions We developed MetabR as a simple and user-friendly tool for implementing linear mixed model-based normalization and statistical analysis of targeted metabolomic data, which helps to fill a lack of available data analysis tools in this field. The program, user guide, example data, and any future news or updates related to the program may be found at http://metabr.r-forge.r-project.org/.

  17. Dynamic analysis of aircraft impact using the linear elastic finite element codes FINEL, SAP and STARDYNE

    International Nuclear Information System (INIS)

    Lundsager, P.; Krenk, S.

    1975-08-01

    The static and dynamic response of a cylindrical/ spherical containment to a Boeing 720 impact is computed using 3 different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are: - In this case the maximum dynamic load factors for stress and displacements were close to 1, but a static analysis alone is not fully sufficient. - More realistic load time histories should be considered. - The main effects seem to be local. The present study does not indicate general collapse from elastic stresses alone. - Further study of material properties at high rates is needed. (author)

  18. Generalized Linear Models in Vehicle Insurance

    Directory of Open Access Journals (Sweden)

    Silvie Kafková

    2014-01-01

    Full Text Available Actuaries in insurance companies try to find the best model for the estimation of the insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that GLMs are not limited by inflexible preconditions. Our aim is to model the dependence of the annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposes a classification analysis approach that addresses the selection of predictor variables. Models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model giving the best estimate of annual claim frequency is chosen. All statistical calculations are performed in the R environment, whose stats package contains functions for estimating the parameters of a GLM and for the analysis of deviance.
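
    The paper works in R; a comparable sketch in Python is given below, assuming the statsmodels package: a Poisson GLM with log exposure as an offset models annual claim frequency, and AIC is used to compare candidate predictor sets. The simulated portfolio is a stand-in for the real 57 410-vehicle dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Simulated portfolio: claim counts depend on driver age and engine power,
        # with exposure as the insured fraction of the year.
        rng = np.random.default_rng(42)
        n = 5000
        df = pd.DataFrame({
            "driver_age": rng.integers(18, 75, n),
            "car_power":  rng.integers(40, 200, n),
            "exposure":   rng.uniform(0.2, 1.0, n),
        })
        true_rate = np.exp(-2.0 - 0.01 * df.driver_age + 0.004 * df.car_power)
        df["claims"] = rng.poisson(true_rate * df.exposure)

        # Poisson GLM with log(exposure) offset, so coefficients act on the frequency.
        model = smf.glm("claims ~ driver_age + car_power", data=df,
                        family=sm.families.Poisson(),
                        offset=np.log(df["exposure"])).fit()
        print(model.summary())
        print("AIC:", model.aic)        # used to compare candidate predictor sets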

  19. Stress Induced in Periodontal Ligament under Orthodontic Loading (Part II): A Comparison of Linear Versus Non-Linear Fem Study.

    Science.gov (United States)

    Hemanth, M; Deoli, Shilpi; Raghuveer, H P; Rani, M S; Hegde, Chatura; Vedavathi, B

    2015-09-01

    Simulation of the periodontal ligament (PDL) using non-linear finite element method (FEM) analysis gives better insight into the biology of tooth movement. The stresses in the PDL were evaluated for intrusion and lingual root torque using non-linear properties. A three-dimensional (3D) FEM model of the maxillary incisors was generated using Solidworks modeling software. Stresses in the PDL were evaluated for intrusive and lingual root torque movements by 3D FEM using ANSYS software, and compared between linear and non-linear analyses. For intrusive and lingual root torque movements with linear properties, the stress distribution over the PDL was within the range of optimal stress values proposed by Lee, but exceeded the force levels given by Proffit as optimal for orthodontic tooth movement. When the same force was applied in the non-linear analysis, the stresses were higher than in the linear analysis and were beyond the optimal stress range proposed by Lee for both intrusion and lingual root torque. To obtain the same stress as in the linear analysis, iterations were performed using non-linear properties and the force level was reduced. This shows that the force level required for non-linear analysis is lower than that for linear analysis.

  20. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Science.gov (United States)

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  1. Analysis of key factors influencing the evaporation performances of an oriented linear cutting copper fiber sintered felt

    Science.gov (United States)

    Pan, Minqiang; Zhong, Yujian

    2018-01-01

    Porous structure can effectively enhance the heat transfer efficiency. A kind of micro vaporizer using the oriented linear cutting copper fiber sintered felt is proposed in this work. Multiple long cutting copper fibers are firstly fabricated with a multi-tooth tool and then sintered together in parallel to form uniform thickness metal fiber sintered felts that provided a characteristic of oriented microchannels. The temperature rise response and thermal conversion efficiency are experimentally investigated to evaluate the influences of porosity, surface structure, feed flow rate and input power on the evaporation characteristics. It is indicated that the temperature rise response of water is mainly affected by input power and feed flow rate. High input power and low feed flow rate present better temperature rise response of water. Porosity rather than surface structure plays an important role in the temperature rise response of water at a relatively high input power. The thermal conversion efficiency is dominated by the input power and surface structure. The oriented linear cutting copper fiber sintered felts for three kinds of porosities show better thermal conversion efficiency than that of the oriented linear copper wire sintered felt when the input power is less than 115 W. All the sintered felts have almost the same performance of thermal conversion at a high input power.

  2. Steady state and linear stability analysis of a supercritical water natural circulation loop

    International Nuclear Information System (INIS)

    Sharma, Manish; Pilkhwal, D.S.; Vijayan, P.K.; Saha, D.; Sinha, R.K.

    2010-01-01

    Supercritical water (SCW) has excellent heat transfer characteristics as a coolant for nuclear reactors. Besides it results in high thermal efficiency of the plant. However, the flow can experience instabilities in supercritical water reactors, as the density change is very large for the supercritical fluids. A computer code SUCLIN using supercritical water properties has been developed to carry out the steady state and linear stability analysis of a SCW natural circulation loop. The conservation equations of mass, momentum and energy have been linearized by imposing small perturbation in flow rate, enthalpy, pressure and specific volume. The equations have been solved analytically to generate the characteristic equation. The roots of the equation determine the stability of the system. The code has been qualitatively assessed with published results and has been extensively used for studying the effect of diameter, height, heater inlet temperature, pressure and local loss coefficients on steady state and stability behavior of a Supercritical Water Natural Circulation Loop (SCWNCL). The present paper describes the linear stability analysis model and the results obtained in detail.

  3. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
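
    The least-squares mechanics reviewed in this article can be sketched in a few lines of Python; the five data points are made-up illustrative values, not taken from the clinical examples in the paper.

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # single predictor variable
      y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])      # single outcome variable

      x_bar, y_bar = x.mean(), y.mean()
      slope = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
      intercept = y_bar - slope * x_bar            # least-squares estimates

      residuals = y - (intercept + slope * x)
      print(f"y = {intercept:.3f} + {slope:.3f} x, SSE = {np.sum(residuals**2):.3f}")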

  4. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    Science.gov (United States)

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
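
    The connection to the Spectral Theorem can be made concrete with a short Python sketch that diagonalizes the sample covariance matrix; the random data matrix below is an illustrative stand-in, not one of the applied projects from the article.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 5))                 # 100 observations, 5 variables
      Xc = X - X.mean(axis=0)                       # center the data

      cov = Xc.T @ Xc / (len(X) - 1)                # symmetric covariance matrix
      eigvals, eigvecs = np.linalg.eigh(cov)        # real spectrum by symmetry

      order = np.argsort(eigvals)[::-1]             # sort components by variance
      components = eigvecs[:, order]
      scores = Xc @ components                      # principal component scores
      explained = eigvals[order] / eigvals.sum()
      print(explained)                              # fraction of variance per component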

  5. The flow analysis of supercavitating cascade by linear theory

    Energy Technology Data Exchange (ETDEWEB)

    Park, E.T. [Sung Kyun Kwan Univ., Seoul (Korea, Republic of); Hwang, Y. [Seoul National Univ., Seoul (Korea, Republic of)

    1996-06-01

    In order to reduce damage due to cavitation effects and to improve the performance of fluid machinery, supercavitation around the cascade and the hydraulic characteristics of the supercavitating cascade must be analyzed accurately. The study of the effects of cavitation on fluid machinery and the analysis of the performance of supercavitating hydrofoils through the various elements governing the flow field are therefore critically important. In this study, experimental results were compared with results computed from linear theory using the singularity method. Specifically, singularity points such as sources and vortices were distributed on the hydrofoil and the free streamline to analyze the two-dimensional flow field of the supercavitating cascade; the governing equations of the flow field were derived, and the hydraulic characteristics of the cascade were calculated by numerical analysis of these equations. 7 refs., 6 figs.

  6. Spherically symmetric analysis on open FLRW solution in non-linear massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Chien-I; Izumi, Keisuke; Chen, Pisin, E-mail: chienichiang@berkeley.edu, E-mail: izumi@phys.ntu.edu.tw, E-mail: chen@slac.stanford.edu [Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, Taipei 10617, Taiwan (China)

    2012-12-01

    We study non-linear massive gravity in the spherically symmetric context. Our main motivation is to investigate the effect of helicity-0 mode which remains elusive after analysis of cosmological perturbation around an open Friedmann-Lemaitre-Robertson-Walker (FLRW) universe. The non-linear form of the effective energy-momentum tensor stemming from the mass term is derived for the spherically symmetric case. Only in the special case where the area of the two sphere is not deviated away from the FLRW universe, the effective energy momentum tensor becomes completely the same as that of cosmological constant. This opens a window for discriminating the non-linear massive gravity from general relativity (GR). Indeed, by further solving these spherically symmetric gravitational equations of motion in vacuum to the linear order, we obtain a solution which has an arbitrary time-dependent parameter. In GR, this parameter is a constant and corresponds to the mass of a star. Our result means that Birkhoff's theorem no longer holds in the non-linear massive gravity and suggests that energy can probably be emitted superluminously (with infinite speed) on the self-accelerating background by the helicity-0 mode, which could be a potential plague of this theory.

  7. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  8. Apatite fission track analysis: geological thermal history analysis based on a three-dimensional random process of linear radiation damage

    International Nuclear Information System (INIS)

    Galbraith, R.F.; Laslett, G.M.; Green, P.F.; Duddy, I.R.

    1990-01-01

    Spontaneous fission of uranium atoms over geological time creates a random process of linearly shaped features (fission tracks) inside an apatite crystal. The theoretical distributions associated with this process are governed by the elapsed time and temperature history, but other factors are also reflected in empirical measurements as consequences of sampling by plane section and chemical etching. These include geometrical biases leading to over-representation of long tracks, the shape and orientation of host features when sampling totally confined tracks, and 'gaps' in heavily annealed tracks. We study the estimation of geological parameters in the presence of these factors using measurements on both confined tracks and projected semi-tracks. Of particular interest is a history of sedimentation, uplift and erosion giving rise to a two-component mixture of tracks in which the parameters reflect the current temperature, the maximum temperature and the timing of uplift. A full likelihood analysis based on all measured densities, lengths and orientations is feasible, but because some geometrical biases and measurement limitations are only partly understood it seems preferable to use conditional likelihoods given numbers and orientations of confined tracks. (author)

  9. Linear and Nonlinear Multiset Canonical Correlation Analysis (invited talk)

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Nielsen, Allan Aasbjerg; Larsen, Rasmus

    2002-01-01

    This paper deals with decompositioning of multiset data. Friedman's alternating conditional expectations (ACE) algorithm is extended to handle multiple sets of variables of different mixtures. The new algorithm finds estimates of the optimal transformations of the involved variables that maximize...... the sum of the pair-wise correlations over all sets. The new algorithm is termed multi-set ACE (MACE) and can find multiple orthogonal eigensolutions. MACE is a generalization of the linear multiset correlations analysis (MCCA). It handles multivariate multisets of arbitrary mixtures of both continuous...

  10. [Factors and validity analysis of Mini-Mental State Examination in Chinese elderly people].

    Science.gov (United States)

    Gao, Ming-yue; Yang, Min; Kuang, Wei-hong; Qiu, Pei-yuan

    2015-06-18

    To examine factors that may have an impact on the Mini-Mental State Examination (MMSE) screening validity, which could lead to establishing a general model of the MMSE score in healthy Chinese elderly and to improving the screening accuracy of the existing MMSE reference. Based on the data of the Chinese Longitudinal Healthy Longevity Survey (CLHLS), the MMSE scores of 19,117 normal elderly and 137 dementia patients who met the inclusion criteria were used for the analysis. The area under the curve (AUC) and validity indexes were used to compare the screening accuracy of various criteria. Multiple linear regression was used to identify factors that had an impact on the MMSE score for both the normal and dementia elderly. Descriptive analysis was performed for differences in the MMSE scores by age trends and gender between the normal and dementia elderly. The AUC of the MMSE was ≥0.75. The validity of the MMSE in the CLHLS is not affected by educational level. Factors found to impact the MMSE screening validity are gender, age, vision and residence. These four factors can be used alongside the MMSE in the screening of dementia to improve the screening accuracy.
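
    The AUC comparison step can be sketched in Python with scikit-learn; the simulated MMSE scores and diagnoses below are stand-ins for the CLHLS data, and the cut-off rule shown (Youden's J) is one common convention rather than necessarily the criterion used in the paper.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(5)
      # Simulated MMSE scores: 500 normal elderly and 40 dementia cases.
      mmse = np.clip(np.r_[rng.normal(27, 2, 500), rng.normal(18, 4, 40)], 0, 30)
      dementia = np.r_[np.zeros(500), np.ones(40)]       # 1 = dementia case

      # Lower MMSE scores indicate impairment, so the negated score is used.
      print("AUC:", roc_auc_score(dementia, -mmse))
      fpr, tpr, thresholds = roc_curve(dementia, -mmse)
      youden = tpr - fpr                                 # Youden's J per cut-off
      print("best cut-off:", -thresholds[youden.argmax()])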

  11. Useful tools for non-linear systems: Several non-linear integral inequalities

    Czech Academy of Sciences Publication Activity Database

    Agahi, H.; Mohammadpour, A.; Mesiar, Radko; Vaezpour, M. S.

    2013-01-01

    Vol. 49, No. 1 (2013), pp. 73-80 ISSN 0950-7051 R&D Projects: GA ČR GAP402/11/0378 Institutional support: RVO:67985556 Keywords: Monotone measure * Comonotone functions * Integral inequalities * Universal integral Subject RIV: BA - General Mathematics Impact factor: 3.058, year: 2013 http://library.utia.cas.cz/separaty/2013/E/mesiar-useful tools for non-linear systems several non-linear integral inequalities.pdf

  12. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  13. Factor-of-safety formulations for linear and parabolic failure envelopes for rock. Technical memorandum report RSI-0038

    International Nuclear Information System (INIS)

    Gnirk, P.F.

    1975-01-01

    This report presents documentation of the basic formulation of the factor-of-safety relationships for linear and parabolic failure criteria for rock with an example application for a candidate room-and-pillar configuration at the proposed Alpha repository site in New Mexico. 8 figures, 4 tables

  14. An improved multiple linear regression and data analysis computer program package

    Science.gov (United States)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  15. Determination and analysis of non-linear index profiles in electron-beam-deposited MgO-Al2O3-ZrO2 ternary composite thin-film optical coatings

    International Nuclear Information System (INIS)

    Sahoo, N.K.; Thakur, S.; Senthilkumar, M.; Das, N.C.

    2005-01-01

    Thickness-dependent index non-linearity in thin films has been a thought-provoking as well as intriguing topic in the field of optical coatings. The characterization and analysis of such inhomogeneous index profiles pose several degrees of challenges to thin-film researchers depending upon the availability of relevant experimental and process-monitoring-related information. In the present work, a variety of novel experimental non-linear index profiles have been observed in thin films of MgO-Al2O3-ZrO2 ternary composites in solid solution under various electron-beam deposition parameters. Analysis and derivation of these non-linear spectral index profiles have been carried out by an inverse-synthesis approach using a real-time optical monitoring signal and post-deposition transmittance and reflection spectra. Most of the non-linear index functions are observed to fit polynomial equations of order seven or eight very well. In this paper, the application of such a non-linear index function has also been demonstrated in designing electric-field-optimized high-damage-threshold multilayer coatings such as normal- and oblique-incidence edge filters and a broadband beam splitter for p-polarized light. Such designs can also advantageously maintain the microstructural stability of the multilayer structure due to the low stress factor of the non-linear ternary composite layers. (orig.)

  16. Dynamics and acceleration in linear structures

    International Nuclear Information System (INIS)

    Le Duff, J.

    1985-06-01

    Basic methods of linear acceleration are reviewed. Both cases of non relativistic and ultra relativistic particles are considered. Induction linac, radiofrequency quadrupole are mentioned. Fundamental parameters of accelerating structures are recalled; they are transit time factor, shunt impedance, quality factor and stored energy, phase velocity and group velocity, filling time, space harmonics in loaded waveguides. Energy gain in linear accelerating structures is considered through standing wave structures and travelling wave structures. Then particle dynamics in linear accelerators is studied: longitudinal motion, transverse motion and dynamics in RFQ

  17. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  18. Numerical linear analysis of the effects of diamagnetic and shear flow on ballooning modes

    Science.gov (United States)

    Yanqing, HUANG; Tianyang, XIA; Bin, GUI

    2018-04-01

    The linear analysis of the influence of diamagnetic effect and toroidal rotation at the edge of tokamak plasmas with BOUT++ is discussed in this paper. This analysis is done by solving the dispersion relation, which is calculated through the numerical integration of the terms with different physics. This method is able to reveal the contributions of the different terms to the total growth rate. The diamagnetic effect stabilizes the ideal ballooning modes through inhibiting the contribution of curvature. The toroidal rotation effect is also able to suppress the curvature-driving term, and the stronger shearing rate leads to a stronger stabilization effect. In addition, through linear analysis using the energy form, the curvature-driving term provides the free energy absorbed by the line-bending term, diamagnetic term and convective term.

  19. Theoretical foundations of functional data analysis, with an introduction to linear operators

    CERN Document Server

    Hsing, Tailen

    2015-01-01

    Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators provides a uniquely broad compendium of the key mathematical concepts and results that are relevant for the theoretical development of functional data analysis (FDA).The self-contained treatment of selected topics of functional analysis and operator theory includes reproducing kernel Hilbert spaces, singular value decomposition of compact operators on Hilbert spaces and perturbation theory for both self-adjoint and non self-adjoint operators. The probabilistic foundation for FDA is described from the

  20. Frame sequences analysis technique of linear objects movement

    Science.gov (United States)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses the analysis of the motion of linear objects on the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence containing 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro with subsequent approximation of the data obtained using the Hill equation.

  1. A Nutritional Analysis of the Food Basket in BIH: A Linear Programming Approach

    Directory of Open Access Journals (Sweden)

    Arnaut-Berilo Almira

    2017-04-01

    This paper presents linear and goal programming optimization models for determining and analyzing the food basket in Bosnia and Herzegovina (BiH) in terms of adequate nutritional needs according to World Health Organization (WHO) standards and World Bank (WB) recommendations. A linear programming (LP) model and a goal linear programming (GLP) model are adequate since price and nutrient contents are linearly related to food weight. The LP model provides information about the minimal value and the structure of the food basket for an average person in BiH based on nutrient needs. GLP models are designed to give us information on minimal deviations from nutrient needs if the budget is fixed. Based on these results, poverty analysis can be performed. The data used for the models consisted of 158 food items from the general consumption of the population of BiH according to COICOP classifications, with average prices in 2015 for these products.
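
    A minimal least-cost-basket linear program illustrating the LP formulation can be sketched in Python with scipy.optimize.linprog; the three foods, prices and nutrient rows are toy numbers, not the 158-item BiH data set.

      import numpy as np
      from scipy.optimize import linprog

      prices = np.array([0.8, 2.5, 1.2])            # cost per 100 g of 3 foods
      # Rows: energy (kcal) and protein (g) per 100 g of each food (illustrative).
      nutrients = np.array([[350.0, 120.0, 250.0],
                            [  8.0,  20.0,   9.0]])
      requirements = np.array([2100.0, 55.0])       # daily requirements (WHO-style)

      # linprog enforces A_ub @ x <= b_ub, so flip signs to express "at least".
      res = linprog(c=prices, A_ub=-nutrients, b_ub=-requirements,
                    bounds=[(0, None)] * 3, method="highs")
      print(res.x, res.fun)                         # quantities and minimal daily cost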

  2. A Linear Analysis of a Blended Wing Body (BWB) Aircraft Model

    Directory of Open Access Journals (Sweden)

    Claudia Alice STATE

    2011-09-01

    In this article a linear analysis of a Blended Wing Body (BWB) aircraft model is performed. The BWB concept has attracted attention in both military and civil sectors because it has a reduced radar signature (in the absence of a conventional tail) and can carry more people. The trim values are computed, and the eigenvalues and the Jacobian matrix evaluated at the trim point are analyzed. A linear simulation in the MatLab environment is presented in order to express numerically the symbolic computations presented. The initial system is corrected so as to increase the consistency and coherence of the modeled type of motion, and suggestions are made for future work.
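
    The eigenvalue step of such a linear analysis can be sketched as follows; the 4x4 state matrix is a made-up longitudinal-dynamics Jacobian standing in for the BWB model's Jacobian at trim, not the matrix of the article.

      import numpy as np

      # Hypothetical Jacobian of the linearized dynamics evaluated at trim.
      A = np.array([[-0.02,  0.10,  0.0,  -9.81],
                    [-0.10, -1.20,  1.0,   0.0 ],
                    [ 0.01, -5.00, -2.5,   0.0 ],
                    [ 0.0,   0.0,   1.0,   0.0 ]])

      eigvals = np.linalg.eigvals(A)
      print(eigvals)
      # The motion is locally stable about trim if every eigenvalue has a
      # negative real part; complex pairs give the oscillatory modes.
      print("locally stable:", np.all(eigvals.real < 0))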

  3. Enhanced linear-array photoacoustic beamforming using modified coherence factor

    Science.gov (United States)

    Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador

    2018-02-01

    Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. The coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF leads to about 45% and 40% improvement, in comparison with CF, in terms of signal-to-noise ratio and full-width-at-half-maximum, respectively.
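
    A schematic reading of the CF and MCF weighting for a single image pixel is sketched below in Python; the aligned channel samples are made-up numbers, and the exact MCF normalization used in the paper may differ from this simplified version.

      import numpy as np

      # "delayed" holds the per-element signals after applying focusing delays
      # for one pixel; a real reconstruction repeats this for every pixel.
      delayed = np.array([0.9, 1.1, 1.0, 0.2, 1.05])
      N = delayed.size

      das = delayed.sum()                                # delay-and-sum value
      cf = np.abs(das) ** 2 / (N * np.sum(np.abs(delayed) ** 2))
      pixel_cf = cf * das                                # CF-weighted DAS pixel

      # MCF idea from the abstract: replace the DAS term in the CF numerator
      # with a delay-multiply-and-sum (DMAS) combination of the same samples.
      pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
      dmas = sum(np.sign(delayed[i] * delayed[j]) *
                 np.sqrt(np.abs(delayed[i] * delayed[j])) for i, j in pairs)
      mcf = np.abs(dmas) ** 2 / (N * np.sum(np.abs(delayed) ** 2))
      pixel_mcf = mcf * das
      print(pixel_cf, pixel_mcf)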

  4. The oscillatory behavior of heated channels: an analysis of the density effect. Part I. The mechanism (non linear analysis). Part II. The oscillations thresholds (linearized analysis)

    International Nuclear Information System (INIS)

    Boure, J.

    1967-01-01

    The problem of the oscillatory behavior of heated channels is presented in terms of delay-times and a density effect model is proposed to explain the behavior. The density effect is the consequence of the physical relationship between enthalpy and density of the fluid. In the first part non-linear equations are derived from the model in a dimensionless form. A description of the mechanism of oscillations is given, based on the analysis of the equations. An inventory of the governing parameters is established. At this point of the study, some facts in agreement with the experiments can be pointed out. In the second part the start of the oscillatory behavior of heated channels is studied in terms of the density effect. The threshold equations are derived, after linearization of the equations obtained in Part I. They can be solved rigorously by numerical methods to yield: -1) a relation between the describing parameters at the onset of oscillations, and -2) the frequency of the oscillations. By comparing the results predicted by the model to the experimental behavior of actual systems, the density effect is very often shown to be the actual cause of oscillatory behaviors. (author) [fr

  5. Coupled Analytical-Finite Element Methods for Linear Electromagnetic Actuator Analysis

    Directory of Open Access Journals (Sweden)

    K. Srairi

    2005-09-01

    In this paper, a linear electromagnetic actuator with moving parts is analyzed. The movement is considered through the modification of boundary conditions only using coupled analytical and finite element analysis. In order to evaluate the dynamic performance of the device, the coupling between electric, magnetic and mechanical phenomena is established. The displacement of the moving parts and the inductor current are determined when the device is supplied by capacitor discharge voltage.

  6. Exploratory Bi-factor Analysis: The Oblique Case

    OpenAIRE

    Jennrich, Robert L.; Bentler, Peter M.

    2011-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...

  7. Linear analysis near a steady-state of biochemical networks: control analysis, correlation metrics and circuit theory

    Directory of Open Access Journals (Sweden)

    Qian Hong

    2008-05-01

    Background: Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. Results: In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady-state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control-coefficients for the fluxes and chemical potentials can be written in terms of R^T BS and S^T BS respectively, where the matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^(At) plays a central role in CMC. Conclusion: One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether metabolic steady-state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with smallest biochemical conductances and largest flux control-coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA).

  8. Classical linear-control analysis applied to business-cycle dynamics and stability

    Science.gov (United States)

    Wingrove, R. C.

    1983-01-01

    Linear control analysis is applied as an aid in understanding the fluctuations of business cycles in the past, and to examine monetary policies that might improve stabilization. The analysis shows how different policies change the frequency and damping of the economic system dynamics, and how they modify the amplitude of the fluctuations that are caused by random disturbances. Examples are used to show how policy feedbacks and policy lags can be incorporated, and how different monetary strategies for stabilization can be analytically compared. Representative numerical results are used to illustrate the main points.

  9. Analysis of infant cry through weighted linear prediction cepstral coefficients and Probabilistic Neural Network.

    Science.gov (United States)

    Hariharan, M; Chee, Lim Sin; Yaacob, Sazali

    2012-06-01

    Acoustic analysis of infant cry signals has been proven to be an excellent tool in the area of automatic detection of pathological status of an infant. This paper investigates the application of parameter weighting for linear prediction cepstral coefficients (LPCCs) to provide the robust representation of infant cry signals. Three classes of infant cry signals were considered such as normal cry signals, cry signals from deaf babies and babies with asphyxia. A Probabilistic Neural Network (PNN) is suggested to classify the infant cry signals into normal and pathological cries. PNN is trained with different spread factor or smoothing parameter to obtain better classification accuracy. The experimental results demonstrate that the suggested features and classification algorithms give very promising classification accuracy of above 98% and it expounds that the suggested method can be used to help medical professionals for diagnosing pathological status of an infant from cry signals.

  10. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  11. Comparison of linear and non-linear monotonicity-based shape reconstruction using exact matrix characterizations

    DEFF Research Database (Denmark)

    Garde, Henrik

    2018-01-01

    . For a fair comparison, exact matrix characterizations are used when probing the monotonicity relations to avoid errors from numerical solution to PDEs and numerical integration. Using a special factorization of the Neumann-to-Dirichlet map also makes the non-linear method as fast as the linear method...

  12. Experimental and numerical analysis of behavior of electromagnetic annular linear induction pump

    International Nuclear Information System (INIS)

    Goldsteins, Linards

    2015-01-01

    The research explores the issue of magnetohydrodynamic (MHD) instability in electromagnetic induction pumps, with focus on the regimes of high slip magnetic Reynolds number (Rm_s) in Annular Linear Induction Pumps (ALIP) operating with liquid sodium. The context of the thesis is the French GEN IV Sodium Fast Reactor research and development program for ASTRID, in the framework of which the use of high-discharge ALIPs in the secondary cooling loops is being studied. CEA has designed, realized and will exploit the PEMDYN facility, able to represent MHD instability in high-discharge ALIPs. In the thesis, the stability of an ideal ALIP is elaborated theoretically using linear stability analysis. The analysis revealed that strong amplification of perturbations is expected after the convective stability threshold is reached. The theory is supported with numerical results and experiments reported in the literature. Stable operation and a stabilization technique operating with two frequencies in the case of an ideal ALIP are discussed and the necessary conditions derived. Detailed numerical models of a flat linear induction pump (FLIP), taking into account the effects of a real pump, are developed. A new technique of magnetic field measurement has been introduced, and the experimental results demonstrate a qualitative agreement with the numerical models, capturing all principal phenomena such as oscillation of the magnetic field and perturbed velocity profiles. These results give significantly more profound insight into the phenomenon of MHD instability and can be used as a reference in further studies. (author) [fr

  13. Engineering Mathematical Analysis Method for Productivity Rate in Linear Arrangement Serial Structure Automated Flow Assembly Line

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2015-01-01

    Productivity rate (Q), or production rate, is one of the important indicator criteria for industrial engineers to improve the system and the finished-good output of a production or assembly line. Mathematical and statistical analysis methods need to be applied to the productivity rate in industry to give visual overviews of the failure factors and of further improvement within the production line, especially for automated flow lines, since these are complicated. A mathematical model of the productivity rate in a linear arrangement serial structure automated flow line with different failure rate and bottleneck machining time parameters becomes the basic model for this productivity analysis. This paper presents the engineering mathematical analysis method, which is applied in an automotive company that operates an automated flow assembly line in its final assembly line to produce motorcycles in Malaysia. The DCAS engineering and mathematical analysis method, which consists of four stages known as data collection, calculation and comparison, analysis, and sustainable improvement, is used to analyze productivity in the automated flow assembly line based on the particular mathematical model. The variety of failure rates that cause loss of productivity and the bottleneck machining time are shown specifically in mathematical figures, and a sustainable solution for productivity improvement of this final assembly automated flow line is presented.

  14. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) was applied to the collected data, which gave rise to 10 factors with 57 items affecting construction performance. The findings further reveal the items constituting ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technological aspects. It is important to understand a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can plan and implement an effective performance development plan to match the mission and vision of the company.
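
    A toy exploratory-factor-analysis sketch with scikit-learn is given below; the simulated Likert responses stand in for the 57-item questionnaire, and in practice the factor count would be chosen from a scree/eigenvalue inspection and the loadings rotated (e.g. varimax) before interpreting the KPI groups.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      responses = rng.integers(1, 6, size=(200, 12)).astype(float)  # Likert 1-5 items

      Z = StandardScaler().fit_transform(responses)     # standardize the items
      fa = FactorAnalysis(n_components=3, random_state=0).fit(Z)

      loadings = fa.components_.T                       # items x factors loading matrix
      print(np.round(loadings, 2))
      # Items that load strongly on the same factor form one candidate KPI group.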

  15. Multiple Linear Regression Analysis of Factors Affecting Real Property Price Index From Case Study Research In Istanbul/Turkey

    Science.gov (United States)

    Denli, H. H.; Koc, Z.

    2015-12-01

    Estimation of real properties depending on standards is difficult to apply in time and location. Regression analysis constructs mathematical models which describe or explain relationships that may exist between variables. The problem of identifying price differences of properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. For real estate valuation, where properties are presented in the market with their current characteristics and quantifiers, regression analysis helps to find the factors or variables that are effective in the formation of value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to find a price index, based on information received from a real estate web page. The variables used for the analysis are age, size in m2, number of floors of the building, floor number of the flat and number of rooms. The price of the estate represents the dependent variable, whereas the rest are independent variables. Prices from 60 real estates have been used for the analysis. Locations with the same price values have been found and plotted on the map, and equivalence curves have been drawn identifying the equally valued zones as lines.
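
    A hedonic-style multiple linear regression mirroring the predictors listed above can be sketched in Python with statsmodels; the 60-row data frame is simulated, not the Zeytinburnu listings.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      n = 60                                             # abstract uses 60 listings
      df = pd.DataFrame({
          "age":          rng.integers(0, 40, n),
          "size_m2":      rng.integers(45, 200, n),
          "total_floors": rng.integers(2, 12, n),
          "floor_no":     rng.integers(0, 10, n),
          "rooms":        rng.integers(1, 5, n),
      })
      # Simulated prices: size raises value, age lowers it (plus noise).
      df["price"] = (3000 * df["size_m2"] - 2500 * df["age"]
                     + 10000 * df["rooms"] + rng.normal(0, 20000, n))

      model = smf.ols("price ~ age + size_m2 + total_floors + floor_no + rooms",
                      data=df).fit()
      print(model.params)        # fitted coefficients play the role of index weights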

  16. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  17. Linear algebra

    CERN Document Server

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...

  18. A Design of Mechanical Frequency Converter Linear and Non-linear Spring Combination for Energy Harvesting

    International Nuclear Information System (INIS)

    Yamamoto, K; Fujita, T; Kanda, K; Maenaka, K; Badel, A; Formosa, F

    2014-01-01

    In this study, the improvement of energy harvesting from wideband vibration with random change by using a combination of linear and nonlinear spring system is investigated. The system consists of curved beam spring for non-linear buckling, which supports the linear mass-spring resonator. Applying shock acceleration generates a snap through action to the buckling spring. From the FEM analysis, we showed that the snap through acceleration from the buckling action has no relationship with the applied shock amplitude and duration. We use this uniform acceleration as an impulse shock source for the linear resonator. It is easy to obtain the maximum shock response from the uniform snap through acceleration by using a shock response spectrum (SRS) analysis method. At first we investigated the relationship between the snap-through behaviour and an initial curved deflection. Then a time response result for non-linear springs with snap through and minimum force that makes a buckling behaviour were obtained by FEM analysis. By obtaining the optimum SRS frequency for linear resonator, we decided its resonant frequency with the MATLAB simulator

  19. Design and analysis of linear oscillatory single-phase permanent magnet generator for free-piston stirling engine systems

    Science.gov (United States)

    Kim, Jeong-Man; Choi, Jang-Young; Lee, Kyu-Seok; Lee, Sung-Ho

    2017-05-01

    This study focuses on the design and analysis of a linear oscillatory single-phase permanent magnet generator for free-piston stirling engine (FPSE) systems. In order to implement the design of linear oscillatory generator (LOG) for suitable FPSEs, we conducted electromagnetic analysis of LOGs with varying design parameters. Then, detent force analysis was conducted using assisted PM. Using the assisted PM gave us the advantage of using mechanical strength by detent force. To improve the efficiency, we conducted characteristic analysis of eddy-current loss with respect to the PM segment. Finally, the experimental result was analyzed to confirm the prediction of the FEA.

  20. Linear analysis of rotationally invariant, radially variant tomographic imaging systems

    International Nuclear Information System (INIS)

    Huesmann, R.H.

    1990-01-01

    This paper describes a method to analyze the linear imaging characteristics of rotationally invariant, radially variant tomographic imaging systems using singular value decomposition (SVD). When the projection measurements from such a system are assumed to be samples from independent and identically distributed multi-normal random variables, the best estimate of the emission intensity is given by the unweighted least squares estimator. The noise amplification of this estimator is inversely proportional to the singular values of the normal matrix used to model projection and backprojection. After choosing an acceptable noise amplification, the new method can determine the number of parameters and hence the number of pixels that should be estimated from data acquired from an existing system with a fixed number of angles and projection bins. Conversely, for the design of a new system, the number of angles and projection bins necessary for a given number of pixels and noise amplification can be determined. In general, computing the SVD of the projection normal matrix has cubic computational complexity. However, the projection normal matrix for this class of rotationally invariant, radially variant systems has a block circulant form. A fast parallel algorithm to compute the SVD of this block circulant matrix makes the singular value analysis practical by asymptotically reducing the computation complexity of the method by a multiplicative factor equal to the number of angles squared
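
    The singular-value argument can be sketched as follows; the small random system matrix is a stand-in for the block circulant projection model of a real tomograph, so only the structure of the calculation is illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      A = rng.normal(size=(64, 36))          # projections x pixels system model

      sigma = np.linalg.svd(A, compute_uv=False)
      noise_amplification = 1.0 / sigma      # per-component amplification factors
      print(sigma.max() / sigma.min())       # condition number of the LS estimator
      # Keeping only components whose noise amplification is acceptable fixes how
      # many pixel parameters a given set of angles and projection bins supports.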

  1. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Developing a national brand is one of the most important issues for the development of a brand. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors, and the sample has been chosen from two major automakers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is calculated as 84%, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors, including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.
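
    For reference, the reported reliability statistic can be reproduced on stand-in data with a short Cronbach's alpha sketch; the 235 x 20 random response matrix below is hypothetical, not the survey data of the study.

      import numpy as np

      rng = np.random.default_rng(3)
      items = rng.integers(1, 6, size=(235, 20)).astype(float)  # respondents x items

      k = items.shape[1]
      item_vars = items.var(axis=0, ddof=1)
      total_var = items.sum(axis=1).var(ddof=1)
      alpha = k / (k - 1) * (1.0 - item_vars.sum() / total_var)
      print(round(alpha, 3))   # values above about 0.7 are usually taken as acceptable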

  2. Tests of the linearity assumption in the dose-effect relationship for radiation-induced cancer

    International Nuclear Information System (INIS)

    Cohen, A.F.; Cohen, B.L.

    1980-01-01

    The validity of the BEIR linear extrapolation to low doses of the dose-effect relationship for radiation induced cancer is tested by use of natural radiation making use of selectivity on type of cancer, smoking habits, sex, age group, geographic area and/or time period. For lung cancer, a linear interpolation between zero dose-zero effect and the data from radon-induced cancers in miners implies that the majority of all lung cancers among non-smokers are due to radon; since lung cancers in miners are mostly small-cell undifferentiated (SCU), a rather rare type in general, linearity over predicts the frequency of SCU lung cancers among non smokers by a factor of 10, and among non-smoking females age 25-44 by a factor of 24. Similarly, linearity predicts that the majority of all lung cancers early in this century were due to radon even after due consideration is given to cases missed by poor diagnostic efficiency (this matter is considered in some detail). For the 30-40 age range, linearity over predicts the total lung cancer rate at that time by a factor of 3-6; for SCU lung cancer, the over-prediction is by at least a factor of 10. Other causes of lung cancer are considered which further enhance the degree to which the linearity assumption over-estimates the effects of low level radiation. A similar analysis is applied to leukemia induced by natural radiation. It is concluded that the upper limit for this is not higher than estimates from the linearity hypothesis. (author)

  3. Reactivity-induced time-dependencies of EBR-II linear and non-linear feedbacks

    International Nuclear Information System (INIS)

    Grimm, K.N.; Meneghetti, D.

    1988-01-01

    Time-dependent linear feedback reactivities are calculated for stereotypical subassemblies in the EBR-II reactor. These quantities are calculated from nodal reactivities obtained from a kinetic code analysis of an experiment in which the change in power resulted from the dropping of a control rod. Shown with these linear reactivities are the reactivity associated with the control-rod shaft contraction and also time-dependent non-linear (mainly bowing) component deduced from the inverse kinetics of the experimentally measured fission power and the calculated linear reactivities. (author)

  4. Analysis of supply chain, scale factor, and optimum plant capacity for the production of ethanol from corn stover

    International Nuclear Information System (INIS)

    Leboreiro, Jose; Hilaly, Ahmad K.

    2013-01-01

    A detailed model is used to perform a thorough analysis on ethanol production from corn stover via the dilute acid process. The biomass supply chain cost model accounts for all steps needed to source corn stover including collection, transportation, and storage. The manufacturing cost model is based on work done at NREL; attainable conversions of key process parameters are used to calculate production cost. The choice of capital investment scaling function and scaling parameter has a significant impact on the optimum plant capacity. For the widely used exponential function, the scaling factors are functions of plant capacity. The pre-exponential factor decreases with increasing plant capacity while the exponential factor increases as the plant capacity increases. The use of scaling parameters calculated for small plant capacities leads to falsely large optimum plants; data from a wide range of plant capacities is required to produce accurate results. A mathematical expression to scale capital investment for fermentation-based biorefineries is proposed which accounts for the linear scaling behavior of bio-reactors (such as saccharification vessels and fermentors) as well as the exponential nature of all other plant equipment. Ignoring the linear scaling behavior of bio-reactors leads to artificially large optimum plant capacities. The minimum production cost is found to be in the range of 789–830 $ m⁻³, which is significantly higher than previously reported. Optimum plant capacities are in the range of 5750–9850 Mg d⁻¹. The optimum plant capacity and production cost are highly sensitive to farmer participation in biomass harvest for low participation rates. -- Highlights: •A detailed model is used to perform a technoeconomic analysis for the production of ethanol from corn stover. •The capital investment scaling factors were found to be a function of plant capacity. •Bio-reactors (such as saccharification vessels and fermentors) in large size

  5. Non linear structures seismic analysis by modal synthesis

    International Nuclear Information System (INIS)

    Aita, S.; Brochard, D.; Guilbaud, D.; Gibert, R.J.

    1987-01-01

    Structures submitted to a seismic excitation may present a large-amplitude response which induces non-linear behaviour. These non-linearities have an important influence on the response of the structure. Even in this case (local shocks) the modal synthesis method remains attractive. In this paper we present a way of taking into account a local non-linearity (shock between structures) in the seismic response of structures by using the modal synthesis method [fr

  6. Accurate Evaluation of Expected Shortfall for Linear Portfolios with Elliptically Distributed Risk Factors

    Directory of Open Access Journals (Sweden)

    Dobrislav Dobrev

    2017-02-01

    We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007 and 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
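
    The closed-form Student-t case can be sketched as below; the weights, degrees of freedom, location vector, scatter matrix and the 99% level are illustrative choices, and the standardized-t expression used is the textbook formula rather than a transcription of the paper's result.

      import numpy as np
      from scipy import stats

      nu = 5.0                                        # degrees of freedom
      mu = np.array([0.0005, 0.0003])                 # factor location (means)
      S = np.array([[1.0e-4, 2.0e-5],                 # scatter (not covariance) matrix
                    [2.0e-5, 4.0e-5]])
      w = np.array([0.6, 0.4])                        # portfolio weights
      alpha = 0.01                                    # 1 - confidence level

      mu_p = w @ mu
      sigma_p = np.sqrt(w @ S @ w)                    # portfolio scale parameter

      q = stats.t.ppf(alpha, nu)                      # lower-tail return quantile
      es_std = stats.t.pdf(q, nu) / alpha * (nu + q**2) / (nu - 1.0)
      es = -mu_p + sigma_p * es_std                   # expected shortfall of the loss
      print(es)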

  7. Linear stability analysis of heated parallel channels

    International Nuclear Information System (INIS)

    Nourbakhsh, H.P.; Isbin, H.S.

    1982-01-01

    An analysis is presented of the thermal hydraulic stability of flow in parallel channels covering the range from inlet subcooling to exit superheat. The model is based on a one-dimensional drift velocity formulation of the two phase flow conservation equations. The system of equations is linearized by assuming small disturbances about the steady state. The dynamic response of the system to an inlet flow perturbation is derived yielding the characteristic equation which predicts the onset of instabilities. A specific application is carried out for homogeneous and regional uniformly heated systems. The particular case of equal characteristic frequencies of the two-phase and single phase vapor regions is studied in detail. The D-partition method and the Mikhailov stability criterion are used for determining the marginal stability boundary. Stability predictions from the present analysis are compared with the experimental data from the solar test facility. 8 references

  8. Identifying Plant Part Composition of Forest Logging Residue Using Infrared Spectral Data and Linear Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Gifty E. Acquah

    2016-08-01

    As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource, will play a vital role. The feedstock can however be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control the compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system will be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, were classified into three plant part components: clean wood, wood and bark, and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was however needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that the statistically different amount of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability
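
    A cross-validated PCA-plus-LDA pipeline sketches the classification step described above; the simulated "spectra" and class labels are stand-ins for the NIR/FTIR measurements, so the printed accuracy is not meaningful in itself.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      X = rng.normal(size=(150, 300))                    # 150 spectra, 300 wavelengths
      y = np.repeat(["clean_wood", "wood_bark", "slash"], 50)

      # Compress the spectra first (mirroring the factor/PC step), then run LDA
      # on the scores, with five-fold cross-validation as in the paper.
      clf = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis())
      scores = cross_val_score(clf, X, y, cv=5)
      print(scores.mean())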

  9. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program, which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses...

  10. Linear methods in band theory

    DEFF Research Database (Denmark)

    Andersen, O. Krogh

    1975-01-01

    of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, the latter are canonical in the sense that they neither depend on the energy nor the cell volume...

  11. Equivalent linearization method for limit cycle flutter analysis of plate-type structure in axial flow

    International Nuclear Information System (INIS)

    Lu Li; Yang Yiren

    2009-01-01

    The responses and limit cycle flutter of a plate-type structure with cubic stiffness in viscous flow were studied. The continuous system was discretized using the Galerkin method. The equivalent linearization concept was applied to predict the ranges of limit cycle flutter velocities. The coupled map of flutter amplitude, equivalent linear stiffness and critical velocity was used to analyze the stability of limit cycle flutter. The theoretical results agree well with the results of numerical integration, which indicates that the equivalent linearization concept is applicable to the analysis of limit cycle flutter of plate-type structures. (authors)

  12. Design and analysis of linear oscillatory single-phase permanent magnet generator for free-piston stirling engine systems

    Directory of Open Access Journals (Sweden)

    Jeong-Man Kim

    2017-05-01

    Full Text Available This study focuses on the design and analysis of a linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine (FPSE) systems. In order to implement the design of a linear oscillatory generator (LOG) suitable for FPSEs, we conducted electromagnetic analysis of LOGs with varying design parameters. Then, detent force analysis was conducted using an assisting PM; using the assisting PM provides the advantage of gaining mechanical strength from the detent force. To improve the efficiency, we conducted characteristic analysis of the eddy-current loss with respect to PM segmentation. Finally, the experimental results were analyzed to confirm the FEA predictions.

  13. An empirical study for ranking risk factors using linear assignment: A case study of road construction

    Directory of Open Access Journals (Sweden)

    Amin Foroughi

    2012-04-01

    Full Text Available Road construction projects are considered among the most important governmental issues since heavy investments are normally required in such projects. There is also a shortage of financial resources in the governmental budget, which makes asset allocation more challenging. One primary step in reducing cost is to determine the different risks associated with the execution of such project activities. In this study, we present important risk factors associated with road construction on two levels for a real-world case study of the rail-road industry located between the two cities of Esfahan and Deligan. The first group of risk factors includes the probability and the effects for various attributes including cost, time, quality and performance. The second group includes socio-economic factors as well as political and managerial aspects. The study finds 21 main risk factors as well as 193 sub risk factors. The factors are ranked using a group decision-making method called linear assignment. The preliminary results indicate that road construction projects could finish faster with better outcomes should risk factors be carefully considered and their impacts reduced.
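
    A small Python sketch of the linear-assignment ranking idea is given below; the weights and criterion rankings are made up and do not correspond to the 21 risk factors of the case study. Each criterion contributes its weight to a concordance matrix, and an assignment problem then fixes exactly one final rank per alternative.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Toy example: three weighted criteria rank four alternatives (A, B, C, D).
      # gamma[i, k] accumulates the weight of criteria that place alternative i in rank k.
      weights  = np.array([0.5, 0.3, 0.2])                  # criterion weights
      rankings = np.array([[0, 1, 2, 3],                    # criterion 1: ranks of A, B, C, D
                           [1, 0, 3, 2],                    # criterion 2
                           [0, 2, 1, 3]])                   # criterion 3
      m = rankings.shape[1]
      gamma = np.zeros((m, m))
      for w, ranks in zip(weights, rankings):
          for alt, rank in enumerate(ranks):
              gamma[alt, rank] += w

      # Assign each alternative to exactly one final rank so the total weighted agreement
      # is maximal; scipy minimizes cost, so negate the concordance matrix.
      rows, cols = linear_sum_assignment(-gamma)
      order = rows[np.argsort(cols)]
      print("final ranking (best first):", order)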

  14. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve a reduction of a large set of data, extracting essential features without a previous hypothesis. Due to the development of computerized systems, the use of larger samples, the possibility of sequential data acquisition and the increase of dynamic studies, the problem of data compression is now encountered routinely. Thus, results obtained for the compression of scintigraphic images are first presented. Then the possibilities offered by factor analysis for scan processing are discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, is considered for compression and processing. [fr]

  15. Enhanced linear-array photoacoustic beamforming using modified coherence factor.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador

    2018-02-01

    Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, the DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. The coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS term and propose using a delay-multiply-and-sum (DMAS) beamformer instead of the DAS in the numerator. The proposed weighting technique, the modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields improvements of about 45% and 40% over CF in terms of signal-to-noise ratio and full-width at half-maximum, respectively. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
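
    To make the weighting idea concrete, the Python sketch below computes, for the delayed per-element samples of a single pixel, the conventional coherence factor and a DMAS-style pairwise term in the spirit of the abstract. The exact normalization of the authors' modified coherence factor is not reproduced here, so the MCF line is only an illustrative reading, not the paper's formula.

      import numpy as np

      def cf_and_mcf_pixel(s):
          # s: delayed, per-element samples contributing to one pixel (length N)
          s = np.asarray(s, dtype=float)
          n = s.size
          das = s.sum()                                   # delay-and-sum value of the pixel
          # Conventional coherence factor: coherent power over total power
          cf = das**2 / (n * np.sum(s**2) + 1e-12)
          # DMAS-style pairwise combination (signed square-root products of element pairs)
          dmas = 0.0
          for i in range(n - 1):
              prod = s[i] * s[i + 1:]
              dmas += np.sum(np.sign(prod) * np.sqrt(np.abs(prod)))
          # Assumed MCF form: the DMAS term replaces the coherent-power numerator
          mcf = np.abs(dmas) / (n * np.sum(s**2) + 1e-12)
          return cf * das, mcf * das

      print(cf_and_mcf_pixel([0.9, 1.1, 1.0, 0.95, -0.1]))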

  16. Equivalent linear and nonlinear site response analysis for design and risk assessment of safety-related nuclear structures

    International Nuclear Information System (INIS)

    Bolisetti, Chandrakanth; Whittaker, Andrew S.; Mason, H. Benjamin; Almufti, Ibrahim; Willford, Michael

    2014-01-01

    Highlights: • Performed equivalent linear and nonlinear site response analyses using industry-standard numerical programs. • Considered a wide range of sites and input ground motions. • Noted the practical issues encountered while using these programs. • Examined differences between the responses calculated from different programs. • Results of biaxial and uniaxial analyses are compared. - Abstract: Site response analysis is a precursor to soil-structure interaction analysis, which is an essential component in the seismic analysis of safety-related nuclear structures. Output from site response analysis provides input to soil-structure interaction analysis. Current practice in calculating site response for safety-related nuclear applications mainly involves the equivalent linear method in the frequency-domain. Nonlinear time-domain methods are used by some for the assessment of buildings, bridges and petrochemical facilities. Several commercial programs have been developed for site response analysis but none of them have been formally validated for large strains and high frequencies, which are crucial for the performance assessment of safety-related nuclear structures. This study sheds light on the applicability of some industry-standard equivalent linear (SHAKE) and nonlinear (DEEPSOIL and LS-DYNA) programs across a broad range of frequencies, earthquake shaking intensities, and sites ranging from stiff sand to hard rock, all with a focus on application to safety-related nuclear structures. Results show that the equivalent linear method is unable to reproduce the high frequency acceleration response, resulting in almost constant spectral accelerations in the short period range. Analysis using LS-DYNA occasionally results in some unrealistic high frequency acceleration ‘noise’, which can be removed by smoothing the piece-wise linear backbone curve. Analysis using DEEPSOIL results in abrupt variations in the peak strains of consecutive soil layers

  17. Relationship between linear type and fertility traits in Nguni cows.

    Science.gov (United States)

    Zindove, T J; Chimonyo, M; Nephawe, K A

    2015-06-01

    The objective of the study was to assess the dimensionality of seven linear traits (body condition score, body stature, body length, heart girth, navel height, body depth and flank circumference) in Nguni cows using factor analysis and indicate the relationship between the extracted latent variables and calving interval (CI) and age at first calving (AFC). The traits were measured between December 2012 and November 2013 on 1559 Nguni cows kept under thornveld, succulent karoo, grassland and bushveld vegetation types. Low partial correlations (-0.04 to 0.51), high Kaiser statistic for measure of sampling adequacy scores and significance of the Bartlett sphericity test (P < 0.05) supported the use of factor analysis; two factors with eigenvalues > 1 were retained. Factor 1 included body condition score, body depth, flank circumference and heart girth and represented body capacity of cows. Factor 2 included body length, body stature and navel height and represented frame size of cows. CI and AFC decreased linearly with increase of factor 1. There was a quadratic increase in AFC as factor 2 increased (P < 0.05). The seven traits thus reduced to two latent variables, one related to body capacity and the other to the frame size of the cows. Small-framed cows with large body capacities have shorter CI and AFC.
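
    For readers who want to reproduce the general workflow, a Python sketch using scikit-learn is given below; the data are random placeholders for the seven traits, so the two-factor loading pattern (body capacity vs. frame size) reported in the abstract would only emerge with the real measurements.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      # Hypothetical stand-in for the seven linear type traits measured on each cow.
      traits = ["body_condition", "stature", "body_length", "heart_girth",
                "navel_height", "body_depth", "flank_circumference"]
      rng = np.random.default_rng(2)
      X = pd.DataFrame(rng.normal(size=(200, 7)), columns=traits)

      # Extract two latent factors with a varimax rotation, mirroring the two latent
      # variables described in the abstract.
      fa = FactorAnalysis(n_components=2, rotation="varimax")
      scores = fa.fit_transform(StandardScaler().fit_transform(X))
      loadings = pd.DataFrame(fa.components_.T, index=traits,
                              columns=["factor1_capacity", "factor2_frame"])
      print(loadings.round(2))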

  18. Hybrid PV/diesel solar power system design using multi-level factor analysis optimization

    Science.gov (United States)

    Drake, Joshua P.

    Solar power systems represent a large area of interest across a spectrum of organizations at a global level. It was determined that a clear understanding of current state of the art software and design methods, as well as optimization methods, could be used to improve the design methodology. Solar power design literature was researched for an in depth understanding of solar power system design methods and algorithms. Multiple software packages for the design and optimization of solar power systems were analyzed for a critical understanding of their design workflow. In addition, several methods of optimization were studied, including brute force, Pareto analysis, Monte Carlo, linear and nonlinear programming, and multi-way factor analysis. Factor analysis was selected as the most efficient optimization method for engineering design as it applied to solar power system design. The solar power design algorithms, software work flow analysis, and factor analysis optimization were combined to develop a solar power system design optimization software package called FireDrake. This software was used for the design of multiple solar power systems in conjunction with an energy audit case study performed in seven Tibetan refugee camps located in Mainpat, India. A report of solar system designs for the camps, as well as a proposed schedule for future installations was generated. It was determined that there were several improvements that could be made to the state of the art in modern solar power system design, though the complexity of current applications is significant.

  19. Driving factors behind carbon dioxide emissions in China: A modified production-theoretical decomposition analysis

    International Nuclear Information System (INIS)

    Wang, Qunwei; Chiu, Yung-Ho; Chiu, Ching-Ren

    2015-01-01

    Research on the driving factors behind carbon dioxide emission changes in China can inform better carbon emission reduction policies and help develop a low-carbon economy. As one of the important methods, production-theoretical decomposition analysis (PDA) has been widely used to understand these driving factors. To avoid the infeasibility issue in solving the linear programs, this study proposed a modified PDA approach to decompose carbon dioxide emission changes into seven drivers. Using 2005–2010 data, the study found that economic development was the largest factor increasing carbon dioxide emissions. The second factor was energy structure (reflecting potential carbon), and the third factor was low energy efficiency. Technological advances, energy intensity reductions, and carbon dioxide emission efficiency improvements were the negative driving factors reducing carbon dioxide emission growth rates. Carbon dioxide emissions and driving factors varied significantly across east, central and west China. - Highlights: • A modified PDA is used to decompose carbon dioxide emission changes into seven drivers. • Two models were proposed to remedy infeasible cases. • Economic development was the largest factor increasing CO_2 emissions in China.

  20. Relation between nonlinear or 'not-linear' characteristics in nuclear kinetics and noise analysis of neutron flux

    International Nuclear Information System (INIS)

    Kataoka, H.

    1975-01-01

    The 'not-linear' or '2nd-class-nonlinear' characteristics in nuclear reactor kinetics arise from the feedback effect in high-power operation and induce an increase in the amplitude of the neutron flux noise, especially in the very low frequency region. The fundamental behaviour of the 'not-linear' characteristics and their effect on the reactor noise was investigated. Application of reactor noise analysis techniques to power reactors has not been successful because of a large, unexplained disagreement between the results of conventional theoretical analysis and the experimental facts. Once the cause of this discrepancy is clarified, reactor noise analysis techniques can be effectively applied to instrumentation, control, monitoring and diagnosis of power reactors. (author)

  1. Focal spot motion of linear accelerators and its effect on portal image analysis

    International Nuclear Information System (INIS)

    Sonke, Jan-Jakob; Brand, Bob; Herk, Marcel van

    2003-01-01

    The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of irradiation. When acquiring portal images, this motion will affect the projected position of anatomy and field edges, especially when low exposures are used. In this paper, the motion of the focal spot and the effect of this motion on portal image analysis are quantified. A slightly tilted narrow slit phantom was placed at the isocenter of several linear accelerators and images were acquired (3.5 frames per second) by means of an amorphous silicon flat panel imager positioned ∼0.7 m below the isocenter. The motion of the focal spot was determined by converting the tilted slit images to subpixel accurate line spread functions. The error in portal image analysis due to focal spot motion was estimated by a subtraction of the relative displacement of the projected slit from the relative displacement of the field edges. It was found that the motion of the focal spot depends on the control system and design of the accelerator. The shift of the focal spot at the start of irradiation ranges between 0.05-0.7 mm in the gun-target (GT) direction. In the left-right (AB) direction the shift is generally smaller. The resulting error in portal image analysis due to focal spot motion ranges between 0.05-1.1 mm for a dose corresponding to two monitor units (MUs). For 20 MUs, the effect of the focal spot motion reduces to 0.01-0.3 mm. The error in portal image analysis due to focal spot motion can be reduced by reducing the applied dose rate

  2. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition: "...this is an excellent book which could easily be used as a course text..." - International Statistical Institute. The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  3. Aortic and Hepatic Contrast Enhancement During Hepatic-Arterial and Portal Venous Phase Computed Tomography Scanning: Multivariate Linear Regression Analysis Using Age, Sex, Total Body Weight, Height, and Cardiac Output.

    Science.gov (United States)

    Masuda, Takanori; Nakaura, Takeshi; Funama, Yoshinori; Higaki, Toru; Kiguchi, Masao; Imada, Naoyuki; Sato, Tomoyasu; Awai, Kazuo

    We evaluated the effect of the age, sex, total body weight (TBW), height (HT) and cardiac output (CO) of patients on aortic and hepatic contrast enhancement during hepatic-arterial phase (HAP) and portal venous phase (PVP) computed tomography (CT) scanning. This prospective study received institutional review board approval; prior informed consent to participate was obtained from all 168 patients. All were examined using our routine protocol; the contrast material was 600 mg/kg iodine. Cardiac output was measured with a portable electrical velocimeter within 5 minutes of starting the CT scan. We calculated contrast enhancement (per gram of iodine: ΔHU/gI) of the abdominal aorta during the HAP and of the liver parenchyma during the PVP. We performed univariate and multivariate linear regression analysis between all patient characteristics and the ΔHU/gI of aortic and liver parenchymal enhancement. Univariate linear regression analysis demonstrated statistically significant correlations between the ΔHU/gI and the age, sex, TBW, HT, and CO (all P < 0.05). Multivariate linear regression analysis showed that only the TBW and CO were of independent predictive value (P < 0.05). Thus, in the multivariate linear regression analysis only the TBW and CO were significantly correlated with aortic and liver parenchymal enhancement; the age, sex, and HT were not. The CO was the only independent factor affecting aortic and liver parenchymal enhancement at hepatic CT when the protocol was adjusted for the TBW.
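
    The univariate-then-multivariate screening described above can be sketched in a few lines of Python with statsmodels; the synthetic data below merely mimic the structure of the study, and the variable names, effect sizes and noise levels are assumptions rather than the reported coefficients.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical data frame standing in for the 168 patients; 'enh' plays the role of
      # the enhancement per gram of iodine, and the predictors follow the abstract.
      rng = np.random.default_rng(3)
      n = 168
      df = pd.DataFrame({
          "age": rng.integers(30, 85, n),
          "sex": rng.integers(0, 2, n),
          "tbw": rng.normal(60, 10, n),       # total body weight (kg)
          "ht":  rng.normal(160, 8, n),       # height (cm)
          "co":  rng.normal(4.5, 0.8, n),     # cardiac output (L/min)
      })
      df["enh"] = 40 - 0.2 * df["tbw"] - 3.0 * df["co"] + rng.normal(0, 2, n)

      # Univariate screening: one simple regression per candidate predictor.
      for var in ["age", "sex", "tbw", "ht", "co"]:
          p = smf.ols(f"enh ~ {var}", data=df).fit().pvalues[var]
          print(f"univariate p-value for {var}: {p:.3g}")

      # Multivariate model: the predictors compete, and only the independent ones survive.
      print(smf.ols("enh ~ age + sex + tbw + ht + co", data=df).fit().summary())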

  4. Spectral analysis of linear relations and degenerate operator semigroups

    International Nuclear Information System (INIS)

    Baskakov, A G; Chernyshov, K I

    2002-01-01

    Several problems of the spectral theory of linear relations in Banach spaces are considered. Linear differential inclusions in a Banach space are studied. The construction of the phase space and solutions is carried out with the help of the spectral theory of linear relations, ergodic theorems, and degenerate operator semigroups

  5. LINEAR AND NON-LINEAR ANALYSES OF CABLE-STAYED STEEL FRAME SUBJECTED TO SEISMIC ACTIONS

    Directory of Open Access Journals (Sweden)

    Marko Đuran

    2017-01-01

    Full Text Available In this study, linear and non-linear dynamic analyses of a cable-stayed steel frame subjected to seismic actions are performed. The analyzed cable-stayed frame is the main supporting structure of a wide-span sports hall. Since the complex dynamic behavior of cable-stayed structures results in significant geometric nonlinearity, a nonlinear time history analysis is conducted. As a reference, an analysis using the European standard approach, the so-called linear modal response spectrum method, is also performed. The analyses are conducted for different seismic actions, considering the response spectra for various ground types and the corresponding artificially generated accelerograms. Despite fundamental differences between the two analyses, results indicate that the modal response spectrum analysis is surprisingly consistent with the internal forces and bending moment distributions of the nonlinear time history analysis. However, significantly smaller values of bending moments, internal forces, and displacements are obtained with the response spectrum analysis.

  6. Analysis of the linear induction motor in transient operation

    Energy Technology Data Exchange (ETDEWEB)

    Gentile, G; Rotondale, N; Scarano, M

    1987-05-01

    The paper deals with the analysis of a bilateral linear induction motor in transient operation. We have considered an impressed-voltage one-dimensional model which takes end effects into account. The real winding distribution of the armature has been represented as a lumped-parameter system. By using the space vector methodology, the partial differential equation of the sheet is solved by the variable separation method. Therefore it is possible to arrange a system of ordinary differential equations where the unknown quantities are the space vectors of the air-gap flux density and sheet currents. Finally, we have analyzed the characteristic quantities for the no-load starting of small-power motors.

  7. Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2009-01-01

    Full Text Available Based on a notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis, based on this new model, is simpler and more compact than that of the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.
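
    As a concrete, much simpler instance of the algorithm family discussed, the Python sketch below runs an inexact proximal point iteration on a convex function, solving each regularized subproblem only approximately; the monotone-inclusion generality of the paper is not captured here, and all parameter values are illustrative.

      import numpy as np
      from scipy.optimize import minimize

      def proximal_point(f, x0, c=1.0, iters=50, inner_tol=1e-3):
          """Inexact proximal point iteration for a convex function f:
          x_{k+1} approximately minimizes f(x) + (1/(2c)) * ||x - x_k||^2.
          Solving the subproblem only to tolerance inner_tol mimics the 'inexact'
          flavour of the algorithm analysed in the paper (a sketch, not the paper's
          general set-valued setting)."""
          x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              prox_obj = lambda z, xk=x.copy(): f(z) + np.sum((z - xk) ** 2) / (2 * c)
              x = minimize(prox_obj, x, tol=inner_tol).x
          return x

      # Example: minimize a strongly convex quadratic; the iterates converge linearly.
      f = lambda z: 0.5 * z @ np.diag([1.0, 10.0]) @ z - np.array([1.0, 2.0]) @ z
      print(proximal_point(f, x0=np.zeros(2)))   # approaches [1.0, 0.2]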

  8. Selection and optimization of spectrometric amplifiers for gamma spectrometry: part II - linearity, live time correction factors and software

    International Nuclear Information System (INIS)

    Moraes, Marco Antonio Proenca Vieira de; Pugliesi, Reinaldo

    1996-01-01

    The objective of the present work was to establish simple criteria to choose the best combination of electronic modules to achieve an adequate high-resolution gamma spectrometer. Linearity, live time correction factors and software of a gamma spectrometric system based on an HPGe detector have been studied using several kinds of spectrometric amplifiers (Canberra 2021, Canberra 2025, Ortec 673 and Tennelec 244) and the Ortec and Nucleus MCA cards. The results showed low values of integral non-linearity for all spectrometric amplifiers connected to the Ortec and Nucleus boards. The MCA card should be able to correct for amplifier dead time at count rates of 17 kcps. (author)

  9. On the dynamic analysis of piecewise-linear networks

    NARCIS (Netherlands)

    Heemels, WPMH; Camlibel, MK; Schumacher, JM

    Piecewise-linear (PL) modeling is often used to approximate the behavior of nonlinear circuits. One of the possible PL modeling methodologies is based on the linear complementarity problem, and this approach has already been used extensively in the circuits and systems community for static networks.

  10. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  11. Linear models with R

    CERN Document Server

    Faraway, Julian J

    2014-01-01

    A Hands-On Way to Learning Data Analysis. Part of the core of statistics, linear models are used to make predictions and explain the relationship between the response and the predictors. Understanding linear models is crucial to a broader competence in the practice of statistics. Linear Models with R, Second Edition explains how to use linear models in physical science, engineering, social science, and business applications. The book incorporates several improvements that reflect how the world of R has greatly expanded since the publication of the first edition. New to the Second Edition: Reorganiz

  12. Factors Influencing the Business Success of Small Business ‘Processed Rotan’: A Multiple Linear Regression Analysis

    OpenAIRE

    Nasution, Inggrita Gusti Sari; Muchtar, Yasmin Chairunnisa

    2013-01-01

    This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, with 30 research observations obtained through a census method. The method of analysis used in the study is multiple linear regression. The results of the analysis showed that the factors of labor, innovation and promotion simultaneously have a positive and significant influence on the business success of small business ‘processed rotan’. The analysis also showed that, partially, labor has a positive and significant influence on the business success, while innovation and promotion have a positive but insignificant influence on the business success.

  13. Materials analysis using x-ray linear attenuation coefficient measurements at four photon energies

    International Nuclear Information System (INIS)

    Midgley, S M

    2005-01-01

    The analytical properties of an accurate parameterization scheme for the x-ray linear attenuation coefficient are examined. The parameterization utilizes an additive combination of N compositional- and energy-dependent coefficients. The former were derived from a parameterization of elemental cross-sections using a polynomial in atomic number. The compositional-dependent coefficients are referred to as the mixture parameters, representing the electron density and higher order statistical moments describing elemental distribution. Additivity is an important property of the parameterization, allowing measured x-ray linear attenuation coefficients to be written as linear simultaneous equations, and then solved for the unknown coefficients. The energy-dependent coefficients can be determined by calibration from measurements with materials of known composition. The inverse problem may be utilized for materials analysis, whereby the simultaneous equations represent multi-energy linear attenuation coefficient measurements, and are solved for the mixture parameters. For in vivo studies, the choice of measurement energies is restricted to the diagnostic region (approximately 20 keV to 150 keV), where the parameterization requires N ≥ 4 energies. We identify a mathematical pathology that must be overcome in order to solve the inverse problem in this energy regime. An iterative inversion strategy is presented for materials analysis using four or more measurements, and then tested against real data obtained at energies 32 keV to 66 keV. The results demonstrate that it is possible to recover the electron density to within ±4%, as well as the fourth mixture parameter. It is also a key finding that the second and third mixture parameters cannot be recovered, as they are of minor importance in the parameterization at diagnostic x-ray energies
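
    Because the parameterization is additive, recovering the mixture parameters from multi-energy measurements reduces to solving a small linear system. The Python sketch below shows that step with entirely made-up calibration coefficients and parameter values; in practice the coefficient matrix comes from calibration measurements, and the paper's iterative strategy addresses the ill-conditioning that a naive solve like this one runs into at diagnostic energies.

      import numpy as np

      # Illustrative calibration matrix A: A[j, k] is the energy-dependent coefficient of the
      # k-th mixture parameter at measurement energy E_j (four energies, four parameters).
      A = np.array([[1.00, 0.80, 0.30, 0.10],
                    [1.00, 0.55, 0.15, 0.04],
                    [1.00, 0.35, 0.07, 0.015],
                    [1.00, 0.25, 0.04, 0.008]])
      m_true = np.array([3.34e23, 7.5, 60.0, 520.0])        # placeholder mixture parameters
      rng = np.random.default_rng(4)
      mu_meas = A @ m_true + rng.normal(0, 1e-3 * A @ m_true)   # noisy multi-energy measurements

      # The forward model is linear in the mixture parameters, so the inverse problem is a
      # (possibly ill-conditioned) linear least-squares solve.
      m_est, *_ = np.linalg.lstsq(A, mu_meas, rcond=None)
      print("relative errors:", (m_est - m_true) / m_true)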

  14. Non-linear unidimensional Debye screening in plasmas

    International Nuclear Information System (INIS)

    Clemente, R.A.; Martin, P.

    1992-01-01

    An exact analytical solution for T_e = T_i and an approximate solution for T_e ≠ T_i have been obtained for the unidimensional non-linear Debye potential. The approximate expression is a solution of the Poisson equation obtained by expanding the Boltzmann factors up to third order. The analysis shows that the effective Debye screening length can be quite different from the usual Debye length when the potential-to-thermal-energy ratio of the particles is not much smaller than unity. (author)

  15. Simplified non-linear time-history analysis based on the Theory of Plasticity

    DEFF Research Database (Denmark)

    Costa, Joao Domingues

    2005-01-01

    This paper aims to contribute to the problem of developing simplified non-linear time-history (NLTH) analysis of structures whose dynamical response is mainly governed by plastic deformations, capable of providing designers with sufficiently accurate results. The method to be presented ... is based on the Theory of Plasticity. Firstly, the formulation and the computational procedure to perform time-history analysis of a rigid-plastic single degree of freedom (SDOF) system are presented. The necessary conditions for the method to incorporate pinching as well as strength degradation ...

  16. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    Science.gov (United States)

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.
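
    The reference implementation of this method is the R function limma::voom. Purely to illustrate the underlying idea in Python, the sketch below computes log-CPM values, fits a lowess trend of the square root of the per-gene standard deviation against the mean log-count, and turns the trend into per-observation precision weights. It omits the design-matrix residuals and interpolation details of the real method, so it is a conceptual sketch only.

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      def voom_style_weights(counts, lib_size=None):
          """Rough sketch of the voom idea: log-CPM transform, mean-variance trend,
          precision weights. 'counts' is a genes x samples matrix of read counts."""
          counts = np.asarray(counts, dtype=float)
          if lib_size is None:
              lib_size = counts.sum(axis=0)                 # per-sample library sizes
          logcpm = np.log2((counts + 0.5) / (lib_size + 1.0) * 1e6)
          gene_mean = logcpm.mean(axis=1)
          gene_sqrt_sd = np.sqrt(logcpm.std(axis=1, ddof=1))
          # Lowess trend of sqrt(sd) versus mean log-count, evaluated at each gene.
          trend = lowess(gene_sqrt_sd, gene_mean, frac=0.5, return_sorted=False)
          fitted_var = np.maximum(trend, 1e-4) ** 4         # back to the variance scale
          return (1.0 / fitted_var)[:, None] * np.ones_like(logcpm)

      # weights = voom_style_weights(count_matrix)  # then feed into a weighted linear model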

  17. Non-linear singular problems in p-adic analysis: associative algebras of p-adic distributions

    International Nuclear Information System (INIS)

    Albeverio, S; Khrennikov, A Yu; Shelkovich, V M

    2005-01-01

    We propose an algebraic theory which can be used for solving both linear and non-linear singular problems of p-adic analysis related to p-adic distributions (generalized functions). We construct the p-adic Colombeau-Egorov algebra of generalized functions, in which Vladimirov's pseudo-differential operator plays the role of differentiation. This algebra is closed under Fourier transformation and associative convolution. Pointvalues of generalized functions are defined, and it turns out that any generalized function is uniquely determined by its pointvalues. We also construct an associative algebra of asymptotic distributions, which is generated by the linear span of the set of associated homogeneous p-adic distributions. This algebra is embedded in the Colombeau-Egorov algebra as a subalgebra. In addition, a new technique for constructing weak asymptotics is developed

  18. Factors Affecting Online Groupwork Interest: A Multilevel Analysis

    Science.gov (United States)

    Du, Jianxia; Xu, Jianzhong; Fan, Xitao

    2013-01-01

    The purpose of the present study is to examine the personal and contextual factors that may affect students' online groupwork interest. Using the data obtained from graduate students in an online course, both student- and group-level predictors for online groupwork interest were analyzed within the framework of hierarchical linear modeling…
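
    A two-level model of the kind the abstract refers to can be fitted in Python with statsmodels' MixedLM; the sketch below uses invented variable names and simulated students nested in groups, so it shows only the modeling pattern, not the study's actual predictors.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Simulated students nested in groups: one student-level and one group-level predictor.
      rng = np.random.default_rng(5)
      groups = np.repeat(np.arange(30), 6)                       # 30 groups x 6 students
      df = pd.DataFrame({
          "group": groups,
          "self_efficacy": rng.normal(0, 1, groups.size),        # student-level predictor (assumed)
          "group_cohesion": rng.normal(0, 1, 30)[groups],        # group-level predictor (assumed)
      })
      df["interest"] = (0.4 * df.self_efficacy + 0.3 * df.group_cohesion
                        + rng.normal(0, 1, 30)[groups]           # random group intercept
                        + rng.normal(0, 1, groups.size))         # student-level noise

      # Two-level hierarchical linear model: fixed effects at both levels,
      # plus a random intercept for each group.
      hlm = smf.mixedlm("interest ~ self_efficacy + group_cohesion",
                        df, groups=df["group"]).fit()
      print(hlm.summary())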

  19. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, S. R.; Wilson, P. P. H. [Engineering Physics Department, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Evans, T. M. [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37830 (United States)

    2013-07-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)
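
    The random-walk estimator at the heart of the method can be illustrated compactly. The Python sketch below implements a plain single-domain, forward Neumann-Ulam scheme for x = Hx + b with uniform transition probabilities and a fixed absorption probability; this is a simplification of the adjoint, domain-decomposed solver analysed in the paper, and all matrices are toy examples.

      import numpy as np

      def neumann_ulam_solve(H, b, n_walks=10_000, p_absorb=0.1, rng=None):
          """Forward Neumann-Ulam Monte Carlo estimate of x solving x = H x + b
          (the spectral radius of H must be < 1)."""
          rng = rng or np.random.default_rng(6)
          n = len(b)
          x = np.zeros(n)
          for i in range(n):                          # estimate each component separately
              total = 0.0
              for _ in range(n_walks):
                  state, weight = i, 1.0
                  total += weight * b[state]          # zeroth term of the Neumann series
                  while rng.random() > p_absorb:      # continue the walk with prob 1 - p_absorb
                      nxt = rng.integers(n)           # uniform transition probabilities
                      weight *= H[state, nxt] / ((1.0 / n) * (1 - p_absorb))
                      state = nxt
                      total += weight * b[state]
              x[i] = total / n_walks
          return x

      H = np.array([[0.1, 0.3], [0.2, 0.1]])
      b = np.array([1.0, 2.0])
      print(neumann_ulam_solve(H, b), "vs exact", np.linalg.solve(np.eye(2) - H, b))

    The average walk length here is 1/p_absorb, which is the kind of quantity the paper relates to the eigenvalues of the operator when assessing convergence speed and parallel cost.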

  20. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    International Nuclear Information System (INIS)

    Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.

    2013-01-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)

  1. The effect of zinc supplementation on linear growth, body composition, and growth factors in preterm infants.

    Science.gov (United States)

    Díaz-Gómez, N Marta; Doménech, Eduardo; Barroso, Flora; Castells, Silvia; Cortabarria, Carmen; Jiménez, Alejandro

    2003-05-01

    The aim of our study was to evaluate the effect of zinc supplementation on linear growth, body composition, and growth factors in premature infants. Thirty-six preterm infants (gestational age: 32.0 +/- 2.1 weeks, birth weight: 1704 +/- 364 g) participated in a longitudinal double-blind, randomized clinical trial. They were randomly allocated either to the supplemental (S) group fed with a standard term formula supplemented with zinc (final content 10 mg/L) and a small quantity of copper (final content 0.6 mg/L), or to the placebo group fed with the same formula without supplementation (final content of zinc: 5 mg/L and copper: 0.4 mg/L), from 36 weeks postconceptional age until 6 months corrected postnatal age. At each evaluation, anthropometric variables and bioelectrical impedance were measured, a 3-day dietary record was collected, and a blood sample was taken. We analyzed serum levels of total alkaline phosphatase, skeletal alkaline phosphatase (sALP), insulin-like growth factor (IGF)-I, IGF binding protein-3, IGF binding protein-1, zinc and copper, and the concentrations of zinc in erythrocytes. The S group had significantly higher zinc levels in serum and erythrocytes and lower serum copper levels with respect to the placebo group. We found that the S group had a greater linear growth (change in length standard deviation score from baseline to 3 months corrected age: 1.32 +/- 0.8 vs 0.38 +/- 0.8). The increase in total body water and in serum levels of sALP was also significantly higher in the S group (total body water at 3 months corrected age: 3.8 +/- 0.5 vs 3.5 +/- 0.4 kg, at 6 months corrected age: 4.5 +/- 0.5 vs 4.2 +/- 0.4 kg; sALP at 3 months corrected age: 140.2 +/- 28.7 vs 118.7 +/- 18.8 micro g/L). Zinc supplementation has a positive effect on linear growth in premature infants.

  2. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  3. Characterising non-linear dynamics in nocturnal breathing patterns of healthy infants using recurrence quantification analysis.

    Science.gov (United States)

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn

    2013-05-01

    Breathing dynamics vary between infant sleep states, and are likely to exhibit non-linear behaviour. This study applied the non-linear analytical tool recurrence quantification analysis (RQA) to 400 breath interval periods of REM and N-REM sleep, and then across a night's sleep using an overlapping moving window. The RQA variables were different between sleep states, with REM radius 150% greater than N-REM radius, and REM laminarity 79% greater than N-REM laminarity. RQA allowed the observation of temporal variations in non-linear breathing dynamics across a night's sleep at 30 s resolution, and provides a basis for quantifying changes in complex breathing dynamics with physiology and pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.
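
    To make the RQA variables mentioned above concrete, the short Python sketch below builds a recurrence matrix from a simulated series of breath intervals and computes the recurrence rate and a bare-bones laminarity. The radius value, the absence of phase-space embedding, and the simulated data are all simplifications relative to the study.

      import numpy as np

      def rqa_measures(x, radius, v_min=2):
          """Minimal recurrence quantification for a 1-D series: recurrence matrix
          R_ij = 1 if |x_i - x_j| <= radius, recurrence rate, and a simple laminarity
          (fraction of recurrent points lying on vertical lines of length >= v_min)."""
          x = np.asarray(x, dtype=float)
          R = (np.abs(x[:, None] - x[None, :]) <= radius).astype(int)
          rr = R.mean()                                   # recurrence rate
          in_lines = 0
          for col in R.T:                                 # scan vertical-line structures
              run = 0
              for v in np.append(col, 0):                 # trailing 0 flushes the last run
                  if v:
                      run += 1
                  else:
                      if run >= v_min:
                          in_lines += run
                      run = 0
          lam = in_lines / max(R.sum(), 1)                # laminarity
          return rr, lam

      breaths = np.random.default_rng(7).normal(1.2, 0.15, 400)   # 400 breath intervals (s)
      print(rqa_measures(breaths, radius=0.1))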

  4. Linear and non-linear stability analysis for finite difference discretizations of high-order Boussinesq equations

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Bingham, Harry B.; Madsen, Per A.

    2004-01-01

    of rotational and irrotational formulations in two horizontal dimensions provides evidence that the irrotational formulation has significantly better stability properties when the deep-water non-linearity is high, particularly on refined grids. Computation of matrix pseudospectra shows that the system is only...... insight into the numerical behaviour of this rather complicated system of non-linear PDEs....

  5. Microlocal analysis of a seismic linearized inverse problem

    NARCIS (Netherlands)

    Stolk, C.C.

    1999-01-01

    The seismic inverse problem is to determine the wavespeed c(x) in the interior of a medium from measurements at the boundary. In this paper we analyze the linearized inverse problem in general acoustic media. The problem is to find a left inverse of the linearized forward map F, or, equivalently, to find the

  6. Analysis by numerical simulations of non-linear phenomenons in vertical pump rotor dynamic

    International Nuclear Information System (INIS)

    Bediou, J.; Pasqualini, G.

    1992-01-01

    Controlling the dynamical behavior of main coolant pump shaftlines is an interesting subject for both the user and the constructor. The first is mainly concerned with the interpretation of behavior observed in the field, and with monitoring, reliability and preventive maintenance of his machines. The second must in addition cope with sometimes contradictory requirements related to mechanical design and performance optimization (shaft diameter reduction, clearances, ...). The use of numerical modeling is now a classical technique for simple analyses (rough prediction of critical speeds, for instance) but is still limited, in particular for vertical shaftlines, especially when equipped with hydrodynamic bearings, due to the complexity of the phenomena encountered in that type of machine. The vertical position of the shaftline seems to be the origin of non-linear dynamical behavior, the analysis of which, as presented in the following discussion, requires specific modeling of the fluid film, particularly for hydrodynamic bearings. The low static load generally no longer allows the use of stiffness and damping coefficients classically calculated by linearizing the fluid film equations near a stable static equilibrium position. For the analysis of such machines, specific numerical models have been developed at Electricite de France in a package for general rotordynamics analysis. The numerical models are briefly described. Then an example is presented and discussed in detail to illustrate some of the phenomena considered and their consequences on machine behavior. In this example, the authors interpret the observed behavior by using numerical models, and demonstrate the advantage of such analysis for a better understanding of vertical pump rotordynamics

  7. Quantization of liver tissue in dual kVp computed tomography using linear discriminant analysis

    Science.gov (United States)

    Tkaczyk, J. Eric; Langan, David; Wu, Xiaoye; Xu, Daniel; Benson, Thomas; Pack, Jed D.; Schmitz, Andrea; Hara, Amy; Palicek, William; Licato, Paul; Leverentz, Jaynne

    2009-02-01

    Linear discriminant analysis (LDA) is applied to dual kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign, hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that for single-energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent when implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual energy CT.

  8. Exploratory Bi-Factor Analysis: The Oblique Case

    Science.gov (United States)

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  9. Analysis of stress intensity factor for a Griffith crack opened under constant pressure in a plate with temperature dependent properties

    International Nuclear Information System (INIS)

    Hata, Toshiaki

    1982-01-01

    Recently, research on the thermal stress of structural materials has become important with the progress of nuclear reactor technology. In the case of large temperature gradients, the change of the physical properties of materials must be taken into account. Thermal stress analysis of cracked bodies that takes the temperature dependence of properties into account has scarcely been carried out. In this report, a general method of solution of three-dimensional problems using a perturbation method and an extension of the thermo-elastic displacement potential method is shown for the case in which Young's modulus changes as an exponential function of temperature. Moreover, using this method, the effect of the temperature dependence of properties on the stress intensity factor of cracks subjected to internal pressure in a strip exposed to linear thermal flow was clarified. In the analysis, Young's modulus, the coefficient of linear thermal expansion and the thermal conductivity were assumed to be dependent on temperature. The method of solution, the analysis of the stress intensity factor considering the change of properties due to temperature, and the numerical calculation for a square plate with a crack are explained. (Kako, I.)

  10. Matrices and linear transformations

    CERN Document Server

    Cullen, Charles G

    1990-01-01

    ""Comprehensive . . . an excellent introduction to the subject."" - Electronic Engineer's Design Magazine.This introductory textbook, aimed at sophomore- and junior-level undergraduates in mathematics, engineering, and the physical sciences, offers a smooth, in-depth treatment of linear algebra and matrix theory. The major objects of study are matrices over an arbitrary field. Contents include Matrices and Linear Systems; Vector Spaces; Determinants; Linear Transformations; Similarity: Part I and Part II; Polynomials and Polynomial Matrices; Matrix Analysis; and Numerical Methods. The first

  11. Non-linear models for the relation between cardiovascular risk factors and intake of wine, beer and spirits.

    Science.gov (United States)

    Ambler, Gareth; Royston, Patrick; Head, Jenny

    2003-02-15

    It is generally accepted that moderate consumption of alcohol is associated with a reduced risk of coronary heart disease (CHD). It is not clear however whether this benefit is derived through the consumption of a specific beverage type, for example, wine. In this paper the associations between known CHD risk factors and different beverage types are investigated using a novel approach with non-linear modelling. Two types of model are proposed which are designed to detect differential effects of beverage type. These may be viewed as extensions of Box and Tidwell's power-linear model. The risk factors high density lipoprotein cholesterol, fibrinogen and systolic blood pressure are considered using data from a large longitudinal study of British civil servants (Whitehall II). The results for males suggest that gram for gram of alcohol, the effect of wine differs from that of beer and spirits, particularly for systolic blood pressure. In particular increasing wine consumption is associated with slightly more favourable levels of all three risk factors studied. For females there is evidence of a differential relationship only for systolic blood pressure. These findings are tentative but suggest that further research is required to clarify the similarities and differences between the results for males and females and to establish whether either of the models is the more appropriate. However, having clarified these issues, the apparent benefit of consuming wine instead of other alcoholic beverages may be relatively small. Copyright 2003 John Wiley & Sons, Ltd.
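
    One way to picture the "power-linear" building block mentioned above is the small Python sketch below, which fits risk_factor = b0 + b1 * dose^lambda by non-linear least squares to synthetic data, with a separate fit possible for each beverage type. The paper's actual models extend this form and are fitted to the Whitehall II data, so the variable names, data and parameter values here are illustrative only.

      import numpy as np
      from scipy.optimize import curve_fit

      def power_linear(dose, b0, b1, lam):
          # Box-Tidwell-style power-linear dose-response curve
          return b0 + b1 * np.power(dose, lam)

      rng = np.random.default_rng(8)
      dose = rng.uniform(0.5, 40, 300)                          # grams of alcohol per day
      sbp = 120 + 0.8 * dose**0.6 + rng.normal(0, 3, 300)       # synthetic systolic BP values

      params, _ = curve_fit(power_linear, dose, sbp, p0=[120.0, 1.0, 1.0])
      print("b0, b1, lambda:", np.round(params, 3))

    Fitting the same curve separately to wine, beer and spirits data and comparing the estimated exponents is one simple way to look for the differential beverage effects the abstract describes.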

  12. PERFORMANCE OPTIMIZATION OF LINEAR INDUCTION MOTOR BY EDDY CURRENT AND FLUX DENSITY DISTRIBUTION ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. S. MANNA

    2011-12-01

    Full Text Available The development of electromagnetic devices such as machines, transformers and heating devices confronts engineers with several problems. For the design of an optimized geometry and the prediction of the operational behaviour, an accurate knowledge of the dependencies of the field quantities inside the magnetic circuits is necessary. This paper provides an eddy current and core flux density distribution analysis of a linear induction motor. Magnetic flux in the air gap of the Linear Induction Motor (LIM) is reduced by various losses such as end effects, fringing effects, skin effects, etc. The finite element based software package COMSOL Multiphysics (COMSOL Inc., USA) is used to obtain reliable and accurate computational results for optimizing the performance of the LIM. The geometrical characteristics of the LIM are varied to find the optimal point of thrust and minimum flux leakage under static and dynamic conditions.

  13. Core seismic behaviour: linear and non-linear models

    International Nuclear Information System (INIS)

    Bernard, M.; Van Dorsselaere, M.; Gauvain, M.; Jenapierre-Gantenbein, M.

    1981-08-01

    The usual methodology for core seismic behaviour analysis leads to a twofold, complementary approach: to define a core model to be included in the reactor-block seismic response analysis, simple enough but representative of the basic movements (diagrid or slab); and to define a finer core model, with basic data issued from the first model. This paper presents the history of the different models of both kinds. The inert mass model (IMM) yielded a first rough diagrid movement. The direct linear model (DLM), without shocks and with sodium as an added mass, led to two different versions: DLM 1 with independent movements of the fuel and radial blanket subassemblies, and DLM 2 with a combined core movement. The non-linear model (NLM) 'CORALIE' uses the same basic modelization (finite element beams) but accounts for shocks. It studies the response of a diameter on flats and takes into account the fluid coupling and the wrapper tube flexibility at the pad level. Damping consists of a modal part of 2% and a part due to shocks. Finally, 'CORALIE' yields the time-history of the displacements and efforts on the supports, but the damping (probably greater than 2%) and the fluid-structure interaction still remain to be specified more precisely. The validation experiments were performed on a RAPSODIE core mock-up at scale 1, in 1/3 similitude with respect to SPX 1. The equivalent linear model (ELM) was developed for the SPX 1 reactor-block response analysis and a specified seismic level (SB or SM). It is composed of several oscillators fixed to the diagrid and yields the same maximum displacements and efforts as the NLM. The SPX 1 core seismic analysis, with a diagrid input spectrum corresponding to a 0.1 g group acceleration, has been carried out with these models: some aspects of these calculations are presented here

  14. Transcription factors, coregulators, and epigenetic marks are linearly correlated and highly redundant.

    Directory of Open Access Journals (Sweden)

    Tobias Ahsendorf

    Full Text Available The DNA microstates that regulate transcription include sequence-specific transcription factors (TFs, coregulatory complexes, nucleosomes, histone modifications, DNA methylation, and parts of the three-dimensional architecture of genomes, which could create an enormous combinatorial complexity across the genome. However, many proteins and epigenetic marks are known to colocalize, suggesting that the information content encoded in these marks can be compressed. It has so far proved difficult to understand this compression in a systematic and quantitative manner. Here, we show that simple linear models can reliably predict the data generated by the ENCODE and Roadmap Epigenomics consortia. Further, we demonstrate that a small number of marks can predict all other marks with high average correlation across the genome, systematically revealing the substantial information compression that is present in different cell lines. We find that the linear models for activating marks are typically cell line-independent, while those for silencing marks are predominantly cell line-specific. Of particular note, a nuclear receptor corepressor, transducin beta-like 1 X-linked receptor 1 (TBLR1, was highly predictive of other marks in two hematopoietic cell lines. The methodology presented here shows how the potentially vast complexity of TFs, coregulators, and epigenetic marks at eukaryotic genes is highly redundant and that the information present can be compressed onto a much smaller subset of marks. These findings could be used to efficiently characterize cell lines and tissues based on a small number of diagnostic marks and suggest how the DNA microstates, which regulate the expression of individual genes, can be specified.

  15. Non-linear optical materials

    CERN Document Server

    Saravanan, R

    2018-01-01

    Non-linear optical materials have widespread and promising applications, but efforts to understand their local structure, electron density distribution and bonding are still lacking. The present work explores the structural details, the electron density distribution and the local bond length distribution of some non-linear optical materials. It also gives estimates of the optical band gap, particle size, crystallite size and elemental composition of some non-linear optical materials from UV-Visible analysis, SEM, XRD and EDS, respectively.

  16. Design of a transverse-flux permanent-magnet linear generator and controller for use with a free-piston stirling engine

    Science.gov (United States)

    Zheng, Jigui; Huang, Yuping; Wu, Hongxing; Zheng, Ping

    2016-07-01

    Transverse-flux machines with high efficiency have been applied in Stirling engine and permanent magnet synchronous linear generator systems; however, their use in large-scale applications is restricted by a low power factor and a complex manufacturing process. A novel type of cylindrical, non-overlapping, transverse-flux, permanent-magnet linear motor (TFPLM) is investigated; furthermore, a structure offering a high power factor and reduced process complexity is developed. The impact of the magnetic leakage factor on the power factor is discussed and, by using a Finite Element Analysis (FEA) model of the Stirling engine and TFPLM, an optimization method for the electromagnetic design of the TFPLM is proposed based on the magnetic leakage factor. The relation between power factor and structure parameters is investigated, and a structure parameter optimization method is proposed taking maximum power factor as the goal. Finally, a test bench is built, starting and generating experiments are performed, and good agreement between simulation and experiment is achieved. The power factor is improved and the process complexity is decreased. This research provides guidance for the design of high-power-factor permanent-magnet linear generators.

  17. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis.

    Science.gov (United States)

    Khalil, Mohamed H; Shebl, Mostafa K; Kosba, Mohamed A; El-Sabrout, Karim; Zaki, Nesma

    2016-08-01

    This research was conducted to determine the parameters most affecting the hatchability of eggs of indigenous and improved local chickens. Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) in four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine which most influenced hatchability. The results showed significant differences in commercial and scientific hatchability among strains. The Alexandria strain had the highest commercial hatchability (80.70%). Highly significant differences in hatching chick weight among the studied strains were also observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. Prediction of hatchability using multiple regression analysis could be a good tool to improve the hatchability percentage in chickens.

  18. Non-linear analysis of wave progagation using transform methods and plates and shells using integral equations

    Science.gov (United States)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible, lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and a Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces. The results are compared to two existing numerical solutions of the problem which differ substantially. linear model are compared to those

  19. Cement Leakage in Percutaneous Vertebral Augmentation for Osteoporotic Vertebral Compression Fractures: Analysis of Risk Factors.

    Science.gov (United States)

    Xie, Weixing; Jin, Daxiang; Ma, Hui; Ding, Jinyong; Xu, Jixi; Zhang, Shuncong; Liang, De

    2016-05-01

    The risk factors for cement leakage were retrospectively reviewed in 192 patients who underwent percutaneous vertebral augmentation (PVA). To discuss the factors related to cement leakage in the PVA procedure for the treatment of osteoporotic vertebral compression fractures. PVA is widely applied for the treatment of osteoporotic vertebral fractures. Cement leakage is a major complication of this procedure. The risk factors for cement leakage were controversial. A retrospective review of 192 patients who underwent PVA was conducted. The following data were recorded: age, sex, bone density, number of fractured vertebrae before surgery, number of treated vertebrae, severity of the treated vertebrae, operative approach, volume of injected bone cement, preoperative vertebral compression ratio, preoperative local kyphosis angle, intraosseous clefts, preoperative vertebral cortical bone defect, and ratio and type of cement leakage. To study the correlation between each factor and cement leakage ratio, bivariate regression analysis was employed to perform univariate analysis, whereas multivariate linear regression analysis was employed to perform multivariate analysis. The study included 192 patients (282 treated vertebrae), and cement leakage occurred in 100 vertebrae (35.46%). The vertebrae with preoperative cortical bone defects generally exhibited a higher cement leakage ratio, and the leakage is typically type C. Vertebrae with intact cortical bones before the procedure tend to experience type S leakage. Univariate analysis showed that patient age, bone density, number of fractured vertebrae before surgery, and vertebral cortical bone defect were associated with cement leakage ratio (P < 0.05). Multivariate analysis showed that the factors independently associated with cement leakage were bone density and vertebral cortical bone defect, with standardized partial regression coefficients of -0.085 and 0.144, respectively. Bone density and vertebral cortical bone defect are independent risk factors associated with bone cement leakage.

  20. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Directory of Open Access Journals (Sweden)

    Xin-Jia Meng

    2015-01-01

    Full Text Available Multidisciplinary reliability is an important part of the reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed based on the framework of CLA-CO. This method improves the MPP searching process through two elements. One is treating the discipline analyses as the equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline, and it also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with non-normally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.

  1. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, and the least-square estimation method is employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is fitted to the features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to the other existing techniques.
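
    A minimal sketch of the idea, assuming hypothetical per-region color and texture features: the least-squares parameter estimates of a linear model act as the fused feature vector, and the Canberra distance compares query and target vectors. The function names and data are illustrative, not the paper's implementation.

        import numpy as np

        def fused_feature_vector(color_feats, texture_feats):
            """Fuse per-region color and texture features with a linear model.
            The least-squares parameter estimates serve as the feature vector.
            (Illustrative formulation; the paper's exact model may differ.)"""
            X = np.column_stack([np.ones(len(color_feats)), color_feats])
            beta, *_ = np.linalg.lstsq(X, texture_feats, rcond=None)
            return beta.ravel()

        def canberra(u, v):
            """Canberra distance between two feature vectors."""
            denom = np.abs(u) + np.abs(v)
            mask = denom > 0
            return np.sum(np.abs(u - v)[mask] / denom[mask])

        rng = np.random.default_rng(0)
        query_regions  = (rng.random((8, 3)), rng.random(8))   # (color, texture) per region
        target_regions = (rng.random((8, 3)), rng.random(8))

        q = fused_feature_vector(*query_regions)
        t = fused_feature_vector(*target_regions)
        print("Canberra distance between query and target:", canberra(q, t))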

  2. Linear ubiquitination in immunity.

    Science.gov (United States)

    Shimizu, Yutaka; Taraborrelli, Lucia; Walczak, Henning

    2015-07-01

    Linear ubiquitination is a post-translational protein modification recently discovered to be crucial for innate and adaptive immune signaling. The function of linear ubiquitin chains is regulated at multiple levels: generation, recognition, and removal. These chains are generated by the linear ubiquitin chain assembly complex (LUBAC), the only known ubiquitin E3 capable of forming the linear ubiquitin linkage de novo. LUBAC is not only relevant for activation of nuclear factor-κB (NF-κB) and mitogen-activated protein kinases (MAPKs) in various signaling pathways, but importantly, it also regulates cell death downstream of immune receptors capable of inducing this response. Recognition of the linear ubiquitin linkage is specifically mediated by certain ubiquitin receptors, which is crucial for translation into the intended signaling outputs. LUBAC deficiency results in attenuated gene activation and increased cell death, causing pathologic conditions in both mice and humans. Removal of ubiquitin chains is mediated by deubiquitinases (DUBs). Two of them, OTULIN and CYLD, are constitutively associated with LUBAC. Here, we review the current knowledge on linear ubiquitination in immune signaling pathways and the biochemical mechanisms as to how linear polyubiquitin exerts its functions distinctly from those of other ubiquitin linkage types. © 2015 The Authors. Immunological Reviews Published by John Wiley & Sons Ltd.

  3. Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice

    Directory of Open Access Journals (Sweden)

    Federico Chella

    2017-05-01

    Full Text Available Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of the EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study aims at investigating the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of the electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of the head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST providing superior performance to all the other references in approximating the ideal neutral reference. In conclusion, this study highlights the importance of

  4. Linear system theory

    Science.gov (United States)

    Callier, Frank M.; Desoer, Charles A.

    1991-01-01

    The aim of this book is to provide a systematic and rigorous access to the main topics of linear state-space system theory in both the continuous-time case and the discrete-time case; and the I/O description of linear systems. The main thrusts of the work are the analysis of system descriptions and derivations of their properties, LQ-optimal control, state feedback and state estimation, and MIMO unity-feedback systems.

  5. Sensitivity Analysis of the USLE Soil Erodibility Factor to Its Determining Parameters

    Science.gov (United States)

    Mitova, Milena; Rousseva, Svetla

    2014-05-01

    Soil erosion is recognized as one of the most serious soil threats worldwide. Soil erosion prediction is the first step in soil conservation planning. The Universal Soil Loss Equation (USLE) is one of the most widely used models for soil erosion predictions. One of the five USLE predictors is the soil erodibility factor (K-factor), which evaluates the impact of soil characteristics on soil erosion rates. The soil erodibility nomograph defines the K-factor depending on soil characteristics, such as: particle size distribution (fractions finer than 0.002 mm and from 0.1 to 0.002 mm), organic matter content, soil structure and soil profile water permeability. Identifying the soil characteristics that most influence the K-factor would give an opportunity to control the soil loss through erosion by controlling the parameters that reduce the K-factor value. The aim of the report is to present the results of analysis of the relative weight of these soil characteristics in the K-factor values. The relative impact of the soil characteristics on the K-factor was studied through a series of statistical analyses of data from the geographic database for soil erosion risk assessments in Bulgaria. The degree of correlation between K-factor values and the parameters that determine it was studied by correlation analysis. The sensitivity of the K-factor was determined by studying the variance of each parameter within the range between minimum and maximum possible values considering average values of the other factors. Normalizing transformation of the data sets was applied because of the different dimensions and orders of variation of the values of the various parameters. The results show that the content of particles finer than 0.002 mm has the most significant relative impact on the soil erodibility, followed by the content of particles with size from 0.1 mm to 0.002 mm, the class of the water permeability of the soil profile, the content of organic matter and the aggregation class.

  6. Three dimensional finite element linear analysis of reinforced concrete structures

    International Nuclear Information System (INIS)

    Inbasakaran, M.; Pandarinathan, V.G.; Krishnamoorthy, C.S.

    1979-01-01

    A twenty noded isoparametric reinforced concrete solid element for the three dimensional linear elastic stress analysis of reinforced concrete structures is presented. The reinforcement is directly included as an integral part of the element thus facilitating discretization of the structure independent of the orientation of reinforcement. Concrete stiffness is evaluated by using a 3 x 3 x 3 Gauss integration rule and steel stiffness is evaluated numerically by considering three Gaussian points along the length of reinforcement. The numerical integration for steel stiffness necessitates the conversion of global coordinates of the Gaussian points to nondimensional local coordinates and this is done by the Newton-Raphson iterative method. Subroutines for the above formulation have been developed and added to SAP and STAP routines for solving the examples. The validity of the reinforced concrete element is verified by comparison of results from finite element analysis and analytical results. It is concluded that this finite element model provides a valuable analytical tool for the three dimensional elastic stress analysis of concrete structures like beams curved in plan and nuclear containment vessels. (orig.)

  7. Ordinal Log-Linear Models for Contingency Tables

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available A log-linear analysis is a method providing a comprehensive scheme to describe the associations between categorical variables in a contingency table. The log-linear model specifies how the expected counts depend on the levels of the categorical variables for these cells and provides detailed information on the associations. The aim of this paper is to present theoretical, as well as empirical, aspects of ordinal log-linear models used for contingency tables with ordinal variables. We introduce log-linear models for ordinal variables: linear-by-linear association, row effect model, column effect model and RC Goodman's model. The algorithm, advantages and disadvantages will be discussed in the paper. An empirical analysis will be conducted with the use of R.
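
    The linear-by-linear association model can be illustrated as a Poisson log-linear GLM with a single association parameter built from equally spaced row and column scores. The sketch below uses Python and statsmodels (assumed available) with a hypothetical 3x4 table; the paper's own analysis is carried out in R.

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical 3x4 contingency table with ordinal row and column variables.
        counts = np.array([[20, 15, 10,  5],
                           [12, 18, 14,  9],
                           [ 5, 11, 16, 22]], dtype=float)

        r, c = counts.shape
        rows = np.repeat(np.arange(r), c)
        cols = np.tile(np.arange(c), r)

        # Design: row main effects, column main effects, and one linear-by-linear
        # association term u_i * v_j built from equally spaced scores.
        row_dummies = np.eye(r)[rows][:, 1:]          # drop first level (baseline)
        col_dummies = np.eye(c)[cols][:, 1:]
        assoc = (rows * cols).reshape(-1, 1).astype(float)
        X = sm.add_constant(np.hstack([row_dummies, col_dummies, assoc]))

        model = sm.GLM(counts.ravel(), X, family=sm.families.Poisson()).fit()
        print("linear-by-linear association parameter:", model.params[-1])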

  8. Design, analysis and fabrication of a linear permanent magnet ...

    Indian Academy of Sciences (India)

    MONOJIT SEAL

    Linear permanent magnet synchronous machine; LPMSM—fabrication; design optimisation; finite-element ... induction motor (LIM) prototype was patented in 1890 [1]. Since then, linear ..... Also, for manual winding, more slot area is allotted to ...

  9. Longitudinal Jitter Analysis of a Linear Accelerator Electron Gun

    Directory of Open Access Journals (Sweden)

    MingShan Liu

    2016-11-01

    Full Text Available We present measurements and analysis of the longitudinal timing jitter of a Beijing Electron Positron Collider (BEPCII) linear accelerator electron gun. We simulated the longitudinal jitter effect of the gun using PARMELA to evaluate beam performance, including: beam profile, average energy, energy spread, and XY emittances. The maximum percentage difference of the beam parameters is calculated to be 100%, 13.27%, 42.24% and 65.01%, 86.81%, respectively. Due to this, the bunching efficiency is reduced to 54%. However, the longitudinal phase difference of the reference particle was 9.89°. The simulation results are in agreement with tests and are helpful for optimizing the beam parameters by tuning the trigger timing of the gun during the bunching process.

  10. MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY

    OpenAIRE

    Chayalakshmi C.L

    2018-01-01

    Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing its efficiency. However, determination of boiler efficiency using the conventional method is time consuming and very expensive; hence, it is not recommended to determine boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo...

  11. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    Science.gov (United States)

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
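
    The same iterative least-squares idea can be reproduced outside Excel; the sketch below uses Python's scipy.optimize.curve_fit with a saturation function and hypothetical data chosen purely for illustration, not the functions or data from the article.

        import numpy as np
        from scipy.optimize import curve_fit

        # Any user-defined function y = f(x, parameters); here a saturation curve
        # chosen purely for illustration.
        def f(x, vmax, km):
            return vmax * x / (km + x)

        # Hypothetical experimental data.
        x = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
        y = np.array([0.9, 1.6, 2.6, 3.6, 4.3, 4.7, 4.9])

        # Iterative least-squares fit (same principle as the SOLVER routine):
        # start from an initial guess and minimize the sum of squared residuals.
        popt, pcov = curve_fit(f, x, y, p0=[5.0, 2.0])
        residuals = y - f(x, *popt)
        print("fitted parameters:", popt)
        print("sum of squared residuals:", np.sum(residuals**2))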

  12. Thermal radiation analysis for small satellites with single-node model using techniques of equivalent linearization

    International Nuclear Information System (INIS)

    Anh, N.D.; Hieu, N.N.; Chung, P.N.; Anh, N.T.

    2016-01-01

    Highlights: • Linearization criteria are presented for a single-node model of satellite thermal. • A nonlinear algebraic system for linearization coefficients is obtained. • The temperature evolutions obtained from different methods are explored. • The temperature mean and amplitudes versus the heat capacity are discussed. • The dual criterion approach yields smaller errors than other approximate methods. - Abstract: In this paper, the method of equivalent linearization is extended to the thermal analysis of satellite using both conventional and dual criteria of linearization. These criteria are applied to a differential nonlinear equation of single-node model of the heat transfer of a small satellite in the Low Earth Orbit. A system of nonlinear algebraic equations for linearization coefficients is obtained in the closed form and then solved by the iteration method. The temperature evolution, average values and amplitudes versus the heat capacity obtained by various approaches including Runge–Kutta algorithm, conventional and dual criteria of equivalent linearization, and Grande's approach are compared together. Numerical results reveal that temperature responses obtained from the method of linearization and Grande's approach are quite close to those obtained from the Runge–Kutta method. The dual criterion yields smaller errors than those of the remaining methods when the nonlinearity of the system increases, namely, when the heat capacity varies in the range [1.0, 3.0] × 10⁴ J K⁻¹.
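
    The single-node model referred to above is essentially a nonlinear energy balance, C dT/dt = Q(t) - εσAT⁴, which the cited approaches either linearize or integrate directly. The sketch below integrates the nonlinear equation with a Runge-Kutta solver to obtain the reference temperature evolution; all parameter values are illustrative placeholders, not those of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative single-node parameters (placeholders, not the paper's values).
        C     = 2.0e4          # heat capacity, J/K
        eps   = 0.8            # emissivity
        sigma = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
        A     = 1.0            # radiating area, m^2
        P     = 5400.0         # orbital period, s

        def heat_input(t):
            """Periodic absorbed heat flux (sunlit/eclipse), W; purely illustrative."""
            return 900.0 if (t % P) < 0.6 * P else 150.0

        def rhs(t, T):
            # Nonlinear single-node energy balance: C dT/dt = Q(t) - eps*sigma*A*T^4
            return [(heat_input(t) - eps * sigma * A * T[0] ** 4) / C]

        sol = solve_ivp(rhs, (0.0, 5 * P), [290.0], max_step=10.0)
        T = sol.y[0]
        print(f"mean temperature : {T.mean():7.2f} K")
        print(f"oscillation span : {T.max() - T.min():7.2f} K")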

  13. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: Comparison of various error functions

    International Nuclear Information System (INIS)

    Kumar, K. Vasanth; Porkodi, K.; Rocha, F.

    2008-01-01

    A comparison of linear and non-linear regression method in selecting the optimum isotherm was made to the experimental equilibrium data of methylene blue sorption by activated carbon. The r² was used to select the best fit linear theoretical isotherm. In the case of non-linear regression method, six error functions, namely coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS) were used to predict the parameters involved in the two and three parameter isotherms and also to predict the optimum isotherm. For two parameter isotherm, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of three parameter isotherm, r² was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor to choose the optimum isotherm. In addition to the size of error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
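
    The error functions named above are commonly written as in the sketch below, which evaluates them in Python for a two-parameter Langmuir isotherm with hypothetical equilibrium data, purely to illustrate how the different error measures weight the residuals.

        import numpy as np

        def langmuir(ce, qm, b):
            """Two-parameter Langmuir isotherm q_e = qm*b*Ce / (1 + b*Ce)."""
            return qm * b * ce / (1.0 + b * ce)

        # Hypothetical equilibrium data (Ce, qe) and candidate parameters.
        ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
        qe = np.array([35.0, 70.0, 110.0, 150.0, 180.0, 200.0])
        q_pred = langmuir(ce, qm=230.0, b=0.05)

        n, p = len(qe), 2               # number of points, number of isotherm parameters
        err = qe - q_pred

        error_functions = {
            "ERRSQ":  np.sum(err ** 2),                                   # sum of squared errors
            "HYBRID": 100.0 / (n - p) * np.sum(err ** 2 / qe),            # hybrid fractional error
            "MPSD":   100.0 * np.sqrt(np.sum((err / qe) ** 2) / (n - p)), # Marquardt's % std. dev.
            "ARE":    100.0 / n * np.sum(np.abs(err) / qe),               # average relative error
            "EABS":   np.sum(np.abs(err)),                                # sum of absolute errors
        }
        for name, value in error_functions.items():
            print(f"{name:>6s}: {value:10.3f}")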

  14. ANALYSIS OF FACTORS CAUSING WATER DAMAGE TO LOESS DOUBLE-ARCHED TUNNEL BASED ON TFN-AHP

    Directory of Open Access Journals (Sweden)

    Mao Zheng-jun

    2017-04-01

    Full Text Available In order to analyse the factors causing water damage to loess double-arched tunnels, this paper conducts a field investigation on water damage to tunnels on the Lishi-Jundu Expressway in Shanxi, China, confirms its development characteristics, builds an index system (covering 36 evaluation indexes for construction condition, design stage, construction stage, and operation stage) for the factors causing water damage to loess double-arched tunnels, applies TFN-AHP (triangular fuzzy number-analytic hierarchy process) in calculating the weight of indexes at different levels, and obtains the final sequence of weight of the factors causing water seepage to loess double-arched tunnels. It is found that water damage to loess double-arched tunnels always develops in construction joints, expansion joints, settlement joints, and lining joints of the tunnel and even around them; there is dotted water seepage, linear water seepage, and planar water seepage according to the trace and scope of water damage to the tunnel lining. The result shows that water damage to loess double-arched tunnels mainly refers to linear water seepage, planar water seepage is also well developed, and the partition and equipment box at the entrance and exit of the tunnel are prone to water seepage; the construction stage is crucial for controlling water damage to loess double-arched tunnels, atmospheric precipitation is the main water source, and the structure defect of the double-arched tunnel increases the possibility of water seepage; the final sequence for weight of the various factors is similar to the actual result.

  15. Analysis and Design of a Maglev Permanent Magnet Synchronous Linear Motor to Reduce Additional Torque in dq Current Control

    Directory of Open Access Journals (Sweden)

    Feng Xing

    2018-03-01

    Full Text Available The maglev linear motor has three degrees of motion freedom, which are respectively realized by the thrust force in the x-axis, the levitation force in the z-axis and the torque around the y-axis. Both the thrust force and levitation force can be seen as the sum of the forces on the three windings. The resultant thrust force and resultant levitation force are independently controlled by the d-axis current and q-axis current respectively. Thus, the commonly used dq transformation control strategy is suitable for realizing the control of the resultant force, either the thrust force or the levitation force. However, the forces on the three windings also generate additional torque because they do not pass through the mover's mass center. To realize high-precision control of the maglev system, a maglev linear motor with a new structure is proposed in this paper to decrease this torque. First, the electromagnetic model of the motor can be deduced through the Lorentz force formula. Second, the analytic method and finite element method are used to explore the origin of this additional torque and the factors that affect its trend. Furthermore, a maglev linear motor with a new structure is proposed, with two sets of windings shifted by 90 degrees designed on the mover. Under such a structure, the mover-position-dependent periodic part of the additional torque can be offset. Finally, the theoretical analysis is validated by the simulation result that the additionally generated rotating torque can be offset with little fluctuation in the proposed new-structure maglev linear motor. Moreover, the control system is built in MATLAB/Simulink, which shows that it has small thrust ripple and high-precision performance.

  16. Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report

    Science.gov (United States)

    Ahmad, Shahid

    1991-01-01

    An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration and linear and nonlinear transient dynamic problems involving two and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of the complexity of the formulation it has never been implemented in exact form. In the present work, linear and nonlinear time domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all the existing formulation of BEM in dynamics use the constant variation of the variables in space and time which is very unrealistic for engineering problems and, in some cases, it leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, such that not only problems of the layered media and the soil-structure interaction can be analyzed but also a large problem can be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons

  17. A Simple Linear Regression Method for Quantitative Trait Loci Linkage Analysis With Censored Observations

    OpenAIRE

    Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M.

    2006-01-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using...

  18. Sub-regional linear programming models in land use analysis: a case study of the Neguev settlement, Costa Rica.

    NARCIS (Netherlands)

    Schipper, R.A.; Stoorvogel, J.J.; Jansen, D.M.

    1995-01-01

    The paper deals with linear programming as a tool for land use analysis at the sub-regional level. A linear programming model of a case study area, the Neguev settlement in the Atlantic zone of Costa Rica, is presented. The matrix of the model includes five submatrices each encompassing a different
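
    A sub-regional land-use model of this kind reduces to a standard linear program: maximize gross margin subject to resource constraints. The sketch below uses scipy.optimize.linprog with made-up activities and coefficients; it is not the Neguev model itself.

        import numpy as np
        from scipy.optimize import linprog

        # Decision variables: hectares allocated to three hypothetical land-use activities.
        activities = ["maize", "cassava", "pasture"]
        gross_margin = np.array([420.0, 380.0, 250.0])    # $/ha (illustrative)

        # Constraints: total land and total labour available in the sub-region.
        #                 maize  cassava  pasture
        A_ub = np.array([[ 1.0,   1.0,    1.0],    # land (ha)
                         [35.0,  28.0,   12.0]])   # labour (person-days/ha)
        b_ub = np.array([1200.0, 30000.0])          # 1200 ha, 30000 person-days

        # linprog minimizes, so negate the objective to maximize gross margin.
        res = linprog(-gross_margin, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
        for name, ha in zip(activities, res.x):
            print(f"{name:>8s}: {ha:8.1f} ha")
        print("maximum gross margin:", -res.fun)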

  19. Linear ubiquitination signals in adaptive immune responses.

    Science.gov (United States)

    Ikeda, Fumiyo

    2015-07-01

    Ubiquitin can form eight different linkage types of chains using the intrinsic Met 1 residue or one of the seven intrinsic Lys residues. Each linkage type of ubiquitin chain has a distinct three-dimensional topology, functioning as a tag to attract specific signaling molecules, which are so-called ubiquitin readers, and regulates various biological functions. Ubiquitin chains linked via Met 1 in a head-to-tail manner are called linear ubiquitin chains. Linear ubiquitination plays an important role in the regulation of cellular signaling, including the best-characterized tumor necrosis factor (TNF)-induced canonical nuclear factor-κB (NF-κB) pathway. Linear ubiquitin chains are specifically generated by an E3 ligase complex called the linear ubiquitin chain assembly complex (LUBAC) and hydrolyzed by a deubiquitinase (DUB) called ovarian tumor (OTU) DUB with linear linkage specificity (OTULIN). LUBAC linearly ubiquitinates critical molecules in the TNF pathway, such as NEMO and RIPK1. The linear ubiquitin chains are then recognized by the ubiquitin readers, including NEMO, which control the TNF pathway. Accumulating evidence indicates an importance of the LUBAC complex in the regulation of apoptosis, development, and inflammation in mice. In this article, I focus on the role of linear ubiquitin chains in adaptive immune responses with an emphasis on the TNF-induced signaling pathways. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. NBLDA: negative binomial linear discriminant analysis for RNA-Seq data.

    Science.gov (United States)

    Dong, Kai; Zhao, Hongyu; Tong, Tiejun; Wan, Xiang

    2016-09-13

    RNA-sequencing (RNA-Seq) has become a powerful technology to characterize gene expression profiles because it is more accurate and comprehensive than microarrays. Although statistical methods that have been developed for microarray data can be applied to RNA-Seq data, they are not ideal due to the discrete nature of RNA-Seq data. The Poisson distribution and negative binomial distribution are commonly used to model count data. Recently, Witten (Annals Appl Stat 5:2493-2518, 2011) proposed a Poisson linear discriminant analysis for RNA-Seq data. The Poisson assumption may not be as appropriate as the negative binomial distribution when biological replicates are available and in the presence of overdispersion (i.e., when the variance is larger than or equal to the mean). However, it is more complicated to model negative binomial variables because they involve a dispersion parameter that needs to be estimated. In this paper, we propose a negative binomial linear discriminant analysis for RNA-Seq data. By Bayes' rule, we construct the classifier by fitting a negative binomial model, and propose some plug-in rules to estimate the unknown parameters in the classifier. The relationship between the negative binomial classifier and the Poisson classifier is explored, with a numerical investigation of the impact of dispersion on the discriminant score. Simulation results show the superiority of our proposed method. We also analyze two real RNA-Seq data sets to demonstrate the advantages of our method in real-world applications. We have developed a new classifier using the negative binomial model for RNA-seq data classification. Our simulation results show that our proposed classifier has a better performance than existing works. The proposed classifier can serve as an effective tool for classifying RNA-seq data. Based on the comparison results, we have provided some guidelines for scientists to decide which method should be used in the discriminant analysis of RNA-Seq data
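
    The core of such a classifier is a plug-in discriminant rule: each class is described by negative binomial means and a dispersion, and a new sample is assigned to the class with the largest log-likelihood. The sketch below illustrates only this core idea with hypothetical estimates; the published NBLDA method includes additional normalization (size factors) and shrinkage steps omitted here.

        import numpy as np
        from scipy.stats import nbinom

        def nb_logpmf(x, mu, phi):
            """Negative binomial log-pmf with mean mu and dispersion phi
            (variance = mu + phi*mu^2), in scipy's (n, p) parameterization."""
            n = 1.0 / phi
            p = n / (n + mu)
            return nbinom.logpmf(x, n, p)

        def nblda_predict(x_new, class_means, phi):
            """Assign x_new (gene counts) to the class with the largest total log-likelihood."""
            scores = [nb_logpmf(x_new, mu_k, phi).sum() for mu_k in class_means]
            return int(np.argmax(scores)), scores

        # Hypothetical plug-in estimates for 2 classes over 5 genes.
        class_means = [np.array([10.0, 50.0, 5.0, 80.0, 20.0]),
                       np.array([12.0, 20.0, 30.0, 75.0, 60.0])]
        phi = 0.2                                  # common dispersion, illustrative
        x_new = np.array([11, 25, 22, 70, 48])

        label, scores = nblda_predict(x_new, class_means, phi)
        print("predicted class:", label, "log-likelihood scores:", scores)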

  1. Human factors evaluation of teletherapy: Function and task analysis. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    Kaye, R.D.; Henriksen, K.; Jones, R. [Hughes Training, Inc., Falls Church, VA (United States); Morisseau, D.S.; Serig, D.I. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-07-01

    As a treatment methodology, teletherapy selectively destroys cancerous and other tissue by exposure to an external beam of ionizing radiation. Sources of radiation are either a radioactive isotope, typically Cobalt-60 (Co-60), or a linear accelerator. Records maintained by the NRC have identified instances of teletherapy misadministration where the delivered radiation dose has differed from the radiation prescription (e.g., instances where fractions were delivered to the wrong patient, to the wrong body part, or were too great or too little with respect to the defined treatment volume). Both human error and machine malfunction have led to misadministrations. Effective and safe treatment requires a concern for precision and consistency of human-human and human-machine interactions throughout the course of therapy. The present study is the first part of a series of human factors evaluations for identifying the root causes that lead to human error in the teletherapy environment. The human factors evaluations included: (1) a function and task analysis of teletherapy activities, (2) an evaluation of the human-system interfaces, (3) an evaluation of procedures used by teletherapy staff, (4) an evaluation of the training and qualifications of treatment staff (excluding the oncologists), (5) an evaluation of organizational practices and policies, and (6) an identification of problems and alternative approaches for NRC and industry attention. The present report addresses the function and task analysis of teletherapy activities and provides the foundation for the conduct of the subsequent evaluations. The report includes sections on background, methodology, a description of the function and task analysis, and use of the task analysis findings for the subsequent tasks. The function and task analysis data base also is included.

  2. Human factors evaluation of teletherapy: Function and task analysis. Volume 2

    International Nuclear Information System (INIS)

    Kaye, R.D.; Henriksen, K.; Jones, R.; Morisseau, D.S.; Serig, D.I.

    1995-07-01

    As a treatment methodology, teletherapy selectively destroys cancerous and other tissue by exposure to an external beam of ionizing radiation. Sources of radiation are either a radioactive isotope, typically Cobalt-60 (Co-60), or a linear accelerator. Records maintained by the NRC have identified instances of teletherapy misadministration where the delivered radiation dose has differed from the radiation prescription (e.g., instances where fractions were delivered to the wrong patient, to the wrong body part, or were too great or too little with respect to the defined treatment volume). Both human error and machine malfunction have led to misadministrations. Effective and safe treatment requires a concern for precision and consistency of human-human and human-machine interactions throughout the course of therapy. The present study is the first part of a series of human factors evaluations for identifying the root causes that lead to human error in the teletherapy environment. The human factors evaluations included: (1) a function and task analysis of teletherapy activities, (2) an evaluation of the human-system interfaces, (3) an evaluation of procedures used by teletherapy staff, (4) an evaluation of the training and qualifications of treatment staff (excluding the oncologists), (5) an evaluation of organizational practices and policies, and (6) an identification of problems and alternative approaches for NRC and industry attention. The present report addresses the function and task analysis of teletherapy activities and provides the foundation for the conduct of the subsequent evaluations. The report includes sections on background, methodology, a description of the function and task analysis, and use of the task analysis findings for the subsequent tasks. The function and task analysis data base also is included

  3. Matrices and linear algebra

    CERN Document Server

    Schneider, Hans

    1989-01-01

    Linear algebra is one of the central disciplines in mathematics. A student of pure mathematics must know linear algebra if he is to continue with modern algebra or functional analysis. Much of the mathematics now taught to engineers and physicists requires it.This well-known and highly regarded text makes the subject accessible to undergraduates with little mathematical experience. Written mainly for students in physics, engineering, economics, and other fields outside mathematics, the book gives the theory of matrices and applications to systems of linear equations, as well as many related t

  4. Real time computer control of a nonlinear Multivariable System via Linearization and Stability Analysis

    International Nuclear Information System (INIS)

    Raza, K.S.M.

    2004-01-01

    This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is smartly linearized and subjected to a thorough stability analysis, then we can achieve our design objectives via a controller which will be quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). Further, the aim is to implement this controller via computer in a real time environment. Therefore, first a nonlinear mathematical model of the system is derived. Careful work is done to decouple the multivariable system. Linearization and stability analysis techniques are employed for the development of a linearized and mathematically sound control law. Nonlinearities such as actuator saturation are also catered for. The controller is then discretized using Runge-Kutta integration. Finally, the discretized control law is programmed in a computer in a real time environment. The programme is written in RT-Linux using GNU C for the real time realization of the control scheme. The real time processes, like sampling and controlled actuation, and the non real time processes, like the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between real time and non real time tasks is addressed quite carefully. The results of this research pursuit are presented graphically. (author)

  5. Refining and end use study of coal liquids II - linear programming analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, C.; Tam, S.

    1995-12-31

    A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.

  6. High-throughput quantitative biochemical characterization of algal biomass by NIR spectroscopy; multiple linear regression and multivariate linear regression analysis.

    Science.gov (United States)

    Laurens, L M L; Wolfrum, E J

    2013-12-18

    One of the challenges associated with microalgal biomass characterization and the comparison of microalgal strains and conversion processes is the rapid determination of the composition of algae. We have developed and applied a high-throughput screening technology based on near-infrared (NIR) spectroscopy for the rapid and accurate determination of algal biomass composition. We show that NIR spectroscopy can accurately predict the full composition using multivariate linear regression analysis of varying lipid, protein, and carbohydrate content of algal biomass samples from three strains. We also demonstrate a high quality of predictions of an independent validation set. A high-throughput 96-well configuration for spectroscopy gives equally good prediction relative to a ring-cup configuration, and thus, spectra can be obtained from as little as 10-20 mg of material. We found that lipids exhibit a dominant, distinct, and unique fingerprint in the NIR spectrum that allows for the use of single and multiple linear regression of respective wavelengths for the prediction of the biomass lipid content. This is not the case for carbohydrate and protein content, and thus, the use of multivariate statistical modeling approaches remains necessary.

  7. Factors Predictive of Symptomatic Radiation Injury After Linear Accelerator-Based Stereotactic Radiosurgery for Intracerebral Arteriovenous Malformations

    International Nuclear Information System (INIS)

    Herbert, Christopher; Moiseenko, Vitali; McKenzie, Michael; Redekop, Gary; Hsu, Fred; Gete, Ermias; Gill, Brad; Lee, Richard; Luchka, Kurt; Haw, Charles; Lee, Andrew; Toyota, Brian; Martin, Montgomery

    2012-01-01

    Purpose: To investigate predictive factors in the development of symptomatic radiation injury after treatment with linear accelerator–based stereotactic radiosurgery for intracerebral arteriovenous malformations and relate the findings to the conclusions drawn by Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC). Methods and Materials: Archived plans for 73 patients who were treated at the British Columbia Cancer Agency were studied. Actuarial estimates of freedom from radiation injury were calculated using the Kaplan-Meier method. Univariate and multivariate Cox proportional hazards models were used for analysis of incidence of radiation injury. Log-rank test was used to search for dosimetric parameters associated with freedom from radiation injury. Results: Symptomatic radiation injury was exhibited by 14 of 73 patients (19.2%). Actuarial rate of symptomatic radiation injury was 23.0% at 4 years. Most patients (78.5%) had mild to moderate deficits according to Common Terminology Criteria for Adverse Events, version 4.0. On univariate analysis, lesion volume and diameter, dose to isocenter, and a Vx for doses ≥8 Gy showed statistical significance. Only lesion diameter showed statistical significance (p < 0.05) in a multivariate model. According to the log-rank test, AVM volumes >5 cm³ and diameters >30 mm were significantly associated with the risk of radiation injury (p < 0.01). The V12 also showed strong association with the incidence of radiation injury. Actuarial incidence of radiation injury was 16.8% if V12 was <28 cm³ and 53.2% if >28 cm³ (log-rank test, p = 0.001). Conclusions: This study confirms that the risk of developing symptomatic radiation injury after radiosurgery is related to lesion diameter and volume and irradiated volume. Results suggest a higher tolerance than proposed by QUANTEC. The widely differing findings reported in the literature, however, raise considerable uncertainties.

  8. Factors Predictive of Symptomatic Radiation Injury After Linear Accelerator-Based Stereotactic Radiosurgery for Intracerebral Arteriovenous Malformations

    Energy Technology Data Exchange (ETDEWEB)

    Herbert, Christopher, E-mail: cherbert@bccancer.bc.ca [Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, BC (Canada); Moiseenko, Vitali [Department of Medical Physics, British Columbia Cancer Agency, Vancouver, BC (Canada); McKenzie, Michael [Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, BC (Canada); Redekop, Gary [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Hsu, Fred [Department of Radiation Oncology, British Columbia Cancer Agency, Abbotsford, BC (Canada); Gete, Ermias; Gill, Brad; Lee, Richard; Luchka, Kurt [Department of Medical Physics, British Columbia Cancer Agency, Vancouver, BC (Canada); Haw, Charles [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Lee, Andrew [Department of Neurosurgery, Royal Columbian Hospital, New Westminster, BC (Canada); Toyota, Brian [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Martin, Montgomery [Department of Medical Imaging, British Columbia Cancer Agency, Vancouver, BC (Canada)

    2012-07-01

    Purpose: To investigate predictive factors in the development of symptomatic radiation injury after treatment with linear accelerator-based stereotactic radiosurgery for intracerebral arteriovenous malformations and relate the findings to the conclusions drawn by Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC). Methods and Materials: Archived plans for 73 patients who were treated at the British Columbia Cancer Agency were studied. Actuarial estimates of freedom from radiation injury were calculated using the Kaplan-Meier method. Univariate and multivariate Cox proportional hazards models were used for analysis of incidence of radiation injury. Log-rank test was used to search for dosimetric parameters associated with freedom from radiation injury. Results: Symptomatic radiation injury was exhibited by 14 of 73 patients (19.2%). Actuarial rate of symptomatic radiation injury was 23.0% at 4 years. Most patients (78.5%) had mild to moderate deficits according to Common Terminology Criteria for Adverse Events, version 4.0. On univariate analysis, lesion volume and diameter, dose to isocenter, and a Vx for doses ≥8 Gy showed statistical significance. Only lesion diameter showed statistical significance (p < 0.05) in a multivariate model. According to the log-rank test, AVM volumes >5 cm³ and diameters >30 mm were significantly associated with the risk of radiation injury (p < 0.01). The V12 also showed strong association with the incidence of radiation injury. Actuarial incidence of radiation injury was 16.8% if V12 was <28 cm³ and 53.2% if >28 cm³ (log-rank test, p = 0.001). Conclusions: This study confirms that the risk of developing symptomatic radiation injury after radiosurgery is related to lesion diameter and volume and irradiated volume. Results suggest a higher tolerance than proposed by QUANTEC. The widely differing findings reported in the literature, however, raise considerable uncertainties.

  9. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently need thus to consider only linear-linear data. IAEA1311/15: This version include the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear is not altered. For the log-linear, linear-log and log- log laws, the cross section data are converted to linear by an interval halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table
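
    The interval-halving idea described under "Method of solution" can be sketched independently of the LINEAR code itself: a segment given under a log-log interpolation law is subdivided until linear-linear interpolation reproduces it within a tolerance. The Python sketch below is a simplified illustration, not the program's actual algorithm or tolerances.

        import numpy as np

        def loglog_interp(e, e1, s1, e2, s2):
            """Cross section at energy e under log-log interpolation between two points."""
            return s1 * (e / e1) ** (np.log(s2 / s1) / np.log(e2 / e1))

        def linearize_segment(e1, s1, e2, s2, tol=0.001):
            """Subdivide [e1, e2] until linear-linear interpolation reproduces the
            log-log law to within a relative tolerance (interval-halving idea)."""
            em = np.sqrt(e1 * e2)                       # geometric midpoint, natural for log-log
            exact = loglog_interp(em, e1, s1, e2, s2)
            linear = s1 + (s2 - s1) * (em - e1) / (e2 - e1)
            if abs(linear - exact) <= tol * abs(exact):
                return [(e1, s1), (e2, s2)]
            left = linearize_segment(e1, s1, em, exact, tol)
            right = linearize_segment(em, exact, e2, s2, tol)
            return left[:-1] + right                    # avoid duplicating the midpoint

        points = linearize_segment(1.0e3, 2.0, 1.0e6, 0.02, tol=0.001)
        print(f"{len(points)} linear-linear points replace one log-log segment")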

  10. Structural equation and log-linear modeling: a comparison of methods in the analysis of a study on caregivers' health

    Directory of Open Access Journals (Sweden)

    Rosenbaum Peter L

    2006-10-01

    Full Text Available Abstract Background In this paper we compare the results in an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP). The self-completed questionnaires and the home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the Structural Equation Modeling (SEM) results by the Log-linear Modeling (LLM) provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.

  11. From stripe to slab confinement for DNA linearization in nanochannels

    Science.gov (United States)

    Cifra, Peter; Benkova, Zuzana; Namer, Pavol

    We investigate, using Monte Carlo simulations, the advantages suggested for linearization experiments with macromolecules confined in a stripe-like channel. The enhanced chain extension in a stripe, which is due to significant excluded-volume interactions between monomers in two dimensions, weakens on transition to an experimentally feasible slit-like channel. Based on the dependence of chain extension on confinement strength and on the behavior of the structure factor for the chain in a stripe, we infer the excluded-volume regime typical for two-dimensional systems. On transition to the slab geometry, the advantageous chain extension decreases and the Gaussian regime is observed for not very long semiflexible chains. The evidence for pseudo-ideality in confined chains is based on indicators such as the extension curves, the variation of the extension with the persistence length, and the structure factor. The slab behavior is observed when the stripe (originally of monomer thickness) reaches a thickness larger than ca. 10 nm in the third dimension. This maximum height of the slab that retains the advantage of the stripe is very low, which has implications for DNA linearization experiments. The presented analysis, however, has a broader relevance for confined polymers. Support from Slovak R&D Agency (SRDA-0451-11) is acknowledged.

  12. Social inequality, lifestyles and health - a non-linear canonical correlation analysis based on the approach of Pierre Bourdieu.

    Science.gov (United States)

    Grosse Frie, Kirstin; Janssen, Christian

    2009-01-01

    Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described in three dimensions; the eigenvalues add up to a fit of 1.444, which can be interpreted as approximately 50% of explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be more substantially interpreted and presented.

  13. Linearity improvement on wide-range log signal of neutron measurement system for HANARO

    International Nuclear Information System (INIS)

    Kim, Young-Ki; Tuetken, Jeffrey S.

    1998-01-01

    This paper discusses engineering activities for improving the linearity characteristics of the Log Power signal from the neutron measurement system for HANARO. This neutron measurement system uses a fission-chamber-based detector which covers a 10.3-decade-wide range from 10⁻⁸% full power (FP) up to 200% FP. The Log Power signal is designed to control the reactor at low power levels where most of the reactor physics tests are carried out. Therefore, the linearity characteristics of the Log Power signal are the major factor for accurate reactor power control. During the commissioning of the neutron measurement system, it was found that the linearity characteristics of the Log Power signal, especially near 10⁻²% FP, were not accurate enough for controlling the reactor during physics testing. Analysis of the system linearity data directly measured with the reactor operating determined that the system was not operating per the design characteristics established from previous installations. The linearity data, which were taken as the reactor was increased in power, were sent to the manufacturer's engineering group, and follow-up measures based on the analysis were then fed back to the field. Through step-by-step trouble-shooting activities, which included minor circuit modifications and alignment procedure changes, the linearity characteristics have been successfully improved and now exceed minimum performance requirements. This paper discusses the trouble-shooting techniques applied, the changes in the linearity characteristics, special circumstances in the HANARO application and the final resolution. (author)

  14. Statistical mechanical analysis of the linear vector channel in digital communication

    International Nuclear Information System (INIS)

    Takeda, Koujin; Hatabu, Atsushi; Kabashima, Yoshiyuki

    2007-01-01

    A statistical mechanical framework to analyze linear vector channel models in digital wireless communication is proposed for a large system. The framework is a generalization of that proposed for code-division multiple-access systems in Takeda et al (2006 Europhys. Lett. 76 1193) and enables the analysis of the system in which the elements of the channel transfer matrix are statistically correlated with each other. The significance of the proposed scheme is demonstrated by assessing the performance of an existing model of multi-input multi-output communication systems

  15. Non Linear Step By Step Seismic Response and the Push Over Analysis Comparison of a Reinforced Concrete of Ductile Frames 25 Level Building

    International Nuclear Information System (INIS)

    Avila, Jorge A.; Martinez, Eduardo

    2008-01-01

    For a 25-level building with ductile frames, a non-linear analysis with monotonically increased lateral loads (push-over) was performed in order to determine its collapse behavior, and its principal responses were compared against the time-history seismic responses determined with the SCT-EW-85 record. The design for seismic resistance and gravitational loads was made according to the Complementary Technical Norms for Concrete Structures Design (NTC-Concrete) and the NTC-Seismic of the Mexico City Code (RDF-04), satisfying the serviceability limit states (maximum ratios of relative lateral displacement to story height, story drifts ≤0.012) and the failure limit state (seismic behavior factor, Q = 3). The compressible (soft) seismic zone IIIb and office-type use (group B) were considered. The non-linear responses were determined with nominal and over-resistance effects. The comparisons were made in terms of base shear force versus roof lateral displacement relations, global distribution of plastic hinges, failure mechanism tendency, lateral displacements and story drifts and their distribution along the height of the building, local and global ductility demands, etc. For the non-linear static analysis with monotonically increased lateral loads, it was important to select the type of lateral force distribution.

  16. [Linear growth retardation in children under five years of age: a baseline study].

    Science.gov (United States)

    Rissin, Anete; Figueiroa, José Natal; Benício, Maria Helena D'Aquino; Batista Filho, Malaquias

    2011-10-01

    The scope of this study was to describe the prevalence of, and analyze factors associated with, linear growth retardation in children. The baseline study analyzed 2040 children under the age of five, establishing possible associations between growth delay (height/age index) and explanatory variables. For non-binary variables, there was a positive association with roof type and number of inhabitants per room, and a negative association with income per capita, mother's schooling and birth weight. The adjusted analysis also indicated water supply, visits from the community health agent, birth delivery location, hospital admission for diarrhea or pneumonia, and birth weight as significant variables. Several risk factors were identified for linear growth retardation, pointing to the multi-causal aspects of the problem and highlighting the need for control measures by the various hierarchical government agents.

  17. Linear stability analysis of supersonic axisymmetric jets

    Directory of Open Access Journals (Sweden)

    Zhenhua Wan

    2014-01-01

    Full Text Available Stabilities of supersonic jets are examined with different velocities, momentum thicknesses, and core temperatures. Amplification rates of instability waves at the inlet are evaluated by linear stability theory (LST). It is found that increased velocity and core temperature increase the amplification rates substantially, and this influence varies for different azimuthal wavenumbers. The most unstable modes in thin momentum thickness cases usually have higher frequencies and azimuthal wavenumbers. Mode switching is observed for low azimuthal wavenumbers, but it appears merely in high velocity cases. In addition, the results provided by linear parabolized stability equations show that the mean-flow divergence affects the spatial evolution of instability waves greatly. The most amplified instability waves globally are sometimes found to be different from those given by LST.

  18. Dynamics of unsymmetric piecewise-linear/non-linear systems using finite elements in time

    Science.gov (United States)

    Wang, Yu

    1995-08-01

    The dynamic response and stability of a single-degree-of-freedom system with unsymmetric piecewise-linear/non-linear stiffness are analyzed using the finite element method in the time domain. Based on Hamilton's weak principle, this method provides a simple and efficient approach for predicting all possible fundamental and sub-periodic responses. The stability of the steady state response is determined by using Floquet's theory without any special effort for calculating transition matrices. This method is applied to a number of examples, demonstrating its effectiveness even for a strongly non-linear problem involving both clearance and continuous stiffness non-linearities. Close agreement is found between available published findings and the predictions of the finite element in time approach, which appears to be an efficient and reliable alternative technique for non-linear dynamic response and stability analysis of periodic systems.

  19. Topological characterizations of S-Linearity

    Directory of Open Access Journals (Sweden)

    Carfi', David

    2007-10-01

    Full Text Available We give several characterizations of basic concepts of S-linear algebra in terms of weak duality on topological vector spaces. Along the way, some classic results of Functional Analysis are reinterpreted in terms of S-linear algebra, in an application-oriented fashion. The results are required in the S-linear algebra formulation of infinite dimensional Decision Theory and in the study of abstract evolution equations in economical and physical Theories.

  20. Weibull and lognormal Taguchi analysis using multiple linear regression

    International Nuclear Information System (INIS)

    Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F.

    2015-01-01

    The paper provides reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation and (3) to perform the ANOVA analysis on both the robust and the normal operational Weibull family. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper, the issues of estimating a Weibull family by using a design of experiments (DOE) are first addressed by using an L₉(3⁴) orthogonal array (OA) in both the TM and in the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method's efficiency is shown through its application to the used OA and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi Method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family for both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform the ANOVA analysis in TM and ALT analysis. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level.
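    The working relationship behind such regression-based estimation is the Weibull/Gumbel link: applying ln(-ln(1-F)) to the Weibull CDF gives a straight line in ln(t) with slope β and intercept -β·ln(η), so both parameters follow from ordinary least squares. A minimal sketch of that step (the median-rank plotting positions and the sample lifetimes are illustrative, not taken from the paper):

```python
import numpy as np

def weibull_fit_lr(lifetimes):
    """Estimate Weibull shape (beta) and scale (eta) by linear regression on
    the linearized CDF: ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)."""
    t = np.sort(np.asarray(lifetimes, dtype=float))
    n = t.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Bernard's median ranks
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - F)), 1)
    beta = slope
    eta = np.exp(-intercept / beta)
    return beta, eta

# Illustrative lifetimes (hours) from one row of a designed experiment
beta, eta = weibull_fit_lr([105.0, 140.0, 160.0, 215.0, 260.0, 330.0])
print(f"beta = {beta:.2f}, eta = {eta:.1f}")
```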

  1. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data in which interdependent sources and confounding factors exist. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.

  2. Time-Frequency (Wigner) Analysis of Linear and Nonlinear Pulse Propagation in Optical Fibers

    Directory of Open Access Journals (Sweden)

    José Azaña

    2005-06-01

    Full Text Available Time-frequency analysis, and, in particular, Wigner analysis, is applied to the study of picosecond pulse propagation through optical fibers in both the linear and nonlinear regimes. The effects of first- and second-order group velocity dispersion (GVD) and self-phase modulation (SPM) are first analyzed separately. The phenomena resulting from the interplay between GVD and SPM in fibers (e.g., soliton formation or optical wave breaking) are also investigated in detail. Wigner analysis is demonstrated to be an extremely powerful tool for investigating pulse propagation dynamics in nonlinear dispersive systems (e.g., optical fibers), providing a clearer and deeper insight into the physical phenomena that determine the behavior of these systems.
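    The Wigner distribution underlying such an analysis can be sketched directly: for each time instant, correlate the analytic signal with a time-reversed copy of itself and Fourier transform over the lag. A small illustration on a linearly chirped Gaussian pulse, a rough stand-in for a GVD-broadened pulse (the pulse parameters are made up, not taken from the paper):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution W[n, k]: for each time index n,
    form r[m] = x[n+m] * conj(x[n-m]) over the admissible lags m and FFT."""
    x = np.asarray(x, dtype=complex)
    N = x.size
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # largest lag inside the record
        m = np.arange(-mmax, mmax + 1)
        r = np.zeros(N, dtype=complex)
        r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real           # real by conjugate symmetry of r
    return W

# Linearly chirped Gaussian pulse: instantaneous frequency sweeps with time
t = np.linspace(-5.0, 5.0, 256)
pulse = np.exp(-t**2) * np.exp(1j * 2.0 * t**2)
W = wigner_ville(pulse)                     # rows: time, columns: frequency
```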

  3. Regression and kriging analysis for grid power factor estimation

    Directory of Open Access Journals (Sweden)

    Rajesh Guntaka

    2014-12-01

    Full Text Available The measurement of power factor (PF) in electrical utility grids is a mainstay of load balancing and is also a critical element of transmission and distribution efficiency. The measurement of PF dates back to the earliest periods of electrical power distribution to public grids. In the wide-area distribution grid, measurement of current waveforms is trivial and may be accomplished at any point in the grid using a current tap transformer. However, voltage measurement requires a reference to ground and so is more problematic, and measurements are normally constrained to points that have ready and easy access to a ground source. We present two mathematical analysis methods, based on kriging and on linear least squares estimation (LLSE, i.e., regression), to derive PF at nodes with unknown voltages that lie within a perimeter of sample nodes with ground reference across a selected power grid. Our results indicate an average error of 1.884%, which is within acceptable tolerances for PF measurements used in load balancing tasks.
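    As a toy version of the regression idea, the PF measured at grounded sample nodes can be fitted against node coordinates by ordinary least squares and then evaluated at nodes without a ground reference. The planar regressors and the numbers below are purely illustrative stand-ins; the paper's actual feature set and kriging variogram are not specified here:

```python
import numpy as np

def llse_pf(sample_xy, sample_pf, query_xy):
    """Least-squares fit of PF ~ a + b*x + c*y over sampled grid nodes, then
    prediction of PF at query nodes (kriging would replace this last step)."""
    X = np.column_stack([np.ones(len(sample_xy)), sample_xy])
    coeffs, *_ = np.linalg.lstsq(X, sample_pf, rcond=None)
    Q = np.column_stack([np.ones(len(query_xy)), query_xy])
    return Q @ coeffs

# Illustrative node coordinates (km) and measured power factors
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
pf = np.array([0.95, 0.93, 0.96, 0.92])
print(llse_pf(nodes, pf, np.array([[0.5, 0.5]])))   # PF estimate at (0.5, 0.5)
```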

  4. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  5. On pole structure assignment in linear systems

    Czech Academy of Sciences Publication Activity Database

    Loiseau, J.-J.; Zagalak, Petr

    2009-01-01

    Roč. 82, č. 7 (2009), s. 1179-1192 ISSN 0020-7179 R&D Projects: GA ČR(CZ) GA102/07/1596 Institutional research plan: CEZ:AV0Z10750506 Keywords : linear systems * linear state feedback * pole structure assignment Subject RIV: BC - Control Systems Theory Impact factor: 1.124, year: 2009 http://library.utia.cas.cz/separaty/2009/AS/zagalak-on pole structure assignment in linear systems.pdf

  6. Linear analysis of signal and noise characteristics of a nonlinear CMOS active-pixel detector for mammography

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Seungman [School of Mechanical Engineering, Pusan National University, Busan 46241 (Korea, Republic of); Kim, Ho Kyung, E-mail: hokyung@pusan.ac.kr [School of Mechanical Engineering, Pusan National University, Busan 46241 (Korea, Republic of); Center for Advanced Medical Engineering Research, Pusan National University, Busan 46241 (Korea, Republic of); Han, Jong Chul; Kam, Soohwa [School of Mechanical Engineering, Pusan National University, Busan 46241 (Korea, Republic of); Youn, Hanbean [Department of Radiation Oncology, Pusan National University Yangsan Hospital, Yangsan, Gyeongsangnam-do 50612 (Korea, Republic of); Cunningham, Ian A. [Robarts Research Institute, Western University, London, Ontario N6A 5C1 (Canada)

    2017-03-01

    The imaging properties of a complementary metal-oxide-semiconductor (CMOS) active-pixel photodiode array coupled to a thin gadolinium-based granular phosphor screen with a fiber-optic faceplate are investigated. It is shown that this system has a nonlinear response at low detector exposure levels (<10 mR), resulting in an over-estimation of the detective quantum efficiency (DQE) by a factor of two in some cases. Errors in performance metrics on this scale make it difficult to compare new technologies with established systems, to predict performance benchmarks that can be achieved in practice, and to understand performance bottlenecks. It is shown that the CMOS response is described by a power-law model that can be used to linearize image data. Linearization removed an unexpected dependence of the DQE on detector exposure level. - Highlights: • A nonlinear response of a CMOS detector at low exposure levels can overestimate the DQE. • A power-law form can model the response of a CMOS detector at low exposure levels and can be used to linearize image data. • Performance evaluation of nonlinear imaging systems must incorporate adequate linearizations.
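    The linearization step described above can be reproduced with a simple log-log fit: if the mean signal follows S ≈ a·X^b in exposure X, then (S/a)^(1/b) is again proportional to exposure. A hedged sketch with made-up calibration numbers (the paper's actual gain and exponent are not quoted here):

```python
import numpy as np

def fit_power_law(exposure, signal):
    """Fit signal = a * exposure**b by linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(exposure), np.log(signal), 1)
    return np.exp(log_a), b

def linearize(signal, a, b):
    """Map raw detector signal back to an exposure-proportional quantity."""
    return (signal / a) ** (1.0 / b)

# Illustrative calibration: exposure in mR, mean signal in digital numbers
exposure = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal = np.array([30.0, 52.0, 90.0, 155.0, 268.0])
a, b = fit_power_law(exposure, signal)
corrected = linearize(signal, a, b)   # now approximately linear in exposure
```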

  7. Linearization: Geometric, Complex, and Conditional

    Directory of Open Access Journals (Sweden)

    Asghar Qadir

    2012-01-01

    Full Text Available Lie symmetry analysis provides a systematic method of obtaining exact solutions of nonlinear (systems of) differential equations, whether partial or ordinary. Of special interest is the procedure that Lie developed to transform scalar nonlinear second-order ordinary differential equations to linear form. Not much work was done in this direction to start with, but recently there have been various developments. Here, first the original work of Lie (and the early developments on it), and then more recent developments based on geometry and complex analysis, apart from Lie's own method of algebra (namely, Lie group theory), are reviewed. It is relevant to mention that much of the work is not linearization but uses the base of linearization.

  8. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens Christian; Norsker, Merete

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...... that could only be seen in two-dimensional plots, and also less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS.

  9. Left ventricular wall motion abnormalities evaluated by factor analysis as compared with Fourier analysis

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Ikuno, Yoshiyasu; Nishikimi, Toshio

    1986-01-01

    Factor analysis was applied to multigated cardiac pool scintigraphy to evaluate its ability to detect left ventricular wall motion abnormalities in 35 patients with old myocardial infarction (MI) and in 12 control cases with normal left ventriculography. All cases were also evaluated by conventional Fourier analysis. In most cases with normal left ventriculography, the ventricular and atrial factors were extracted by factor analysis. In cases with MI, a third factor was obtained in the left ventricle corresponding to the wall motion abnormality. Each case was scored according to the coincidence of the findings of ventriculography and those of factor analysis or Fourier analysis. Scores were recorded for three items: the existence, location, and degree of asynergy. In cases of MI, the detection rate of asynergy was 94% by factor analysis and 83% by Fourier analysis, and the agreement with respect to location was 71% and 66%, respectively. Factor analysis had higher scores than Fourier analysis, but the difference was not significant. The interobserver error of factor analysis was less than that of Fourier analysis. Factor analysis can display locations and dynamic motion curves of asynergy, and it is regarded as a useful method for detecting and evaluating left ventricular wall motion abnormalities. (author)

  10. Lifted linear phase filter banks and the polyphase-with-advance representation

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, C. M. (Christopher M.); Wohlberg, B. E. (Brendt E.)

    2004-01-01

    A matrix theory is developed for the noncausal polyphase-with-advance representation that underlies the theory of lifted perfect reconstruction filter banks and wavelet transforms as developed by Sweldens and Daubechies. This theory provides the fundamental lifting methodology employed in the ISO/IEC JPEG-2000 still image coding standard, which the authors helped to develop. Lifting structures for polyphase-with-advance filter banks are depicted in Figure 1. In the analysis bank of Figure 1(a), the first lifting step updates x{sub 0} with a filtered version of x{sub 1} and the second step updates x{sub 1} with a filtered version of x{sub 0}; gain factors 1/K and K normalize the lowpass- and highpass-filtered output subbands. Each of these steps is inverted by the corresponding operations in the synthesis bank shown in Figure 1(b). Lifting steps correspond to upper- or lower-triangular matrices, S{sub i}(z), in a cascade-form decomposition of the polyphase analysis matrix, H{sub a}(z). Lifting structures can also be implemented reversibly (i.e., losslessly in fixed-precision arithmetic) by rounding the lifting updates to integer values. Our treatment of the polyphase-with-advance representation develops an extensive matrix algebra framework that goes far beyond the results of. Specifically, we focus on analyzing and implementing linear phase two-channel filter banks via linear phase lifting cascade schemes. Whole-sample symmetric (WS) and half-sample symmetric (HS) linear phase filter banks are characterized completely in terms of the polyphase-with-advance representation. The theory benefits significantly from a number of new group-theoretic structures arising in the polyphase-with-advance matrix algebra from the lifting factorization of linear phase filter banks.
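    The lifting structure described above is easy to prototype: split the signal into its polyphase components, apply a predict step and an update step, scale the subbands by 1/K and K, and invert by running the same steps backwards. The filter taps below are those of the familiar LeGall 5/3 wavelet and the gain value is arbitrary; the step ordering follows the usual 5/3 convention, so this is an illustration of the mechanics only, not the filter banks analyzed in the report:

```python
import numpy as np

def lifting_analysis(x, K=1.0):
    """Two-channel lifting analysis: polyphase split, predict, update,
    then 1/K and K subband gains (5/3-style taps, illustrative only)."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[0::2].copy(), x[1::2].copy()       # even / odd polyphase
    x1 -= 0.5 * (x0 + np.roll(x0, -1))            # predict: update x1 from x0
    x0 += 0.25 * (x1 + np.roll(x1, 1))            # update:  update x0 from x1
    return x0 / K, x1 * K                         # lowpass, highpass subbands

def lifting_synthesis(low, high, K=1.0):
    """Exact inverse: undo the gains, then undo the lifting steps in reverse."""
    x0, x1 = low * K, high / K
    x0 -= 0.25 * (x1 + np.roll(x1, 1))
    x1 += 0.5 * (x0 + np.roll(x0, -1))
    x = np.empty(x0.size + x1.size)
    x[0::2], x[1::2] = x0, x1
    return x

x = np.random.randn(64)
low, high = lifting_analysis(x, K=np.sqrt(2.0))
assert np.allclose(lifting_synthesis(low, high, K=np.sqrt(2.0)), x)
```

    Rounding the predict and update outputs to integers turns the same structure into a reversible (lossless in fixed precision) transform of the kind mentioned in the abstract.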

  11. Time-Series Analysis of Continuously Monitored Blood Glucose: The Impacts of Geographic and Daily Lifestyle Factors

    Directory of Open Access Journals (Sweden)

    Sean T. Doherty

    2015-01-01

    Full Text Available Type 2 diabetes is known to be associated with environmental, behavioral, and lifestyle factors. However, the actual impacts of these factors on blood glucose (BG variation throughout the day have remained relatively unexplored. Continuous blood glucose monitors combined with human activity tracking technologies afford new opportunities for exploration in a naturalistic setting. Data from a study of 40 patients with diabetes is utilized in this paper, including continuously monitored BG, food/medicine intake, and patient activity/location tracked using global positioning systems over a 4-day period. Standard linear regression and more disaggregated time-series analysis using autoregressive integrated moving average (ARIMA are used to explore patient BG variation throughout the day and over space. The ARIMA models revealed a wide variety of BG correlating factors related to specific activity types, locations (especially those far from home, and travel modes, although the impacts were highly personal. Traditional variables related to food intake and medications were less often significant. Overall, the time-series analysis revealed considerable patient-by-patient variation in the effects of geographic and daily lifestyle factors. We would suggest that maps of BG spatial variation or an interactive messaging system could provide new tools to engage patients and highlight potential risk factors.
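    A disaggregated time-series model of the kind described can be sketched with a standard ARIMA fit that takes the lifestyle and location indicators as exogenous regressors; the column names, sampling interval and model order below are hypothetical, not those of the study:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n = 288                                   # one day of 5-minute CGM samples
df = pd.DataFrame({
    "glucose": 6.0 + np.cumsum(rng.normal(0.0, 0.05, n)),       # mmol/L
    "away_from_home": (np.arange(n) % 96 > 60).astype(float),   # 0/1 flag
    "in_vehicle": (np.arange(n) % 48 > 40).astype(float),       # 0/1 flag
})

# Per-patient ARIMA(1,0,1) with exogenous lifestyle regressors
model = ARIMA(df["glucose"], exog=df[["away_from_home", "in_vehicle"]],
              order=(1, 0, 1))
result = model.fit()
print(result.params)   # exog coefficients: estimated BG impact of each factor
```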

  12. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  13. Spectral theories for linear differential equations

    International Nuclear Information System (INIS)

    Sell, G.R.

    1976-01-01

    The use of spectral analysis in the study of linear differential equations with constant coefficients is not only a fundamental technique but also leads to far-reaching consequences in describing the qualitative behaviour of the solutions. The spectral analysis, via the Jordan canonical form, will not only lead to a representation theorem for a basis of solutions, but will also give a rather precise statement of the (exponential) growth rates of various solutions. Various attempts have been made to extend this analysis to linear differential equations with time-varying coefficients. The most complete such extension is the Floquet theory for equations with periodic coefficients. For time-varying linear differential equations with aperiodic coefficients several authors have attempted to ''extend'' the Floquet theory. The precise meaning of such an extension is itself a problem, and we present here several attempts in this direction that are related to the general problem of extending the spectral analysis of equations with constant coefficients. The main purpose of this paper is to introduce some problems of current research. The primary problem we shall examine occurs in the context of linear differential equations with almost periodic coefficients. We call it ''the Floquet problem''. (author)

  14. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2ⁿ). On the other hand, the Berlekamp-Massey algorithm needs O(N²), where N (≅2ⁿ) denotes the period. Since existing methods calculate using the output sequence, the initial value of the PRNG influences the resultant value of linear complexity; therefore, the linear complexity is generally given as an estimate. The linearization method, by contrast, calculates from the algorithm of the PRNG and can determine the lower bound of linear complexity.
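    For contrast with the linearization method, the Berlekamp-Massey algorithm computes the linear complexity from an observed output sequence in O(N²) operations. A compact GF(2) version (illustrative; the test sequence is two periods of a degree-3 LFSR output, so the result is 3):

```python
def berlekamp_massey(bits):
    """Linear complexity of a binary sequence over GF(2) via Berlekamp-Massey."""
    n = len(bits)
    c = [1] + [0] * n          # current connection polynomial C(x)
    b = [1] + [0] * n          # previous connection polynomial B(x)
    L, m = 0, -1
    for i in range(n):
        # discrepancy d = s_i + sum_{j=1..L} c_j * s_{i-j}   (mod 2)
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

print(berlekamp_massey([1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]))   # -> 3
```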

  15. Analytic central path, sensitivity analysis and parametric linear programming

    NARCIS (Netherlands)

    A.G. Holder; J.F. Sturm; S. Zhang (Shuzhong)

    1998-01-01

    textabstractIn this paper we consider properties of the central path and the analytic center of the optimal face in the context of parametric linear programming. We first show that if the right-hand side vector of a standard linear program is perturbed, then the analytic center of the optimal face

  16. Normal form analysis of linear beam dynamics in a coupled storage ring

    International Nuclear Information System (INIS)

    Wolski, Andrzej; Woodley, Mark D.

    2004-01-01

    The techniques of normal form analysis, well known in the literature, can be used to provide a straightforward characterization of linear betatron dynamics in a coupled lattice. Here, we consider both the beam distribution and the betatron oscillations in a storage ring. We find that the beta functions for uncoupled motion generalize in a simple way to the coupled case. Defined in the way that we propose, the beta functions remain well behaved (positive and finite) under all circumstances, and have essentially the same physical significance for the beam size and betatron oscillation amplitude as in the uncoupled case. Application of this analysis to the online modeling of the PEP-II rings is also discussed

  17. Comparison between linear quadratic and early time dose models

    International Nuclear Information System (INIS)

    Chougule, A.A.; Supe, S.J.

    1993-01-01

    During the 1970s, much interest was focused on fractionation in radiotherapy with the aim of improving the tumor control rate without producing unacceptable normal tissue damage. To compare the radiobiological effectiveness of various fractionation schedules, empirical formulae such as Nominal Standard Dose, Time Dose Factor, Cumulative Radiation Effect and Tumour Significant Dose were introduced and used despite many shortcomings. It has been claimed that the more recent linear quadratic model is able to predict the radiobiological responses of tumours as well as normal tissues more accurately. We compared the Time Dose Factor and Tumour Significant Dose models with the linear quadratic model for tumour regression in patients with carcinomas of the cervix. It was observed that the prediction of tumour regression estimated by the Tumour Significant Dose and Time Dose Factor concepts varied by 1.6% from the linear quadratic model prediction. In view of the lack of knowledge of the precise values of the parameters of the linear quadratic model, it should be applied with caution. One can continue to use the Time Dose Factor concept, which has been in use for more than a decade, as its results are within ±2% of those predicted by the linear quadratic model. (author). 11 refs., 3 figs., 4 tabs
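    For readers who want to reproduce such comparisons, the linear quadratic isoeffect calculation reduces to two short formulas: the biologically effective dose and its 2 Gy-per-fraction equivalent. The α/β value and schedules below are illustrative, not the ones analysed in the paper:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose from the linear quadratic model:
    BED = n * d * (1 + d / (alpha/beta)), doses in Gy."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equivalent total dose delivered in 2 Gy fractions, for comparing schedules."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Compare a conventional and a hypofractionated schedule for alpha/beta = 10 Gy
print(bed(30, 2.0, 10.0))     # 72.0 Gy(10)
print(eqd2(20, 2.75, 10.0))   # ~58.4 Gy
```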

  18. Dynamic Linear Models with R

    CERN Document Server

    Campagnoli, Patrizia; Petris, Giovanni

    2009-01-01

    State space models have gained tremendous popularity in fields as disparate as engineering, economics, genetics and ecology. Introducing general state space models, this book focuses on dynamic linear models, emphasizing their Bayesian analysis. It illustrates the fundamental steps needed to use dynamic linear models in practice, using an R package.

  19. Finite element historical deformation analysis in piecewise linear plasticity by mathematical programming

    International Nuclear Information System (INIS)

    De Donato, O.; Parisi, M.A.

    1977-01-01

    When loads increase proportionally beyond the elastic limit in the presence of elastic-plastic piecewise-linear constitutive laws, the problem of finding the whole evolution of the plastic strain and displacements of structures was recently shown to be amenable to a parametric linear complementarity problem (PLCP) in which the parameter is represented by the load factor, the matrix is symmetric positive definite or at least semi-definite (for perfect plasticity) and the variables with a direct mechanical meaning are the plastic multipliers. With reference to plane trusses and frames with elastic-plastic linear work-hardening material behaviour, numerical solutions were also fairly efficiently obtained using a recent mathematical programming algorithm (due to R.W. Cottle) which is able to provide the whole deformation history of the structure and, at the same time, to rule out local unloadings along the given proportional loading process by means of 'a priori' checks carried out before each pivotal step of the procedure. Hence it becomes possible to use the holonomic (reversible, path-independent) constitutive laws in finite terms and to benefit from all the relevant numerical and computational advantages despite the non-holonomic nature of plastic behaviour. In the present paper the method of solution is re-examined with a view to overcoming an important drawback of the algorithm deriving from the size of the fully populated PLCP matrix when structural problems with a large number of variables are considered and, consequently, the updating, storing or, generally, handling of the current tableau may become prohibitive. (Auth.)

  20. Effective factors on adoption of innovation in organizational IT ...

    African Journals Online (AJOL)

    ... organizational factors and human factors have a positive and significant effect on the adoption of new technologies. The results of the regression analysis and simple linear regression revealed that organizational and innovation variables have the highest coefficients and the greatest effectiveness in the adoption of new technologies in IT.

  1. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania's population and to assess the influence of these factors. The article presents an analysis of Lithuanian population aging factors, which consists of two main parts: the first describes population aging and its characteristics in theoretical terms; the second is dedicated to assessing the trends and demographic factors that influence population aging and to analysing the determinants of the aging of the population of Lithuania. The article concludes that the decline in the birth rate and the increase in the number of emigrants relative to immigrants have the greatest impact on the aging of the population, so, in order to address population aging, considerable attention should be paid to the management of these demographic processes.

  2. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    Science.gov (United States)

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
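    The temporally weighted linear prediction at the core of QCP can be illustrated with a forward-only sketch: the squared prediction error is weighted sample by sample and the resulting weighted normal equations are solved directly. The uniform weighting and the synthetic test signal below are illustrative only; QCP would instead attenuate the weights inside the estimated closed phase, and QCP-FB would add the backward prediction term proposed in the paper:

```python
import numpy as np

def weighted_lp(x, weights, order):
    """Weighted linear prediction (covariance-style): minimize
    sum_n w[n] * (x[n] - sum_k a[k] * x[n-1-k])**2 for n >= order."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = np.arange(order, x.size)
    X = np.column_stack([x[n - 1 - k] for k in range(order)])  # past samples
    y, wn = x[n], w[n]
    R = (X * wn[:, None]).T @ X          # weighted correlation matrix
    r = (X * wn[:, None]).T @ y
    return np.linalg.solve(R, r)         # prediction coefficients a[k]

# Synthetic two-formant-like signal with uniform weights (illustrative)
fs = 8000
t = np.arange(0, 0.05, 1.0 / fs)
sig = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1220 * t)
a = weighted_lp(sig, np.ones_like(sig), order=8)
```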

  3. Acceptability criteria for linear dependence in validating UV-spectrophotometric methods of quantitative determination in forensic and toxicological analysis

    Directory of Open Access Journals (Sweden)

    L. Yu. Klimenko

    2014-08-01

    Full Text Available Introduction. This article is the result of the authors' research in the field of developing approaches to the validation of quantitative determination methods for purposes of forensic and toxicological analysis, and is devoted to the problem of forming acceptability criteria for the validation parameter «linearity/calibration model». The aim of the research. The purpose of this paper is to analyse the present approaches to estimating the acceptability of the calibration model chosen for method description according to the requirements of the international guidances, and to form our own approaches to estimating the acceptability of the linear dependence when carrying out the validation of UV-spectrophotometric methods of quantitative determination for forensic and toxicological analysis. Materials and methods. UV-spectrophotometric method of doxylamine quantitative determination in blood. Results. The approaches to estimating the acceptability of calibration models in the validation of bioanalytical methods stated in international papers, namely «Guidance for Industry: Bioanalytical Method Validation» (U.S. FDA, 2001), «Standard Practices for Method Validation in Forensic Toxicology» (SWGTOX, 2012), «Guidance for the Validation of Analytical Methodology and Calibration of Equipment used for Testing of Illicit Drugs in Seized Materials and Biological Specimens» (UNODC, 2009) and «Guideline on Validation of Bioanalytical Methods» (EMA, 2011), have been analysed. It has been suggested to be guided by domestic developments in the field of validation of analysis methods for medicines and, particularly, by the approaches to validation in the variant of the calibration curve method, when forming the acceptability criteria for the obtained linear dependences in the validation of UV-spectrophotometric methods of quantitative determination for forensic and toxicological analysis. The choice of the method of calibration curve is

  4. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi

    2016-03-05

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee feasibility. The connection between these two approaches has not been sufficiently acknowledged and examined in the literature. In this context, the contributions of this work are fourfold: (1) a comparison between flexibility analysis and robust optimization from a historical perspective is presented; (2) for linear systems, new formulations for the three classical flexibility analysis problems—flexibility test, flexibility index, and design under uncertainty—based on duality theory and the affinely adjustable robust optimization (AARO) approach are proposed; (3) the AARO approach is shown to be generally more restrictive such that it may lead to overly conservative solutions; (4) numerical examples show the improved computational performance from the proposed formulations compared to the traditional flexibility analysis models. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3109–3123, 2016

  5. Linear fixed-field multipass arcs for recirculating linear accelerators

    Directory of Open Access Journals (Sweden)

    V. S. Morozov

    2012-06-01

    Full Text Available Recirculating linear accelerators (RLA’s provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dogbone RLA capable of transporting two beam passes with momenta different by a factor of 2. We present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dogbone RLA.

  6. Radii of Solvability and Unsolvability of Linear Systems

    Czech Academy of Sciences Publication Activity Database

    Hladík, M.; Rohn, Jiří

    2016-01-01

    Roč. 503, 15 August (2016), s. 120-134 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords : interval matrix * linear equations * linear inequalities * matrix norm Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  7. Linear and non-linear Modified Gravity forecasts with future surveys

    Science.gov (United States)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.

  8. Linearity assumption in soil-to-plant transfer factors of natural uranium and radium in Helianthus annuus L

    International Nuclear Information System (INIS)

    Rodriguez, P. Blanco; Tome, F. Vera; Fernandez, M. Perez; Lozano, J.C.

    2006-01-01

    The linearity assumption of the validation of soil-to-plant transfer factors of natural uranium and ²²⁶Ra was tested using Helianthus annuus L. (sunflower) grown in a hydroponic medium. Transfer of natural uranium and ²²⁶Ra was tested in both the aerial fraction of plants and in the overall seedlings (roots and shoots). The results show that the linearity assumption can be considered valid in the hydroponic growth of sunflowers for the radionuclides studied. The ability of sunflowers to translocate uranium and ²²⁶Ra was also investigated, as well as the feasibility of using sunflower plants to remove uranium and radium from contaminated water, and by extension, their potential for phytoextraction. In this sense, the removal percentages obtained for natural uranium and ²²⁶Ra were 24% and 42%, respectively. Practically all the uranium is accumulated in the roots. However, 86% of the ²²⁶Ra activity concentration in roots was translocated to the aerial part

  9. Linearity assumption in soil-to-plant transfer factors of natural uranium and radium in Helianthus annuus L

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, P. Blanco [Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz (Spain); Tome, F. Vera [Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz (Spain)]. E-mail: fvt@unex.es; Fernandez, M. Perez [Area de Ecologia, Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz (Spain); Lozano, J.C. [Laboratorio de Radiactividad Ambiental, Facultad de Ciencias, Universidad de Salamanca, 37008 Salamanca (Spain)

    2006-05-15

    The linearity assumption of the validation of soil-to-plant transfer factors of natural uranium and {sup 226}Ra was tested using Helianthus annuus L. (sunflower) grown in a hydroponic medium. Transfer of natural uranium and {sup 226}Ra was tested in both the aerial fraction of plants and in the overall seedlings (roots and shoots). The results show that the linearity assumption can be considered valid in the hydroponic growth of sunflowers for the radionuclides studied. The ability of sunflowers to translocate uranium and {sup 226}Ra was also investigated, as well as the feasibility of using sunflower plants to remove uranium and radium from contaminated water, and by extension, their potential for phytoextraction. In this sense, the removal percentages obtained for natural uranium and {sup 226}Ra were 24% and 42%, respectively. Practically all the uranium is accumulated in the roots. However, 86% of the {sup 226}Ra activity concentration in roots was translocated to the aerial part.

  10. Spatial Bayesian latent factor regression modeling of coordinate-based meta-analysis data.

    Science.gov (United States)

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D; Nichols, Thomas E

    2018-03-01

    Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the article are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to (i) identify areas of consistent activation; and (ii) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterized as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. © 2017, The International Biometric Society.

  11. Spatial Bayesian Latent Factor Regression Modeling of Coordinate-based Meta-analysis Data

    Science.gov (United States)

    Montagna, Silvia; Wager, Tor; Barrett, Lisa Feldman; Johnson, Timothy D.; Nichols, Thomas E.

    2017-01-01

    Summary Now over 20 years old, functional MRI (fMRI) has a large and growing literature that is best synthesised with meta-analytic tools. As most authors do not share image data, only the peak activation coordinates (foci) reported in the paper are available for Coordinate-Based Meta-Analysis (CBMA). Neuroimaging meta-analysis is used to 1) identify areas of consistent activation; and 2) build a predictive model of task type or cognitive process for new studies (reverse inference). To simultaneously address these aims, we propose a Bayesian point process hierarchical model for CBMA. We model the foci from each study as a doubly stochastic Poisson process, where the study-specific log intensity function is characterised as a linear combination of a high-dimensional basis set. A sparse representation of the intensities is guaranteed through latent factor modeling of the basis coefficients. Within our framework, it is also possible to account for the effect of study-level covariates (meta-regression), significantly expanding the capabilities of the current neuroimaging meta-analysis methods available. We apply our methodology to synthetic data and neuroimaging meta-analysis datasets. PMID:28498564

  12. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    Full Text Available The article examines the importance of economic analysis based on a study of the scientific works of domestic and foreign scientists. The influence of factors on the change in the cost of harvesting timber products has been calculated by cost items. The influence of factors on the change in costs per 1 UAH of sold products is determined using the full cost of sold products. Variable and fixed costs and their distribution are identified, which influences the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper presents the overall results of calculating the influence of factors on cost changes per 1 UAH of sold products. Based on the results of the analysis, a list of reserves for reducing the cost of production at forest enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forest enterprises are investigated on the basis of the conducted factor analysis.

  13. Simplified 3D Finite Element Analysis of Linear Inductor Motor for Integrated Magnetic Suspension/Propulsion Applications

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Sang Sub; Jang Seok Myeong [Chungnam National University(Korea)

    2000-06-01

    The 4-pole linear homopolar synchronous motor (LHSM), the so-called linear inductor motor, is composed of figure-of-eight shaped 3-phase armature windings, DC field windings, and a segmented secondary with a transverse bar track. To reduce the calculation time, a simplified 3D finite element model with equivalent reluctance and/or permanent magnet is presented. To verify the propriety and usefulness of the developed model, we compare the results of the simplified 3D FEA with test results. Consequently, the results of the simplified and the full 3D FEM analyses are nearly identical, but much larger than the static test result at d-axis armature excitation. Therefore an improved FEA model, such as a full model with half slot, is needed for precise analysis. (author). refs., figs., tabs.

  14. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
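    A present-day stand-in for this comparison is easy to set up: a Gaussian classifier with uniform priors plays the role of GLIKE, while the same classifier with externally supplied priors plays the role of CLASSIFY. The sklearn estimators, feature values and priors below are illustrative and are not the Landsat Mapping System implementations:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

rng = np.random.default_rng(1)
# Toy training pixels: columns could be two spectral bands plus elevation
X = np.vstack([rng.normal(m, 1.0, size=(50, 3)) for m in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 50)                    # land use/land cover classes
priors = [0.6, 0.3, 0.1]                        # a priori class occurrences

glike = QuadraticDiscriminantAnalysis(priors=[1/3, 1/3, 1/3]).fit(X, y)  # equal priors
classify = QuadraticDiscriminantAnalysis(priors=priors).fit(X, y)        # unequal priors
lda = LinearDiscriminantAnalysis(priors=priors).fit(X, y)                # discriminant analysis

pixel = np.array([[1.2, 1.8, 2.5]])
print(glike.predict(pixel), classify.predict(pixel), lda.predict(pixel))
```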

  15. Analysis of magnetohydrodynamic flow in linear induction EM pump

    International Nuclear Information System (INIS)

    Geun Jong Yoo; Choi, H.K.; Eun, J.J.; Bae, Y.S.

    2005-01-01

    Numerical analysis is performed for the magnetic and magnetohydrodynamic (MHD) flow fields in a linear induction type electromagnetic (EM) pump. A finite volume method is applied to solve the magnetic field governing equations and the Navier-Stokes equations. Vector and scalar potential methods are adopted to obtain the electric and magnetic fields and the resulting Lorentz force in solving the Maxwell equations. The magnetic field and velocity distributions are found to be influenced by the phase of the applied electric current. Computational results indicate that the magnetic flux distribution with changing phase of the input electric current is characterized by pairs of counter-rotating closed loops. The velocity distributions are affected by the intensity of the Lorentz force. The governing equations for the magnetic and flow fields are only semi-coupled in this study; therefore, further study with fully coupled governing equations is required. (authors)

  16. Analysis of factors that inhibiting implementation of Information Security Management System (ISMS) based on ISO 27001

    Science.gov (United States)

    Tatiara, R.; Fajar, A. N.; Siregar, B.; Gunawan, W.

    2018-03-01

    The purpose of this research is to determine the factors that inhibit the implementation of an Information Security Management System (ISMS) based on ISO 27001, and to propose follow-up recommendations on those factors. Data were collected through questionnaires from 182 respondents among users in data center operation (DCO) at bca, Indonesian telecommunication international (telin), and the data centre division at the Indonesian Ministry of Health. We analysed the collected data with multiple linear regression analysis and paired t-tests. The results identify multiple factors inhibiting ISMS implementation in the three organizations, related to implementing and operating the ISMS, ISMS documentation management, and continual improvement. From this research, we concluded that the ISMS implementation process requires the involvement of all parties to sustain the ISMS continuously.

  17. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many to many relationship (instead of a nested one) and in electronic commerce applications, the N can be quite large. Methods that do not account for the correlation structure can...

  18. Linear Programming and Network Flows

    CERN Document Server

    Bazaraa, Mokhtar S; Sherali, Hanif D

    2011-01-01

    The authoritative guide to modeling and solving complex problems with linear programming-extensively revised, expanded, and updated The only book to treat both linear programming techniques and network flows under one cover, Linear Programming and Network Flows, Fourth Edition has been completely updated with the latest developments on the topic. This new edition continues to successfully emphasize modeling concepts, the design and analysis of algorithms, and implementation strategies for problems in a variety of fields, including industrial engineering, management science, operations research

  19. Single Particle Linear and Nonlinear Dynamics

    International Nuclear Information System (INIS)

    Cai, Y

    2004-01-01

    I will give a comprehensive review of existing particle tracking tools to assess long-term particle stability for small and large accelerators in the presence of realistic magnetic imperfections and machine misalignments. The emphasis will be on the tracking and analysis tools based upon the differential algebra, Lie operator, and ''polymorphism''. Using these tools, a uniform linear and non-linear analysis will be outlined as an application of the normal form

  20. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    Science.gov (United States)

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  1. A Secondary Antibody-Detecting Molecular Weight Marker with Mouse and Rabbit IgG Fc Linear Epitopes for Western Blot Analysis.

    Science.gov (United States)

    Lin, Wen-Wei; Chen, I-Ju; Cheng, Ta-Chun; Tung, Yi-Ching; Chu, Pei-Yu; Chuang, Chih-Hung; Hsieh, Yuan-Chin; Huang, Chien-Chiao; Wang, Yeng-Tseng; Kao, Chien-Han; Roffler, Steve R; Cheng, Tian-Lu

    2016-01-01

    Molecular weight markers that can tolerate denaturing conditions and be auto-detected by secondary antibodies offer great efficacy and convenience for Western Blotting. Here, we describe M&R LE protein markers which contain linear epitopes derived from the heavy chain constant regions of mouse and rabbit immunoglobulin G (IgG Fc LE). These markers can be directly recognized and stained by a wide range of anti-mouse and anti-rabbit secondary antibodies. We selected three mouse (M1, M2 and M3) linear IgG1 and three rabbit (R1, R2 and R3) linear IgG heavy chain epitope candidates based on their respective crystal structures. Western blot analysis indicated that M2 and R2 linear epitopes are effectively recognized by anti-mouse and anti-rabbit secondary antibodies, respectively. We fused the M2 and R2 epitopes (M&R LE) and incorporated the polypeptide in a range of 15-120 kDa auto-detecting markers (M&R LE protein marker). The M&R LE protein marker can be auto-detected by anti-mouse and anti-rabbit IgG secondary antibodies in standard immunoblots. Linear regression analysis of the M&R LE protein marker plotted as gel mobility versus the log of the marker molecular weights revealed good linearity with a correlation coefficient R² value of 0.9965, indicating that the M&R LE protein marker displays high accuracy for determining protein molecular weights. This accurate, regular and auto-detected M&R LE protein marker may provide a simple, efficient and economical tool for protein analysis.
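    The molecular weight calibration mentioned in the abstract is a one-line linear regression: fit log10(MW) of the marker bands against their gel mobility and read unknown bands off the fitted line. The mobilities and sizes below are invented for illustration; only the procedure mirrors the paper:

```python
import numpy as np

def molecular_weight_curve(mobility, marker_kda):
    """Fit log10(MW) ~ slope * mobility + intercept for marker bands; return a
    converter from mobility to estimated MW (kDa) and the R^2 of the fit."""
    logmw = np.log10(marker_kda)
    slope, intercept = np.polyfit(mobility, logmw, 1)
    r = np.corrcoef(mobility, logmw)[0, 1]
    to_mw = lambda m: 10.0 ** (slope * np.asarray(m) + intercept)
    return to_mw, r**2

# Hypothetical relative mobilities (Rf) and nominal marker sizes (kDa)
rf = np.array([0.10, 0.22, 0.35, 0.47, 0.60, 0.74, 0.88])
kda = np.array([120.0, 90.0, 65.0, 45.0, 30.0, 20.0, 15.0])
to_mw, r2 = molecular_weight_curve(rf, kda)
print(f"R^2 = {r2:.4f}; band at Rf 0.52 ~ {to_mw(0.52):.1f} kDa")
```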

  2. LINEAR LATTICE AND TRAJECTORY RECONSTRUCTION AND CORRECTION AT FAST LINEAR ACCELERATOR

    Energy Technology Data Exchange (ETDEWEB)

    Romanov, A. [Fermilab; Edstrom, D. [Fermilab; Halavanau, A. [Northern Illinois U.

    2017-07-16

    The low energy part of the FAST linear accelerator based on 1.3 GHz superconducting RF cavities was successfully commissioned [1]. During commissioning, beam-based, model-dependent methods were used to correct the linear lattice and trajectory. The lattice correction algorithm is based on the analysis of beam shapes from profile monitors and trajectory responses to dipole correctors. Trajectory responses to field gradient variations in quadrupoles and phase variations in superconducting RF cavities were used to correct bunch offsets in quadrupoles and accelerating cavities relative to their magnetic axes. Details of the methods used and experimental results are presented.

  3. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  4. Vibration Analysis and Experimental Research of the Linear-Motor-Driven Water Piston Pump Used in the Naval Ship

    Directory of Open Access Journals (Sweden)

    Ye-qing Huang

    2016-01-01

    Full Text Available Aiming at the existing problems of traditional water piston pump used in the naval ship, such as low efficiency, high noise, large vibration, and nonintelligent control, a new type of linear-motor-driven water piston pump is developed and its vibration characteristics are analyzed in this research. Based on the 3D model of the structure, the simulation analyses including static stress analysis, modal analysis, and harmonic response analysis are conducted. The simulation results reveal that the mode shape under low frequency stage is mainly associated with the eccentricity swing of the piston rod. The vibration experiment results show that the resonance frequency of linear-motor-driven water piston pump is concentrated upon 500 Hz and 800 Hz in the low frequency range. The dampers can change the resonance frequency of the system to a certain extent. The vibration under triangular motion curve is much better than that of S curve, which is consistent with the simulation conclusion. This research provides an effective method to detect the vibration characteristics and a reference for design and optimization of the linear-motor-driven water piston pump.

  5. Analysis of the effect of meteorological factors on dewfall

    International Nuclear Information System (INIS)

    Xiao, Huijie; Meissner, Ralph; Seeger, Juliane; Rupp, Holger; Borg, Heinz; Zhang, Yuqing

    2013-01-01

    To get an insight into when dewfall will occur and how much to expect we carried out extensive calculations with the energy balance equation for a crop surface to 1) identify the meteorological factors which determine dewfall, 2) establish the relationship between dewfall and each of them, and 3) analyse how these relationships are influenced by changes in these factors. The meteorological factors which determine dewfall were found to be air temperature (Ta), cloud cover (N), wind speed (u), soil heat flux (G), and relative humidity (hr). Net radiation is also a relevant factor. We did not consider it explicitly, but indirectly through the effect of temperature on the night-time radiation balance. The temperature of the surface (Ts) where dew forms on is also important. However, it is not a meteorological factor, but determined by the aforementioned parameters. All other conditions being equal our study revealed that dewfall increases linearly with decreasing N or G, and with increasing hr. The effect of Ta and u on dewfall is non-linear: dewfall initially increases with increasing Ta or u, and then decreases. All five meteorological factors can lead to variations in dewfall between 0 and 25 W m⁻² over the range of their values we studied. The magnitude of the variation due to one factor depends on the value of the others. Dewfall is highest at N = 0, G = 0, and hr = 1. Ta at which dewfall is highest depends on u and vice versa. The change in dewfall for a unit change in N, G or hr is not affected by the value of N, G or hr, but increases as Ta or u increase. The change in dewfall for a unit change in Ta or u depends on the value of the other four meteorological factors. - Highlights: • Process of dewfall is examined for a wide range of meteorological conditions. • Effect of meteorological factors on dewfall is individually elucidated. • Interaction between factors and their combined effect on dewfall is assessed. • Extensive

  6. Linear accelerator-breeder (LAB): a preliminary analysis and proposal

    International Nuclear Information System (INIS)

    1976-01-01

    The development and demonstration of a Linear Accelerator-Breeder (LAB) is proposed. This would be a machine which would use a powerful linear accelerator to produce an intense beam of protons or deuterons impinging on a target of a heavy element, to produce spallation neutrons. These neutrons would in turn be absorbed in fertile ²³⁸U or ²³²Th to produce fissile ²³⁹Pu or ²³³U. Though a Linear Accelerator-Breeder is not visualized as competitive to a fast breeder such as the LMFBR, it would offer definite benefits in improved flexibility of options, and it could probably be developed more rapidly than the LMFBR if fuel cycle problems made this desirable. It is estimated that at a beam power of 300 MW a Linear Accelerator-Breeder could produce about 1100 kg/year of fissile ²³⁹Pu or ²³³U, which would be adequate to fuel from 2,650 to 15,000 MW(e) of fission reactor capacity depending on the fuel cycle used. A two-year design study is proposed, and various cost estimates are presented. The concept of the Linear Accelerator-Breeder is not new, having been the basis for a major AEC project (MTA) a number of years ago. It has also been pursued in Canada starting from the proposal for an Intense Neutron Generator (ING) several years ago. The technical basis for a reasonable design has only recently been achieved. The concept offers an opportunity to fill an important gap that may develop between the short-term and long-term energy options for energy security of the nation.

  7. Numerical Modal Analysis of Vibrations in a Three-Phase Linear Switched Reluctance Actuator

    Directory of Open Access Journals (Sweden)

    José Salvado

    2017-01-01

    Full Text Available This paper addresses the problem of vibrations produced by switched reluctance actuators, focusing on the linear configuration of this type of machine and aiming at its characterization with respect to structural vibrations. The complexity of the mechanical system and the number of parts used put serious restrictions on the effectiveness of analytical approaches. We build a 3D model of the actuator and use the finite element method (FEM) to find its natural frequencies. The focus is on frequencies up to nearly 1.2 kHz, the range considered relevant based on preliminary simulations and experiments. Spectral analysis results of audio signals from experimental modal excitation are also shown and discussed. The obtained data support the characterization of the linear actuator regarding the excited modes, their vibration frequencies and mode shapes, which have a high potential of excitation under the regular operating regimes of the machine. The results reveal abundant modes and harmonics and the symmetry characteristics of the actuator, showing that the vibration modes can be excited for different configurations of the actuator. The identification of the most critical modes is of great significance for the actuator's control strategies. This analysis also provides significant information for adopting solutions to reduce the vibrations at the design stage.

  8. Single Particle Linear and Nonlinear Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Y

    2004-06-25

    I will give a comprehensive review of existing particle tracking tools to assess long-term particle stability for small and large accelerators in the presence of realistic magnetic imperfections and machine misalignments. The emphasis will be on the tracking and analysis tools based upon the differential algebra, Lie operator, and 'polymorphism'. Using these tools, a uniform linear and non-linear analysis will be outlined as an application of the normal form.

  9. A Bivariate Generalized Linear Item Response Theory Modeling Framework to the Analysis of Responses and Response Times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-01-01

    A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.

  10. Analysis of fractional non-linear diffusion behaviors based on Adomian polynomials

    Directory of Open Access Journals (Sweden)

    Wu Guo-Cheng

    2017-01-01

    Full Text Available A time-fractional non-linear diffusion equation of two orders is considered to investigate strong non-linearity through porous media. An equivalent integral equation is established and Adomian polynomials are adopted to linearize non-linear terms. With the Taylor expansion of fractional order, recurrence formulae are proposed and novel numerical solutions are obtained to depict the diffusion behaviors more accurately. The result shows that the method is suitable for numerical simulation of the fractional diffusion equations of multi-orders.

  11. Bayes linear statistics, theory & methods

    CERN Document Server

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...

  12. Quasi-likelihood generalized linear regression analysis of fatality risk data.

    Science.gov (United States)

    2009-01-01

    Transportation-related fatality risk is a function of many interacting human, vehicle, and environmental factors. Statistically valid analysis of such data is challenged both by the complexity of plausible structural models relating fatality rates t...

  13. Linear Malignant Melanoma In Situ: Reports and Review of Cutaneous Malignancies Presenting as Linear Skin Cancer.

    Science.gov (United States)

    Cohen, Philip R

    2017-09-18

    Melanomas usually present as oval lesions in which the borders may be irregular. Other morphological features of melanoma include clinical asymmetry, variable color, diameter greater than 6 mm and evolving lesions. Two males whose melanoma in situ presented as linear skin lesions are described, and cutaneous malignancies that may appear linear in morphology are summarized in this report. A medical literature search engine, PubMed, was used to search the following terms: cancer, cutaneous, in situ, linear, malignant, malignant melanoma, melanoma in situ, neoplasm, and skin. The 25 papers generated by the search, and their references, were reviewed; 10 papers were selected for inclusion. Cancers of the skin typically present as round lesions. However, basal cell carcinoma and squamous cell carcinoma may arise from primary skin conditions or benign skin neoplasms such as linear epidermal nevus and linear porokeratosis. In addition, linear tumors such as basal cell carcinoma can occur. The development of linear cutaneous neoplasms may occur secondary to skin tension lines or embryonal growth patterns (as reflected by the lines of Langer and lines of Blaschko) or exogenous factors such as prior radiation therapy. Cutaneous neoplasms, and specifically melanoma in situ, can be added to the list of linear skin lesions.

  14. Advanced non-linear flow-induced vibration and fretting-wear analysis capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Toorani, M.; Pan, L.; Li, R.; Idvorian, N. [Babcock and Wilcox Canada Ltd., Cambridge, Ontario (Canada); Vincent, B.

    2009-07-01

    Fretting wear is a potentially significant degradation mechanism in nuclear steam generators and other shell and tube heat transfer equipment as well. This paper presents an overview of the recently developed code FIVDYNA which is used for the non-linear flow-induced vibration and fretting wear analysis for operating steam generators (OTSG and RSG) and shell-and-tube heat exchangers. FIVDYNA is a non-linear time-history Flow-Induced Vibration (FIV) analysis computer program that has been developed by Babcock and Wilcox Canada to advance the understanding of tube vibration and tube to tube-support interaction. In addition to the dynamic fluid induced forces the program takes into account other tube static forces due to axial and lateral tube preload and thermal interaction loads. The program is capable of predicting the location where the fretting wear is most likely to occur and its magnitude taking into account the support geometry including gaps. FIVDYNA uses the general purpose finite element computer code ABAQUS as its solver. Using ABAQUS gives the user the flexibility to add additional forces to the tube ranging from tube preloads and the support offsets to thermal loads. The forces currently being modeled in FIVDYNA are the random turbulence, steady drag force, fluid-elastic forces, support offset and pre-strain force (axial loads). This program models the vibration of tubes and calculates the structural dynamic characteristics, and interaction forces between the tube and the tube supports. These interaction forces are then used to calculate the work rate at the support and eventually the predicted depth of wear scar on the tube. A very good agreement is found with experiments and also other computer codes. (author)

  15. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  16. Changes of platelet GMP-140 in diabetic nephropathy and its multi-factor regression analysis

    International Nuclear Information System (INIS)

    Wang Zizheng; Du Tongxin; Wang Shukui

    2001-01-01

    The relation of platelet GMP-140 and its related factors with diabetic nephropathy was studied. 144 patients with diabetes mellitus without nephropathy (group without DN, mean disease duration 25.5 ± 18.6 months), 80 patients with diabetic nephropathy (group DN, mean disease duration 58.7 ± 31.6 months) and 50 normal controls were enrolled. Platelet GMP-140, plasma α1-MG and β2-MG, and 24-hour urine albumin (ALB), IgG, α1-MG and β2-MG were measured by RIA, HbA1c by chromatographic separation, and FBG, PBG, Ch, TG, HDL and FG by biochemical methods. The data were processed with t-tests, linear regression and multi-factor analysis. The levels of platelet GMP-140, FG, DBP, TG, HbA1c and PBG in group DN were significantly higher than those of the group without DN and the normal controls (P 0.05), while they were higher than those of the normal controls. Multi-factor analysis of platelet GMP-140 with TG, DBP and HbA1c was performed in the 80 patients with DN; DBP and HbA1c are the independent factors enhancing the activation of platelets. The disturbance of lipid metabolism in type II diabetes mellitus may also enhance the activation of platelets. Elevation of blood pressure may accelerate the initiation and deterioration of DN, in which the change of platelet GMP-140 is an independent factor. Elevation of HbA1c and blood glucose are closely related to diabetic nephropathy

  17. Integrated structural analysis tool using the linear matching method part 1 – Software development

    International Nuclear Information System (INIS)

    Ure, James; Chen, Haofeng; Tipping, David

    2014-01-01

    A number of direct methods based upon the Linear Matching Method (LMM) framework have been developed to address structural integrity issues for components subjected to cyclic thermal and mechanical load conditions. This paper presents a new integrated structural analysis tool using the LMM framework for the assessment of load carrying capacity, shakedown limit, ratchet limit and steady state cyclic response of structures. First, the development of the LMM for the evaluation of design limits in plasticity is introduced. Second, preliminary considerations for the development of the LMM into a tool which can be used on a regular basis by engineers are discussed. After the re-structuring of the LMM subroutines for multiple central processing unit (CPU) solution, the LMM software tool for the assessment of design limits in plasticity is implemented by developing an Abaqus CAE plug-in with graphical user interfaces. Further demonstration of this new LMM analysis tool including practical application and verification is presented in an accompanying paper. - Highlights: • A new structural analysis tool using the Linear Matching Method (LMM) is developed. • The software tool is able to evaluate the design limits in plasticity. • Able to assess limit load, shakedown, ratchet limit and steady state cyclic response. • Re-structuring of the LMM subroutines for multiple CPU solution is conducted. • The software tool is implemented by developing an Abaqus CAE plug-in with GUI

  18. Linear integral equations and soliton systems

    International Nuclear Information System (INIS)

    Quispel, G.R.W.

    1983-01-01

    A study is presented of classical integrable dynamical systems in one temporal and one spatial dimension. The direct linearizations are given of several nonlinear partial differential equations, for example the Korteweg-de Vries equation, the modified Korteweg-de Vries equation, the sine-Gordon equation, the nonlinear Schroedinger equation, and the equation of motion for the isotropic Heisenberg spin chain; the author also discusses several relations between these equations. The Baecklund transformations of these partial differential equations are treated on the basis of a singular transformation of the measure (or equivalently of the plane-wave factor) occurring in the corresponding linear integral equations, and the Baecklund transformations are used to derive the direct linearization of a chain of so-called modified partial differential equations. Finally it is shown that the singular linear integral equations lead in a natural way to the direct linearizations of various nonlinear difference-difference equations. (Auth.)

  19. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    Science.gov (United States)

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M^fclin_Qclin/M^fmsr_Qmsr with Monte Carlo (MC) based field output factors Ω^fclin,fmsr_Qclin,Qmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% for the 8 mm and 4 mm collimators, respectively, needed to be applied to PTW microDiamond measurements for LGK Perfexion. Finally, the PTW microDiamond M^fclin_Qclin/M^fmsr_Qmsr for the linear accelerator varied from the MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size with 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
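
    For readers less familiar with the notation above, the Alfonso small-field formalism relates the field output factor to the ratio of detector readings through a detector- and field-specific correction factor. A sketch of the relation in its commonly published form (restated here for readability, not quoted from this study):

```latex
% Alfonso small-field formalism (commonly published form): the field output factor
% equals the measured reading ratio multiplied by the correction factor k.
\[
\Omega_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}
  = \frac{M_{Q_{\mathrm{clin}}}^{f_{\mathrm{clin}}}}{M_{Q_{\mathrm{msr}}}^{f_{\mathrm{msr}}}}
  \, k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}
\]
```

    The correction factors quoted in the abstract are these k values, i.e., the multiplicative adjustment that turns the measured microDiamond ratio into the field output factor.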

  20. Balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2013-01-01

    In this paper, we present a theoretical analysis of the model reduction algorithm for linear switched systems from Shaker and Wisniewski (2011, 2009) and . This algorithm is a reminiscence of the balanced truncation method for linear parameter varying systems (Wood et al., 1996) [3]. Specifically...

  1. Linear Fixed-Field Multi-Pass Arcs for Recirculating Linear Accelerators

    International Nuclear Information System (INIS)

    Morozov, V.S.; Bogacz, S.A.; Roblin, Y.R.; Beard, K.B.

    2012-01-01

    Recirculating Linear Accelerators (RLA's) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. We present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.

  2. Linear morphoea follows Blaschko's lines.

    Science.gov (United States)

    Weibel, L; Harper, J I

    2008-07-01

    The aetiology of morphoea (or localized scleroderma) remains unknown. It has previously been suggested that lesions of linear morphoea may follow Blaschko's lines and thus reflect an embryological development. However, the distribution of linear morphoea has never been accurately evaluated. We aimed to identify common patterns of clinical presentation in children with linear morphoea and to establish whether linear morphoea follows the lines of Blaschko. A retrospective chart review of 65 children with linear morphoea was performed. According to clinical photographs the skin lesions of these patients were plotted on to standardized head and body charts. With the aid of Adobe Illustrator a final figure was produced including an overlay of all individual lesions which was used for comparison with the published lines of Blaschko. Thirty-four (53%) patients had the en coup de sabre subtype, 27 (41%) presented with linear morphoea on the trunk and/or limbs and four (6%) children had a combination of the two. In 55 (85%) children the skin lesions were confined to one side of the body, showing no preference for either left or right side. On comparing the overlays of all body and head lesions with the original lines of Blaschko there was an excellent correlation. Our data indicate that linear morphoea follows the lines of Blaschko. We hypothesize that in patients with linear morphoea susceptible cells are present in a mosaic state and that exposure to some trigger factor may result in the development of this condition.

  3. Mehar Methods for Fuzzy Optimal Solution and Sensitivity Analysis of Fuzzy Linear Programming with Symmetric Trapezoidal Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Sukhpreet Kaur Sidhu

    2014-01-01

    Full Text Available The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems in which the coefficients of the constraints are represented by real numbers, and all the other parameters as well as the variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out. To resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing methods, is proposed to deal with the sensitivity analysis of the same type of linear programming problems.

  4. Menu-Driven Solver Of Linear-Programming Problems

    Science.gov (United States)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
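
    ALPS itself is an APL2 program; as a rough modern analogue of the plain LP case (not the ALPS interface or its integer-programming features), a small linear program can be posed and solved with SciPy. The numbers below are invented for illustration.

```python
# Small illustrative LP (not ALPS): maximize 3x + 2y subject to resource limits.
# SciPy's linprog minimizes, so the objective is negated.
from scipy.optimize import linprog

c = [-3.0, -2.0]                       # negated objective coefficients
A_ub = [[1.0, 1.0],                    # x + y <= 4
        [2.0, 1.0]]                    # 2x + y <= 6
b_ub = [4.0, 6.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal point:", res.x, "maximum value:", -res.fun)   # expects x=2, y=2, value 10
```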

  5. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    Science.gov (United States)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  6. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building and external as well as internal gapped spring support systems. This paper describes in general a unique computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The computational procedure is outlined in the INTRANS code, which computes component flexibilities of a discrete lumped-mass planar model of reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional and shear stiffnesses interacting with an external as well as internal linear and non-linear multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for the solution of the dynamic equilibrium equations of motion for the system. During the computing process, the dynamic response of each lumped mass is calculated at a specific instant of time using the well-known step-by-step procedure. At any instant of time, the transient dynamic motions of the system are held stationary and based on the predicted motions and internal forces of the previous instant, from which the complete response at any time step of interest may then be computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval
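
    The step-by-step time integration described above can be illustrated for a single lumped mass restrained by a beam-like spring and a one-sided (gapped) support, integrated with the classical fourth-order Runge-Kutta scheme. This is only a schematic of the integration idea, not the INTRANS code; all parameter values and the load history are invented.

```python
# Schematic RK4 time integration of one lumped mass with a gapped (one-sided) spring
# support. Not INTRANS: parameters and the transient load are invented for illustration.
import numpy as np

m, k_beam, k_gap, gap, c = 10.0, 1.0e4, 5.0e5, 1.0e-3, 20.0

def accel(t, x, v):
    f_ext = 500.0 * np.sin(40.0 * t)                    # transient external load
    f_gap = k_gap * (x - gap) if x > gap else 0.0       # support engages only after the gap closes
    return (f_ext - k_beam * x - c * v - f_gap) / m

def rk4_step(t, x, v, h):
    k1x, k1v = v, accel(t, x, v)
    k2x, k2v = v + 0.5 * h * k1v, accel(t + 0.5 * h, x + 0.5 * h * k1x, v + 0.5 * h * k1v)
    k3x, k3v = v + 0.5 * h * k2v, accel(t + 0.5 * h, x + 0.5 * h * k2x, v + 0.5 * h * k2v)
    k4x, k4v = v + h * k3v, accel(t + h, x + h * k3x, v + h * k3v)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

t, x, v, h = 0.0, 0.0, 0.0, 1.0e-4
for _ in range(5000):                                   # 0.5 s of response
    x, v = rk4_step(t, x, v, h)
    t += h
print(f"displacement at t = {t:.2f} s: {x:.6e} m")
```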

  7. Assessment of behavior factor of eccentrically braced frame with ...

    African Journals Online (AJOL)

    Assessment of behavior factor of eccentrically braced frame with vertical link in cyclic loading. ... Journal of Fundamental and Applied Sciences ... In order to understand the behavior of these structures using non-linear static and dynamic analysis of building's behavior factor, eccentric and exocentric systems were calculated ...

  8. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed. The contribution of these to a coal sample is determined utilizing several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. Target transformation factor analysis (TTFA) is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and their elemental concentrations were then examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified

  9. Dynamic Response Analysis of Linear Pulse Motor with Closed Loop Control

    OpenAIRE

    山本, 行雄; 山田, 一

    1989-01-01

    A linear pulse motor can translate digital signals into linear positions without a gear system. It is important to predict the dynamic response in order to design a motor with good performance. In this report the maximum pulse rate and the maximum speed of the linear pulse motor are obtained by using the sampling theory.

  10. Use of multivariate extensions of generalized linear models in the analysis of data from clinical trials

    OpenAIRE

    ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose

    2002-01-01

    In medical studies categorical endpoints are quite common. Even though some models for handling these multicategorical variables have been developed, their use is not widespread. This work shows an application of multivariate generalized linear models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...

  11. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    Directory of Open Access Journals (Sweden)

    Mingwu Jin

    2012-01-01

    Full Text Available Local canonical correlation analysis (CCA is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM, a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
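
    As a toy illustration of plain canonical correlation analysis itself (not the directional contrast statistic derived in the paper), scikit-learn's CCA can be applied to two small blocks of variables; the data below are random placeholders, not fMRI time series.

```python
# Toy CCA between a "design" block X and a "signal" block Y (random placeholder data,
# not fMRI). Illustrates plain CCA, not the paper's directional contrast test.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 200
latent = rng.normal(size=n)                              # shared component
X = np.column_stack([latent + 0.3 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([2.0 * latent + 0.3 * rng.normal(size=n),
                     rng.normal(size=n), rng.normal(size=n)])

U, V = CCA(n_components=1).fit_transform(X, Y)
print("first canonical correlation:", np.corrcoef(U[:, 0], V[:, 0])[0, 1])
```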

  12. Quantifying the linear and nonlinear relations between the urban form fragmentation and the carbon emission distribution

    Science.gov (United States)

    Zuo, S.; Dai, S.; Ren, Y.; Yu, Z.

    2017-12-01

    Scientifically revealing the spatial heterogeneity of, and the relationship between, urban landscape fragmentation and direct carbon emissions is of great significance to land management and urban planning. In fact, both linear and nonlinear effects among the various factors shape the spatial pattern of carbon emissions. However, there is a lack of studies on the direct and indirect relations between carbon emissions and changes in the city's functional spatial form, which cannot be reflected by land use change alone. The linear strength and direction of a single factor can be calculated through correlation and Geographically Weighted Regression (GWR) analysis, while the nonlinear power of one factor and the interaction power of each pair of factors can be quantified by Geodetector analysis. Therefore, we compared the landscape fragmentation metrics of urban land cover and functional district patches to characterize the landscape form, and then revealed the relations between the landscape fragmentation level and direct carbon emissions based on the three methods. The results showed that fragmentation decreased and the fragmented patches clustered at the coarser resolution. The direct CO2 emission density and the population density increased when the fragmentation level aggregated. The correlation analysis indicated a weak linear relation between them. The spatial variation of the GWR output indicated that the fragmentation indicator (MESH) had a positive influence on carbon emissions in the relatively high emission regions, and the regions with negative effects accounted for a small part of the area. The Geodetector analysis, which explores the nonlinear relation, identified DIVISION and MESH as the most powerful direct factors for the land cover patches, NP and PD for the functional district patches, and the interactions between the fragmentation indicator (MESH) and urban sprawl metrics (PUA and DIS) had the greatly increased explanation powers on the

  13. Effects of dual-energy CT with non-linear blending on abdominal CT angiography

    International Nuclear Information System (INIS)

    Li, Sulan; Wang, Chaoqin; Jiang, Xiao Chen; Xu, Ge

    2014-01-01

    To determine whether non-linear blending technique for arterial-phase dual-energy abdominal CT angiography (CTA) could improve image quality compared to the linear blending technique and conventional 120 kVp imaging. This study included 118 patients who had accepted dual-energy abdominal CTA in the arterial phase. They were assigned to Sn140/80 kVp protocol (protocol A, n = 40) if body mass index (BMI) < 25 or Sn140/100 kVp protocol (protocol B, n = 41) if BMI ≥ 25. Non-linear blending images and linear blending images with a weighting factor of 0.5 in each protocol were generated and compared with the conventional 120 kVp images (protocol C, n = 37). The abdominal vascular enhancements, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and radiation dose were assessed. Statistical analysis was performed using one-way analysis of variance test, independent t test, Mann-Whitney U test, and Kruskal-Wallis test. Mean vascular attenuation, CNR, SNR and subjective image quality score for the non-linear blending images in each protocol were all higher compared to the corresponding linear blending images and 120 kVp images (p values ranging from < 0.001 to 0.007) except for when compared to non-linear blending images for protocol B and 120 kVp images in CNR and SNR. No significant differences were found in image noise among the three kinds of images and the same kind of images in different protocols, but the lowest radiation dose was shown in protocol A. Non-linear blending technique of dual-energy CT can improve the image quality of arterial-phase abdominal CTA, especially with the Sn140/80 kVp scanning.
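
    The contrast between a fixed-weight linear blend and a non-linear blend can be sketched as simple image arithmetic. The sigmoid weighting below is a generic stand-in for whatever blending the scanner software applies, and the centre/width values (in HU) are invented.

```python
# Sketch of linear vs. non-linear blending of low- and high-kV images.
# The sigmoid weighting is a generic stand-in for vendor-specific blending;
# the centre and width parameters (in HU) are invented for illustration.
import numpy as np

def linear_blend(low_kv, high_kv, w=0.5):
    return w * low_kv + (1.0 - w) * high_kv

def nonlinear_blend(low_kv, high_kv, center=150.0, width=200.0):
    # weight toward the low-kV image where attenuation (e.g. iodine signal) is high
    w = 1.0 / (1.0 + np.exp(-(high_kv - center) / width))
    return w * low_kv + (1.0 - w) * high_kv

low_kv = np.array([[300.0, 40.0], [250.0, 60.0]])    # toy 2x2 "images" in HU
high_kv = np.array([[200.0, 35.0], [180.0, 55.0]])
print(linear_blend(low_kv, high_kv))
print(nonlinear_blend(low_kv, high_kv))
```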

  14. Effects of dual-energy CT with non-linear blending on abdominal CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Li, Sulan; Wang, Chaoqin; Jiang, Xiao Chen; Xu, Ge [Dept. of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou (China)

    2014-08-15

    To determine whether non-linear blending technique for arterial-phase dual-energy abdominal CT angiography (CTA) could improve image quality compared to the linear blending technique and conventional 120 kVp imaging. This study included 118 patients who had accepted dual-energy abdominal CTA in the arterial phase. They were assigned to Sn140/80 kVp protocol (protocol A, n = 40) if body mass index (BMI) < 25 or Sn140/100 kVp protocol (protocol B, n = 41) if BMI ≥ 25. Non-linear blending images and linear blending images with a weighting factor of 0.5 in each protocol were generated and compared with the conventional 120 kVp images (protocol C, n = 37). The abdominal vascular enhancements, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and radiation dose were assessed. Statistical analysis was performed using one-way analysis of variance test, independent t test, Mann-Whitney U test, and Kruskal-Wallis test. Mean vascular attenuation, CNR, SNR and subjective image quality score for the non-linear blending images in each protocol were all higher compared to the corresponding linear blending images and 120 kVp images (p values ranging from < 0.001 to 0.007) except for when compared to non-linear blending images for protocol B and 120 kVp images in CNR and SNR. No significant differences were found in image noise among the three kinds of images and the same kind of images in different protocols, but the lowest radiation dose was shown in protocol A. Non-linear blending technique of dual-energy CT can improve the image quality of arterial-phase abdominal CTA, especially with the Sn140/80 kVp scanning.

  15. Analysis of human factors on urban heat island and simulation of urban thermal environment in Lanzhou city, China

    Science.gov (United States)

    Pan, Jinghu

    2015-01-01

    Urban heat island (UHI) effect is a global phenomenon caused by urbanization. Because of the number and complexity of factors contributing to the urban thermal environment, traditional statistical methods are insufficient for acquiring data and analyzing the impact of human activities on the thermal environment, especially for identifying which factors are dominant. The UHI elements were extracted using thermal infrared remote sensing data to retrieve the land surface temperatures of Lanzhou city, and then adopting an object-oriented fractal net evolution approach to create an image segmentation of the land surface temperature (LST). The effects of urban expansion on the urban thermal environment were quantitatively analyzed. A comprehensive evaluation system of the urban thermal environment was constructed, the spatial pattern of the urban thermal environment in Lanzhou was assessed, and principal influencing factors were identified using spatial principal component analysis (SPCA) and multisource spatial data. We found that in the last 20 years, the UHI effect in Lanzhou city has been strengthened, as the UHI ratio index has increased from 0.385 in 1993 to 0.579 in 2001 and to 0.653 in 2011. The UHI expansion had a spatiotemporal consistency with the urban expansion. The four major factors that affect the spatial pattern of the urban thermal environment in Lanzhou can be ranked in the following order: landscape configuration, anthropogenic heat release, urban construction, and gradient from man-made to natural land cover. These four together accounted for 91.27% of the variance. A linear model was thus successfully constructed, implying that SPCA is helpful in identifying major contributors to UHI. Regression analysis indicated that the instantaneous LST and the simulated thermal environment have a good linear relationship, the correlation coefficient between the two reached 0.8011, highly significant at a confidence level of 0.001.

  16. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  17. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  18. Polarized electron sources for linear colliders

    International Nuclear Information System (INIS)

    Clendenin, J.E.; Ecklund, S.D.; Miller, R.H.; Schultz, D.C.; Sheppard, J.C.

    1992-07-01

    Linear colliders require high peak current beams with low duty factors. Several methods to produce polarized e⁻ beams for accelerators have been developed. The SLC, the first linear collider, utilizes a photocathode gun with a GaAs cathode. Although photocathode sources are probably the only practical alternative for the next generation of linear colliders, several problems remain to be solved, including high voltage breakdown which poisons the cathode, charge limitations that are associated with the condition of the semiconductor cathode, and a relatively low polarization of ≤50%. Methods to solve or at least greatly reduce the impact of each of these problems are at hand

  19. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints

    Science.gov (United States)

    Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan

    2017-01-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903
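
    The step from a flux cone (steady state S·v = 0 with irreversible fluxes v ≥ 0) to a flux polyhedron simply adds inhomogeneous constraints such as minimum or maximum rates, and a flux vector inside such a polyhedron can then be found by ordinary linear programming. The two-metabolite toy network below is invented purely for illustration.

```python
# Toy illustration: S v = 0 with v >= 0 defines a flux cone; adding rate bounds
# (inhomogeneous constraints) turns it into a flux polyhedron that LP can explore.
# The 2-metabolite, 4-reaction network is invented.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1,  0,  0],     # metabolite A: produced by r1, consumed by r2
              [0,  1, -1, -1]])    # metabolite B: produced by r2, consumed by r3 and r4
bounds = [(0, 10), (0, 10), (1, 10), (0, 10)]   # r3 must carry at least 1 flux unit

# maximize flux through r4 inside the polyhedron (linprog minimizes, so negate)
res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("one optimal flux vector in the polyhedron:", res.x)
```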

  20. Analysis of factors influencing the integrated bolus peak timing in contrast-enhanced brain computed tomographic angiography

    International Nuclear Information System (INIS)

    Son, Soon Yong; Choi, Kwan Woo; Jeong, Hoi Woun; Jang, Seo Goo; Jung, Jae Young; Yun, Jung Soo; Kim, Ki Won; Lee, Young Ah; Son, Jin Hyun; Min, Jung Whan

    2016-01-01

    The objective of this study was to analyze the factors influencing integrated bolus peak timing in contrast-enhanced computed tomographic angiography (CTA) and to determine a method of calculating the personal peak time. The optimal time was calculated by performing multiple linear regression analysis, after finding the influencing factors through correlation analysis between the integrated peak time of the contrast medium and personal measured values from the monitoring CTA scans. The radiation exposure dose in CTA was 716.53 mGy·cm and the radiation exposure dose in the monitoring scan was 15.52 mGy (2-34 mGy). The results were statistically significant (p < .01). Regression analysis revealed a change of -0.160 per one-step increase in heart rate in males, and changes of -0.004, -0.174, and 0.006 per one-step increase in DBP, heart rate, and blood sugar, respectively, in females. In a consistency test comparing the measured peak time with the peak time calculated from the regression equation, the consistency was determined to be very high for both males and females. This study could prevent unnecessary dose exposure by encouraging in-clinic calculation of the personal integrated peak time of contrast medium prior to examination

  1. Source Apportionment and Influencing Factor Analysis of Residential Indoor PM2.5 in Beijing

    Science.gov (United States)

    Yang, Yibing; Liu, Liu; Xu, Chunyu; Li, Na; Liu, Zhe; Wang, Qin; Xu, Dongqun

    2018-01-01

    In order to identify the sources of indoor PM2.5 and to check which factors influence the concentrations of indoor PM2.5 and its chemical elements, indoor concentrations of PM2.5 and its related elements in residential houses in Beijing were explored. Indoor and outdoor PM2.5 samples that were monitored continuously for one week were collected. Indoor and outdoor concentrations of PM2.5 and 15 elements (Al, As, Ca, Cd, Cu, Fe, K, Mg, Mn, Na, Pb, Se, Tl, V, Zn) were calculated and compared. The median indoor concentration of PM2.5 was 57.64 μg/m³. For elements in indoor PM2.5, Cd and As may be sensitive to indoor smoking; Zn, Ca and Al may be related to indoor sources other than smoking; and Pb, V and Se may mainly come from outdoors. Five factors were extracted for indoor PM2.5 by factor analysis, explaining 76.8% of the total variance; outdoor sources contributed more than indoor sources. Multiple linear regression analysis for indoor PM2.5, Cd and Pb was performed. Indoor PM2.5 was influenced by factors including outdoor PM2.5, smoking during sampling, outdoor temperature and time of air conditioner use. Indoor Cd was affected by factors including smoking during sampling, outdoor Cd and building age. Indoor Pb concentration was associated with factors including outdoor Pb, time of window opening per day, building age and RH. In conclusion, indoor PM2.5 mainly comes from outdoor sources, but the contributions of indoor sources cannot be ignored. Factors associated with indoor and outdoor air exchange can influence the concentrations of indoor PM2.5 and its constituents. PMID:29621164
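
    The two analysis steps described (factor extraction from the element concentrations, then multiple linear regression of indoor PM2.5 on candidate predictors) can be sketched generically with scikit-learn; the data below are random placeholders, not the study's measurements, and the predictor names merely mirror the abstract.

```python
# Generic sketch of factor extraction plus multiple linear regression for indoor PM2.5.
# All data are random placeholders, not the Beijing measurements.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
elements = rng.lognormal(size=(60, 15))            # 60 samples x 15 elements (Al ... Zn)

fa = FactorAnalysis(n_components=5)
scores = fa.fit_transform(elements)                # factor scores per sample
print("loadings shape (factors x elements):", fa.components_.shape)

# regress indoor PM2.5 on outdoor PM2.5, a smoking indicator and outdoor temperature
X = np.column_stack([rng.normal(60, 20, 60), rng.integers(0, 2, 60), rng.normal(20, 5, 60)])
indoor_pm25 = 0.6 * X[:, 0] + 15 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 5, 60)
print("regression coefficients:", LinearRegression().fit(X, indoor_pm25).coef_)
```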

  2. An Analysis of the Factors Influencing the Adoption of Activity Based Costing (ABC in the Financial Sector in Jamaica

    Directory of Open Access Journals (Sweden)

    Phillip C. James

    2013-07-01

    Full Text Available Financial institutions are increasingly operating in a highly competitive environment, and therefore cost management has become an imperative. This paper investigates the factors influencing the adoption of the activity-based costing (ABC) methodology within the financial sector in Jamaica. Quantitative analysis was done using a generalized linear logistic regression model. The results show that three main factors are statistically significant in the decision to implement an ABC system: companies' perception of the ability of ABC to assist in cost control, the proportion of overhead to total cost, and finally the action of competitors, that is, whether a competitor adopts the ABC methodology

  3. Linearity analysis and comparison study on the epoc® point-of-care blood analysis system in cardiopulmonary bypass patients

    Directory of Open Access Journals (Sweden)

    Jianing Chen

    2016-03-01

    Full Text Available The epoc® blood analysis system (Epocal Inc., Ottawa, Ontario, Canada is a newly developed in vitro diagnostic hand-held analyzer for testing whole blood samples at point-of-care, which provides blood gas, electrolytes, ionized calcium, glucose, lactate, and hematocrit/calculated hemoglobin rapidly. The analytical performance of the epoc® system was evaluated in a tertiary hospital, see related research article “Analytical evaluation of the epoc® point-of-care blood analysis system in cardiopulmonary bypass patients” [1]. Data presented are the linearity analysis for 9 parameters and the comparison study in 40 cardiopulmonary bypass patients on 3 epoc® meters, Instrumentation Laboratory GEM4000, Abbott iSTAT, Nova CCX, and Roche Accu-Chek Inform II and Performa glucose meters.

  4. Electromagnetic linear machines with dual Halbach array design and analysis

    CERN Document Server

    Yan, Liang; Peng, Juanjuan; Zhang, Lei; Jiao, Zongxia

    2017-01-01

    This book extends the conventional two-dimensional (2D) magnet arrangement into 3D pattern for permanent magnet linear machines for the first time, and proposes a novel dual Halbach array. It can not only effectively increase the radial component of magnetic flux density and output force of tubular linear machines, but also significantly reduce the axial flux density, radial force and thus system vibrations and noises. The book is also the first to address the fundamentals and provide a summary of conventional arrays, as well as novel concepts for PM pole design in electric linear machines. It covers theoretical study, numerical simulation, design optimization and experimental works systematically. The design concept and analytical approaches can be implemented to other linear and rotary machines with similar structures. The book will be of interest to academics, researchers, R&D engineers and graduate students in electronic engineering and mechanical engineering who wish to learn the core principles, met...

  5. Linear Algebra and Smarandache Linear Algebra

    OpenAIRE

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  6. Linearized method: A new approach for kinetic analysis of central dopamine D2 receptor specific binding

    International Nuclear Information System (INIS)

    Watabe, Hiroshi; Hatazawa, Jun; Ishiwata, Kiichi; Ido, Tatsuo; Itoh, Masatoshi; Iwata, Ren; Nakamura, Takashi; Takahashi, Toshihiro; Hatano, Kentaro

    1995-01-01

    The authors proposed a new method (the Linearized method) to analyze neuroleptic ligand-receptor specific binding in the human brain using positron emission tomography (PET). They derived a linear equation to solve for the four rate constants k3, k4, k5 and k6 from PET data. This method does not require the plasma radioactivity curve as an input function to the brain, and allows fast calculation of the rate constants. They also tested the Nonlinearized method, the conventional analysis based on nonlinear equations, which uses plasma radioactivity corrected for ligand metabolites as an input function. The authors applied these methods to evaluate dopamine D2 receptor specific binding of [11C]YM-09151-2. The value of Bmax/Kd = k3/k4 obtained by the Linearized method was 5.72 ± 3.1, which was consistent with the value of 5.78 ± 3.4 obtained by the Nonlinearized method

  7. Assessing risk factors for periodontitis using regression

    Science.gov (United States)

    Lobo Pereira, J. A.; Ferreira, Maria Cristina; Oliveira, Teresa

    2013-10-01

    Multivariate statistical analysis is indispensable to assess the associations and interactions between different factors and the risk of periodontitis. Among others, regression analysis is a statistical technique widely used in healthcare to investigate and model the relationship between variables. In our work we study the impact of socio-demographic, medical and behavioral factors on periodontal health. Using regression, linear and logistic models, we can assess the relevance, as risk factors for periodontitis disease, of the following independent variables (IVs): Age, Gender, Diabetic Status, Education, Smoking status and Plaque Index. The multiple linear regression analysis model was built to evaluate the influence of IVs on mean Attachment Loss (AL). Thus, the regression coefficients along with respective p-values will be obtained as well as the respective p-values from the significance tests. The classification of a case (individual) adopted in the logistic model was the extent of the destruction of periodontal tissues defined by an Attachment Loss greater than or equal to 4 mm in 25% (AL≥4mm/≥25%) of sites surveyed. The association measures include the Odds Ratios together with the correspondent 95% confidence intervals.
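
    The two models named in this abstract (a linear model for mean attachment loss and a logistic model for the binary case definition) can be written concisely with statsmodels formulas. The data frame below is synthetic; the variable names only mirror the abstract.

```python
# Sketch of the two models described: OLS for mean attachment loss (AL) and logistic
# regression for the binary case definition. Synthetic data; names mirror the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "smoker": rng.integers(0, 2, n),
    "diabetic": rng.integers(0, 2, n),
    "plaque_index": rng.uniform(0, 3, n),
})
df["mean_AL"] = 1 + 0.03 * df.age + 0.8 * df.smoker + 0.5 * df.plaque_index + rng.normal(0, 0.7, n)
df["case"] = (df.mean_AL + rng.normal(0, 1.0, n) > 4).astype(int)   # proxy for AL>=4mm in >=25% of sites

linear = smf.ols("mean_AL ~ age + smoker + diabetic + plaque_index", data=df).fit()
logistic = smf.logit("case ~ age + smoker + diabetic + plaque_index", data=df).fit(disp=0)
print(linear.params)
print(np.exp(logistic.params))    # odds ratios
```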

  8. STABILITY, BIFURCATIONS AND CHAOS IN UNEMPLOYMENT NON-LINEAR DYNAMICS

    Directory of Open Access Journals (Sweden)

    Pagliari Carmen

    2013-07-01

    Full Text Available The traditional analysis of unemployment in relation to real output dynamics is based on empirical evidence drawn from Okun’s studies. In particular, the so-called Okun’s Law is expressed in a linear mathematical formulation, which cannot explain the fluctuations of the variables involved. Linearity is a severe limitation for macroeconomic analysis, especially for any economic growth study that considers the unemployment rate among the endogenous variables. This paper presents an introductory study of the role of non-linearity in the investigation of unemployment dynamics. The main idea is the existence of a non-linear relation between the unemployment rate and the gap of the GDP growth rate from its trend. The macroeconomic motivation for this idea stems from two concatenated effects caused by a variation of the unemployment rate on the real output growth rate. These two effects are concatenated because the first effect generates a secondary one on the same variable. When the unemployment rate changes, the first effect is a variation in the level of production as a consequence of the variation in such an important factor as the labour force; the secondary effect is a subsequent variation in the level of production caused by the change in aggregate demand that follows the change in individual disposable income originated by the previous variation of production itself. In this paper the analysis of unemployment dynamics is carried out using the logistic map, and the conditions for the existence of bifurcations (cycles) are determined. The study also identifies the range of some characteristic parameters that should be avoided in order not to have absolute unpredictability of unemployment dynamics (deterministic chaos): unpredictability is equivalent to uncontrollability because of the total absence of information about the future value of the variable to
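
    The tool used in this analysis, the logistic map x_{n+1} = r·x_n(1 − x_n), can be iterated directly to see the fixed-point, periodic (bifurcation) and chaotic regimes the abstract refers to. The parameter values below are the map's classical textbook regimes, not the paper's calibrated unemployment parameters.

```python
import numpy as np

def logistic_orbit(r, x0=0.4, n_transient=500, n_keep=8):
    """Iterate x_{n+1} = r * x_n * (1 - x_n); return the last n_keep states."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return np.round(orbit, 4)

# Classical regimes of the logistic map (illustrative, not the paper's estimates):
for r in (2.8, 3.2, 3.5, 3.9):
    print(r, logistic_orbit(r))
# r = 2.8 -> single fixed point, r = 3.2 -> 2-cycle,
# r = 3.5 -> 4-cycle, r = 3.9 -> aperiodic (chaotic) orbit
```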

  9. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
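
    A minimal sketch of the three quantities discussed in the tutorial (the Pearson r, the Spearman rho, and a simple linear regression fit) using SciPy on synthetic data; the numbers are illustrative, not the CT-guided interventional data set used in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic predictor/outcome pair (illustrative only)
x = rng.uniform(0, 10, 40)
y = 2.0 + 0.8 * x + rng.normal(0, 1.0, 40)

r_pearson, p_pearson = stats.pearsonr(x, y)       # linear association
rho_spearman, p_spearman = stats.spearmanr(x, y)  # monotonic (rank-based) association

# Simple linear regression of the outcome on the predictor
fit = stats.linregress(x, y)
print(f"Pearson r = {r_pearson:.3f} (p = {p_pearson:.3g})")
print(f"Spearman rho = {rho_spearman:.3f} (p = {p_spearman:.3g})")
print(f"y = {fit.intercept:.2f} + {fit.slope:.2f} x, R^2 = {fit.rvalue**2:.3f}")
```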

  10. Detection of non-milk fat in milk fat by gas chromatography and linear discriminant analysis.

    Science.gov (United States)

    Gutiérrez, R; Vega, S; Díaz, G; Sánchez, J; Coronado, M; Ramírez, A; Pérez, J; González, M; Schettino, B

    2009-05-01

    Gas chromatography was utilized to determine triacylglycerol profiles in milk and non-milk fat. The values of triacylglycerol were subjected to linear discriminant analysis to detect and quantify non-milk fat in milk fat. Two groups of milk fat were analyzed: A) raw milk fat from the central region of Mexico (n = 216) and B) ultrapasteurized milk fat from 3 industries (n = 36), as well as pork lard (n = 2), bovine tallow (n = 2), fish oil (n = 2), peanut (n = 2), corn (n = 2), olive (n = 2), and soy (n = 2). The samples of raw milk fat were adulterated with non-milk fats in proportions of 0, 5, 10, 15, and 20% to form 5 groups. The first function obtained from the linear discriminant analysis allowed the correct classification of 94.4% of the samples with levels <10% of adulteration. The triacylglycerol values of the ultrapasteurized milk fats were evaluated with the discriminant function, demonstrating that one industry added non-milk fat to its product in 80% of the samples analyzed.
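
    The classification step can be sketched with an off-the-shelf linear discriminant analysis, here from scikit-learn and applied to synthetic "triacylglycerol profile" features; the feature values and class structure are invented for illustration and do not reproduce the chromatographic data of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Synthetic "triacylglycerol profiles": 6 features per sample, 5 adulteration levels
levels = [0, 5, 10, 15, 20]          # % non-milk fat (class labels)
X, y = [], []
for level in levels:
    centre = np.linspace(10, 2, 6) + 0.1 * level   # class-dependent mean profile
    X.append(centre + rng.normal(0, 0.4, size=(40, 6)))
    y.append(np.full(40, level))
X, y = np.vstack(X), np.concatenate(y)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Resubstitution accuracy and scores on the first discriminant function
print("training accuracy:", lda.score(X, y))
first_function = lda.transform(X)[:, 0]
print("first discriminant scores (first 5 samples):", np.round(first_function[:5], 2))
```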

  11. Linear least-squares method for global luminescent oil film skin friction field analysis

    Science.gov (United States)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
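
    The core idea of the LLS approach, stacking the discretized equations contributed by many images into one overdetermined linear system and solving it once in the least-squares sense, can be contrasted with the snapshot-solution-averaging (SSA) strategy on a toy problem. The matrices below are random stand-ins, not the discretized thin-oil-film equation.

```python
import numpy as np

rng = np.random.default_rng(3)

n_unknowns = 50      # e.g. skin-friction values at grid points (toy size)
n_snapshots = 30     # image pairs; each contributes a block of equations
tau_true = rng.normal(0, 1, n_unknowns)

blocks_A, blocks_b, snapshot_solutions = [], [], []
for _ in range(n_snapshots):
    A_k = rng.normal(0, 1, (n_unknowns, n_unknowns))        # toy per-snapshot operator
    b_k = A_k @ tau_true + rng.normal(0, 0.3, n_unknowns)   # noisy data
    blocks_A.append(A_k)
    blocks_b.append(b_k)
    # SSA-style: solve each snapshot separately, average the solutions afterwards
    snapshot_solutions.append(np.linalg.lstsq(A_k, b_k, rcond=None)[0])

# LLS-style: stack all snapshots into one overdetermined system and solve once
A = np.vstack(blocks_A)
b = np.concatenate(blocks_b)
tau_lls = np.linalg.lstsq(A, b, rcond=None)[0]
tau_ssa = np.mean(snapshot_solutions, axis=0)

print("LLS error:", np.linalg.norm(tau_lls - tau_true))
print("SSA error:", np.linalg.norm(tau_ssa - tau_true))
```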

  12. Non-linear analysis and the design of Pumpkin Balloons: stress, stability and viscoelasticity

    Science.gov (United States)

    Rand, J. L.; Wakefield, D. S.

    Tensys have a long-established background in the shape generation and load analysis of architectural stressed membrane structures. Founded upon their inTENS finite element analysis suite, these activities have broadened to encompass lighter-than-air structures such as aerostats, hybrid air-vehicles and stratospheric balloons. Winzen Engineering couple many years of practical balloon design and fabrication experience with both academic and practical knowledge of the characterisation of the non-linear viscoelastic response of the polymeric films typically used for high-altitude scientific balloons. Both companies have provided consulting services to the NASA Ultra Long Duration Balloon (ULDB) Program. Early implementations of pumpkin balloons have shown problems of geometric instability, characterised by improper deployment, and these difficulties have been reproduced numerically using inTENS. The solution lies in both the shapes of the membrane lobes and also the need to generate a biaxial stress field in order to mobilise in-plane shear stiffness. Balloons undergo significant temperature and pressure variations in flight. The different thermal characteristics between tendons and film can lead to significant meridional stress. Fabrication tolerances can lead to significant local hoop stress concentrations, particularly adjacent to the base and apex end fittings. The non-linear viscoelastic response of the envelope film acts positively to help dissipate stress concentrations. However, creep over time may produce lobe geometry variations that may

  13. Advanced linear algebra for engineers with Matlab

    CERN Document Server

    Dianat, Sohail A

    2009-01-01

    Matrices, Matrix Algebra, and Elementary Matrix Operations: Basic Concepts and Notation; Matrix Algebra; Elementary Row Operations; Solution of System of Linear Equations; Matrix Partitions; Block Multiplication; Inner, Outer, and Kronecker Products. Determinants, Matrix Inversion and Solutions to Systems of Linear Equations: Determinant of a Matrix; Matrix Inversion; Solution of Simultaneous Linear Equations; Applications: Circuit Analysis; Homogeneous Coordinates System. Rank, Nu

  14. Quantum linear Boltzmann equation

    International Nuclear Information System (INIS)

    Vacchini, Bassano; Hornberger, Klaus

    2009-01-01

    We review the quantum version of the linear Boltzmann equation, which describes in a non-perturbative fashion, by means of scattering theory, how the quantum motion of a single test particle is affected by collisions with an ideal background gas. A heuristic derivation of this Lindblad master equation is presented, based on the requirement of translation-covariance and on the relation to the classical linear Boltzmann equation. After analyzing its general symmetry properties and the associated relaxation dynamics, we discuss a quantum Monte Carlo method for its numerical solution. We then review important limiting forms of the quantum linear Boltzmann equation, such as the case of quantum Brownian motion and pure collisional decoherence, as well as the application to matter wave optics. Finally, we point to the incorporation of quantum degeneracies and self-interactions in the gas by relating the equation to the dynamic structure factor of the ambient medium, and we provide an extension of the equation to include internal degrees of freedom.

  15. The effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling

    Science.gov (United States)

    Sulistyo, Bambang

    2016-11-01

    The research was aimed at studying the effect of choosing three different C factor formulae derived from NDVI on a fully raster-based erosion modelling with the USLE using remote sensing data and GIS techniques. The method applied was to analyse all factors affecting erosion such that all data were in raster form. Those data were the R, K, LS, C and P factors. The monthly R factor was evaluated based on the formula developed by Abdurachman. The K factor was determined using a modified formula used by the Ministry of Forestry, based on soil samples taken in the field. The LS factor was derived from a Digital Elevation Model. The three C factors used were all derived from NDVI and developed by Suriyaprasit (non-linear) and by Sulistyo (linear and non-linear). The P factor was derived from the combination of slope data and the landcover classification interpreted from Landsat 7 ETM+. A further step was the creation of a map of bulk density used to convert erosion units. To assess model accuracy, model validation was done by applying statistical analysis and by comparing Emodel with Eactual. A threshold value of ≥ 0.80 or ≥ 80% was chosen as the acceptance criterion. The results showed that all Emodel values computed with the three C factor formulae have correlation coefficients > 0.8. The analysis of variance showed that there was a significant difference between Emodel and Eactual when using the C factor formulae developed by Suriyaprasit and by Sulistyo (non-linear). Among the three formulae, only the Emodel using the C factor formula developed by Sulistyo (linear) reached an accuracy of 81.13%, while the others reached only 56.02% (Sulistyo, non-linear) and 4.70% (Suriyaprasit).
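
    A fully raster-based USLE computation is simply a per-pixel product of the factor grids, A = R·K·LS·C·P. The sketch below uses random toy rasters and a generic placeholder NDVI-to-C relation; it does not implement the published formulae of Suriyaprasit or Sulistyo.

```python
import numpy as np

rng = np.random.default_rng(4)
shape = (100, 100)   # toy raster size

# Toy factor rasters (all co-registered grids); values are illustrative only
R  = rng.uniform(1000, 2500, shape)   # rainfall erosivity
K  = rng.uniform(0.1, 0.4, shape)     # soil erodibility
LS = rng.uniform(0.5, 5.0, shape)     # slope length-steepness (from a DEM)
P  = rng.uniform(0.3, 1.0, shape)     # conservation practice
ndvi = rng.uniform(-0.1, 0.8, shape)  # e.g. from Landsat red/NIR bands

# Placeholder linear NDVI -> C relation (NOT the published formulae):
# C decreases as vegetation cover (NDVI) increases, clipped to [0, 1].
C = np.clip(1.0 - ndvi, 0.0, 1.0)

# Fully raster-based USLE: per-pixel product of the factor grids
E_model = R * K * LS * C * P
print("mean modelled erosion:", E_model.mean())
```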

  16. Non-linear thermal analysis of light concrete hollow brick walls by the finite element method and experimental validation

    International Nuclear Information System (INIS)

    Diaz del Coz, J.J.; Nieto, P.J. Garcia; Rodriguez, A. Martin; Martinez-Luengas, A. Lozano; Biempica, C. Betegon

    2006-01-01

    The finite element method (FEM) is applied to the non-linear complex heat transfer analysis of light concrete hollow brick walls. The non-linearity is due to the radiation boundary condition inside the inner holes of the bricks. The conduction and convection phenomena are taken into account in this study for three different values of the mortar conductivity and two values for the brick. Finally, the numerical and experimental results are compared and good agreement is shown

  17. Non-linear thermal analysis of light concrete hollow brick walls by the finite element method and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Del Coz Diaz, J.J.; Rodriguez, A. Martin; Martinez-Luengas, A. Lozano; Biempica, C. Betegon [Department of Construction, University of Oviedo, Edificio Departamental Viesques No 7, Dpcho. 7.1.02 Campus de Viesques, 33204 Gijon, Asturias (Spain); Nieto, P.J. Garcia [Departamento de Matematicas, Facultad de Ciencias, C/Calvo Sotelo s/n, 33007 Oviedo, Asturias (Spain)

    2006-06-15

    The finite element method (FEM) is applied to the non-linear complex heat transfer analysis of light concrete hollow brick walls. The non-linearity is due to the radiation boundary condition inside the inner holes of the bricks. The conduction and convection phenomena are taken into account in this study for three different values of the mortar conductivity and two values for the brick. Finally, the numerical and experimental results are compared and good agreement is shown. [Author].

  18. Non-linear thermal analysis of light concrete hollow brick walls by the finite element method and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Diaz del Coz, J.J. [Department of Construction, University of Oviedo, Edificio Departamental Viesques No 7, Dpcho. 7.1.02 Campus de Viesques, 33204 Gijon, Asturias (Spain)]. E-mail: juanjo@constru.uniovi.es; Nieto, P.J. Garcia [Departamento de Matematicas, Facultad de Ciencias, C/Calvo Sotelo s/n, 33007 Oviedo, Asturias (Spain); Rodriguez, A. Martin [Department of Construction, University of Oviedo, Edificio Departamental Viesques No 7, Dpcho. 7.1.02 Campus de Viesques, 33204 Gijon, Asturias (Spain); Martinez-Luengas, A. Lozano [Department of Construction, University of Oviedo, Edificio Departamental Viesques No 7, Dpcho. 7.1.02 Campus de Viesques, 33204 Gijon, Asturias (Spain); Biempica, C. Betegon [Department of Construction, University of Oviedo, Edificio Departamental Viesques No 7, Dpcho. 7.1.02 Campus de Viesques, 33204 Gijon, Asturias (Spain)

    2006-06-15

    The finite element method (FEM) is applied to the non-linear complex heat transfer analysis of light concrete hollow brick walls. The non-linearity is due to the radiation boundary condition inside the inner holes of the bricks. The conduction and convection phenomena are taken into account in this study for three different values of the mortar conductivity and two values for the brick. Finally, the numerical and experimental results are compared and good agreement is shown.

  19. Frequency prediction by linear stability analysis around mean flow

    Science.gov (United States)

    Bengana, Yacine; Tuckerman, Laurette

    2017-11-01

    The frequency of certain limit cycles resulting from a Hopf bifurcation, such as the von Karman vortex street, can be predicted by linear stability analysis around their mean flows. Barkley (2006) has shown this to yield an eigenvalue whose real part is zero and whose imaginary part matches the nonlinear frequency. This property was named RZIF by Turton et al. (2015); moreover they found that the traveling waves (TW) of thermosolutal convection have the RZIF property. They explained this as a consequence of the fact that the temporal Fourier spectrum is dominated by the mean flow and first harmonic. We could therefore consider that only the first mode is important in the saturation of the mean flow as presented in the Self-Consistent Model (SCM) of Mantic-Lugo et al. (2014). We have implemented a full Newton's method to solve the SCM for thermosolutal convection. We show that while the RZIF property is satisfied far from the threshold, the SCM model reproduces the exact frequency only very close to the threshold. Thus, the nonlinear interaction of only the first mode with itself is insufficiently accurate to estimate the mean flow. Our next step will be to take into account higher harmonics and to apply this analysis to the standing waves, for which RZIF does not hold.

  20. Development of an efficient iterative solver for linear systems in FE structural analysis

    International Nuclear Information System (INIS)

    Saint-Georges, P.; Warzee, G.; Beauwens, R.; Notay, Y.

    1993-01-01

    The preconditioned conjugate gradient is a well-known and powerful method for solving sparse symmetric positive definite systems of linear equations. Such systems are generated by finite element discretization in structural analysis, but users of finite elements in this context generally still rely on direct methods. Our purpose in the present paper is to highlight the improvement brought by some new preconditioning techniques and to show that the preconditioned conjugate gradient method is more efficient than any direct method. (author)
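
    A minimal dense-matrix sketch of the preconditioned conjugate gradient with a simple Jacobi (diagonal) preconditioner is given below; the preconditioning techniques evaluated in the paper are more elaborate, and the test matrix here is a random SPD stand-in rather than a finite element stiffness matrix.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for SPD A with a diagonal preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Toy SPD "stiffness-like" matrix (illustrative, not an FE assembly)
rng = np.random.default_rng(5)
B = rng.normal(size=(200, 200))
A = B @ B.T + 200 * np.eye(200)
b = rng.normal(size=200)

x, iters = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```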

  1. Diagnosis and prognosis of Osteoarthritis by texture analysis using sparse linear models

    DEFF Research Database (Denmark)

    Marques, Joselene; Clemmensen, Line Katrine Harder; Dam, Erik

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and sparse feature transformation methods in a fully automatic framework. We compare the performance of a partial least squares (PLS) forward feature selection strategy to a hard threshold sparse PLS...... algorithm and a sparse linear discriminant model. The texture analysis framework was applied to the diagnosis of knee osteoarthritis (OA) and the prognosis of cartilage loss. For this investigation, a generic texture feature bank was extracted from magnetic resonance images of tibial knee bone. The features were...... used as input to the sparse algorithms, which defined the best features to retain in the model. To cope with the limited number of samples, the data were evaluated using 10-fold cross-validation (CV). The diagnosis evaluation using sparse PLS reached a generalization area-under-the-ROC curve (AUC) of 0...

  2. Applied linear algebra

    CERN Document Server

    Olver, Peter J

    2018-01-01

    This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...

  3. Noise analysis and performance of a selfscanned linear InSb detector array

    International Nuclear Information System (INIS)

    Finger, G.; Meyer, M.; Moorwood, A.F.M.

    1987-01-01

    A noise model for detectors operated in the capacitive discharge mode is presented. It is used to analyze the noise performance of the ESO nested timing readout technique applied to a linear 32-element InSb array which is multiplexed by a silicon switched-FET shift register. The analysis shows that kTC noise of the video line is the major noise contribution; it can be eliminated by weighted double-correlated sampling. The best noise performance of this array is achieved at the smallest possible reverse bias voltage (not more than 20 mV), whereas excess noise is observed at higher reverse bias voltages. 5 references

  4. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.

  5. Evaluation of site effects on ground motions based on equivalent linear site response analysis and liquefaction potential in Chennai, south India

    Science.gov (United States)

    Nampally, Subhadra; Padhy, Simanchal; Trupti, S.; Prabhakar Prasad, P.; Seshunarayana, T.

    2018-05-01

    We study local site effects with detailed geotechnical and geophysical site characterization to evaluate the site-specific seismic hazard for the seismic microzonation of the Chennai city in South India. A Maximum Credible Earthquake (MCE) of magnitude 6.0 is considered based on the available seismotectonic and geological information of the study area. We synthesized strong ground motion records for this target event using a stochastic finite-fault technique, based on a dynamic corner frequency approach, at different sites in the city, with the model parameters for the source, site, and path (attenuation) most appropriately selected for this region. We tested the influence of several model parameters on the characteristics of ground motion through simulations and found that stress drop largely influences both the amplitude and frequency of ground motion. To minimize its influence, we estimated stress drop after finite bandwidth correction, as expected from an M6 earthquake in the Indian peninsular shield, for accurately predicting the level of ground motion. Estimates of shear wave velocity averaged over the top 30 m of soil (VS30) are obtained from multichannel analysis of surface waves (MASW) at 210 sites at depths of 30 to 60 m below the ground surface. Using these VS30 values, along with the available geotechnical information and the synthetic ground motion database obtained, equivalent linear one-dimensional site response analysis that approximates the nonlinear soil behavior within the linear analysis framework was performed using the computer program SHAKE2000. Fundamental natural frequency, Peak Ground Acceleration (PGA) at surface and rock levels, response spectrum at surface level for different damping coefficients, and amplification factors are presented at different sites of the city. A liquefaction study was done based on the VS30 and PGA values obtained. The major findings show that the northeast part of the city is characterized by (i) low VS30 values

  6. Factor Analysis of the Aggregated Electric Vehicle Load Based on Data Mining

    Directory of Open Access Journals (Sweden)

    Yao Wang

    2012-06-01

    Full Text Available Electric vehicles (EVs) and the related infrastructure are being developed rapidly. In order to evaluate the impact of various factors on the aggregated EV load and to coordinate charging, a model is established to capture the relationship between the charging load and important factors based on data mining. The factors can be categorized as internal and external. The internal factors include the EV battery size, charging rate at different places, penetration of the charging infrastructure, and charging habits. The external factor is the time-of-use (TOU) pricing policy. As massive input data are necessary for data mining, an algorithm is implemented to generate a large sample as input data that reflects real-world travel patterns based on a historical travel dataset. With the input data, linear regression was used to build a linear model whose inputs were the internal factors. The impact of the internal factors on the EV load can be quantified by analyzing the sign, value, and temporal distribution of the model coefficients. The results showed that when no TOU policy is implemented, the rate of charging at home and range anxiety exert the greatest influence on EV load. For the external factor, a support vector regression technique was used to build a relationship between the TOU policy and EV load. Then, an optimization model based on this relationship was proposed to devise a TOU policy that levels the load. The results suggest that implementing a TOU policy markedly reduces the difference between the peak and valley loads.

  7. Effects of collisions on linear and non-linear spectroscopic line shapes

    International Nuclear Information System (INIS)

    Berman, P.R.

    1978-01-01

    A fundamental physical problem is the determination of atom-atom, atom-molecule and molecule-molecule differential and total scattering cross sections. In this work, a technique for studying atomic and molecular collisions using spectroscopic line shape analysis is discussed. Collisions occurring within an atomic or molecular sample influence the sample's absorptive or emissive properties. Consequently the line shapes associated with the linear or non-linear absorption of external fields by an atomic system reflect the collisional processes occurring in the gas. Explicit line shape expressions are derived characterizing linear or saturated absorption by two-or three-level 'active' atoms which are undergoing collisions with perturber atoms. The line shapes may be broadened, shifted, narrowed, or distorted as a result of collisions which may be 'phase-interrupting' or 'velocity-changing' in nature. Systematic line shape studies can be used to obtain information on both the differential and total active atom-perturber scattering cross sections. (Auth.)

  8. Experimental Analysis of Linear Induction Motor under Variable Voltage Variable Frequency (VVVF) Power Supply

    Directory of Open Access Journals (Sweden)

    Prasenjit D. Wakode

    2016-07-01

    Full Text Available This paper presents a complete analysis of the Linear Induction Motor (LIM) under VVVF supply. The variation of the LIM air gap flux under the 'blocked Linor' condition and the starting force are analyzed and presented when the LIM is fed from a VVVF supply. The analysis of these data is important for further understanding of the equivalent circuit parameters of the LIM and for studying its magnetic circuit. The variation of these parameters is important for knowing the LIM response at different frequencies. The simulation and application of different control strategies, such as vector control, thus become easier to apply, as does understanding the motor's response under such a control strategy.

  9. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.

  10. The non-linear power spectrum of the Lyman alpha forest

    International Nuclear Information System (INIS)

    Arinyo-i-Prats, Andreu; Miralda-Escudé, Jordi; Viel, Matteo; Cen, Renyue

    2015-01-01

    The Lyman alpha forest power spectrum has been measured on large scales by the BOSS survey in SDSS-III at z∼ 2.3, has been shown to agree well with linear theory predictions, and has provided the first measurement of Baryon Acoustic Oscillations at this redshift. However, the power at small scales, affected by non-linearities, has not been well examined so far. We present results from a variety of hydrodynamic simulations to predict the redshift space non-linear power spectrum of the Lyα transmission for several models, testing the dependence on resolution and box size. A new fitting formula is introduced to facilitate the comparison of our simulation results with observations and other simulations. The non-linear power spectrum has a generic shape determined by a transition scale from linear to non-linear anisotropy, and a Jeans scale below which the power drops rapidly. In addition, we predict the two linear bias factors of the Lyα forest and provide a better physical interpretation of their values and redshift evolution. The dependence of these bias factors and the non-linear power on the amplitude and slope of the primordial fluctuations power spectrum, the temperature-density relation of the intergalactic medium, and the mean Lyα transmission, as well as the redshift evolution, is investigated and discussed in detail. A preliminary comparison to the observations shows that the predicted redshift distortion parameter is in good agreement with the recent determination of Blomqvist et al., but the density bias factor is lower than observed. We make all our results publicly available in the form of tables of the non-linear power spectrum that is directly obtained from all our simulations, and parameters of our fitting formula

  11. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    Science.gov (United States)

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
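
    The weighting-factor choice can be illustrated with an ordinary weighted least-squares fit of a linear calibration curve, y = a + b·x, with the weights on the diagonal. The calibration standards and noise model below are invented for illustration; they merely show why 1/x² weighting protects the low end of the curve when σ grows with x.

```python
import numpy as np

# Illustrative calibration standards (concentration x, instrument response y)
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])
rng = np.random.default_rng(6)
y = 0.002 * x * (1 + rng.normal(0, 0.05, x.size))  # proportional (relative) noise

def weighted_linear_fit(x, y, weights):
    """Solve (X^T W X) beta = X^T W y for y = a + b*x."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(weights)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # [intercept a, slope b]

for label, w in [("1", np.ones_like(x)), ("1/x", 1 / x), ("1/x^2", 1 / x**2)]:
    a, b = weighted_linear_fit(x, y, w)
    # Accuracy at the low end of the curve, where weighting matters most
    back_calc = (y[0] - a) / b
    print(f"w = {label:6s} intercept = {a:+.4f}  slope = {b:.5f}  "
          f"back-calculated lowest standard = {back_calc:.2f}")
```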

  12. Observation and analysis of oscillations in linear accelerators

    International Nuclear Information System (INIS)

    Seeman, J.T.

    1991-11-01

    This report discusses the following topics on oscillations in linear accelerators: Betatron Oscillations; Betatron Oscillations at High Currents; Transverse Profile Oscillations; Transverse Profile Oscillations at High Currents; Oscillation and Profile Transient Jitter; and Feedback on Transverse Oscillations

  13. Large power factor and anomalous Hall effect and their correlation with observed linear magneto resistance in Co-doped Bi2Se3 3D topological insulator

    Science.gov (United States)

    Singh, Rahul; Shukla, K. K.; Kumar, A.; Okram, G. S.; Singh, D.; Ganeshan, V.; Lakhani, Archana; Ghosh, A. K.; Chatterjee, Sandip

    2016-09-01

    Magnetoresistance (MR), thermopower, magnetization and Hall effect measurements have been performed on Co-doped Bi2Se3 topological insulators. The undoped sample shows the maximum MR, as destructive interference due to a π-Berry phase leads to a decrease of MR. As Co is doped, the linearity in MR increases. The observed MR of Bi2Se3 can be explained with the classical model. The low-temperature MR behavior of the Co-doped samples cannot be explained with the same model, but can be explained with the quantum linear MR model. The magnetization behavior indicates the establishment of ferromagnetic ordering with Co doping. The Hall effect data also support the establishment of ferromagnetic ordering in the Co-doped Bi2Se3 samples by showing the anomalous Hall effect. Furthermore, when spectral weight suppression is insignificant, Bi2Se3 behaves as a dilute magnetic semiconductor. Moreover, the maximum power factor is observed when time reversal symmetry (TRS) is maintained. As the TRS is broken, the power factor value decreases, which indicates that with the rise of the Dirac cone above the Fermi level the anomalous Hall effect and the linearity in MR increase while the power factor decreases.

  14. Estimating linear effects in ANOVA designs: the easy way.

    Science.gov (United States)

    Pinhas, Michal; Tzelgov, Joseph; Ganor-Stern, Dana

    2012-09-01

    Research in cognitive science has documented numerous phenomena that are approximated by linear relationships. In the domain of numerical cognition, the use of linear regression for estimating linear effects (e.g., distance and SNARC effects) became common following Fias, Brysbaert, Geypens, and d'Ydewalle's (1996) study on the SNARC effect. While their work has become the model for analyzing linear effects in the field, it requires statistical analysis of individual participants and does not provide measures of the proportions of variability accounted for (cf. Lorch & Myers, 1990). In the present methodological note, using both the distance and SNARC effects as examples, we demonstrate how linear effects can be estimated in a simple way within the framework of repeated measures analysis of variance. This method allows for estimating effect sizes in terms of both slope and proportions of variability accounted for. Finally, we show that our method can easily be extended to estimate linear interaction effects, not just linear effects calculated as main effects.
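
    A sketch of the repeated-measures approach described above: apply centred linear contrast weights to each participant's condition means, then test the per-participant contrast scores (equivalently, slopes) against zero. The data are synthetic, and the factor levels, sample size and noise are assumptions, not the SNARC or distance-effect data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_levels = 24, 5

# Synthetic repeated-measures data: responses across 5 ordered factor levels
# (e.g. numerical distance), with a true linear trend plus subject noise
levels = np.arange(n_levels)
data = 500 - 12 * levels + rng.normal(0, 20, size=(n_subjects, n_levels))

# Centred linear contrast weights across the ordered levels, e.g. [-2, -1, 0, 1, 2]
weights = levels - levels.mean()

# Per-subject linear contrast scores, tested against zero (the "linear effect")
contrast_scores = data @ weights
t, p = stats.ttest_1samp(contrast_scores, 0.0)

# Equivalent per-subject slopes (contrast score rescaled by the sum of squared weights)
slopes = contrast_scores / (weights @ weights)
print(f"mean slope = {slopes.mean():.2f} per level, "
      f"t({n_subjects - 1}) = {t:.2f}, p = {p:.3g}")
```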

  15. Applicability of hybrid linear ion trap-high resolution mass spectrometry and quadrupole-linear ion trap-mass spectrometry for mycotoxin analysis in baby food.

    Science.gov (United States)

    Rubert, Josep; James, Kevin J; Mañes, Jordi; Soler, Carla

    2012-02-03

    Recent developments in mass spectrometers have created a paradoxical situation; different mass spectrometers are available, each of them with specific strengths and drawbacks. Hybrid instruments try to unify several advantages in one instrument. In this study two widely used hybrid instruments were compared: the hybrid quadrupole-linear ion trap-mass spectrometer (QTRAP®) and the hybrid linear ion trap-high resolution mass spectrometer (LTQ-Orbitrap®). Both instruments were applied to detect the presence of 18 selected mycotoxins in baby food. Analytical parameters were validated according to 2002/657/CE. Limits of quantification (LOQs) obtained by the QTRAP® instrument ranged from 0.45 to 45 μg kg⁻¹, while the lower limits of quantification (LLOQs) obtained by the LTQ-Orbitrap® were 7-70 μg kg⁻¹. The correlation coefficients (r) in both cases were above 0.989. These values highlight that the two instruments are complementary for the analysis of mycotoxins in baby food; while the QTRAP® reached the best sensitivity and selectivity, the LTQ-Orbitrap® allowed the identification of non-target and unknown compounds. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Assessment of Poisson, logit, and linear models for genetic analysis of clinical mastitis in Norwegian Red cows.

    Science.gov (United States)

    Vazquez, A I; Gianola, D; Bates, D; Weigel, K A; Heringstad, B

    2009-02-01

    Clinical mastitis is typically coded as presence/absence during some period of exposure, and records are analyzed with linear or binary data models. Because presence includes cows with multiple episodes, there is loss of information when a count is treated as a binary response. The Poisson model is designed for counting random variables, and although it is used extensively in the epidemiology of mastitis, it has rarely been used for studying the genetics of mastitis. Many models have been proposed for genetic analysis of mastitis, but they have not been formally compared. The main goal of this study was to compare linear (Gaussian), Bernoulli (with logit link), and Poisson models for the purpose of genetic evaluation of sires for mastitis in dairy cattle. The response variables were clinical mastitis (CM; 0, 1) and number of CM cases (NCM; 0, 1, 2, ...). Data consisted of records on 36,178 first-lactation daughters of 245 Norwegian Red sires distributed over 5,286 herds. Predictive ability of the models was assessed via 3-fold cross-validation using mean squared error of prediction (MSEP) as the end-point. Between-sire variance estimates for NCM were 0.065 in the Poisson model and 0.007 in the linear model. For CM the between-sire variance was 0.093 in the logit model and 0.003 in the linear model. The ratio between herd and sire variances for the models with NCM response was 4.6 and 3.5 for Poisson and linear, respectively, and for the CM models it was 3.7 in both the logit and linear models. The MSEP for all cows was similar. However, within healthy animals, MSEP was 0.085 (Poisson), 0.090 (linear for NCM), 0.053 (logit), and 0.056 (linear for CM). For mastitic animals the MSEP values were 1.206 (Poisson), 1.185 (linear for NCM response), 1.333 (logit), and 1.319 (linear for CM response). The models for the count variable performed better when predicting diseased animals and performed similarly to each other. Logit and linear models for CM had better predictive ability for healthy
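
    The three model families compared in the study can be sketched with statsmodels generalized linear models on synthetic data; the covariate, the data-generating process, and the use of a training-set MSEP in place of the paper's 3-fold cross-validation are simplifying assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 2000

# Synthetic covariate and mastitis counts (illustrative, not the Norwegian Red data)
x = rng.normal(0, 1, n)                 # e.g. a sire/herd effect proxy
lam = np.exp(-1.5 + 0.4 * x)            # Poisson mean per cow
ncm = rng.poisson(lam)                  # number of clinical mastitis cases
cm = (ncm > 0).astype(float)            # presence/absence of mastitis

X = sm.add_constant(x)

poisson_fit = sm.GLM(ncm, X, family=sm.families.Poisson()).fit()
logit_fit   = sm.GLM(cm,  X, family=sm.families.Binomial()).fit()
linear_fit  = sm.OLS(ncm, X).fit()

# Mean squared error of prediction on the training data (standing in for
# the 3-fold cross-validation used in the paper)
for name, fit, target in [("Poisson", poisson_fit, ncm),
                          ("logit", logit_fit, cm),
                          ("linear", linear_fit, ncm)]:
    msep = np.mean((target - fit.predict(X)) ** 2)
    print(f"{name:8s} MSEP = {msep:.4f}")
```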

  17. The effects of oestrogens on linear bone growth

    DEFF Research Database (Denmark)

    Juul, A

    2001-01-01

    Regulation of linear bone growth in children and adolescents comprises a complex interaction of hormones and growth factors. Growth hormone (GH) is considered to be the key hormone regulator of linear growth in childhood. The pubertal increase in growth velocity associated with GH has traditionally...... been attributed to testicular androgen secretion in boys, and to oestrogens or adrenal androgen secretion in girls. Research data indicating that oestrogen may be the principal hormone stimulating the pubertal growth spurt in boys as well as girls is reviewed. Such an action is mediated by oestrogen...... female growth spurt despite lack of androgen action. Oestrogens may also influence linear bone growth indirectly via modulation of the GH-insulin-like growth factor-I (IGF-I) axis. Thus, ER blockade diminishes endogenous GH secretion, androgen receptor (AR) blockade increases GH secretion in peripubertal...

  18. Non-linear thermal and structural analysis of a typical spent fuel silo

    International Nuclear Information System (INIS)

    Alvarez, L.M.; Mancini, G.R.; Spina, O.A.F.; Sala, G.; Paglia, F.

    1993-01-01

    A numerical method for the non-linear structural analysis of a typical reinforced concrete spent fuel silo under thermal loads is proposed. The numerical time integration was performed by means of a time explicit axisymmetric finite-difference numerical operator. An analysis was made of influences by heat, viscoelasticity and cracking upon the concrete behaviour between concrete pouring stage and the first period of the silo's normal operation. The following parameters were considered for the heat generation and transmission process: Heat generated during the concrete's hardening stage, Solar radiation effects, Natural convection, Spent-fuel heat generation. For the modelling of the reinforced concrete behaviour, use was made of a simplified formulation of: Visco-elastic effects, Thermal cracking, Steel reinforcement. A comparison between some experimental temperature characteristic values obtained from the numerical integration process and empirical data obtained from a 1:1 scaled prototype was also carried out. (author)

  19. Logistic regression analysis of risk factors for postoperative recurrence of spinal tumors and analysis of prognostic factors.

    Science.gov (United States)

    Zhang, Shanyong; Yang, Lili; Peng, Chuangang; Wu, Minfei

    2018-02-01

    The aim of the present study was to investigate the risk factors for postoperative recurrence of spinal tumors by logistic regression analysis and to analyze prognostic factors. In total, 77 male and 48 female patients with spinal tumors were selected in our hospital from January 2010 to December 2015 and divided into the benign (n=76) and malignant (n=49) groups. All the patients underwent microsurgical resection of spinal tumors and were reviewed regularly 3 months after the operation. The McCormick grading system was used to evaluate postoperative spinal cord function. Data were subjected to statistical analysis. Of the 125 cases, 63 cases showed improvement after operation, 50 cases were stable, and deterioration was found in 12 cases. The improvement rate of patients with cervical spine tumors, which reached 56.3%, was the highest. Fifty-two cases of sensory disturbance, 34 cases of pain, 30 cases of inability to exercise, 26 cases of ataxia, and 12 cases of sphincter disorders were found after operation. Seventy-two cases (57.6%) underwent total resection, 18 cases (14.4%) received subtotal resection, 23 cases (18.4%) received partial resection, and 12 cases (9.6%) were only treated with biopsy/decompression. Postoperative recurrence was found in 57 cases (45.6%). The mean recurrence time of patients in the malignant group was 27.49±6.09 months, and the mean recurrence time of patients in the benign group was 40.62±4.34 months. The results were significantly different. Logistic regression analysis of total resection-related factors showed that total resection should be the preferred treatment for patients with benign tumors, thoracic and lumbosacral tumors, and lower McCormick grade, as well as patients without syringomyelia and intramedullary tumors. Logistic regression analysis of recurrence-related factors revealed that the recurrence rate was relatively higher in patients with malignant, cervical, thoracic and lumbosacral, intramedullary tumors, and higher Mc

  20. Axial displacement of external and internal implant-abutment connection evaluated by linear mixed model analysis.

    Science.gov (United States)

    Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo

    2015-01-01

    To analyze the axial displacement of external and internal implant-abutment connections after cyclic loading. Three groups of external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group) were prepared. Cyclic loading was applied to implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.