WorldWideScience

Sample records for linear factor analysis

  1. Identification of noise in linear data sets by factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, Ph.K.

    1982-01-01

    Classical factor analysis is a technique that can identify bad data points after the data have been generated. Its ability to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it in order to confidently isolate errors. (author)

  2. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools are discussed and illustrated: correlation, ANOVA, linear regression, factor analysis, and linear discriminant analysis. It is shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, understate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination abilities of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement error, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
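
    The following sketch is not from the paper above; it is a minimal Python illustration (with hypothetical reliabilities and variable names) of the attenuation effect described there: adding random measurement error to two correlated true scores shrinks the observed correlation roughly by the square root of the product of the reliabilities.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000

      # True scores with a known correlation of 0.6
      true_x = rng.normal(size=n)
      true_y = 0.6 * true_x + np.sqrt(1 - 0.6**2) * rng.normal(size=n)

      # Add random measurement error so that each scale has reliability ~0.7
      err_sd = np.sqrt(1 / 0.7 - 1)
      obs_x = true_x + err_sd * rng.normal(size=n)
      obs_y = true_y + err_sd * rng.normal(size=n)

      print("true correlation:    ", np.corrcoef(true_x, true_y)[0, 1])
      print("observed correlation:", np.corrcoef(obs_x, obs_y)[0, 1])
      # Classical attenuation formula: r_obs ~ r_true * sqrt(rel_x * rel_y) ~ 0.6 * 0.7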

  3. [Multiple linear regression and ROC curve analysis of the factors of lumbar spine bone mineral density].

    Science.gov (United States)

    Zhang, Xiaodong; Zhao, Yinxia; Hu, Shaoyong; Hao, Shuai; Yan, Jiewen; Zhang, Lingyan; Zhao, Jing; Li, Shaolin

    2015-09-01

    To investigate the correlation between lumbar vertebral bone mineral density (BMD) and age, gender, height, weight, body mass index, waistline, hipline, bone marrow fat and abdominal fat, and to explore the key factor affecting BMD. A total of 72 cases were randomly recruited. All subjects underwent a single-voxel spectroscopic examination of the third lumbar vertebra on a 1.5T MR scanner, and fat fractions (FF%) were measured. Quantitative CT was also performed to obtain the BMD of L3 and the corresponding abdominal subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). Statistical analyses were performed with SPSS 19.0. Multiple linear regression showed that, apart from age and FF%, no factor showed a significant association with BMD (P>0.05). The correlations of age and FF% with BMD were significantly negative (r=-0.830, -0.521, P<0.05). ROC curve analysis showed that the sensitivity and specificity for predicting osteoporosis were 81.8% and 86.9% with an age threshold of 58.5 years, and 90.9% and 55.7% with an FF% threshold of 52.8%. The lumbar vertebral BMD was significantly and negatively correlated with age and bone marrow FF%, but it was not significantly correlated with gender, height, weight, BMI, waistline, hipline, SAT or VAT. Age was the critical factor.
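
    As a hedged illustration of the kind of analysis described above (not the authors' actual code or data), the sketch below fits a multiple linear regression and runs an ROC analysis in Python; the file name and column names (age, ff_percent, bmd, osteoporosis) are hypothetical.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from sklearn.metrics import roc_curve, roc_auc_score

      # One row per subject, with hypothetical columns: age, ff_percent, bmd, osteoporosis (0/1)
      df = pd.read_csv("lumbar_bmd.csv")

      # Multiple linear regression of BMD on candidate predictors
      X = sm.add_constant(df[["age", "ff_percent"]])
      print(sm.OLS(df["bmd"], X).fit().summary())

      # ROC analysis: how well does age alone discriminate osteoporosis?
      fpr, tpr, thresholds = roc_curve(df["osteoporosis"], df["age"])
      print("AUC:", roc_auc_score(df["osteoporosis"], df["age"]))
      # Youden's index picks the cut-off maximizing sensitivity + specificity - 1
      best = np.argmax(tpr - fpr)
      print("best age threshold:", thresholds[best])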

  4. Linear model analysis of the influencing factors of boar longevity in Southern China.

    Science.gov (United States)

    Wang, Chao; Li, Jia-Lian; Wei, Hong-Kui; Zhou, Yuan-Fei; Jiang, Si-Wen; Peng, Jian

    2017-04-15

    This study aimed to investigate the factors influencing the boar herd life month (BHLM) in Southern China. A total of 1630 records of culled boars from nine artificial insemination centers were collected from January 2013 to May 2016. A logistic regression model and two linear models were used to analyze the effects of breed, housing type, age at herd entry, and seed stock herd on boar removal reason and BHLM, respectively. Boar breed and age at herd entry had significant effects on the removal reasons (P<0.05). Both linear models (with or without the removal reason included) showed that boars raised individually in stalls exhibited a shorter BHLM than those raised in pens (P<0.05), and BHLM was also affected by age at introduction. Copyright © 2017. Published by Elsevier Inc.

  5. Assessing the discriminating power of item and test scores in the linear factor-analysis model

    Directory of Open Access Journals (Sweden)

    Pere J. Ferrando

    2012-01-01

    Rigorous, model-based proposals for studying the imprecise concept of "discriminating power" are scarce and generally limited to non-linear models for binary items. This article proposes a general framework for assessing the discriminating power of item and test scores that are calibrated with the common one-factor model. The proposal is organized around three criteria: (a) type of score, (b) range of discrimination, and (c) specific aspect being assessed. Within the proposed framework, (a) relations among 16 measures, 6 of which appear to be new, are discussed, and (b) the relations between them are studied. The usefulness of the proposal in psychometric applications that use the factor model is illustrated with an empirical example.

  6. Generalized Linear Covariance Analysis

    Science.gov (United States)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic", and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  7. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    Science.gov (United States)

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-09-27

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to deal with this hierarchical structure data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7% and 50.1%, respectively. The WGLIMM analysis showed a significant residence effect (p<0.05). Regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions.

  8. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi...

  9. Linear Regression Analysis

    CERN Document Server

    Seber, George A F

    2012-01-01

    Concise, mathematically clear, and comprehensive treatment of the subject. Expanded coverage of diagnostics and methods of model fitting. Requires no specialized knowledge beyond a good grasp of matrix algebra and some acquaintance with straight-line regression and simple analysis of variance models. More than 200 problems throughout the book plus outline solutions for the exercises. This revision has been extensively class-tested.

  10. Analysis of linear energy transfers and quality factors of charged particles produced by spontaneous fission neutrons from 252Cf and 244Pu in the human body

    International Nuclear Information System (INIS)

    Endo, A.; Sato, T.

    2013-01-01

    Absorbed doses, linear energy transfers (LETs) and quality factors of secondary charged particles in organs and tissues, generated via the interactions of the spontaneous fission neutrons from 252Cf and 244Pu within the human body, were studied using the Particle and Heavy Ion Transport Code System (PHITS) coupled with the ICRP Reference Phantom. Both the absorbed doses and the quality factors in target organs generally decrease with increasing distance from the source organ. The analysis of LET distributions of secondary charged particles led to the identification of the relationship between LET spectra and target-source organ locations. A comparison between human body-averaged mean quality factors and fluence-averaged radiation weighting factors showed that the current numerical conventions for the radiation weighting factors of neutrons, updated in ICRP Publication 103, and the quality factors for internal exposure are valid. (authors)

  11. Multiple Linear Regression Analysis of Factors Affecting Real Property Price Index From Case Study Research In Istanbul/Turkey

    Science.gov (United States)

    Denli, H. H.; Koc, Z.

    2015-12-01

    Estimation of real property values according to fixed standards is difficult to apply consistently across time and location. Regression analysis constructs mathematical models that describe or explain relationships that may exist between variables. The problem of identifying price differences of properties to obtain a price index can be converted into a regression problem, and standard techniques of regression analysis can be used to estimate the index. When regression analysis is applied to real estate valuation, using properties as presented in the marketing process with their current characteristics and quantifiers, the method helps identify the factors or variables that are effective in the formation of value. In this study, prices of housing for sale in Zeytinburnu, a district in Istanbul, are associated with their characteristics to obtain a price index, based on information taken from a real estate web page. The variables used for the analysis are age, size in m², number of floors in the building, floor number of the unit, and number of rooms. The price of the property is the dependent variable, whereas the rest are independent variables. Prices of 60 properties were used for the analysis. Locations of equal price were identified and plotted on the map, and equivalence curves were drawn to delineate equally valued zones.
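
    A minimal sketch of such a hedonic regression is given below; it is not the authors' implementation, and the file name, column layout and variable set are assumptions chosen to mirror the variables listed in the abstract.

      import numpy as np

      # Hypothetical listing data: one row per property
      # columns: age (years), size_m2, floors_in_building, floor_number, rooms, price
      data = np.loadtxt("zeytinburnu_listings.csv", delimiter=",", skiprows=1)
      X_raw, price = data[:, :5], data[:, 5]

      # Ordinary least squares: price = b0 + b1*age + ... + b5*rooms
      X = np.column_stack([np.ones(len(X_raw)), X_raw])
      coef, residuals, rank, _ = np.linalg.lstsq(X, price, rcond=None)
      print("intercept and coefficients:", coef)

      # The fitted price surface can then be contoured on a map to draw equivalence curves.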

  12. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel; Genton, Marc G.

    2018-01-01

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.
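
    A minimal simulation of the additive common-factor construction described above might look as follows; the loadings and sample size are arbitrary, and the rank transform is only used here to look at the resulting copula empirically, not to reproduce the estimation procedure of the paper.

      import numpy as np
      from scipy.stats import rankdata

      rng = np.random.default_rng(1)
      n, d = 5_000, 4
      loadings = np.array([1.0, 0.8, 1.2, 0.5])   # one parameter per margin: O(d) in total

      # Additive structure: each component = loading * common factor + independent term
      common = rng.standard_normal(n)
      X = loadings[None, :] * common[:, None] + rng.standard_normal((n, d))

      # The copula is obtained from the ranks (probability integral transform)
      U = np.apply_along_axis(lambda c: rankdata(c) / (n + 1), 0, X)
      print("pairwise correlations on the copula scale:")
      print(np.corrcoef(U, rowvar=False).round(2))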

  13. Linear factor copula models and their properties

    KAUST Repository

    Krupskii, Pavel

    2018-04-25

    We consider a special case of factor copula models with additive common factors and independent components. These models are flexible and parsimonious with O(d) parameters where d is the dimension. The linear structure allows one to obtain closed form expressions for some copulas and their extreme‐value limits. These copulas can be used to model data with strong tail dependencies, such as extreme data. We study the dependence properties of these linear factor copula models and derive the corresponding limiting extreme‐value copulas with a factor structure. We show how parameter estimates can be obtained for these copulas and apply one of these copulas to analyse a financial data set.

  14. Linear Accelerator Stereotactic Radiosurgery of Central Nervous System Arteriovenous Malformations: A 15-Year Analysis of Outcome-Related Factors in a Single Tertiary Center.

    Science.gov (United States)

    Thenier-Villa, José Luis; Galárraga-Campoverde, Raúl Alejandro; Martínez Rolán, Rosa María; De La Lama Zaragoza, Adolfo Ramón; Martínez Cueto, Pedro; Muñoz Garzón, Víctor; Salgado Fernández, Manuel; Conde Alonso, Cesáreo

    2017-07-01

    Linear accelerator stereotactic radiosurgery is one of the modalities available for the treatment of central nervous system arteriovenous malformations (AVMs). The aim of this study was to describe our 15-year experience with this technique in a single tertiary center and the analysis of outcome-related factors. From 1998 to 2013, 195 patients were treated with linear accelerator-based radiosurgery; we conducted a retrospective study collecting patient- and AVM-related variables. Treatment outcomes were obliteration, posttreatment hemorrhage, symptomatic radiation-induced changes, and 3-year neurologic status. We also analyzed prognostic factors of each outcome and predictability analysis of 5 scales: Spetzler-Martin grade, Lawton-Young supplementary and Lawton combined scores, radiosurgery-based AVM score, Virginia Radiosurgery AVM Scale, and Heidelberg score. Overall obliteration rate was 81%. Nidus diameter and venous drainage were predictive of obliteration (P<0.05). Linear accelerator-based radiosurgery is a useful, valid, effective, and safe modality for treatment of brain AVMs. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Orthogonal sparse linear discriminant analysis

    Science.gov (United States)

    Liu, Zhonghua; Liu, Gang; Pu, Jiexin; Wang, Xiaohong; Wang, Haijun

    2018-03-01

    Linear discriminant analysis (LDA) is a linear feature extraction approach that has received much attention. Building on LDA, researchers have proposed many variant versions, but the inherent problems of LDA are not solved well by these variants. The major disadvantages of classical LDA are as follows. First, it is sensitive to outliers and noise. Second, only the global discriminant structure is preserved, while the local discriminant information is ignored. In this paper, we present a new orthogonal sparse linear discriminant analysis (OSLDA) algorithm. The k nearest neighbour graph is first constructed to preserve the local discriminant information of sample points. Then, an L2,1-norm constraint on the projection matrix is used as the loss function, which makes the proposed method robust to outliers in the data. Extensive experiments have been performed on several standard public image databases, and the results demonstrate the performance of the proposed OSLDA algorithm.
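
    The proposed OSLDA algorithm is not reproduced here; for orientation, the snippet below runs classical LDA (the baseline the paper improves on) on a standard public image data set using scikit-learn.

      from sklearn.datasets import load_digits
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      # Classical LDA as feature extractor and classifier on a public image data set
      X, y = load_digits(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      lda = LinearDiscriminantAnalysis(n_components=9)   # at most n_classes - 1 components
      lda.fit(X_tr, y_tr)
      print("test accuracy:", lda.score(X_te, y_te))
      print("shape of projected features:", lda.transform(X_te).shape)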

  16. Linear Algebraic Method for Non-Linear Map Analysis

    International Nuclear Information System (INIS)

    Yu, L.; Nash, B.

    2009-01-01

    We present a newly developed method to analyze some non-linear dynamics problems such as the Henon map using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition which is widely used in conventional linear algebra.
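
    As a rough illustration of the linear-algebraic viewpoint (not the authors' Jordan-decomposition analysis), the sketch below iterates the Henon map, locates its fixed point, and examines the eigenvalues of the Jacobian of the map there; the standard parameter values a=1.4, b=0.3 are assumed.

      import numpy as np

      a, b = 1.4, 0.3                      # standard Henon parameters

      # Iterate the map for a short orbit
      x, y = 0.1, 0.1
      for _ in range(1000):
          x, y = 1 - a * x**2 + y, b * x

      # Fixed point of the map (positive root of a*x^2 + (1-b)*x - 1 = 0)
      x_star = (-(1 - b) + np.sqrt((1 - b)**2 + 4 * a)) / (2 * a)
      y_star = b * x_star

      # Jacobian of (x, y) -> (1 - a*x^2 + y, b*x), evaluated at the fixed point
      J = np.array([[-2 * a * x_star, 1.0], [b, 0.0]])
      print("orbit point after 1000 iterations:", (x, y))
      print("eigenvalues of the linearization:", np.linalg.eig(J)[0])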

  17. Analysis of key factors influencing the evaporation performances of an oriented linear cutting copper fiber sintered felt

    Science.gov (United States)

    Pan, Minqiang; Zhong, Yujian

    2018-01-01

    Porous structures can effectively enhance heat transfer efficiency. A micro vaporizer using an oriented linear cutting copper fiber sintered felt is proposed in this work. Multiple long cutting copper fibers are first fabricated with a multi-tooth tool and then sintered together in parallel to form uniform-thickness metal fiber sintered felts that provide oriented microchannels. The temperature rise response and thermal conversion efficiency are experimentally investigated to evaluate the influences of porosity, surface structure, feed flow rate and input power on the evaporation characteristics. The temperature rise response of water is mainly affected by input power and feed flow rate: high input power and low feed flow rate give a better temperature rise response. Porosity, rather than surface structure, plays an important role in the temperature rise response of water at relatively high input power. The thermal conversion efficiency is dominated by the input power and surface structure. The oriented linear cutting copper fiber sintered felts of three different porosities show better thermal conversion efficiency than the oriented linear copper wire sintered felt when the input power is less than 115 W. All the sintered felts have almost the same thermal conversion performance at high input power.

  18. Generalised linear mixed models analysis of risk factors for contamination of Danish broiler flocks with Salmonella typhimurium

    DEFF Research Database (Denmark)

    Chriél, Mariann; Stryhn, H.; Dauphin, G.

    1999-01-01

    We present a retrospective observational study of risk factors associated with the occurrence of Salmonella typhimurium (ST) in Danish broiler flocks. The study is based on recordings from 1994 in the ante-mortem database maintained by the Danish Poultry Council. The epidemiological units are the broiler flocks (about 4000 flocks), which are clustered within producers. Broiler flocks with ST-infected parent stocks show increased risk of salmonella infection, and the hatchery also affects the salmonella status significantly. Among the rearing factors, only the use of medicine as well as the time ...

  19. Confirmatory Factor Analysis and Multiple Linear Regression of the Neck Disability Index: Assessment If Subscales Are Equally Relevant in Whiplash and Nonspecific Neck Pain.

    Science.gov (United States)

    Croft, Arthur C; Milam, Bryce; Meylor, Jade; Manning, Richard

    2016-06-01

    Because of previously published recommendations to modify the Neck Disability Index (NDI), the purpose of the present study was to evaluate the responsiveness and dimensionality of the NDI within a population of adult whiplash-injured subjects. Subjects who had sustained whiplash injuries of grade 2 or higher completed an NDI questionnaire. There were 123 subjects (55% female), of whom 36% had recovered and 64% had chronic symptoms. NDI subscales were analyzed using confirmatory factor analysis, considering only the subscales and, secondly, using sex as an 11th variable. The subscales were also tested with multiple linear regression modeling using the total score as a target variable. When considering only the 10 NDI subscales, only a single factor emerged, with an eigenvalue of 5.4, explaining 53.7% of the total variance. Strong correlations (> .55) were observed (P < .05). A multifactor model of the NDI is not justified based on our results, and in this population of whiplash subjects, the NDI was unidimensional, demonstrating high internal consistency and supporting the original validation study of Vernon and Mior.
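
    The snippet below is an illustrative check of unidimensionality with generic tools (eigenvalues of the item correlation matrix and a one-factor model from scikit-learn); it is not the confirmatory factor analysis software used in the study, and the file and column layout are hypothetical.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import FactorAnalysis

      # items: one column per NDI subscale score (hypothetical file and column layout)
      items = pd.read_csv("ndi_items.csv")

      # Eigenvalues of the correlation matrix: a single dominant eigenvalue
      # suggests an essentially unidimensional scale
      eigvals = np.linalg.eigvalsh(np.corrcoef(items.values, rowvar=False))[::-1]
      print("eigenvalues:", eigvals.round(2))
      print("share of variance for first factor:", eigvals[0] / eigvals.sum())

      # One-factor model: loadings of each subscale on the common factor
      fa = FactorAnalysis(n_components=1).fit(items.values)
      print("loadings:", fa.components_.ravel().round(2))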

  20. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  1. A multiple linear regression analysis of factors affecting the simulated Basic Life Support (BLS) performance with Automated External Defibrillator (AED) in Flemish lifeguards.

    Science.gov (United States)

    Iserbyt, Peter; Schouppe, Gilles; Charlier, Nathalie

    2015-04-01

    Research investigating lifeguards' performance of Basic Life Support (BLS) with Automated External Defibrillator (AED) is limited. Assessing simulated BLS/AED performance in Flemish lifeguards and identifying factors affecting this performance. Six hundred and sixteen (217 female and 399 male) certified Flemish lifeguards (aged 16-71 years) performed BLS with an AED on a Laerdal ResusciAnne manikin simulating an adult victim of drowning. Stepwise multiple linear regression analysis was conducted with BLS/AED performance as outcome variable and demographic data as explanatory variables. Mean BLS/AED performance for all lifeguards was 66.5%. Compression rate and depth adhered closely to ERC 2010 guidelines. Ventilation volume and flow rate exceeded the guidelines. A significant regression model, F(6, 415)=25.61, p<.001, ES=.38, explained 27% of the variance in BLS performance (R2=.27). Significant predictors were age (beta=-.31, p<.001), years of certification (beta=-.41, p<.001), time on duty per year (beta=-.25, p<.001), practising BLS skills (beta=.11, p=.011), and being a professional lifeguard (beta=-.13, p=.029). 71% of lifeguards reported not practising BLS/AED. Being young, recently certified, few days of employment per year, practising BLS skills and not being a professional lifeguard are factors associated with higher BLS/AED performance. Measures should be taken to prevent BLS/AED performances from decaying with age and longer certification. Refresher courses could include a formal skills test and lifeguards should be encouraged to practise their BLS/AED skills. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Linear and logistic regression analysis

    NARCIS (Netherlands)

    Tripepi, G.; Jager, K. J.; Dekker, F. W.; Zoccali, C.

    2008-01-01

    In previous articles of this series, we focused on relative risks and odds ratios as measures of effect to assess the relationship between exposure to risk factors and clinical outcomes and on control for confounding. In randomized clinical trials, the random allocation of patients is hoped to

  3. Linear Algebra and Analysis Masterclasses

    Indian Academy of Sciences (India)

    ... mathematical physics, computer science, numerical analysis, and statistics. ... research and has been used in mathematical physics, computer science, ... concrete examples of the spaces, enabling application of the theory to a variety of problems.

  4. Applied linear algebra and matrix analysis

    CERN Document Server

    Shores, Thomas S

    2018-01-01

    In its second edition, this textbook offers a fresh approach to matrix and linear algebra. Its blend of theory, computational exercises, and analytical writing projects is designed to highlight the interplay between these aspects of an application. This approach places special emphasis on linear algebra as an experimental science that provides tools for solving concrete problems. The second edition’s revised text discusses applications of linear algebra like graph theory and network modeling methods used in Google’s PageRank algorithm. Other new materials include modeling examples of diffusive processes, linear programming, image processing, digital signal processing, and Fourier analysis. These topics are woven into the core material of Gaussian elimination and other matrix operations; eigenvalues, eigenvectors, and discrete dynamical systems; and the geometrical aspects of vector spaces. Intended for a one-semester undergraduate course without a strict calculus prerequisite, Applied Linear Algebra and M...

  5. The analysis and design of linear circuits

    CERN Document Server

    Thomas, Roland E; Toussaint, Gregory J

    2009-01-01

    The Analysis and Design of Linear Circuits, 6e gives the reader the opportunity to not only analyze, but also design and evaluate linear circuits as early as possible. The text's abundance of problems, applications, pedagogical tools, and realistic examples helps engineers develop the skills needed to solve problems, design practical alternatives, and choose the best design from several competing solutions. Engineers searching for an accessible introduction to resistance circuits will benefit from this book that emphasizes the early development of engineering judgment.

  6. Perturbation analysis of linear control problems

    International Nuclear Information System (INIS)

    Petkov, Petko; Konstantinov, Mihail

    2017-01-01

    The paper presents a brief overview of the technique of splitting operators, proposed by the authors and intended for perturbation analysis of control problems involving unitary and orthogonal matrices. Combined with the technique of Lyapunov majorants and the implementation of the Banach and Schauder fixed point principles, it allows one to obtain rigorous non-local perturbation bounds for a set of sensitivity analysis problems. Among them are the reduction of linear systems into orthogonal canonical forms, the feedback synthesis problem and the pole assignment problem in particular, as well as other important problems in control theory and linear algebra. Key words: perturbation analysis, canonical forms, feedback synthesis

  7. Analysis of Linear Hybrid Systems in CLP

    DEFF Research Database (Denmark)

    Banda, Gourinath; Gallagher, John Patrick

    2009-01-01

    In this paper we present a procedure for representing the semantics of linear hybrid automata (LHAs) as constraint logic programs (CLP); flexible and accurate analysis and verification of LHAs can then be performed using generic CLP analysis and transformation tools. LHAs provide an expressive...

  8. Linear discriminant analysis for welding fault detection

    International Nuclear Information System (INIS)

    Li, X.; Simpson, S.W.

    2010-01-01

    This work presents a new method for real time welding fault detection in industry based on Linear Discriminant Analysis (LDA). A set of parameters was calculated from one second blocks of electrical data recorded during welding and based on control data from reference welds under good conditions, as well as faulty welds. Optimised linear combinations of the parameters were determined with LDA and tested with independent data. Short arc welds in overlap joints were studied with various power sources, shielding gases, wire diameters, and process geometries. Out-of-position faults were investigated. Application of LDA fault detection to a broad range of welding procedures was investigated using a similarity measure based on Principal Component Analysis. The measure determines which reference data are most similar to a given industrial procedure and the appropriate LDA weights are then employed. Overall, results show that Linear Discriminant Analysis gives an effective and consistent performance in real-time welding fault detection.

  9. Preoperative factors affecting cost and length of stay for isolated off-pump coronary artery bypass grafting: hierarchical linear model analysis.

    Science.gov (United States)

    Shinjo, Daisuke; Fushimi, Kiyohide

    2015-11-17

    To determine the effect of preoperative patient and hospital factors on resource use, cost and length of stay (LOS) among patients undergoing off-pump coronary artery bypass grafting (OPCAB). Observational retrospective study. Data from the Japanese Administrative Database. Patients who underwent isolated, elective OPCAB between April 2011 and March 2012. The primary outcomes of this study were inpatient cost and LOS associated with OPCAB. A two-level hierarchical linear model was used to examine the effects of patient and hospital characteristics on inpatient costs and LOS. The independent variables were patient and hospital factors. We identified 2491 patients who underwent OPCAB at 268 hospitals. The mean cost of OPCAB was $40,665 ± 7774, and the mean LOS was 23.4 ± 8.2 days. The study found that select patient factors and certain comorbidities were associated with a high cost and long LOS. A high hospital OPCAB volume was associated with a low cost (-6.6%; p=0.024) as well as a short LOS (-17.6%; p<0.05). Preoperative patient factors and hospital OPCAB volume thus affect both cost and LOS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
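
    A two-level model of this kind can be sketched with statsmodels as below; the formula, variable names and file are hypothetical and only indicate the structure (patient-level predictors with a random intercept per hospital), not the authors' actual specification.

      import pandas as pd
      import statsmodels.formula.api as smf

      # One row per patient; 'hospital' identifies the clustering level (hypothetical columns)
      df = pd.read_csv("opcab_admissions.csv")

      # Two-level model: log cost on patient factors, with a random intercept per hospital
      model = smf.mixedlm("log_cost ~ age + sex + comorbidity_score + high_volume_hospital",
                          data=df, groups=df["hospital"])
      result = model.fit()
      print(result.summary())
      # The same formula with 'length_of_stay' as the response handles the LOS outcome.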

  10. Analysis of Nonlinear Dynamics in Linear Compressors Driven by Linear Motors

    Science.gov (United States)

    Chen, Liangyuan

    2018-03-01

    The analysis of the dynamic characteristics of the mechatronic system is of great significance for linear motor design and control. Steady-state nonlinear response characteristics of a linear compressor are investigated theoretically based on linearized and nonlinear models. First, the influencing factors considering the nonlinear gas force load were analyzed. Then, a simple linearized model was set up to analyze the influence on the stroke and resonance frequency. Finally, the nonlinear model was set up to analyze the effects of piston mass, spring stiffness and driving force as examples of design parameter variation. The simulation results show that the stroke can be set by adjusting the excitation amplitude and frequency, that the equilibrium position can be adjusted through the DC input, and that for the most efficient operation the operating frequency must always equal the resonance frequency.

  11. Signals and transforms in linear systems analysis

    CERN Document Server

    Wasylkiwskyj, Wasyl

    2013-01-01

    Signals and Transforms in Linear Systems Analysis covers the subject of signals and transforms, particularly in the context of linear systems theory. Chapter 2 provides the theoretical background for the remainder of the text. Chapter 3 treats Fourier series and integrals. Particular attention is paid to convergence properties at step discontinuities. This includes the Gibbs phenomenon and its amelioration via the Fejer summation techniques. Special topics include modulation and analytic signal representation, Fourier transforms and analytic function theory, time-frequency analysis and frequency dispersion. Fundamentals of linear system theory for LTI analogue systems, with a brief account of time-varying systems, are covered in Chapter 4. Discrete systems are covered in Chapters 6 and 7. The Laplace transform treatment in Chapter 5 relies heavily on analytic function theory, as does Chapter 8 on Z-transforms. The necessary background on complex variables is provided in Appendix A. This book is intended to...

  12. Linear Covariance Analysis for a Lunar Lander

    Science.gov (United States)

    Jang, Jiann-Woei; Bhatt, Sagar; Fritz, Matthew; Woffinden, David; May, Darryl; Braden, Ellen; Hannan, Michael

    2017-01-01

    A next-generation lunar lander Guidance, Navigation, and Control (GNC) system, which includes a state-of-the-art optical sensor suite, is proposed in a concept design cycle. The design goal is to allow the lander to softly land within the prescribed landing precision. The achievement of this precision landing requirement depends on proper selection of the sensor suite. In this paper, a robust sensor selection procedure is demonstrated using a Linear Covariance (LinCov) analysis tool developed by Draper.

  13. Linear functional analysis for scientists and engineers

    CERN Document Server

    Limaye, Balmohan V

    2016-01-01

    This book provides a concise and meticulous introduction to functional analysis. Since the topic draws heavily on the interplay between the algebraic structure of a linear space and the distance structure of a metric space, functional analysis is increasingly gaining the attention of not only mathematicians but also scientists and engineers. The purpose of the text is to present the basic aspects of functional analysis to this varied audience, keeping in mind the considerations of applicability. A novelty of this book is the inclusion of a result by Zabreiko, which states that every countably subadditive seminorm on a Banach space is continuous. Several major theorems in functional analysis are easy consequences of this result. The entire book can be used as a textbook for an introductory course in functional analysis without having to make any specific selection from the topics presented here. Basic notions in the setting of a metric space are defined in terms of sequences. These include total boundedness, c...

  14. Domination spaces and factorization of linear and multilinear ...

    African Journals Online (AJOL)

    It is well known that not every summability property for multilinear operators leads to a factorization theorem. In this paper we undertake a detailed study of factorization schemes for summing linear and nonlinear operators. Our aim is to integrate under the same theory a wide family of classes of mappings for which a Pietsch ...

  15. The Linear Time Frequency Analysis Toolbox

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel; Torrésani, Bruno; Balazs, Peter

    2011-01-01

    The Linear Time Frequency Analysis Toolbox is a Matlab/Octave toolbox for computational time-frequency analysis. It is intended both as an educational and a computational tool. The toolbox provides the basic Gabor, Wilson and MDCT transforms along with routines for constructing windows (filter prototypes) and routines for manipulating coefficients. It also provides a bunch of demo scripts devoted either to demonstrating the main functions of the toolbox, or to exemplifying their use in specific signal processing applications. In this paper we describe the used algorithms, their mathematical background...

  16. Linear Covariance Analysis and Epoch State Estimators

    Science.gov (United States)

    Markley, F. Landis; Carpenter, J. Russell

    2014-01-01

    This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.

  17. Common pitfalls in statistical analysis: Linear regression analysis

    Directory of Open Access Journals (Sweden)

    Rakesh Aggarwal

    2017-01-01

    In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis.

  18. On the null distribution of Bayes factors in linear regression

    Science.gov (United States)

    We show that under the null, the 2 log (Bayes factor) is asymptotically distributed as a weighted sum of chi-squared random variables with a shifted mean. This claim holds for Bayesian multi-linear regression with a family of conjugate priors, namely, the normal-inverse-gamma prior, the g-prior, and...

  19. Airfoil stall interpreted through linear stability analysis

    Science.gov (United States)

    Busquet, Denis; Juniper, Matthew; Richez, Francois; Marquet, Olivier; Sipp, Denis

    2017-11-01

    Although airfoil stall has been widely investigated, the origin of this phenomenon, which manifests as a sudden drop of lift, is still not clearly understood. In the specific case of static stall, multiple steady solutions have been identified experimentally and numerically around the stall angle. We are interested here in investigating the stability of these steady solutions so as to first model and then control the dynamics. The study is performed on a 2D helicopter blade airfoil OA209 at low Mach number (M = 0.2) and high Reynolds number (Re = 1.8 × 10^6). Steady RANS computation using a Spalart-Allmaras model is coupled with continuation methods (pseudo-arclength and Newton's method) to obtain steady states for several angles of incidence. The results show one upper branch (high lift) and one lower branch (low lift) connected by a middle branch, characterizing a hysteresis phenomenon. A linear stability analysis performed around these equilibrium states highlights a mode responsible for stall, which starts with a low frequency oscillation. A bifurcation scenario is deduced from the behaviour of this mode. To shed light on the nonlinear behavior, a low order nonlinear model is created with the same linear stability behavior as that observed for the airfoil.

  20. Linear stability analysis of heated parallel channels

    International Nuclear Information System (INIS)

    Nourbakhsh, H.P.; Isbin, H.S.

    1982-01-01

    An analysis is presented of the thermal-hydraulic stability of flow in parallel channels covering the range from inlet subcooling to exit superheat. The model is based on a one-dimensional drift velocity formulation of the two-phase flow conservation equations. The system of equations is linearized by assuming small disturbances about the steady state. The dynamic response of the system to an inlet flow perturbation is derived, yielding the characteristic equation which predicts the onset of instabilities. A specific application is carried out for homogeneous and regionally uniformly heated systems. The particular case of equal characteristic frequencies of the two-phase and single-phase vapor regions is studied in detail. The D-partition method and the Mikhailov stability criterion are used for determining the marginal stability boundary. Stability predictions from the present analysis are compared with the experimental data from the solar test facility. 8 references

  1. Form factors in the projected linear chiral sigma model

    International Nuclear Information System (INIS)

    Alberto, P.; Coimbra Univ.; Bochum Univ.; Ruiz Arriola, E.; Fiolhais, M.; Urbano, J.N.; Coimbra Univ.; Goeke, K.; Gruemmer, F.; Bochum Univ.

    1990-01-01

    Several nucleon form factors are computed within the framework of the linear chiral soliton model. To this end, variational means and projection techniques applied to generalized hedgehog quark-boson Fock states are used. In this procedure the Goldberger-Treiman relation and a virial theorem for the pion-nucleon form factor are well fulfilled, demonstrating the consistency of the treatment. Both proton and neutron charge form factors are correctly reproduced, as well as the proton magnetic one. The shapes of the neutron magnetic and of the axial form factors are good, but their absolute values at the origin are too large. The slopes of all the form factors at zero momentum transfer are in good agreement with the experimental data. The pion-nucleon form factor exhibits to a great extent a monopole shape with a cut-off mass of Λ=690 MeV. Electromagnetic form factors for the γNΔ vertex and the nucleon spin distribution are also evaluated and discussed. (orig.)

  2. Incomplete factorization technique for positive definite linear systems

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1980-01-01

    This paper describes a technique for solving the large sparse symmetric linear systems that arise from the application of finite element methods. The technique combines an incomplete factorization method called the shifted incomplete Cholesky factorization with the method of generalized conjugate gradients. The shifted incomplete Cholesky factorization produces a splitting of the matrix A that is dependent upon a parameter α. It is shown that if A is positive definite, then there is some α for which this splitting is possible and that this splitting is at least as good as the Jacobi splitting. The method is shown to be more efficient on a set of test problems than either direct methods or explicit iteration schemes
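
    SciPy does not ship the shifted incomplete Cholesky splitting described above, so the sketch below uses an incomplete LU factorization as a stand-in preconditioner for conjugate gradients on a small sparse symmetric positive definite test problem; it illustrates only the preconditioned-CG idea, not the paper's method.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg, spilu, LinearOperator

      # Sparse SPD test matrix (1-D Poisson problem) and right-hand side
      n = 200
      A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # Incomplete LU factorization used as a preconditioner (a stand-in for the
      # shifted incomplete Cholesky splitting discussed in the abstract)
      ilu = spilu(A, drop_tol=1e-4)
      M = LinearOperator(A.shape, ilu.solve)

      x_plain, info_plain = cg(A, b)
      x_prec, info_prec = cg(A, b, M=M)
      print("converged (0 means yes):", info_plain, info_prec)
      print("residual with preconditioner:", np.linalg.norm(A @ x_prec - b))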

  3. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    OpenAIRE

    Zuidwijk, Rob

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an optimal solution are investigated, and the optimal solution is studied on a so-called critical range of the initial data, in which certain properties such as the optimal basis in linear programming are ...

  4. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

    Science.gov (United States)

    Zeb, Salman; Yousaf, Muhammad

    2017-01-01

    In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.
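
    The updating algorithm itself is not reproduced here; the sketch below solves the same kind of equality-constrained least squares problem by the classical null-space method built on a QR factorization, which can serve as a reference solution.

      import numpy as np

      def lse_nullspace(A, b, C, d):
          """Minimize ||A x - b|| subject to C x = d (C with full row rank),
          using a QR factorization of C^T -- the classical null-space method,
          not the updating scheme proposed in the paper."""
          p = C.shape[0]
          Q, R = np.linalg.qr(C.T, mode="complete")    # C^T = Q [R1; 0]
          R1 = R[:p, :]
          y1 = np.linalg.solve(R1.T, d)                # enforces the constraints
          AQ = A @ Q
          y2, *_ = np.linalg.lstsq(AQ[:, p:], b - AQ[:, :p] @ y1, rcond=None)
          return Q @ np.concatenate([y1, y2])

      rng = np.random.default_rng(0)
      A, b = rng.normal(size=(20, 6)), rng.normal(size=20)
      C, d = rng.normal(size=(2, 6)), rng.normal(size=2)
      x = lse_nullspace(A, b, C, d)
      print("constraint residual:", np.linalg.norm(C @ x - d))   # ~1e-15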

  5. Advanced analysis technique for the evaluation of linear alternators and linear motors

    Science.gov (United States)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  6. Linear stability analysis of supersonic axisymmetric jets

    Directory of Open Access Journals (Sweden)

    Zhenhua Wan

    2014-01-01

    Stabilities of supersonic jets are examined with different velocities, momentum thicknesses, and core temperatures. Amplification rates of instability waves at inlet are evaluated by linear stability theory (LST). It is found that increased velocity and core temperature would increase amplification rates substantially and such influence varies for different azimuthal wavenumbers. The most unstable modes in thin momentum thickness cases usually have higher frequencies and azimuthal wavenumbers. Mode switching is observed for low azimuthal wavenumbers, but it appears merely in high velocity cases. In addition, the results provided by linear parabolized stability equations show that the mean-flow divergence affects the spatial evolution of instability waves greatly. The most amplified instability waves globally are sometimes found to be different from that given by LST.

  7. Normal mode analysis for linear resistive magnetohydrodynamics

    International Nuclear Information System (INIS)

    Kerner, W.; Lerbinger, K.; Gruber, R.; Tsunematsu, T.

    1984-10-01

    The compressible, resistive MHD equations are linearized around an equilibrium with cylindrical symmetry and solved numerically as a complex eigenvalue problem. This normal mode code makes it possible to solve for very small resistivity (η ≈ 10^-10). The scaling of growth rates and layer width agrees very well with analytical theory. In particular, the influence of both current and pressure on the instabilities is studied in detail, and the effect of resistivity on the ideally unstable internal kink is analyzed. (orig.)

  8. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    Science.gov (United States)

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by using an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing the values of arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). From the H-group, 47% have pulmonary hypertension that is completely reversible when euthyroidism is attained. The factors causing pulmonary hypertension were identified: previously known factors - level of free thyroxine, pulmonary vascular resistance, cardiac output; new factors identified in this study - pretreatment period, age, systolic blood pressure. According to the four criteria and to clinical judgment, we consider that the polynomial model (graphically a parabola) is better than the linear one. The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is given by a polynomial equation of second degree.
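
    A generic version of the linear-versus-quadratic comparison with AIC and a partial F-test can be sketched as follows; the data are synthetic, not the patient data of the study.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      x = np.linspace(0, 10, 50)
      y = 0.3 * x**2 - 1.0 * x + rng.normal(scale=2.0, size=x.size)   # synthetic response

      def fit_and_rss(degree):
          coeffs = np.polyfit(x, y, degree)
          rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
          return coeffs, rss

      (_, rss1), (_, rss2) = fit_and_rss(1), fit_and_rss(2)
      n = x.size

      # AIC for least-squares fits: n*log(RSS/n) + 2k, with k the number of parameters
      aic1 = n * np.log(rss1 / n) + 2 * 2
      aic2 = n * np.log(rss2 / n) + 2 * 3
      print("AIC linear vs quadratic:", aic1, aic2)

      # Partial F-test: does the extra quadratic term significantly reduce the RSS?
      F = ((rss1 - rss2) / 1) / (rss2 / (n - 3))
      print("F =", F, " p =", 1 - stats.f.cdf(F, 1, n - 3))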

  9. Linear Parametric Sensitivity Analysis of the Constraint Coefficient Matrix in Linear Programs

    NARCIS (Netherlands)

    R.A. Zuidwijk (Rob)

    2005-01-01

    Sensitivity analysis is used to quantify the impact of changes in the initial data of linear programs on the optimal value. In particular, parametric sensitivity analysis involves a perturbation analysis in which the effects of small changes of some or all of the initial data on an

  10. Non-linear seismic analysis of structures coupled with fluid

    International Nuclear Information System (INIS)

    Descleve, P.; Derom, P.; Dubois, J.

    1983-01-01

    This paper presents a method to calculate non-linear structure behaviour under horizontal and vertical seismic excitation, making possible the full non-linear seismic analysis of a reactor vessel. A pseudo forces method is used to introduce non linear effects and the problem is solved by superposition. Two steps are used in the method: - Linear calculation of the complete model. - Non linear analysis of thin shell elements and calculation of seismic induced pressure originating from linear and non linear effects, including permanent loads and thermal stresses. Basic aspects of the mathematical formulation are developed. It has been applied to axi-symmetric shell element using a Fourier series solution. For the fluid interaction effect, a comparison is made with a dynamic test. In an example of application, the displacement and pressure time history are given. (orig./GL)

  11. Basic methods of linear functional analysis

    CERN Document Server

    Pryce, John D

    2011-01-01

    Introduction to the themes of mathematical analysis, geared toward advanced undergraduate and graduate students. Topics include operators, function spaces, Hilbert spaces, and elementary Fourier analysis. Numerous exercises and worked examples.1973 edition.

  12. Enhanced linear-array photoacoustic beamforming using modified coherence factor.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador

    2018-02-01

    Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, the DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. The coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the existing DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields about 45% and 40% improvement over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
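
    The sketch below shows only the standard coherence-factor weighting applied to delay-and-sum values (not the proposed MCF, which replaces the numerator with a delay-multiply-and-sum term); the toy channel data are made up and assumed to be already time-aligned per pixel.

      import numpy as np

      def das_with_cf(delayed):
          """delayed: (n_elements, n_pixels) array of channel signals already
          time-aligned for each pixel. Returns plain DAS and CF-weighted DAS values."""
          n = delayed.shape[0]
          das = delayed.sum(axis=0)
          # Coherence factor: coherent power / (N * incoherent power), per pixel
          cf = np.abs(das) ** 2 / (n * np.sum(np.abs(delayed) ** 2, axis=0) + 1e-12)
          return das, cf * das

      # Toy example: 16 elements, 3 pixels; pixels 0-1 coherent, pixel 2 incoherent noise
      rng = np.random.default_rng(0)
      delayed = np.column_stack([np.ones(16), 0.5 * np.ones(16), rng.normal(size=16)])
      das, das_cf = das_with_cf(delayed)
      print("DAS:   ", das.round(2))
      print("CF-DAS:", das_cf.round(2))   # the incoherent pixel is strongly suppressed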

  13. Enhanced linear-array photoacoustic beamforming using modified coherence factor

    Science.gov (United States)

    Mozaffarzadeh, Moein; Yan, Yan; Mehrmohammadi, Mohammad; Makkiabadi, Bahador

    2018-02-01

    Photoacoustic imaging (PAI) is a promising medical imaging modality providing the spatial resolution of ultrasound imaging and the contrast of optical imaging. For linear-array PAI, a beamformer can be used as the reconstruction algorithm. Delay-and-sum (DAS) is the most prevalent beamforming algorithm in PAI. However, the DAS beamformer leads to low-resolution images as well as high sidelobes due to the undesired contribution of off-axis signals. The coherence factor (CF) is a weighting method in which each pixel of the reconstructed image is weighted, based on the spatial spectrum of the aperture, mainly to improve the contrast. We demonstrate that the numerator of the CF formula contains a DAS algebra and propose the use of a delay-multiply-and-sum beamformer instead of the existing DAS in the numerator. The proposed weighting technique, modified CF (MCF), has been evaluated numerically and experimentally and compared to CF. It was shown that MCF leads to lower sidelobes and better detectable targets. The quantitative results of the experiment (using wire targets) show that MCF yields about 45% and 40% improvement over CF in terms of signal-to-noise ratio and full-width-half-maximum, respectively.

  14. Sequentially linear analysis for simulating brittle failure

    NARCIS (Netherlands)

    van de Graaf, A.V.

    2017-01-01

    The numerical simulation of brittle failure at the structural level with nonlinear finite element analysis (NLFEA) remains a challenge due to robustness issues. We attribute these problems to the dimensions of real-world structures combined with softening behavior and negative tangent stiffness at

  15. Linear and nonlinear stability analysis, associated to experimental fast reactors

    International Nuclear Information System (INIS)

    Amorim, E.S. do; Moura Neto, C. de; Rosa, M.A.P.

    1980-07-01

    Phenomena associated with the physics of fast neutrons were analysed by linear and nonlinear kinetics with arbitrary feedback. The theoretical foundations of linear kinetics and transfer functions, aimed at the analysis of fast reactor stability, are established. These stability conditions were analytically proposed and investigated by digital and analog programs. (E.G.) [pt

  16. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
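
    For orientation, the sketch below computes general dominance weights for an ordinary linear regression by averaging the incremental R² of each predictor over all subsets of the other predictors; the extension to hierarchical linear models proposed in the article is not reproduced, and the data are synthetic.

      import itertools
      import numpy as np

      def r2(X, y):
          """R^2 of an OLS fit of y on the columns of X (plus an intercept)."""
          X1 = np.column_stack([np.ones(len(y)), X])
          beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
          resid = y - X1 @ beta
          return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

      def general_dominance(X, y):
          """Average incremental R^2 of each predictor over all subsets of the others."""
          p = X.shape[1]
          weights = np.zeros(p)
          for j in range(p):
              others = [k for k in range(p) if k != j]
              size_means = []
              for size in range(p):
                  gains = []
                  for s in itertools.combinations(others, size):
                      cols = list(s)
                      base = r2(X[:, cols], y) if cols else 0.0
                      gains.append(r2(X[:, cols + [j]], y) - base)
                  size_means.append(np.mean(gains))
              weights[j] = np.mean(size_means)
          return weights

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 3))
      y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
      print("general dominance weights:", general_dominance(X, y).round(3))
      # The weights sum (approximately) to the full-model R^2.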

  17. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, the error analysis has been done for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, so the ...

  18. Two Paradoxes in Linear Regression Analysis

    Science.gov (United States)

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  19. Non-linear finite element analysis in structural mechanics

    CERN Document Server

    Rust, Wilhelm

    2015-01-01

    This monograph describes the numerical analysis of non-linearities in structural mechanics, i.e. large rotations, large strain (geometric non-linearities), non-linear material behaviour, in particular elasto-plasticity as well as time-dependent behaviour, and contact. Based on that, the book treats stability problems and limit-load analyses, as well as non-linear equations of a large number of variables. Moreover, the author presents a wide range of problem sets and their solutions. The target audience primarily comprises advanced undergraduate and graduate students of mechanical and civil engineering, but the book may also be beneficial for practising engineers in industry.

  20. Effect of Genetic and Environmental Factors on Linear Udder ...

    African Journals Online (AJOL)

    The effects of evaluators, sex of calf, breed, sire, parity, month of calving and season of lactation on linear udder conformation traits and milk yield were investigated in the dairy herd of the National Animal Production Research Institute, Shika, Zaria, Nigeria. Seven linear udder conformation traits coupled with milk yield of 25 ...

  1. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve reduction of a large set of data, extracting essential features without a previous hypothesis. Due to the development of computerized systems, the use of larger samples, the possibility of sequential data acquisition and the increase in dynamic studies, the problem of data compression is now encountered routinely. Results obtained for the compression of scintigraphic images are therefore presented first. Then the possibilities offered by factor analysis for scan processing are discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, is considered for compression and processing [fr

  2. Use of linear discriminant function analysis in seed morphotype ...

    African Journals Online (AJOL)

    Use of linear discriminant function analysis in seed morphotype relationship study in 31 ... Data were collected on 100-seed weight, seed length and seed width. ... to the Mesoamerican gene pool, comprising the cultigroups Sieva-Big Lima, ...

  3. Linear and nonlinear analysis of high-power rf amplifiers

    International Nuclear Information System (INIS)

    Puglisi, M.

    1983-01-01

    After a survey of the state variable analysis method, the final amplifier for the CBA is analyzed, taking into account the real beam waveshape. An empirical method for checking the stability of a non-linear system is also considered

  4. Controllability analysis of decentralised linear controllers for polymeric fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Maria; Aguado, Joaquin; Ansede, Xavier; Riera, Jordi [Institut de Robotica i Informatica Industrial, Universitat Politecnica de Catalunya - Consejo Superior de Investigaciones Cientificas, C. Llorens i Artigas 4, 08028 Barcelona (Spain)

    2005-10-10

    This work deals with the control of polymeric fuel cells. It includes a linear analysis of the system at different operating points, the comparison and selection of different control structures, and the validation of the controlled system by simulation. The work is based on a complex non-linear model which has been linearised at several operating points. The linear analysis tools used are the Morari resiliency index, the condition number, and the relative gain array. These techniques are employed to compare the controllability of the system with different control structures and at different operating conditions. According to the results, the most promising control structures are selected and their performance with PI based diagonal controllers is evaluated through simulations with the complete non-linear model. The range of operability of the examined control structures is compared. Conclusions indicate good performance of several diagonal linear controllers. However, very few have a wide operability range. (author)
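
    As a rough illustration of the linear controllability measures named above, the following Python sketch evaluates the relative gain array and the condition number for a hypothetical 2x2 steady-state gain matrix; the numerical values are invented for illustration and are not taken from the fuel cell model.

      import numpy as np

      def relative_gain_array(G):
          # RGA = G .* (inv(G))^T, the element-wise product of the gain matrix and its transposed inverse.
          return G * np.linalg.inv(G).T

      # Hypothetical steady-state gain matrix pairing two inputs with two outputs.
      G = np.array([[2.0, -0.5],
                    [0.8,  1.5]])

      print("RGA:\n", relative_gain_array(G))
      print("condition number:", np.linalg.cond(G))  # computed from the singular values of G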

  5. A comparison between linear and non-linear analysis of flexible pavements

    Energy Technology Data Exchange (ETDEWEB)

    Soleymani, H.R.; Berthelot, C.F.; Bergan, A.T. [Saskatchewan Univ., Saskatoon, SK (Canada). Dept. of Mechanical Engineering

    1995-12-31

    Computer pavement analysis programs, which are based on mathematical simulation models, were compared. The programs included in the study were: ELSYM5, an Elastic Linear (EL) pavement analysis program, and MICH-PAVE, a Finite Element Non-Linear (FENL) and Finite Element Linear (FEL) pavement analysis program. To perform the analysis, different tire pressures, pavement material properties and asphalt layer thicknesses were selected. Evaluation criteria used in the analysis were tensile strain at the bottom of the asphalt layer, vertical compressive strain at the top of the subgrade, and surface displacement. Results showed that FENL methods predicted more strain and surface deflection than the FEL and EL analysis methods. Analyzing pavements with FEL does not offer many advantages over the EL method. Differences in predicted strains between the three methods of analysis were in some cases found to be close to 100%. It was suggested that these programs require more calibration and validation, both theoretically and empirically, to accurately correlate with field observations. 19 refs., 4 tabs., 9 figs.

  6. Analysis of Linear MHD Power Generators

    Energy Technology Data Exchange (ETDEWEB)

    Witalis, E A

    1965-02-15

    The finite electrode size effects on the performance of an infinitely long MHD power generation duct are calculated by means of conformal mapping. The general conformal transformation is deduced and applied in a graphic way. The analysis includes variations in the segmentation degree, the Hall parameter of the gas and the electrode/insulator length ratio as well as the influence of the external circuitry and loading. A general criterion for a minimum of the generator internal resistance is given. The same criterion gives the conditions for the occurrence of internal current leakage between adjacent electrodes. It is also shown that the highest power output at a prescribed efficiency is always obtained when the current is made to flow between exactly opposed electrodes. Curves are presented showing the power-efficiency relations and other generator properties as depending on the segmentation degree and the Hall parameter in the cases of axial and transverse power extraction. The implications of limiting the current to flow between a finite number of identical electrodes are introduced and combined with the condition for current flow between opposed electrodes. The characteristics of generators with one or a few external loads can then be determined completely and examples are given in a table. It is shown that the performance of such generators must not necessarily be inferior to that of segmented generators with many independent loads. However, the problems of channel end losses and off-design loading have not been taken into consideration.

  7. Nonparallel linear stability analysis of unconfined vortices

    Science.gov (United States)

    Herrada, M. A.; Barrero, A.

    2004-10-01

    Parabolized stability equations [F. P. Bertolotti, Th. Herbert, and P. R. Spalart, J. Fluid. Mech. 242, 441 (1992)] have been used to study the stability of a family of swirling jets at high Reynolds numbers whose velocity and pressure fields decay far from the axis as r^(m-2) and r^(2(m-2)), respectively [M. Pérez-Saborid, M. A. Herrada, A. Gómez-Barea, and A. Barrero, J. Fluid. Mech. 471, 51 (2002)]; r is the radial distance and m is a real number in the interval 0 < m < … The analysis shows the convective nature of these instabilities. Therefore, a criterion based on the transition from convective to absolute instabilities cannot be applied to predict the vortex breakdown of this kind of swirling jets. On the contrary, the failure of the quasicylindrical approximation used to compute the downstream evolution of the basic flow gives a clear breakdown criterion based on the catastrophic transition between slender and nonslender flows.

  8. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would be otherwise either difficult or unfeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity
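
    As a hedged, minimal sketch of the kind of group-level LME fit described above (not the authors' implementation), the following Python fragment fits a random-intercept model with a continuous covariate using statsmodels; the subject-level effect estimates are synthetic.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic group-level data: one effect estimate ("beta") per subject and run.
      rng = np.random.default_rng(0)
      n_subj, n_run = 20, 3
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_run),
          "run": np.tile(np.arange(n_run), n_subj),
          "age": np.repeat(rng.normal(30, 8, n_subj), n_run),
      })
      subj_effect = np.repeat(rng.normal(0, 0.5, n_subj), n_run)
      df["beta"] = 0.3 + 0.02 * df["age"] + subj_effect + rng.normal(0, 0.3, len(df))

      # Random intercept per subject accounts for within-subject variability across runs;
      # age enters as a continuous covariate.
      model = smf.mixedlm("beta ~ age", df, groups=df["subject"])
      result = model.fit()
      print(result.summary())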

  9. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.

  10. Non-linear analysis of solid propellant burning rate behavior

    Energy Technology Data Exchange (ETDEWEB)

    Junye Wang [Zhejiang Univ. of Technology, College of Mechanical and Electrical Engineering, Hanzhou (China)

    2000-07-01

    The parametric analysis of the thermal wave model of the non-steady combustion of solid propellants is carried out under a sudden compression. First, to observe non-linear effects, solutions are obtained using a computer under prescribed pressure variations. Then, the effects of rearranging the spatial mesh, additional points, and the time step on numerical solutions are evaluated. Finally, the behaviour of the thermal wave combustion model is examined under large heat releases (H) and a dynamic factor (β). The numerical predictions show that (1) the effect of the dynamic factor (β), related to the magnitude of dp/dt, on the peak burning rate increases as the value of β increases. However, unsteady burning rate 'runaway' does not appear, and the burning rate returns asymptotically to ap^n when β ≥ 10.0. The burning rate 'runaway' is a numerical difficulty, not a solution to the models. (2) At constant β and m, the amplitude of the burning rate increases with increasing H. However, the increase in the burning rate amplitude is stepwise, and there is no apparent intrinsic instability limit. A damped oscillation of the burning rate occurs when the value of H is smaller. However, when H>1.0, the state of an intrinsically unstable model is composed of repeated amplitude spikes, i.e. an undamped oscillation occurs. (3) The effect of the time step on the peak burning rate increases as H increases. (Author)

  11. Evaluation of beach cleanup effects using linear system analysis.

    Science.gov (United States)

    Kataoka, Tomoya; Hinata, Hirofumi

    2015-02-15

    We established a method for evaluating beach cleanup effects (BCEs) based on a linear system analysis, and investigated factors determining BCEs. Here we focus on two BCEs: decreasing the total mass of toxic metals that could leach into a beach from marine plastics and preventing the fragmentation of marine plastics on the beach. Both BCEs depend strongly on the average residence time of marine plastics on the beach (τ(r)) and the period of temporal variability of the input flux of marine plastics (T). Cleanups on the beach where τ(r) is longer than T are more effective than those where τ(r) is shorter than T. In addition, both BCEs are the highest near the time when the remnants of plastics reach the local maximum (peak time). Therefore, it is crucial to understand the following three factors for effective cleanups: the average residence time, the plastic input period and the peak time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Virtual Estimator for Piecewise Linear Systems Based on Observability Analysis

    Science.gov (United States)

    Morales-Morales, Cornelio; Adam-Medina, Manuel; Cervantes, Ilse; Vela-Valdés and, Luis G.; García Beltrán, Carlos Daniel

    2013-01-01

    This article proposes a virtual sensor for piecewise linear systems based on observability analysis that is a function of a commutation law related to the system's output. This virtual sensor is also known as a state estimator. In addition, it presents a detector of the active mode when the commutation sequences of each linear subsystem are arbitrary and unknown. To this end, the article proposes a set of virtual estimators that discern the commutation paths of the system and allow its output to be estimated. A methodology to test the observability of discrete-time piecewise linear systems is also proposed. An academic example is presented to show the obtained results. PMID:23447007

  13. CFORM- LINEAR CONTROL SYSTEM DESIGN AND ANALYSIS: CLOSED FORM SOLUTION AND TRANSIENT RESPONSE OF THE LINEAR DIFFERENTIAL EQUATION

    Science.gov (United States)

    Jamison, J. W.

    1994-01-01

    CFORM was developed by the Kennedy Space Center Robotics Lab to assist in linear control system design and analysis using closed form and transient response mechanisms. The program computes the closed form solution and transient response of a linear (constant coefficient) differential equation. CFORM allows a choice of three input functions: the Unit Step (a unit change in displacement); the Ramp function (step velocity); and the Parabolic function (step acceleration). It is only accurate in cases where the differential equation has distinct roots, and does not handle the case for roots at the origin (s=0). Initial conditions must be zero. Differential equations may be input to CFORM in two forms - polynomial and product of factors. In some linear control analyses, it may be more appropriate to use a related program, Linear Control System Design and Analysis (KSC-11376), which uses root locus and frequency response methods. CFORM was written in VAX FORTRAN for a VAX 11/780 under VAX VMS 4.7. It has a central memory requirement of 30K. CFORM was developed in 1987.
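
    CFORM itself is a FORTRAN program; as a loose, hedged analogue of what it computes (not the NASA code), the following Python/sympy sketch obtains the closed-form transient response of a constant-coefficient equation with distinct roots to a unit-step input with zero initial conditions.

      import sympy as sp

      t = sp.symbols("t", positive=True)
      y = sp.Function("y")

      # Example system: y'' + 3 y' + 2 y = 1 (unit-step input), zero initial conditions,
      # distinct roots -1 and -2 (matching CFORM's distinct-root restriction).
      ode = sp.Eq(y(t).diff(t, 2) + 3 * y(t).diff(t) + 2 * y(t), 1)
      sol = sp.dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
      print(sp.simplify(sol.rhs))   # closed-form transient response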

  14. Optimal choice of basis functions in the linear regression analysis

    International Nuclear Information System (INIS)

    Khotinskij, A.M.

    1988-01-01

    The problem of the optimal choice of basis functions in linear regression analysis is investigated. A stepwise algorithm with an estimate of its efficiency, valid for a finite number of measurements, is suggested. Conditions providing a probability of correct choice close to 1 are formulated. The application of the stepwise algorithm to the analysis of decay curves is substantiated. 8 refs

  15. Non linear stability analysis of parallel channels with natural circulation

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Ashish Mani; Singh, Suneet, E-mail: suneet.singh@iitb.ac.in

    2016-12-01

    Highlights: • Nonlinear instabilities in a natural circulation loop are studied. • Generalized Hopf points, sub- and supercritical Hopf bifurcations are identified. • A Bogdanov–Takens point (BT point) is observed by nonlinear stability analysis. • The effect of parameters on the stability of the system is studied. - Abstract: Linear stability analysis of two-phase flow in a natural circulation loop has been studied quite extensively by many researchers in the past few years. It can be noted that linear stability analysis is limited to small perturbations only. It is pointed out that such systems typically undergo Hopf bifurcation. If the Hopf bifurcation is subcritical, then for relatively large perturbations the system has unstable limit cycles in the (linearly) stable region in the parameter space. Hence, linear stability analysis capturing only infinitesimally small perturbations is not sufficient. In this paper, bifurcation analysis is carried out to capture the non-linear instability of the dynamical system, and both subcritical and supercritical bifurcations are observed. The regions in the parameter space for which subcritical and supercritical bifurcations exist are identified. These regions are verified by numerical simulation of the time-dependent, nonlinear ODEs for selected points in the operating parameter space using the MATLAB ODE solver.

  16. Lattice Boltzmann methods for global linear instability analysis

    Science.gov (United States)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of the appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvement, in order to make the proposed methodology competitive with established approaches for global instability analysis, are discussed.

  17. Spatial Analysis of Linear Structures in the Exploration of Groundwater

    Directory of Open Access Journals (Sweden)

    Abdramane Dembele

    2017-11-01

    Full Text Available The analysis of linear structures on major geological formations plays a crucial role in resource exploration in the Inner Niger Delta. Highlighting and mapping of the large lithological units were carried out using image fusion, spectral band (RGB) coding, Principal Component Analysis (PCA), and band ratio methods. The automatic extraction method for linear structures permitted a structural map with 82,659 linear structures, distributed over different stratigraphic stages, to be obtained. The intensity study shows an accentuation in density over 12.52% of the total area, containing 22.02% of the linear structures. The density and nodes (intersections) of fractures formed by the linear structures on the different lithologies allowed the behavior of the region's aquifers to be observed in the exploration of subsoil resources. The central density, in relation to the hydrographic network of the lowlands, shows the conditioning of the flow and retention of groundwater in the region, and of fluids at depth. The node areas and high-density linear structures show fault throws at depth (pores) that favor the formation of structural traps for oil resources.

  18. Improved Methods for Pitch Synchronous Linear Prediction Analysis of Speech

    OpenAIRE

    劉, 麗清

    2015-01-01

    Linear prediction (LP) analysis has been applied to speech systems over the last few decades. The LP technique is well-suited for speech analysis due to its ability to approximately model the speech production process. Hence LP analysis has been widely used for speech enhancement, low-bit-rate speech coding in cellular telephony, speech recognition, characteristic parameter extraction (vocal tract resonance frequencies, fundamental frequency called pitch) and so on. However, the performance of the co...
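
    A minimal sketch of conventional (frame-based, not pitch-synchronous) LP analysis is given below in Python, assuming the autocorrelation method; the toy signal and model order are invented for illustration, and this is not the improved method of the thesis.

      import numpy as np
      from scipy.linalg import solve_toeplitz

      def lpc_coefficients(x, order):
          # Autocorrelation method: solve the Toeplitz normal equations R a = r.
          x = np.asarray(x, dtype=float)
          r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
          a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
          return np.concatenate(([1.0], -a))   # prediction-error filter A(z)

      # Toy "speech" frame: a decaying resonance plus noise.
      fs = 8000
      n = np.arange(400)
      frame = np.exp(-n / 200) * np.sin(2 * np.pi * 800 * n / fs) + 0.01 * np.random.randn(400)
      print(lpc_coefficients(frame, order=10))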

  19. Seismic analysis of equipment system with non-linearities such as gap and friction using equivalent linearization method

    International Nuclear Information System (INIS)

    Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.

    1989-01-01

    Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately for the evaluation of aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques to evaluate non-linear responses, provided that errors to a certain extent are tolerated, because it has greater simplicity in analysis and economy in computing time than non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluate the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities

  20. Linear analysis of rotationally invariant, radially variant tomographic imaging systems

    International Nuclear Information System (INIS)

    Huesmann, R.H.

    1990-01-01

    This paper describes a method to analyze the linear imaging characteristics of rotationally invariant, radially variant tomographic imaging systems using singular value decomposition (SVD). When the projection measurements from such a system are assumed to be samples from independent and identically distributed multi-normal random variables, the best estimate of the emission intensity is given by the unweighted least squares estimator. The noise amplification of this estimator is inversely proportional to the singular values of the normal matrix used to model projection and backprojection. After choosing an acceptable noise amplification, the new method can determine the number of parameters and hence the number of pixels that should be estimated from data acquired from an existing system with a fixed number of angles and projection bins. Conversely, for the design of a new system, the number of angles and projection bins necessary for a given number of pixels and noise amplification can be determined. In general, computing the SVD of the projection normal matrix has cubic computational complexity. However, the projection normal matrix for this class of rotationally invariant, radially variant systems has a block circulant form. A fast parallel algorithm to compute the SVD of this block circulant matrix makes the singular value analysis practical by asymptotically reducing the computation complexity of the method by a multiplicative factor equal to the number of angles squared
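
    The core idea above — that the noise amplification of the unweighted least-squares estimate is inversely proportional to the singular values of the system matrix — can be sketched as follows in Python. The toy projection matrix is random and illustrative only; the paper's fast block-circulant SVD algorithm is not reproduced here.

      import numpy as np

      # Toy projection matrix: 60 measurements (angles x projection bins) of 25 pixels.
      rng = np.random.default_rng(1)
      A = rng.random((60, 25))

      # Singular values of A determine the noise amplification of the least-squares estimate:
      # small singular values of A (equivalently of the normal matrix A^T A) amplify noise.
      s = np.linalg.svd(A, compute_uv=False)
      noise_amplification = 1.0 / s
      print("largest amplification factor:", noise_amplification.max())

      # Keep only components whose amplification stays below a chosen, acceptable threshold.
      threshold = 10.0
      n_recoverable = int(np.sum(noise_amplification < threshold))
      print("number of reliably estimable parameters:", n_recoverable)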

  1. Linear and nonlinear models for predicting fish bioconcentration factors for pesticides.

    Science.gov (United States)

    Yuan, Jintao; Xie, Chun; Zhang, Ting; Sun, Jinfang; Yuan, Xuejie; Yu, Shuling; Zhang, Yingbiao; Cao, Yunyuan; Yu, Xingchen; Yang, Xuan; Yao, Wu

    2016-08-01

    This work is devoted to the applications of the multiple linear regression (MLR), multilayer perceptron neural network (MLP NN) and projection pursuit regression (PPR) to quantitative structure-property relationship analysis of bioconcentration factors (BCFs) of pesticides tested on Bluegill (Lepomis macrochirus). Molecular descriptors of a total of 107 pesticides were calculated with the DRAGON Software and selected by inverse enhanced replacement method. Based on the selected DRAGON descriptors, a linear model was built by MLR, nonlinear models were developed using MLP NN and PPR. The robustness of the obtained models was assessed by cross-validation and external validation using test set. Outliers were also examined and deleted to improve predictive power. Comparative results revealed that PPR achieved the most accurate predictions. This study offers useful models and information for BCF prediction, risk assessment, and pesticide formulation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Mathematical modelling and linear stability analysis of laser fusion cutting

    International Nuclear Information System (INIS)

    Hermanns, Torsten; Schulz, Wolfgang; Vossen, Georg; Thombansen, Ulrich

    2016-01-01

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so called stability function that describes the correlation of the setting values of the process and the process’ amount of dynamic behavior.

  3. Algorithm for Non-proportional Loading in Sequentially Linear Analysis

    NARCIS (Netherlands)

    Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.

    2016-01-01

    Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It

  4. CFD analysis of linear compressors considering load conditions

    Science.gov (United States)

    Bae, Sanghyun; Oh, Wonsik

    2017-08-01

    This paper is a study of computational fluid dynamics (CFD) analysis of a linear compressor considering load conditions. In the conventional CFD analysis of the linear compressor, the load condition was not considered in the behaviour of the piston. In some papers, the behaviour of the piston is assumed to be a sinusoidal motion provided by a user-defined function (UDF). In the reciprocating type compressor, the stroke of the piston is restrained by the rod, while the stroke of the linear compressor is not restrained, and the stroke changes depending on the load condition. The greater the pressure difference between the discharge refrigerant and the suction refrigerant, the more the centre point of the stroke is pushed backward. Moreover, the behaviour of the piston is not a complete sine wave. For this reason, when the load condition changes in the CFD analysis of the linear compressor, the ANSYS code or, unfortunately, the modelling may have to be changed. In addition, a separate analysis or calculation is required to find a stroke that meets the load condition, which may contain errors. In this study, the coupled mechanical and electrical equations are solved using the UDF, and the behaviour of the piston is computed considering the pressure difference across the piston. Using the above method, the stroke of the piston for the motor specification of the analytical model can be calculated according to the input voltage, and the piston behaviour can be realized considering the thrust due to the pressure difference.

  5. Mathematical modelling and linear stability analysis of laser fusion cutting

    Energy Technology Data Exchange (ETDEWEB)

    Hermanns, Torsten; Schulz, Wolfgang [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Vossen, Georg [Niederrhein University of Applied Sciences, Chair for Applied Mathematics and Numerical Simulations, Reinarzstr.. 49, 47805 Krefeld (Germany); Thombansen, Ulrich [RWTH Aachen University, Chair for Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)

    2016-06-08

    A model for laser fusion cutting is presented and investigated by linear stability analysis in order to study the tendency for dynamic behavior and subsequent ripple formation. The result is a so called stability function that describes the correlation of the setting values of the process and the process’ amount of dynamic behavior.

  6. Stability Analysis for Multi-Parameter Linear Periodic Systems

    DEFF Research Database (Denmark)

    Seyranian, A.P.; Solem, Frederik; Pedersen, Pauli

    1999-01-01

    This paper is devoted to stability analysis of general linear periodic systems depending on real parameters. The Floquet method and perturbation technique are the basis of the development. We start out with the first and higher-order derivatives of the Floquet matrix with respect to problem...

  7. Linear discriminant analysis of structure within African eggplant 'Shum'

    African Journals Online (AJOL)

    A MANOVA preceded linear discriminant analysis to model each of 61 variables as predicted by clusters and experiment, in order to filter out non-significant traits. Four distinct clusters emerged, with a cophenetic correlation coefficient of 0.87 (P<0.01). Canonical variates that best predicted the observed clusters include petiole length, ...

  8. Linear stability analysis of collective neutrino oscillations without spurious modes

    Science.gov (United States)

    Morinaga, Taiki; Yamada, Shoichi

    2018-01-01

    Collective neutrino oscillations are induced by the presence of neutrinos themselves. As such, they are intrinsically nonlinear phenomena and are much more complex than linear counterparts such as the vacuum or Mikheyev-Smirnov-Wolfenstein oscillations. They obey integro-differential equations, for which it is also very challenging to obtain numerical solutions. If one focuses on the onset of collective oscillations, on the other hand, the equations can be linearized and the technique of linear analysis can be employed. Unfortunately, however, it is well known that such an analysis, when applied with discretizations of continuous angular distributions, suffers from the appearance of so-called spurious modes: unphysical eigenmodes of the discretized linear equations. In this paper, we analyze in detail the origin of these unphysical modes and present a simple solution to this annoying problem. We find that the spurious modes originate from the artificial production of pole singularities instead of a branch cut on the Riemann surface by the discretizations. The branching point singularities on the Riemann surface for the original nondiscretized equations can be recovered by approximating the angular distributions with polynomials and then performing the integrals analytically. We demonstrate for some examples that this simple prescription does remove the spurious modes. We also propose an even simpler method: a piecewise linear approximation to the angular distribution. It is shown that the same methodology is applicable to the multienergy case as well as to the dispersion relation approach that was proposed very recently.

  9. This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, 30 research observations through census method. Method of analysis used in the study is multiple linear regressions. The results of analysis showed that the factors of labor, innovation and promotion have positive and significant influence on the business success of small business ‘processed rotan’ simultaneously. The analysis also showed that partially labor has positive and significant influence on the business success, yet innovation and promotion have insignificant and positive influence on the business success.

    OpenAIRE

    Nasution, Inggrita Gusti Sari; Muchtar, Yasmin Chairunnisa

    2013-01-01

    This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, 30 research observations through census method. Method of analysis used in the study is multiple linear regressions. The results of analysis showed that the factors of labor, innovation and promotion have positive and significant influence on the business success of small busine...

  10. Linear regression and sensitivity analysis in nuclear reactor design

    International Nuclear Information System (INIS)

    Kumar, Akansha; Tsvetkov, Pavel V.; McClarren, Ryan G.

    2015-01-01

    Highlights: • Presented a benchmark for the applicability of linear regression to complex systems. • Applied linear regression to a nuclear reactor power system. • Performed neutronics, thermal–hydraulics, and energy conversion using Brayton's cycle for the design of a GCFBR. • Performed detailed sensitivity analysis for a set of parameters in a nuclear reactor power system. • Modeled and developed the reactor design using MCNP, regression using R, and thermal–hydraulics in Java. - Abstract: The paper presents a general strategy applicable for sensitivity analysis (SA) and uncertainty quantification analysis (UA) of parameters related to a nuclear reactor design. This work also validates the use of linear regression (LR) for predictive analysis in a nuclear reactor design. The analysis helps to determine the parameters on which an LR model can be fit for predictive analysis. For those parameters, a regression surface is created based on trial data and predictions are made using this surface. A general strategy of SA to determine and identify the influential parameters that affect the operation of the reactor is presented. Identification of design parameters and validation of the linearity assumption for the application of LR to reactor design, based on a set of tests, is performed. The testing methods used to determine the behavior of the parameters can be used as a general strategy for UA and SA of nuclear reactor models and thermal hydraulics calculations. A design of a gas cooled fast breeder reactor (GCFBR), with thermal–hydraulics and energy transfer, has been used for the demonstration of this method. MCNP6 is used to simulate the GCFBR design and perform the necessary criticality calculations. Java is used to build and run input samples and to extract data from the output files of MCNP6, and R is used to perform regression analysis, multivariate analysis of variance, and analysis of the collinearity of the data
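
    A hedged, generic sketch of the surrogate-plus-sensitivity step described above is given below in Python; the sampled design parameters and the response are synthetic stand-ins, not outputs of the MCNP6/Java/R workflow used in the paper.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Hypothetical trial data: sampled design parameters and a simulated scalar response
      # (stand-ins for reactor design inputs and an output quantity of interest).
      rng = np.random.default_rng(42)
      X = rng.uniform(0.0, 1.0, size=(200, 3))              # three design parameters
      y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

      # Fit the linear surrogate and use standardized coefficients as a simple sensitivity measure.
      model = LinearRegression().fit(X, y)
      sensitivity = model.coef_ * X.std(axis=0) / y.std()
      print("standardized sensitivities:", sensitivity)
      print("R^2 of the linear surrogate:", model.score(X, y))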

  11. Factor Analysis Using "R"

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2013-02-01

    Full Text Available R (R Development Core Team, 2011) is a very powerful tool to analyze data that is gaining in popularity due to its cost (it is free) and flexibility (it is open-source). This article gives a general introduction to using R (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson (2009), this article walks the user through how to use the program to conduct factor analysis, from both an exploratory and confirmatory approach.
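
    The article's walkthrough is in R; purely as a hedged illustration of the exploratory step in Python (not part of the article), scikit-learn's FactorAnalysis can extract rotated loadings from a numeric data matrix:

      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      # Synthetic data with two latent factors driving six observed variables.
      rng = np.random.default_rng(0)
      latent = rng.normal(size=(300, 2))
      loadings_true = rng.normal(size=(2, 6))
      X = latent @ loadings_true + 0.3 * rng.normal(size=(300, 6))

      # Exploratory factor analysis with a varimax rotation of the loadings.
      fa = FactorAnalysis(n_components=2, rotation="varimax")
      fa.fit(X)
      print("estimated loadings:\n", fa.components_.T)   # variables x factors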

  12. Linear stability analysis in a solid-propellant rocket motor

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.M.; Kang, K.T.; Yoon, J.K. [Agency for Defense Development, Taejon (Korea, Republic of)

    1995-10-01

    Combustion instability in solid-propellant rocket motors depends on the balance between acoustic energy gains and losses of the system. The objective of this paper is to demonstrate the capability of a program which predicts the standard longitudinal stability using acoustic modes, based on linear stability analysis and T-burner test results of propellants. The commercial ANSYS 5.0A program can be used to calculate the acoustic characteristics of a rocket motor. The linear stability prediction was compared with the static firing test results of rocket motors. (author). 11 refs., 17 figs.

  13. Linearly Polarized IR Spectroscopy Theory and Applications for Structural Analysis

    CERN Document Server

    Kolev, Tsonko

    2011-01-01

    A technique that is useful in the study of pharmaceutical products and biological molecules, polarization IR spectroscopy has undergone continuous development since it first emerged almost 100 years ago. Capturing the state of the science as it exists today, "Linearly Polarized IR Spectroscopy: Theory and Applications for Structural Analysis" demonstrates how the technique can be properly utilized to obtain important information about the structure and spectral properties of oriented compounds. The book starts with the theoretical basis of linear-dichroic infrared (IR-LD) spectroscopy ...

  14. Linear and nonlinear subspace analysis of hand movements during grasping.

    Science.gov (United States)

    Cui, Phil Hengjun; Visell, Yon

    2014-01-01

    This study investigated nonlinear patterns of coordination, or synergies, underlying whole-hand grasping kinematics. Prior research has shed considerable light on roles played by such coordinated degrees-of-freedom (DOF), illuminating how motor control is facilitated by structural and functional specializations in the brain, peripheral nervous system, and musculoskeletal system. However, existing analyses suppose that the patterns of coordination can be captured by means of linear analyses, as linear combinations of nominally independent DOF. In contrast, hand kinematics is itself highly nonlinear in nature. To address this discrepancy, we sought to determine whether nonlinear synergies might serve to more accurately and efficiently explain human grasping kinematics than is possible with linear analyses. We analyzed motion capture data acquired from the hands of individuals as they grasped an array of common objects, using four of the most widely used linear and nonlinear dimensionality reduction algorithms. We compared the results using a recently developed algorithm-agnostic quality measure, which enabled us to assess the quality of the dimensional reductions that resulted by assessing the extent to which local neighborhood information in the data was preserved. Although qualitative inspection of this data suggested that nonlinear correlations between kinematic variables were present, we found that linear modeling, in the form of Principal Component Analysis, could perform better than any of the nonlinear techniques we applied.
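
    A hedged sketch of such a linear-versus-nonlinear comparison is shown below in Python, using a neighborhood-preservation (trustworthiness) score as an algorithm-agnostic quality measure; the synthetic data stand in for the motion-capture joint angles, and this is not the authors' pipeline.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.manifold import Isomap, trustworthiness

      # Stand-in for hand kinematics: synthetic high-dimensional joint-angle data
      # generated from a low-dimensional latent structure with a mild nonlinearity.
      rng = np.random.default_rng(0)
      latent = rng.normal(size=(500, 3))
      X = np.tanh(latent @ rng.normal(size=(3, 20))) + 0.05 * rng.normal(size=(500, 20))

      for name, reducer in [("PCA", PCA(n_components=3)),
                            ("Isomap", Isomap(n_components=3))]:
          emb = reducer.fit_transform(X)
          # Score how well local neighborhoods of the original data are preserved.
          print(name, trustworthiness(X, emb, n_neighbors=10))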

  15. Analysis of the efficiency of the linearization techniques for solving multi-objective linear fractional programming problems by goal programming

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2017-01-01

    Full Text Available This paper presents and analyzes the applicability of three linearization techniques used for solving multi-objective linear fractional programming problems using the goal programming method. The three linearization techniques are: (1) Taylor’s polynomial linearization approximation, (2) the method of variable change, and (3) a modification of the method of variable change proposed in [20]. All three linearization techniques are presented and analyzed in two variants: (a) using the optimal value of the objective functions as the decision makers’ aspirations, and (b) the decision makers’ aspirations are given by the decision makers. As the criteria for the analysis we use the efficiency of the obtained solutions and the difficulties the analyst comes upon in preparing the linearization models. To analyze the applicability of the linearization techniques incorporated in the linear goal programming method we use an example of a financial structure optimization problem.

  16. Design and Analysis of MEMS Linear Phased Array

    Directory of Open Access Journals (Sweden)

    Guoxiang Fan

    2016-01-01

    Full Text Available A structure of micro-electro-mechanical system (MEMS) linear phased array based on a "multi-cell" element is designed to increase the radiation sound pressure of a transducer working in bending vibration mode at high frequency. In order to more accurately predict the resonant frequency of an element, theoretical analysis of the dynamic equation of a fixed rectangular composite plate and finite element method simulation are adopted. The effects of the parameters in both the lateral and elevation directions on the three-dimensional beam directivity characteristics are comprehensively analyzed. The key parameters in the analysis include the "cell" number per element, "cell" size, "inter-cell" spacing, the number of elements, and the element width. The simulation results show that optimizing the linear array parameters in both the lateral and elevation directions can greatly improve the three-dimensional beam focusing for the MEMS linear phased array, which is obviously different from the traditional linear array.

  17. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants or common variants or the combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than that of sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to its optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY

  18. Neutrino mass, dark energy, and the linear growth factor

    International Nuclear Information System (INIS)

    Kiakotou, Angeliki; Lahav, Ofer; Elgaroey, Oystein

    2008-01-01

    We study the degeneracies between neutrino mass and dark energy as they manifest themselves in cosmological observations. In contradiction to a popular formula in the literature, the suppression of the matter power spectrum caused by massive neutrinos is not just a function of the ratio of neutrino to total mass densities f_ν = Ω_ν/Ω_m, but also of each of the densities independently. We also present a fitting formula for the logarithmic growth factor of perturbations in a flat universe, f(z,k; f_ν, w, Ω_DE) ≈ [1 − A(k)Ω_DE f_ν + B(k)f_ν² − C(k)f_ν³] Ω_m^α(z), where α depends on the dark energy equation of state parameter w. We then discuss cosmological probes where the f factor directly appears: peculiar velocities, redshift distortion, and the integrated Sachs-Wolfe effect. We also modify the approximation of Eisenstein and Hu [Astrophys. J. 511, 5 (1999)] for the power spectrum of fluctuations in the presence of massive neutrinos and provide a revised code [http://www.star.ucl.ac.uk/~lahav/nu_matter_power.f].

  19. [Relations between biomedical variables: mathematical analysis or linear algebra?].

    Science.gov (United States)

    Hucher, M; Berlie, J; Brunet, M

    1977-01-01

    The authors, after a brief reminder of a model's structure, stress the two possible approaches to the relations linking the variables of this model: the use of functions, which falls within mathematical analysis, and the use of linear algebra, which benefits from the development and automation of matrix calculation. They specify the respective merits of these methods, their limits and the requirements for their use, according to the kind of variables and data and the objective of the work: understanding phenomena or supporting decisions.

  20. Stability analysis of linear switching systems with time delays

    International Nuclear Information System (INIS)

    Li Ping; Zhong Shouming; Cui Jinzhong

    2009-01-01

    The issue of stability analysis of linear switching systems with discrete and distributed time delays is studied in this paper. An appropriate switching rule is applied to guarantee the stability of the whole switching system. Our results use a Riccati-type Lyapunov functional under a condition on the time delay, so switching systems with mixed delays are treated. A numerical example is given to illustrate the effectiveness of our results.

  1. Linear discriminant analysis of character sequences using occurrences of words

    KAUST Repository

    Dutta, Subhajit; Chaudhuri, Probal; Ghosh, Anil

    2014-01-01

    Classification of character sequences, where the characters come from a finite set, arises in disciplines such as molecular biology and computer science. For discriminant analysis of such character sequences, the Bayes classifier based on Markov models turns out to have class boundaries defined by linear functions of occurrences of words in the sequences. It is shown that for such classifiers based on Markov models with unknown orders, if the orders are estimated from the data using cross-validation, the resulting classifier has Bayes risk consistency under suitable conditions. Even when Markov models are not valid for the data, we develop methods for constructing classifiers based on linear functions of occurrences of words, where the word length is chosen by cross-validation. Such linear classifiers are constructed using ideas of support vector machines, regression depth, and distance weighted discrimination. We show that classifiers with linear class boundaries have certain optimal properties in terms of their asymptotic misclassification probabilities. The performance of these classifiers is demonstrated in various simulated and benchmark data sets.

  2. Linear discriminant analysis of character sequences using occurrences of words

    KAUST Repository

    Dutta, Subhajit

    2014-02-01

    Classification of character sequences, where the characters come from a finite set, arises in disciplines such as molecular biology and computer science. For discriminant analysis of such character sequences, the Bayes classifier based on Markov models turns out to have class boundaries defined by linear functions of occurrences of words in the sequences. It is shown that for such classifiers based on Markov models with unknown orders, if the orders are estimated from the data using cross-validation, the resulting classifier has Bayes risk consistency under suitable conditions. Even when Markov models are not valid for the data, we develop methods for constructing classifiers based on linear functions of occurrences of words, where the word length is chosen by cross-validation. Such linear classifiers are constructed using ideas of support vector machines, regression depth, and distance weighted discrimination. We show that classifiers with linear class boundaries have certain optimal properties in terms of their asymptotic misclassification probabilities. The performance of these classifiers is demonstrated in various simulated and benchmark data sets.
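
    A hedged, minimal sketch of a linear classifier built from occurrences of words (fixed-length character n-grams) is shown below in Python; the toy sequences are invented, and the cross-validated choice of word length and the regression-depth or distance-weighted variants discussed in the paper are omitted.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import LinearSVC

      # Toy character sequences from two classes; in practice the word length (here 3)
      # would be chosen by cross-validation as described in the abstract.
      sequences = ["ACGTACGGT", "ACGTTTACG", "TTGACCAGT", "GGTACCAGT"]
      labels = [0, 0, 1, 1]

      clf = make_pipeline(
          CountVectorizer(analyzer="char", ngram_range=(3, 3)),   # occurrences of length-3 words
          LinearSVC()                                             # linear class boundary in word counts
      )
      clf.fit(sequences, labels)
      print(clf.predict(["ACGTACGTT", "GGTACCAAT"]))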

  3. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    Science.gov (United States)

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  4. An introduction to linear ordinary differential equations using the impulsive response method and factorization

    CERN Document Server

    Camporesi, Roberto

    2016-01-01

    This book presents a method for solving linear ordinary differential equations based on the factorization of the differential operator. The approach for the case of constant coefficients is elementary, and only requires a basic knowledge of calculus and linear algebra. In particular, the book avoids the use of distribution theory, as well as the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The case of variable coefficients is addressed using Mammana’s result for the factorization of a real linear ordinary differential operator into a product of first-order (complex) factors, as well as a recent generalization of this result to the case of complex-valued coefficients.
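
    As a small, hedged illustration of the factorization idea for the constant-coefficient case (an invented example, not taken from the book), the operator D² − 3D + 2 factors as (D − 1)(D − 2), so a second-order equation can be solved by two first-order integrations, here carried out with sympy:

      import sympy as sp

      x = sp.symbols("x")
      u = sp.Function("u")
      y = sp.Function("y")
      f = sp.exp(3 * x)          # example right-hand side

      # Factor L = D^2 - 3D + 2 as (D - 1)(D - 2): set u = (D - 2) y,
      # solve the first-order equation (D - 1) u = f, then (D - 2) y = u.
      u_sol = sp.dsolve(sp.Eq(u(x).diff(x) - u(x), f), u(x)).rhs
      y_sol = sp.dsolve(sp.Eq(y(x).diff(x) - 2 * y(x), u_sol), y(x)).rhs
      print(sp.simplify(y_sol))   # general solution with two integration constants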

  5. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    Science.gov (United States)

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

  6. Non-linear time series analysis on flow instability of natural circulation under rolling motion condition

    International Nuclear Information System (INIS)

    Zhang, Wenchao; Tan, Sichao; Gao, Puzhen; Wang, Zhanwei; Zhang, Liansheng; Zhang, Hong

    2014-01-01

    Highlights: • Natural circulation flow instabilities in rolling motion are studied. • The method of non-linear time series analysis is used. • The non-linear evolution characteristics of the flow instability are analyzed. • Irregular complex flow oscillations are chaotic oscillations. • The effect of the rolling parameters on the threshold of chaotic oscillation is studied. - Abstract: Non-linear characteristics of natural circulation flow instabilities under rolling motion conditions were studied by the method of non-linear time series analysis. Experimental flow time series for different dimensionless powers and rolling parameters were analyzed based on phase space reconstruction theory. Attractors were reconstructed in phase space and the geometric invariants, including correlation dimension, Kolmogorov entropy and largest Lyapunov exponent, were determined. The non-linear characteristics of natural circulation flow instabilities under rolling motion conditions were then studied based on the results of the geometric invariant analysis. The results indicated that the values of the geometric invariants first increase and then decrease as the dimensionless power increases, indicating that the non-linear characteristics of the system first strengthen and then weaken. The irregular complex flow oscillation is a typical chaotic oscillation because the values of the geometric invariants are at their maximum there. The threshold of chaotic oscillation increases as the rolling frequency or rolling amplitude increases. The main factors influencing the non-linear characteristics of the natural circulation system under rolling motion are the thermal driving force, the flow resistance and the additional forces caused by the rolling motion. The non-linear characteristics of the natural circulation system under rolling motion change because the degree of feedback and coupling among these influencing factors changes when the dimensionless power or the rolling parameters change
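
    A minimal sketch of the phase-space reconstruction step underlying such geometric invariants is given below in Python; the delay, embedding dimension and toy signal are invented for illustration, and the correlation sum shown is only the first ingredient of a correlation-dimension or Lyapunov-exponent estimate.

      import numpy as np

      def delay_embed(x, dim, tau):
          # Phase-space reconstruction by time-delay embedding.
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

      def correlation_sum(Y, r):
          # Fraction of point pairs closer than r; the slope of log C(r) vs log r
          # estimates the correlation dimension of the reconstructed attractor.
          d = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
          iu = np.triu_indices(len(Y), k=1)
          return np.mean(d[iu] < r)

      # Toy oscillatory signal standing in for the measured flow time series.
      t = np.linspace(0.0, 80.0, 800)
      x = np.sin(t) + 0.5 * np.sin(2.2 * t) + 0.05 * np.random.randn(len(t))

      Y = delay_embed(x, dim=3, tau=10)
      for r in (0.1, 0.2, 0.4, 0.8):
          print(r, correlation_sum(Y, r))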

  7. Theoretical analysis of balanced truncation for linear switched systems

    DEFF Research Database (Denmark)

    Petreczky, Mihaly; Wisniewski, Rafal; Leth, John-Josef

    2012-01-01

    In this paper we present theoretical analysis of model reduction of linear switched systems based on balanced truncation, presented in [1,2]. More precisely, (1) we provide a bound on the estimation error using L2 gain, and (2) we provide a system theoretic interpretation of grammians and their singular values. … The main tool used for showing this independence is realization theory of linear switched systems. [1] H. R. Shaker and R. Wisniewski, "Generalized gramian framework for model/controller order reduction of switched systems", International Journal of Systems Science, Vol. 42, Issue 8, 2011, 1277-1291. [2] H. R. Shaker and R. Wisniewski, "Switched Systems Reduction Framework Based on Convex Combination of Generalized Gramians", Journal of Control Science and Engineering, 2009.

  8. Non-linear elastic thermal stress analysis with phase changes

    International Nuclear Information System (INIS)

    Amada, S.; Yang, W.H.

    1978-01-01

    A non-linear elastic thermal stress analysis with temperature-induced phase changes in the material is presented. An infinite plate (or body) with a circular hole (or tunnel) is subjected to thermal loading on its inner surface. The peak temperature around the hole reaches beyond the melting point of the material. The non-linear diffusion equation is solved numerically using the finite difference method. The material properties change rapidly at temperatures where changes of crystal structure and the solid-liquid transition occur. The elastic stresses induced by the transient non-homogeneous temperature distribution are calculated. The stresses change remarkably when the phase changes occur, and residual stresses remain in the plate after one cycle of thermal loading. (Auth.)

  9. Comparative analysis of linear motor geometries for Stirling coolers

    Science.gov (United States)

    R, Rajesh V.; Kuzhiveli, Biju T.

    2017-12-01

    Compared to rotary motor driven Stirling coolers, linear motor coolers are characterized by small volume and long life, making them more suitable for space and military applications. The motor design and operational characteristics have a direct effect on the operation of the cooler. In this perspective, ample scope exists for understanding the behaviour of linear motor systems. In the present work, the authors compare and analyze different moving-magnet linear motor geometries to finalize the most favourable one for Stirling coolers. The required axial force in the linear motors is generated by the interaction of the magnetic fields of a current-carrying coil and a permanent magnet. The compact size, the commercial availability of permanent magnets and the low weight requirement of the system are among the constraints for the design. The finite element analysis performed using Maxwell software serves as the basic tool to analyze the magnet movement, the flux distribution in the air gap and the magnetic saturation levels in the core. A number of material combinations are investigated for the core before finalizing the design. The effect of varying the core geometry on the flux produced in the air gap is also analyzed. The electromagnetic analysis of the motor indicates that the permanent magnet height ought to be chosen such that, in the balanced position, the magnet is under the influence of the electromagnetic field of the current-carrying coil as well as the outer core. This is necessary so that a sufficient amount of thrust force is developed by efficient utilisation of the air-gap flux density. Also, the outer core ends need to be designed to provide enough room for the magnet movement under the operating conditions.

  10. Linearized spectrum correlation analysis for line emission measurements.

    Science.gov (United States)

    Nishizawa, T; Nornberg, M D; Den Hartog, D J; Sarff, J S

    2017-08-01

    A new spectral analysis method, Linearized Spectrum Correlation Analysis (LSCA), for charge exchange and passive ion Doppler spectroscopy is introduced to provide a means of measuring fast spectral line shape changes associated with ion-scale micro-instabilities. This analysis method is designed to resolve the fluctuations in the emission line shape from a stationary ion-scale wave. The method linearizes the fluctuations around a time-averaged line shape (e.g., Gaussian) and subdivides the spectral output channels into two sets to reduce contributions from uncorrelated fluctuations without averaging over the fast time dynamics. In principle, small fluctuations in the parameters used for a line shape model can be measured by evaluating the cross spectrum between different channel groupings to isolate a particular fluctuating quantity. High-frequency ion velocity measurements (100-200 kHz) were made by using this method. We also conducted simulations to compare LSCA with a moment analysis technique under a low photon count condition. Both experimental and synthetic measurements demonstrate the effectiveness of LSCA.

  11. Robust Linear Models for Cis-eQTL Analysis.

    Science.gov (United States)

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
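
    A minimal sketch of the idea (not the authors' implementation): the same allelic-dosage model is fitted with ordinary least squares and with an M-estimator such as Huber's, using simulated heavy-tailed expression noise. The variable names and simulated data are illustrative only.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    genotype = rng.integers(0, 3, n)          # allelic dosage 0/1/2
    covariate = rng.normal(size=n)            # e.g. a technical covariate
    expression = 0.4 * genotype + 0.2 * covariate + rng.standard_t(df=3, size=n)

    X = sm.add_constant(np.column_stack([genotype, covariate]))

    ols_fit = sm.OLS(expression, X).fit()                              # conventional linear model
    rlm_fit = sm.RLM(expression, X, M=sm.robust.norms.HuberT()).fit()  # robust alternative

    print("OLS beta(genotype):", ols_fit.params[1])
    print("RLM beta(genotype):", rlm_fit.params[1])
    ```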

  12. Using Linear Algebra to Introduce Computer Algebra, Numerical Analysis, Data Structures and Algorithms (and To Teach Linear Algebra, Too).

    Science.gov (United States)

    Gonzalez-Vega, Laureano

    1999-01-01

    Using a Computer Algebra System (CAS) to help with the teaching of an elementary course in linear algebra can be one way to introduce computer algebra, numerical analysis, data structures, and algorithms. Highlights the advantages and disadvantages of this approach to the teaching of linear algebra. (Author/MM)

  13. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) applied to the collected data gave rise to 10 factors with 57 items affecting construction performance. The ten key performance factors (KPIs) are: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental and technology aspects. It is important to build a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can effectively plan and implement a performance development plan that matches the mission and vision of the company.
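
    The extraction step can be sketched with an off-the-shelf factor-analysis routine (a generic illustration; the study's own EFA settings, software and survey data are not reproduced here).

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical survey matrix: rows = respondents, columns = the 57 items
    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 57))            # placeholder for real survey scores

    fa = FactorAnalysis(n_components=10, rotation="varimax", random_state=0)
    fa.fit(X)

    loadings = fa.components_.T               # 57 items x 10 factors
    # Items loading strongly (|loading| > 0.4) on each extracted factor
    for k in range(loadings.shape[1]):
        items = np.where(np.abs(loadings[:, k]) > 0.4)[0]
        print(f"factor {k + 1}: items {items}")
    ```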

  14. An implementation analysis of the linear discontinuous finite element method

    International Nuclear Information System (INIS)

    Becker, T. L.

    2013-01-01

    This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low - and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory constraints against any
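
    The first implementation question, how to solve the 4 x 4 system for each mesh element, can be illustrated with a small sketch comparing Cramer's rule against a LAPACK-backed solve (illustrative only; the paper's streamlined Gaussian elimination is not reproduced).

    ```python
    import numpy as np

    def cramer_4x4(A, b):
        """Solve a 4x4 system with Cramer's rule (illustrative, not optimized)."""
        detA = np.linalg.det(A)
        x = np.empty(4)
        for i in range(4):
            Ai = A.copy()
            Ai[:, i] = b          # replace column i by the right-hand side
            x[i] = np.linalg.det(Ai) / detA
        return x

    A = np.array([[4., 1., 0., 0.],
                  [1., 4., 1., 0.],
                  [0., 1., 4., 1.],
                  [0., 0., 1., 4.]])
    b = np.array([1., 2., 3., 4.])

    print(cramer_4x4(A, b))
    print(np.linalg.solve(A, b))   # LAPACK-backed Gaussian elimination (dgesv)
    ```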

  15. An implementation analysis of the linear discontinuous finite element method

    Energy Technology Data Exchange (ETDEWEB)

    Becker, T. L. [Bechtel Marine Propulsion Corporation, Knolls Atomic Power Laboratory, P.O. Box 1072, Schenectady, NY 12301-1072 (United States)

    2013-07-01

    This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low - and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory

  16. Modeling and analysis of linearized wheel-rail contact dynamics

    International Nuclear Information System (INIS)

    Soomro, Z.

    2014-01-01

    The dynamics of railway vehicles are nonlinear and depend upon several factors, including vehicle speed, normal load and adhesion level. The presence of contaminants on the railway track makes them unpredictable too. Therefore, in order to develop an effective control strategy, it is important to analyze the effect of each factor on the dynamic response thoroughly. In this paper a linearized model of a railway wheel-set is developed and then analyzed by varying the speed and adhesion level while keeping the normal load constant. A wheel-set is the wheel-axle assembly of a railroad car. Contact-patch analysis is the study of the deformation of solids that touch each other at one or more points. (author)

  17. Stability analysis and stabilization strategies for linear supply chains

    Science.gov (United States)

    Nagatani, Takashi; Helbing, Dirk

    2004-04-01

    Due to delays in the adaptation of production or delivery rates, supply chains can be dynamically unstable with respect to perturbations in the consumption rate, which is known as “bull-whip effect”. Here, we study several conceivable production strategies to stabilize supply chains, which is expressed by different specifications of the management function controlling the production speed in dependence of the stock levels. In particular, we will investigate, whether the reaction to stock levels of other producers or suppliers has a stabilizing effect. We will also demonstrate that the anticipation of future stock levels can stabilize the supply system, given the forecast horizon τ is long enough. To show this, we derive linear stability conditions and carry out simulations for different control strategies. The results indicate that the linear stability analysis is a helpful tool for the judgement of the stabilization effect, although unexpected deviations can occur in the non-linear regime. There are also signs of phase transitions and chaotic behavior, but this remains to be investigated more thoroughly in the future.

  18. Robust linear discriminant analysis with distance based estimators

    Science.gov (United States)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate this problem, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and a real-data study were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with the existing robust LDR.
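
    The motivation for replacing the classical estimators can be seen in a small sketch (not the MVV estimator itself): a single gross outlier shifts the sample mean used by classical LDA far more than a median-type location estimate.

    ```python
    import numpy as np

    # Illustration of why classical LDA is outlier-sensitive: the sample mean
    # (and hence the pooled covariance) shifts strongly when one gross outlier
    # is added, while a coordinate-wise median barely moves.
    rng = np.random.default_rng(5)
    group = rng.normal(loc=[0, 0], scale=1.0, size=(50, 2))
    contaminated = np.vstack([group, [[25.0, 25.0]]])     # one extreme point

    print("mean   clean:", group.mean(axis=0),
          " contaminated:", contaminated.mean(axis=0))
    print("median clean:", np.median(group, axis=0),
          " contaminated:", np.median(contaminated, axis=0))
    ```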

  19. Design and analysis approach for linear aerospike nozzle

    International Nuclear Information System (INIS)

    Khan, S.U.; Khan, A.A.; Munir, A.

    2014-01-01

    The paper presents an aerodynamic design of a simplified linear aerospike nozzle and a detailed exhaust flow analysis with no spike truncation. An analytical method with isentropic planar flow was used to generate the nozzle contour in MATLAB. The developed code produces a number of outputs, comprising the nozzle wall profile, flow properties along the nozzle wall, thrust coefficient, thrust, as well as the amount of nozzle truncation. Results acquired from the design code and numerical analyses are compared to observe differences. The numerical analysis adopted an inviscid model, carried out with commercially available and reliable computational fluid dynamics (CFD) software. Use of the developed code would assist readers in performing quick analysis of different aerodynamic design parameters for the aerospike nozzle, which has tremendous scope for application in future launch vehicles. Keywords: Rocket propulsion, Aerospike Nozzle, Control Design, Computational Fluid Dynamics. (author)

  20. Linear and nonlinear analysis of fluid slosh dampers

    Science.gov (United States)

    Sayar, B. A.; Baumgarten, J. R.

    1982-11-01

    A vibrating structure and a container partially filled with fluid are considered coupled in a free vibration mode. To simplify the mathematical analysis, a pendulum model to duplicate the fluid motion and a mass-spring dashpot representing the vibrating structure are used. The equations of motion are derived by Lagrange's energy approach and expressed in parametric form. For a wide range of parametric values the logarithmic decrements of the main system are calculated from theoretical and experimental response curves in the linear analysis. However, for the nonlinear analysis the theoretical and experimental response curves of the main system are compared. Theoretical predictions are justified by experimental observations with excellent agreement. It is concluded finally that for a proper selection of design parameters, containers partially filled with viscous fluids serve as good vibration dampers.

  1. Calculation of elastic-plastic strain ranges for fatigue analysis based on linear elastic stresses

    International Nuclear Information System (INIS)

    Sauer, G.

    1998-01-01

    Fatigue analysis requires that the maximum strain ranges be known. These strain ranges are generally computed from linear elastic analysis. The elastic strain ranges are enhanced by a factor Ke to obtain the total elastic-plastic strain range. The reliability of the fatigue analysis depends on the quality of this factor. Formulae for calculating the Ke factor are proposed. A beam is introduced as a computational model for determining the elastic-plastic strains. The beam is loaded by the elastic stresses of the real structure. The elastic-plastic strains of the beam are compared with the beam's elastic strains. This comparison furnishes explicit expressions for the Ke factor. The Ke factor is tested by means of seven examples. (orig.)

  2. Analysis of γ spectra in airborne radioactivity measurements using multiple linear regressions

    International Nuclear Information System (INIS)

    Bao Min; Shi Quanlin; Zhang Jiamei

    2004-01-01

    This paper describes the calculation of the net peak counts of the nuclide 137Cs at 662 keV in γ spectra from airborne radioactivity measurements using multiple linear regression. A mathematical model is established by analyzing every factor that contributes to the Cs peak counts in the spectra, and a multiple linear regression function is constructed. The calculation adopts stepwise regression, and insignificant factors are eliminated by an F-test. The regression coefficients and their uncertainties are calculated using least-squares estimation, from which the net Cs peak counts and their uncertainty are obtained. The analysis results for an experimental spectrum are presented. The influence of energy shift and energy resolution on the result is discussed. In comparison with the spectrum stripping method, the multiple linear regression method does not require stripping ratios, the result depends only on the counts in the Cs peak, and the uncertainty of the calculation is reduced. (authors)
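
    A generic least-squares sketch of the idea (hypothetical design matrix and data; the paper's actual regressors for the 662 keV region are not reproduced): the regression coefficients and their covariance give the net-peak estimate and its uncertainty.

    ```python
    import numpy as np

    # Hypothetical design matrix: each column is a factor contributing to the
    # counts in the 662 keV region (e.g. continuum level, interfering peaks);
    # y is the observed gross counts in that region for a set of spectra.
    rng = np.random.default_rng(2)
    X = np.column_stack([np.ones(30), rng.normal(size=(30, 3))])
    beta_true = np.array([500., 40., -15., 8.])
    y = X @ beta_true + rng.normal(scale=5., size=30)

    # Least-squares estimate and its covariance (used for the uncertainty)
    beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = residuals[0] / (len(y) - rank)
    cov = sigma2 * np.linalg.inv(X.T @ X)

    print(beta, np.sqrt(np.diag(cov)))
    ```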

  3. Non linear seismic analysis of charge/discharge machine

    International Nuclear Information System (INIS)

    Dostal, M.; Trbojevic, V.M.; Nobile, M.

    1987-01-01

    The main conclusions of the seismic analysis of the Latina CDM are: i. The charge machine has been demonstrated to be capable of withstanding the effects of a 0.1 g earthquake. Stresses and displacements were all within allowable limits and the stability criteria were fully satisfied for all positions of the cross-travel bogie on the gantry. ii. Movements due to loss of friction between the cross-travel bogie wheels and the rail were found to be small, i.e. less than 2 mm for all cases considered. The modes of rocking of the fixed and hinged legs preclude any possibility of excessive movement between the long-travel bogie wheels and the rail. iii. The non-linear analysis incorporating contact and friction has given more realistic results than any of the linear verification analyses. The method of analysis indicates that even the larger structures can be efficiently solved on a minicomputer for a long forcing input (16 s). (orig.)

  4. Analysis of the linear induction motor in transient operation

    Energy Technology Data Exchange (ETDEWEB)

    Gentile, G; Rotondale, N; Scarano, M

    1987-05-01

    The paper deals with the analysis of a bilateral linear induction motor in transient operation. We have considered an impressed-voltage, one-dimensional model which takes end effects into account. The real winding distribution of the armature has been represented as a lumped-parameter system. By using the space-vector methodology, the partial differential equation of the sheet is solved by the variable separation method. It is therefore possible to arrange a system of ordinary differential equations in which the unknown quantities are the space vectors of the air-gap flux density and the sheet currents. Finally, we have analyzed the characteristic quantities for a no-load starting of small power motors.

  5. Relatively Inexact Proximal Point Algorithm and Linear Convergence Analysis

    Directory of Open Access Journals (Sweden)

    Ram U. Verma

    2009-01-01

    Based on a notion of relatively maximal (m)-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, while generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis based on this new model is simpler and more compact than that of the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.

  6. Linear and Nonlinear Multiset Canonical Correlation Analysis (invited talk)

    DEFF Research Database (Denmark)

    Hilger, Klaus Baggesen; Nielsen, Allan Aasbjerg; Larsen, Rasmus

    2002-01-01

    This paper deals with decomposition of multiset data. Friedman's alternating conditional expectations (ACE) algorithm is extended to handle multiple sets of variables of different mixtures. The new algorithm finds estimates of the optimal transformations of the involved variables that maximize the sum of the pair-wise correlations over all sets. The new algorithm is termed multi-set ACE (MACE) and can find multiple orthogonal eigensolutions. MACE is a generalization of linear multiset canonical correlation analysis (MCCA). It handles multivariate multisets of arbitrary mixtures of both continuous…

  7. Comparison of equivalent linear and non linear methods on ground response analysis: case study at West Bangka site

    International Nuclear Information System (INIS)

    Eko Rudi Iswanto; Eric Yee

    2016-01-01

    Within the framework of identifying NPP sites, site surveys are performed in West Bangka (WB), Bangka-Belitung Island Province. Ground response analysis of a potential site has been carried out using peak strain profiles and peak ground acceleration. The objective of this research is to compare the Equivalent Linear (EQL) and Non-Linear (NL) methods of ground response analysis for the selected NPP site (West Bangka) using the Deep Soil software. The equivalent linear method is widely used because it requires soil data in a simple form and has a short computation time. On the other hand, the non-linear method is capable of representing the actual soil behaviour by considering non-linear soil parameters. The results showed that the EQL method has trends similar to the NL method. At the surface layer, the acceleration values for the EQL and NL methods are 0.425 g and 0.375 g, respectively. The NL method is more reliable in capturing higher frequencies of spectral acceleration compared to the EQL method. (author)

  8. On macroeconomic values investigation using fuzzy linear regression analysis

    Directory of Open Access Journals (Sweden)

    Richard Pospíšil

    2017-06-01

    The theoretical background for the abstract formalization of the vague phenomena of complex systems is fuzzy set theory. In the paper, vague data are defined as specialized fuzzy sets (fuzzy numbers), and a fuzzy linear regression model is described as a fuzzy function with fuzzy numbers as vague parameters. To identify the fuzzy coefficients of the model, a genetic algorithm is used. The linear approximation of the vague function, together with its possibility area, is expressed analytically and graphically. A suitable application is performed in the tasks of time series fuzzy regression analysis. The time trend and seasonal cycles, including their possibility areas, are calculated and expressed. The examples are drawn from the field of economics, namely the time development of unemployment, agricultural production and construction in the Czech Republic between 2009 and 2011. The results are shown in the form of fuzzy regression models of the time series variables. For the period 2009-2011, the analysis assumptions about the seasonal behaviour of the variables and the relationships between them were confirmed; in 2010, the system behaved in a fuzzier way and the relationships between the variables were vaguer, which has many causes, from the different elasticities of demand, through state interventions, to globalization and transnational impacts.

  9. Linear Stability Analysis of an Acoustically Vaporized Droplet

    Science.gov (United States)

    Siddiqui, Junaid; Qamar, Adnan; Samtaney, Ravi

    2015-11-01

    Acoustic droplet vaporization (ADV) is the phase-transition phenomenon in which a superheated liquid (dodecafluoropentane, C5F12) droplet is converted into a gaseous bubble by a high-intensity acoustic pulse. This approach was first studied for imaging applications and is applicable in several therapeutic areas such as gas embolotherapy, thrombus dissolution, and drug delivery. High-speed imaging and theoretical modeling of ADV have elucidated several physical aspects, ranging from bubble nucleation to its subsequent growth. Surface instabilities are known to exist and are considered responsible for the evolving bubble shapes (non-spherical growth, bubble splitting and bubble-droplet encapsulation). We present a linear stability analysis of the dynamically evolving interfaces of an acoustically vaporized micro-droplet (liquid A) in an infinite pool of a second liquid (liquid B). We propose a thermal ADV model for the base state. The linear analysis utilizes spherical harmonics (Y_n^m, of degree n and order m) and, under various physical assumptions, results in a time-dependent ODE for the perturbed interface amplitudes (one at the vapor/liquid A interface and the other at the liquid A/liquid B interface). The perturbation amplitudes are found to grow exponentially and do not depend on m. Supported by KAUST Baseline Research Funds.

  10. The flow analysis of supercavitating cascade by linear theory

    Energy Technology Data Exchange (ETDEWEB)

    Park, E.T. [Sung Kyun Kwan Univ., Seoul (Korea, Republic of); Hwang, Y. [Seoul National Univ., Seoul (Korea, Republic of)

    1996-06-01

    In order to reduce damage due to cavitation effects and to improve the performance of fluid machinery, supercavitation around the cascade and the hydraulic characteristics of the supercavitating cascade must be analyzed accurately. Studies of the effects of cavitation on fluid machinery, and analysis of the performance of supercavitating hydrofoils through the various elements governing the flow field, are therefore critically important. In this study, experimental results were compared with results computed from linear theory using the singularity method. Specifically, singularities such as sources and vortices were distributed on the hydrofoil and the free streamline to analyze the two-dimensional flow field of the supercavitating cascade; the governing equations of the flow field were derived, and the hydraulic characteristics of the cascade were calculated by numerical analysis of the governing equations. 7 refs., 6 figs.

  11. Slope Safety Factor Calculations With Non-Linear Yield Criterion Using Finite Elements

    DEFF Research Database (Denmark)

    Clausen, Johan; Damkilde, Lars

    2006-01-01

    The factor of safety for a slope is calculated with the finite element method using a non-linear yield criterion of the Hoek-Brown type. The parameters of the Hoek-Brown criterion are found from triaxial test data. Parameters of the linear Mohr-Coulomb criterion are calibrated to the same triaxial test data. … As the triaxial tests are carried out at much higher stress levels than those present in a slope failure, this leads to the conclusion that the use of the non-linear criterion leads to a safer slope design…

  12. Validation of head scatter factor for an Elekta synergy platform linear accelerator

    International Nuclear Information System (INIS)

    Johannes, N.B.

    2013-07-01

    A semi-empirical method has been proposed and developed to model and compute head (collimator) scatter factors for the 6 and 15 MV photon beams from the Elekta Synergy platform linear accelerator at the radiation oncology centre of the Sweden-Ghana Medical Centre Limited, East Legon Hills, Accra. The proposed model was based on a two-dimensional Gaussian distribution, whose output was compared to measured head scatter factor data for the linear accelerator obtained during commissioning of the teletherapy machine. The two-dimensional Gaussian distribution model used the physical specifications and configuration of the head unit (collimator system) of the linear accelerator, which were obtained from the user manual provided by the manufacturer. The algorithm for the model was implemented in Matlab in the Microsoft Windows environment. The model was applied to both square and rectangular fields, and the output was compared with the corresponding measured data. The comparisons for the square fields were used to establish an error term in the Gaussian distribution function. The error term was determined by plotting the difference between the output factors from Matlab and the corresponding measured data as a function of the side of a square field (equivalent square field). The correlation equation of the curve obtained was chosen as the error term, which was incorporated into the Gaussian distribution function. This was repeated for the two photon beam energies (6 and 15 MV). The refined Gaussian distributions were then used to determine head scatter factors for square and rectangular fields. For the rectangular fields, Sterling's proposed formula was used to find the equivalent square fields used in the error terms of the proposed and developed model. The output of the 2D Gaussian distribution without
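
    A toy version of such a model is sketched below (all numbers are assumptions, not the commissioned values or the paper's fitted error term): an extra-focal 2D Gaussian source is integrated over the open field and normalised to the 10 cm x 10 cm reference field to give a head scatter factor.

    ```python
    from math import erf, sqrt

    def gauss_area(fx, fy, sigma):
        """Fraction of a centred 2D Gaussian source seen through an fx x fy opening (mm)."""
        def frac(w):
            return erf(w / (2.0 * sigma * sqrt(2.0)))
        return frac(fx) * frac(fy)

    def head_scatter_factor(fx, fy, sigma=15.0, scatter_weight=0.05):
        """Toy head-scatter model: primary plus a Gaussian extra-focal component,
        normalised to a 100 mm x 100 mm reference field.  sigma and
        scatter_weight are assumed values for illustration only."""
        s = 1.0 + scatter_weight * gauss_area(fx, fy, sigma)
        s_ref = 1.0 + scatter_weight * gauss_area(100.0, 100.0, sigma)
        return s / s_ref

    for side in (40, 100, 200, 300):
        print(side, round(head_scatter_factor(side, side), 4))
    ```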

  13. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program, which can perform the Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses…

  14. Design, analysis and fabrication of a linear permanent magnet ...

    Indian Academy of Sciences (India)

    MONOJIT SEAL

    Keywords: Linear permanent magnet synchronous machine; LPMSM—fabrication; design optimisation; finite-element … induction motor (LIM) prototype was patented in 1890 [1]. Since then, linear … Also, for manual winding, more slot area is allotted to …

  15. Analysis of magnetohydrodynamic flow in linear induction EM pump

    International Nuclear Information System (INIS)

    Geun Jong Yoo; Choi, H.K.; Eun, J.J.; Bae, Y.S.

    2005-01-01

    Numerical analysis is performed for the magnetic and magnetohydrodynamic (MHD) flow fields in a linear induction type electromagnetic (EM) pump. A finite volume method is applied to solve the magnetic field governing equations and the Navier-Stokes equations. Vector and scalar potential methods are adopted to obtain the electric and magnetic fields and the resulting Lorentz force in solving the Maxwell equations. The magnetic field and velocity distributions are found to be influenced by the phase of the applied electric current. Computational results indicate that the magnetic flux distribution with changing phase of the input electric current is characterized by pairs of counter-rotating closed loops. The velocity distributions are affected by the intensity of the Lorentz force. The governing equations for the magnetic and flow fields are only semi-coupled in this study; therefore, further study with fully coupled governing equations is required. (authors)

  16. Longitudinal Jitter Analysis of a Linear Accelerator Electron Gun

    Directory of Open Access Journals (Sweden)

    MingShan Liu

    2016-11-01

    We present measurements and analysis of the longitudinal timing jitter of the Beijing Electron Positron Collider (BEPCII) linear accelerator electron gun. We simulated the longitudinal jitter effect of the gun using PARMELA to evaluate beam performance, including beam profile, average energy, energy spread, and X and Y emittances. The maximum percentage differences of these beam parameters are calculated to be 100%, 13.27%, 42.24%, 65.01% and 86.81%, respectively. Due to this, the bunching efficiency is reduced to 54%. However, the longitudinal phase difference of the reference particle was 9.89°. The simulation results are in agreement with tests and are helpful for optimizing the beam parameters by tuning the trigger timing of the gun during the bunching process.

  17. Weibull and lognormal Taguchi analysis using multiple linear regression

    International Nuclear Information System (INIS)

    Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F.

    2015-01-01

    The paper provides reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation and (3) to perform the ANOVA analysis on both the robust and the normal operational Weibull family. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper the issues of estimating a Weibull family by using a design of experiments (DOE) are first addressed by using an L9(3^4) orthogonal array (OA) in both the TM and the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method's efficiency is shown through its application to the used OA and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family for both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform their ANOVA analysis in TM and ALT analysis. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level.
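
    The link between the Weibull parameters and a linear regression can be illustrated with the standard median-rank linearization (a generic sketch, not the paper's Taguchi/ALT procedure).

    ```python
    import numpy as np

    def weibull_mrr(failure_times):
        """Median-rank regression estimate of the Weibull (beta, eta) using the
        linearization ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta)."""
        t = np.sort(np.asarray(failure_times, dtype=float))
        n = len(t)
        ranks = np.arange(1, n + 1)
        F = (ranks - 0.3) / (n + 0.4)               # Bernard's median-rank approximation
        x = np.log(t)
        y = np.log(-np.log(1.0 - F))
        beta, intercept = np.polyfit(x, y, 1)       # slope = shape parameter beta
        eta = np.exp(-intercept / beta)             # scale parameter eta
        return beta, eta

    print(weibull_mrr([120., 190., 260., 320., 410., 550.]))
    ```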

  18. On the dynamic analysis of piecewise-linear networks

    OpenAIRE

    Heemels, W.P.M.H.; Camlibel, M.K.; Schumacher, J.M.

    2002-01-01

    Piecewise-linear (PL) modeling is often used to approximate the behavior of nonlinear circuits. One of the possible PL modeling methodologies is based on the linear complementarity problem, and this approach has already been used extensively in the circuits and systems community for static networks. In this paper, the object of study will be dynamic electrical circuits that can be recast as linear complementarity systems, i.e., as interconnections of linear time-invariant differential equatio...

  19. Spectral analysis of linear relations and degenerate operator semigroups

    International Nuclear Information System (INIS)

    Baskakov, A G; Chernyshov, K I

    2002-01-01

    Several problems of the spectral theory of linear relations in Banach spaces are considered. Linear differential inclusions in a Banach space are studied. The construction of the phase space and solutions is carried out with the help of the spectral theory of linear relations, ergodic theorems, and degenerate operator semigroups

  20. Three dimensional finite element linear analysis of reinforced concrete structures

    International Nuclear Information System (INIS)

    Inbasakaran, M.; Pandarinathan, V.G.; Krishnamoorthy, C.S.

    1979-01-01

    A twenty-noded isoparametric reinforced concrete solid element for the three-dimensional linear elastic stress analysis of reinforced concrete structures is presented. The reinforcement is directly included as an integral part of the element, thus facilitating discretization of the structure independent of the orientation of the reinforcement. The concrete stiffness is evaluated using a 3 x 3 x 3 Gauss integration rule, and the steel stiffness is evaluated numerically by considering three Gaussian points along the length of the reinforcement. The numerical integration for the steel stiffness necessitates the conversion of the global coordinates of the Gaussian points to non-dimensional local coordinates, and this is done by the Newton-Raphson iterative method. Subroutines for the above formulation have been developed and added to the SAP and STAP routines for solving the examples. The validity of the reinforced concrete element is verified by comparison of results from the finite element analysis with analytical results. It is concluded that this finite element model provides a valuable analytical tool for the three-dimensional elastic stress analysis of concrete structures like beams curved in plan and nuclear containment vessels. (orig.)

  1. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    … Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high- (even infinite-) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis…

  2. Application of linearized model to the stability analysis of the pressurized water reactor

    International Nuclear Information System (INIS)

    Li Haipeng; Huang Xiaojin; Zhang Liangju

    2008-01-01

    A Linear Time-Invariant model of the Pressurized Water Reactor is formulated through linearization of the nonlinear model. The model simulation results show that the linearized model agrees well with the nonlinear model under small perturbations. Based upon Lyapunov's first method, the linearized model is applied to the stability analysis of the Pressurized Water Reactor. The calculation results show that the linearization methodology is a convenient and feasible approach to stability analysis. (authors)

  3. Time-dependent tumour repopulation factors in linear-quadratic equations

    International Nuclear Information System (INIS)

    Dale, R.G.

    1989-01-01

    Tumour proliferation effects can be tentatively quantified in the linear-quadratic (LQ) method by the incorporation of a time-dependent factor, the magnitude of which is related both to the value of α in the tumour α/β ratio and to the tumour doubling time. The method, the principle of which has been suggested by a number of other workers for use in fractionated therapy, is here applied to both fractionated and protracted radiotherapy treatments, and examples of its uses are given. Assuming that repopulation of late-responding tissues is not significant during normal treatment strategies, the consequences are examined in terms of the behaviour of the Extrapolated Response Dose (ERD). Although the numerical credibility of the analysis used here depends on the reliability of the LQ model, and on the assumption that the rate of repopulation is constant throughout treatment, the predictions are consistent with other lines of reasoning which point to the advantages of accelerated hyperfractionation. In particular, it is demonstrated that accelerated fractionation represents a relatively 'forgiving' treatment which enables tumours of a variety of sensitivities and clonogenic growth rates to be treated moderately successfully, even though the critical cellular parameters may not be known in individual cases. The analysis also suggests that tumours which combine low intrinsic sensitivity with a very short doubling time might be better controlled by low dose-rate continuous therapy than by almost any form of accelerated hyperfractionation. (author). 24 refs.; 5 figs
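
    The kind of time-dependent correction described above can be illustrated with the commonly used LQ expression with a repopulation term (a generic textbook form, not necessarily the paper's exact formulation; all parameter values below are assumptions for illustration).

    ```python
    import numpy as np

    def bed_with_repopulation(n, d, alpha_beta, alpha, t_total, t_kick, t_pot):
        """Biologically effective dose for n fractions of size d (Gy) with a
        repopulation term: BED = n*d*(1 + d/(alpha/beta))
        - ln(2)*(T - Tk)/(alpha*Tp) for overall time T beyond kick-off time Tk."""
        bed = n * d * (1 + d / alpha_beta)
        if t_total > t_kick:
            bed -= np.log(2) * (t_total - t_kick) / (alpha * t_pot)
        return bed

    # Same 30 x 2 Gy schedule delivered in 40 days vs. accelerated to 26 days
    for T in (40, 26):
        print(T, round(bed_with_repopulation(30, 2.0, alpha_beta=10.0, alpha=0.3,
                                             t_total=T, t_kick=21, t_pot=3.0), 2))
    ```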

  4. Frame sequences analysis technique of linear objects movement

    Science.gov (United States)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses the analysis of the motion of linear objects in a two-dimensional plane. The complexity of this problem increases when a frame contains numerous objects whose images may overlap. This study uses a sequence of 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro with subsequent approximation of the data obtained using the Hill equation.

  5. Frequency prediction by linear stability analysis around mean flow

    Science.gov (United States)

    Bengana, Yacine; Tuckerman, Laurette

    2017-11-01

    The frequency of certain limit cycles resulting from a Hopf bifurcation, such as the von Karman vortex street, can be predicted by linear stability analysis around their mean flows. Barkley (2006) has shown this to yield an eigenvalue whose real part is zero and whose imaginary part matches the nonlinear frequency. This property was named RZIF by Turton et al. (2015); moreover they found that the traveling waves (TW) of thermosolutal convection have the RZIF property. They explained this as a consequence of the fact that the temporal Fourier spectrum is dominated by the mean flow and first harmonic. We could therefore consider that only the first mode is important in the saturation of the mean flow as presented in the Self-Consistent Model (SCM) of Mantic-Lugo et al. (2014). We have implemented a full Newton's method to solve the SCM for thermosolutal convection. We show that while the RZIF property is satisfied far from the threshold, the SCM model reproduces the exact frequency only very close to the threshold. Thus, the nonlinear interaction of only the first mode with itself is insufficiently accurate to estimate the mean flow. Our next step will be to take into account higher harmonics and to apply this analysis to the standing waves, for which RZIF does not hold.

  6. Log Linear Models for Religious and Social Factors affecting the practice of Family Planning Methods in Lahore, Pakistan

    Directory of Open Access Journals (Sweden)

    Farooq Ahmad

    2006-01-01

    This is a cross-sectional study based on 304 households (couples with wives aged less than 48 years), chosen from an urban locality (Lahore city). Fourteen religious, demographic and socio-economic factors of a categorical nature, such as husband's education, wife's education, husband's monthly income, occupation of husband, household size, husband-wife discussion, number of living children, desire for more children, duration of marriage, present age of wife, age of wife at marriage, offering of prayers, political view, and religious decision-making, were taken to understand acceptance of family planning. Multivariate log-linear analysis was applied to identify association patterns and interrelationships among the factors. The logit model was applied to explore the relationship between the predictor factors and the dependent factor, and to explore which factors most strongly influence acceptance of family planning. The log-linear analysis demonstrates that preference for contraceptive use was consistently associated with the factors husband-wife discussion, desire for more children, number of children, political view and duration of married life, while husband's monthly income, occupation of husband, age of wife at marriage and offering of prayers provided no statistical explanation of the adoption of family planning methods.

  7. Non linear structures seismic analysis by modal synthesis

    International Nuclear Information System (INIS)

    Aita, S.; Brochard, D.; Guilbaud, D.; Gibert, R.J.

    1987-01-01

    Structures submitted to a seismic excitation may present a large-amplitude response which induces non-linear behaviour. These non-linearities have an important influence on the response of the structure. Even in this case (local shocks) the modal synthesis method remains attractive. In this paper we present a way of taking into account a local non-linearity (shock between structures) in the seismic response of structures, by using the modal synthesis method [fr

  8. Factorization of a class of almost linear second-order differential equations

    International Nuclear Information System (INIS)

    Estevez, P G; Kuru, S; Negro, J; Nieto, L M

    2007-01-01

    A general type of almost linear second-order differential equations, which are directly related to several interesting physical problems, is characterized. The solutions of these equations are obtained using the factorization technique, and their non-autonomous invariants are also found by means of scale transformations

  9. Cryptanalysis of DES with a reduced number of rounds: Sequences of linear factors in block ciphers

    NARCIS (Netherlands)

    D. Chaum (David); J.-H. Evertse (Jan-Hendrik)

    1985-01-01

    A block cipher is said to have a linear factor if, for all plaintexts and keys, there is a fixed non-empty set of key bits whose simultaneous complementation leaves the exclusive-or sum of a fixed non-empty set of ciphertext bits unchanged.

  10. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  11. Classification of acute stress using linear and non-linear heart rate variability analysis derived from sternal ECG

    DEFF Research Database (Denmark)

    Tanev, George; Saadi, Dorthe Bodholt; Hoppe, Karsten

    2014-01-01

    Chronic stress detection is an important factor in predicting and reducing the risk of cardiovascular disease. This work is a pilot study with a focus on developing a method for detecting short-term psychophysiological changes through heart rate variability (HRV) features. The purpose of this pilot study is to establish and to gain insight into a set of features that could be used to detect psychophysiological changes that occur during chronic stress. This study elicited four different types of arousal by images, sounds, mental tasks and rest, and classified them using linear and non-linear HRV…
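
    A minimal sketch of the linear (time-domain) part of such a feature set, computed from RR intervals (illustrative only; the study's full feature set also includes frequency-domain and non-linear measures):

    ```python
    import numpy as np

    def hrv_time_domain(rr_ms):
        """Basic time-domain HRV features from RR intervals in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        return {
            "mean_hr_bpm": 60000.0 / rr.mean(),
            "sdnn_ms": rr.std(ddof=1),                 # overall variability
            "rmssd_ms": np.sqrt(np.mean(diff ** 2)),   # beat-to-beat variability
            "pnn50": np.mean(np.abs(diff) > 50.0),     # fraction of successive diffs > 50 ms
        }

    # Synthetic RR series standing in for sternal-ECG-derived intervals
    rr_example = 800 + 50 * np.random.default_rng(3).standard_normal(300)
    print(hrv_time_domain(rr_example))
    ```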

  12. Microlocal analysis of a seismic linearized inverse problem

    NARCIS (Netherlands)

    Stolk, C.C.

    1999-01-01

    The seismic inverse problem is to determine the wavespeed c(x) in the interior of a medium from measurements at the boundary. In this paper we analyze the linearized inverse problem in general acoustic media. The problem is to find a left inverse of the linearized forward map F, or equivalently, to find the…

  13. Analytic central path, sensitivity analysis and parametric linear programming

    NARCIS (Netherlands)

    A.G. Holder; J.F. Sturm; S. Zhang (Shuzhong)

    1998-01-01

    In this paper we consider properties of the central path and the analytic center of the optimal face in the context of parametric linear programming. We first show that if the right-hand side vector of a standard linear program is perturbed, then the analytic center of the optimal face

  14. On the dynamic analysis of piecewise-linear networks

    NARCIS (Netherlands)

    Heemels, WPMH; Camlibel, MK; Schumacher, JM

    Piecewise-linear (PL) modeling is often used to approximate the behavior of nonlinear circuits. One of the possible PL modeling methodologies is based on the linear complementarity problem, and this approach has already been used extensively in the circuits and systems community for static networks.

  15. Linear regression analysis: part 14 of a series on evaluation of scientific publications.

    Science.gov (United States)

    Schneider, Astrid; Hommel, Gerhard; Blettner, Maria

    2010-11-01

    Regression analysis is an important statistical method for the analysis of medical data. It enables the identification and characterization of relationships among multiple factors. It also enables the identification of prognostically relevant risk factors and the calculation of risk scores for individual prognostication. This article is based on selected textbooks of statistics, a selective review of the literature, and our own experience. After a brief introduction of the uni- and multivariable regression models, illustrative examples are given to explain what the important considerations are before a regression analysis is performed, and how the results should be interpreted. The reader should then be able to judge whether the method has been used correctly and interpret the results appropriately. The performance and interpretation of linear regression analysis are subject to a variety of pitfalls, which are discussed here in detail. The reader is made aware of common errors of interpretation through practical examples. Both the opportunities for applying linear regression analysis and its limitations are presented.
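
    A small illustration of a multivariable linear regression of the kind discussed in the article (simulated data; the variable names and values are hypothetical):

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Hypothetical example: systolic blood pressure modelled from age and BMI
    rng = np.random.default_rng(6)
    age = rng.uniform(30, 70, 150)
    bmi = rng.normal(27, 4, 150)
    sbp = 90 + 0.6 * age + 0.8 * bmi + rng.normal(0, 8, 150)

    X = sm.add_constant(np.column_stack([age, bmi]))
    fit = sm.OLS(sbp, X).fit()

    print(fit.params)           # intercept and regression coefficients
    print(fit.conf_int())       # 95% confidence intervals, used for interpretation
    ```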

  16. On form factors of the conjugated field in the non-linear Schroedinger model

    Energy Technology Data Exchange (ETDEWEB)

    Kozlowski, K.K.

    2011-05-15

    Izergin-Korepin's lattice discretization of the non-linear Schroedinger model, along with Oota's inverse problem, provides one with determinant representations for the form factors of the lattice-discretized conjugated field operator. We prove that these form factors converge, in the zero lattice spacing limit, to those of the conjugated field operator in the continuous model. We also compute the large-volume asymptotic behavior of such form factors in the continuous model. These are in particular characterized by Fredholm determinants of operators acting on closed contours. We provide a way of defining these Fredholm determinants in the case of generic parameters. (orig.)

  17. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  18. Force Characteristics Analysis for Linear Machine with DC Field Excitations

    Directory of Open Access Journals (Sweden)

    A/L Krishna Preshant

    2018-01-01

    In urban regions, and particularly in developing countries such as Malaysia with its ever-growing transport sector, there is a need for energy-efficient systems. Urban railway systems require frequent braking and start/stop motion, and energy is lost during these processes. To address the issues of conventional braking systems, linear induction motor techniques have been introduced, particularly in Japan. The drawback of this method, however, is the use of permanent magnets, which not only increases the weight of the entire system but also increases magnetic cogging. Hence an alternative is required which uses the same principles as magnetic levitation but with a magnet-less system. Therefore, the objective of this research is to propose an electromagnetic rail brake system and to analyze the effect of replacing permanent magnets with a magnet-less braking system that produces a significant amount of brake thrust compared with the permanent magnet system. The modeling and performance analysis of the model is done using finite element analysis (FEA). The mechanical aspects of the model are designed in Solidworks and then imported into the JMAG software for the electromagnetic analysis. Three models are developed: a base model (steel), a permanent magnet (PM) model and a DC coil model. The performance of the proposed 2D models is evaluated in terms of the average force produced and the motor constant square density. Comparing the three models for the same case of 9 A current supplied at a moving velocity of 0.1 mm/s, the base model, permanent magnet model and DC coil model produce an average force of 7.78 N, 7.55 N and 8.34 N, respectively; however, with an increase in the DC current supplied to the DC coil model, the average force produced increases to 13.32 N. Thus, the advantage of the DC coil (magnet-less) model is that the force produced can be controlled by varying the number of turns in the

  19. Observation and analysis of oscillations in linear accelerators

    International Nuclear Information System (INIS)

    Seeman, J.T.

    1991-11-01

    This report discusses the following on oscillation in linear accelerators: Betatron Oscillations; Betatron Oscillations at High Currents; Transverse Profile Oscillations; Transverse Profile Oscillations at High Currents.; Oscillation and Profile Transient Jitter; and Feedback on Transverse Oscillations

  20. Electromagnetic linear machines with dual Halbach array design and analysis

    CERN Document Server

    Yan, Liang; Peng, Juanjuan; Zhang, Lei; Jiao, Zongxia

    2017-01-01

    This book extends the conventional two-dimensional (2D) magnet arrangement into 3D pattern for permanent magnet linear machines for the first time, and proposes a novel dual Halbach array. It can not only effectively increase the radial component of magnetic flux density and output force of tubular linear machines, but also significantly reduce the axial flux density, radial force and thus system vibrations and noises. The book is also the first to address the fundamentals and provide a summary of conventional arrays, as well as novel concepts for PM pole design in electric linear machines. It covers theoretical study, numerical simulation, design optimization and experimental works systematically. The design concept and analytical approaches can be implemented to other linear and rotary machines with similar structures. The book will be of interest to academics, researchers, R&D engineers and graduate students in electronic engineering and mechanical engineering who wish to learn the core principles, met...

  1. Sparse Linear Solver for Power System Analysis Using FPGA

    National Research Council Canada - National Science Library

    Johnson, J. R; Nagvajara, P; Nwankpa, C

    2005-01-01

    .... Numerical solution to load flow equations are typically computed using Newton-Raphson iteration, and the most time consuming component of the computation is the solution of a sparse linear system...

  2. Thyroid nodule classification using ultrasound elastography via linear discriminant analysis.

    Science.gov (United States)

    Luo, Si; Kim, Eung-Hun; Dighe, Manjiri; Kim, Yongmin

    2011-05-01

    The non-surgical diagnosis of thyroid nodules is currently made via a fine needle aspiration (FNA) biopsy. It is estimated that somewhere between 250,000 and 300,000 thyroid FNA biopsies are performed in the United States annually. However, a large percentage (approximately 70%) of these biopsies turn out to be benign. Since the aggressive FNA management of thyroid nodules is costly, quantitative risk assessment and stratification of a nodule's malignancy is of value in triage and more appropriate healthcare resource utilization. In this paper, we introduce a new method for classifying thyroid nodules based on ultrasound (US) elastography features. Unlike approaches that assess the stiffness of a thyroid nodule by visually inspecting the pseudo-color pattern in the strain image, we use a classification algorithm to stratify the nodule by using the power spectrum of the strain-rate waveform extracted from the US elastography image sequence. Pulsation from the carotid artery was used to compress the thyroid nodules. Ultrasound data previously acquired from 98 thyroid nodules were used in this retrospective study to evaluate our classification algorithm. A classifier was developed based on linear discriminant analysis (LDA) and used to differentiate the thyroid nodules into two types: (I) no FNA (observation-only) and (II) FNA. Using our method, 62 nodules were classified as type I, all of which were benign, while 36 nodules were classified as type II (16 malignant and 20 benign), resulting in a sensitivity of 100% and specificity of 75.6% in detecting malignant thyroid nodules. This indicates that our triage method based on US elastography has the potential to substantially reduce the number of FNA biopsies (63.3%) by detecting benign nodules and managing them via follow-up observations rather than an FNA biopsy. Published by Elsevier B.V.
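    As a rough illustration of the triage idea described above, the sketch below (Python, assuming scikit-learn is available) fits a linear discriminant classifier to a stand-in feature matrix and reports cross-validated sensitivity and specificity; the features, labels and resulting numbers are invented placeholders, not the authors' strain-rate power-spectrum data.

        # Hedged sketch: LDA triage of nodules from hypothetical spectral features.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(98, 8))        # 98 nodules x 8 made-up spectral features
        # y = 1 means "refer to FNA", y = 0 means "observation only" (hypothetical labels)
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=98) > 0).astype(int)

        pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
        tp, fn = np.sum((pred == 1) & (y == 1)), np.sum((pred == 0) & (y == 1))
        tn, fp = np.sum((pred == 0) & (y == 0)), np.sum((pred == 1) & (y == 0))
        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))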

  3. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania's population and to assess their influence. The article presents an analysis of Lithuanian population aging factors in two main parts: the first describes population aging and its characteristics in theoretical terms; the second is dedicated to assessing the trends and demographic factors that influence population aging and to analysing the determinants of the aging of the population of Lithuania. The article concludes that the decline in the birth rate and the increase in the number of emigrants relative to immigrants have the greatest impact on population aging, so in order to address the aging of the population, considerable attention should be paid to the management of these demographic processes.

  4. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to

  5. MTF measurement and analysis of linear array HgCdTe infrared detectors

    Science.gov (United States)

    Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi

    2018-01-01

    The slanted-edge technique is the main method for measuring detector MTF; however, this method is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF value of such an infrared detector. This paper presents an ion-implantation guard-ring structure designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of the linear array detectors with and without a guard ring were carried out. The experimental results indicate that the ion-implantation guard-ring structure effectively suppresses crosstalk and increases the MTF value.

  6. Correction of X-ray diffraction profiles in linear-type PSPC by position factor

    International Nuclear Information System (INIS)

    Takahashi, Toshio

    1992-01-01

    PSPC (Position Sensitive Proportional Counter) makes it possible to obtain one-dimensional diffraction profiles without mechanical scanning. In a linear-type PSPC, the obtained profiles need correcting, because the position factor influences the intensity of the diffracted X-ray beam and the counting rate at each position on the PSPC. The distances from the specimen are not the same at the center and at the edge of the detector, and the intensity decreases at the edge because of radiation and absorption. The counting rate varies with the incident angle of the diffracted beam at each position on the PSPC. The position factor f_i at channel i of the multichannel analyser is given by f_i = cos^4(α_i) · exp{-μR(1/cos α_i - 1)}, where R is the distance between the specimen and the center of the PSPC, μ is the linear absorption coefficient and α_i is the incident angle of the diffracted beam at channel i. The background profiles of silica gel powder were measured with CrKα and CuKα. The parameters of the model function were fitted to the profiles by the non-linear least squares method. The agreement between these parameters and the calculated values shows that the position factor can correct the measured profiles properly. (author)
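    A minimal numeric sketch of the correction described above (Python; the channel-to-angle mapping and the values of μ and R below are assumptions for illustration, not taken from the paper): the measured counts at each channel are divided by the position factor.

        import numpy as np

        def position_factor(alpha, mu, R):
            # f_i = cos^4(alpha_i) * exp(-mu * R * (1/cos(alpha_i) - 1))
            return np.cos(alpha) ** 4 * np.exp(-mu * R * (1.0 / np.cos(alpha) - 1.0))

        # Hypothetical detector: 1024 channels spanning +/- 30 degrees about the centre.
        alpha = np.deg2rad(np.linspace(-30.0, 30.0, 1024))
        f = position_factor(alpha, mu=0.001, R=150.0)   # mu in 1/mm, R in mm (made up)
        measured = np.ones(1024)                        # placeholder measured profile
        corrected = measured / f                        # position-corrected profile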

  7. Comparison of modal spectral and non-linear time history analysis of a piping system

    International Nuclear Information System (INIS)

    Gerard, R.; Aelbrecht, D.; Lafaille, J.P.

    1987-01-01

    A typical piping system of the discharge line of the chemical and volumetric control system, outside the containment, between the penetration and the heat exchanger, of an operating power plant was analyzed using four different methods: modal spectral analysis with 2% constant damping, modal spectral analysis using ASME Code Case N411 (PVRC damping), linear time history analysis, and non-linear time history analysis. This paper presents an estimation of the conservatism of the linear methods compared to the non-linear analysis. (orig./HP)

  8. Modeling and analysis of linear hyperbolic systems of balance laws

    CERN Document Server

    Bartecki, Krzysztof

    2016-01-01

    This monograph focuses on the mathematical modeling of distributed parameter systems in which mass/energy transport or wave propagation phenomena occur and which are described by partial differential equations of hyperbolic type. The case of linear (or linearized) 2 x 2 hyperbolic systems of balance laws is considered, i.e., systems described by two coupled linear partial differential equations with two variables representing physical quantities, depending on both time and one-dimensional spatial variable. Based on practical examples of a double-pipe heat exchanger and a transportation pipeline, two typical configurations of boundary input signals are analyzed: collocated, wherein both signals affect the system at the same spatial point, and anti-collocated, in which the input signals are applied to the two different end points of the system. The results of this book emerge from the practical experience of the author gained during his studies conducted in the experimental installation of a heat exchange cente...

  9. Control system analysis for the perturbed linear accelerator rf system

    CERN Document Server

    Sung Il Kwon

    2002-01-01

    This paper addresses the modeling problem of the linear accelerator RF system in SNS. Klystrons are modeled as linear parameter varying systems. The effect of the high voltage power supply ripple on the klystron output voltage and the output phase is modeled as an additive disturbance. The cavity is modeled as a linear system and the beam current is modeled as the exogenous disturbance. The output uncertainty of the low level RF system which results from the uncertainties in the RF components and cabling is modeled as multiplicative uncertainty. Also, the feedback loop uncertainty and digital signal processing signal conditioning subsystem uncertainties are lumped together and are modeled as multiplicative uncertainty. Finally, the time delays in the loop are modeled as a lumped time delay. For the perturbed open loop system, the closed loop system performance, and stability are analyzed with the PI feedback controller.

  10. CONTROL SYSTEM ANALYSIS FOR THE PERTURBED LINEAR ACCELERATOR RF SYSTEM

    International Nuclear Information System (INIS)

    SUNG-IL KWON; AMY H. REGAN

    2002-01-01

    This paper addresses the modeling problem of the linear accelerator RF system in SNS. Klystrons are modeled as linear parameter varying systems. The effect of the high voltage power supply ripple on the klystron output voltage and the output phase is modeled as an additive disturbance. The cavity is modeled as a linear system and the beam current is modeled as the exogenous disturbance. The output uncertainty of the low level RF system which results from the uncertainties in the RF components and cabling is modeled as multiplicative uncertainty. Also, the feedback loop uncertainty and digital signal processing signal conditioning subsystem uncertainties are lumped together and are modeled as multiplicative uncertainty. Finally, the time delays in the loop are modeled as a lumped time delay. For the perturbed open loop system, the closed loop system performance, and stability are analyzed with the PI feedback controller

  11. MDCT linear and volumetric analysis of adrenal glands: Normative data and multiparametric assessment

    International Nuclear Information System (INIS)

    Carsin-Vu, Aline; Mule, Sebastien; Janvier, Annaelle; Hoeffel, Christine; Oubaya, Nadia; Delemer, Brigitte; Soyer, Philippe

    2016-01-01

    To study linear and volumetric adrenal measurements, their reproducibility, and correlations between total adrenal volume (TAV) and adrenal micronodularity, age, gender, body mass index (BMI), visceral (VAAT) and subcutaneous adipose tissue volume (SAAT), presence of diabetes, chronic alcoholic abuse and chronic inflammatory disease (CID). We included 154 patients (M/F, 65/89; mean age, 57 years) undergoing abdominal multidetector row computed tomography (MDCT). Two radiologists prospectively independently performed adrenal linear and volumetric measurements with semi-automatic software. Inter-observer reliability was studied using inter-observer correlation coefficient (ICC). Relationships between TAV and associated factors were studied using bivariate and multivariable analysis. Mean TAV was 8.4 ± 2.7 cm³ (3.3-18.7 cm³). ICC was excellent for TAV (0.97; 95 % CI: 0.96-0.98) and moderate to good for linear measurements. TAV was significantly greater in men (p < 0.0001), alcoholics (p = 0.04), diabetics (p = 0.0003) and those with micronodular glands (p = 0.001). TAV was lower in CID patients (p = 0.0001). TAV correlated positively with VAAT (r = 0.53, p < 0.0001), BMI (r = 0.42, p < 0.0001), SAAT (r = 0.29, p = 0.0003) and age (r = 0.23, p = 0.005). Multivariable analysis revealed gender, micronodularity, diabetes, age and BMI as independent factors influencing TAV. Adrenal gland MDCT-based volumetric measurements are more reproducible than linear measurements. Gender, micronodularity, age, BMI and diabetes independently influence TAV. (orig.)

  12. Linear elastic obstacles: analysis of experimental results in the case of stress dependent pre-exponentials

    International Nuclear Information System (INIS)

    Surek, T.; Kuon, L.G.; Luton, M.J.; Jones, J.J.

    1975-01-01

    For the case of linear elastic obstacles, the analysis of experimental plastic flow data is shown to have a particularly simple form when the pre-exponential factor is a single-valued function of the modulus-reduced stress. The analysis permits the separation of the stress and temperature dependence of the strain rate into those of the pre-exponential factor and the activation free energy. As a consequence, the true values of the activation enthalpy, volume and entropy also are obtained. The approach is applied to four sets of experimental data, including Zr, and the results for the pre-exponential term are examined for self-consistency in view of the assumed functional dependence

  13. Linear stability analysis of flow instabilities with a nodalized reduced order model in heated channel

    International Nuclear Information System (INIS)

    Paul, Subhanker; Singh, Suneet

    2015-01-01

    The prime objective of the presented work is to develop a Nodalized Reduced Order Model (NROM) to carry out linear stability analysis of flow instabilities in a two-phase flow system. The model is developed by dividing the single-phase and two-phase regions of a uniformly heated channel into N nodes, followed by time-dependent spatial linear approximations for single-phase enthalpy and two-phase quality between consecutive nodes. A moving boundary scheme has been adopted in the model, where all the node boundaries vary with time due to the variation of the boiling boundary inside the heated channel. Using a state space approach, the instability thresholds are delineated by stability maps plotted in parameter planes of phase change number (N_pch) and subcooling number (N_sub). The prime feature of the present model is that, though the model equations are simpler due to the presence of linear-linear approximations for single-phase enthalpy and two-phase quality, the results are in good agreement with the existing models (Karve [33]; Dokhane [34]), whose model equations run for several pages, and with experimental data (Solberg [41]). Unlike the existing ROMs, different two-phase friction factor multiplier correlations have been incorporated in the model. The applicability of various two-phase friction factor multipliers and their effects on stability behaviour have been depicted by carrying out a comparative study. It is also observed that the Friedel model for friction factor calculations produces the most accurate results with respect to the available experimental data. (authors)

  14. Factor Analysis for Clustered Observations.

    Science.gov (United States)

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  15. Transforming Rubrics Using Factor Analysis

    Science.gov (United States)

    Baryla, Ed; Shelley, Gary; Trainor, William

    2012-01-01

    Student learning and program effectiveness is often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…

  16. Stability analysis of switched linear systems defined by graphs

    NARCIS (Netherlands)

    Athanasopoulos, N.; Lazar, M.

    2014-01-01

    We present necessary and sufficient conditions for global exponential stability for switched discrete-time linear systems, under arbitrary switching, which is constrained within a set of admissible transitions. The class of systems studied includes the family of systems under arbitrary switching,

  17. Force analysis of linear induction motor for magnetic levitation system

    NARCIS (Netherlands)

    Kuijpers, A.A.; Nemlioglu, C.; Sahin, F.; Verdel, A.J.D.; Compter, J.C.; Lomonova, E.

    2010-01-01

    This paper presents the analyses of thrust and normal forces of linear induction motor (LIM) segments which are implemented in a rotating ring system. To obtain magnetic levitation in a cost effective and sustainable way, decoupled control of thrust and normal forces is required. This study includes

  18. Linear analysis of degree correlations in complex networks

    Indian Academy of Sciences (India)

    Many real-world networks such as the protein–protein interaction networks and metabolic networks often display nontrivial correlations between degrees of vertices connected by edges. Here, we analyse the statistical methods used usually to describe the degree correlation in the networks, and analytically give linear ...

  19. Geometrically non linear analysis of functionally graded material ...

    African Journals Online (AJOL)

    when compared to the other engineering materials (Akhavan and Hamed, 2010). However, FGM plates under mechanical loading may undergo elastic instability. Hence, the non-linear behavior of functionally graded plates has to be understood for their optimum design. Reddy (2000) proposed the theoretical formulation ...

  20. Analysis of Students' Errors on Linear Programming at Secondary ...

    African Journals Online (AJOL)

    The purpose of this study was to identify secondary school students' errors on linear programming at 'O' level. It is based on the fact that students' errors inform teaching hence an essential tool for any serious mathematics teacher who intends to improve mathematics teaching. The study was guided by a descriptive survey ...

  1. Simulated Analysis of Linear Reversible Enzyme Inhibition with SCILAB

    Science.gov (United States)

    Antuch, Manuel; Ramos, Yaquelin; Álvarez, Rubén

    2014-01-01

    SCILAB is a lesser-known program (than MATLAB) for numeric simulations and has the advantage of being free software. A challenging software-based activity to analyze the most common linear reversible inhibition types with SCILAB is described. Students establish typical values for the concentration of enzyme, substrate, and inhibitor to simulate…
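    A brief sketch of the kind of simulation the article describes, written here in Python rather than SCILAB (the rate law is the standard competitive-inhibition form; the kinetic constants and concentration grids are illustrative, not the article's values):

        import numpy as np
        import matplotlib.pyplot as plt

        Vmax, Km, Ki = 100.0, 2.0, 1.0                      # illustrative constants
        S = np.linspace(0.5, 20.0, 40)                      # substrate concentrations
        for I in (0.0, 1.0, 2.0):                           # inhibitor concentrations
            v = Vmax * S / (Km * (1.0 + I / Ki) + S)        # competitive inhibition rate law
            plt.plot(1.0 / S, 1.0 / v, label=f"[I] = {I}")  # Lineweaver-Burk (double-reciprocal) plot
        plt.xlabel("1/[S]"); plt.ylabel("1/v"); plt.legend(); plt.show()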

  2. Hyperspectral and multispectral data fusion based on linear-quadratic nonnegative matrix factorization

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2017-04-01

    This paper proposes three multisharpening approaches to enhance the spatial resolution of urban hyperspectral remote sensing images. These approaches, related to linear-quadratic spectral unmixing techniques, use a linear-quadratic nonnegative matrix factorization (NMF) multiplicative algorithm. These methods begin by unmixing the observable high-spectral/low-spatial resolution hyperspectral and high-spatial/low-spectral resolution multispectral images. The obtained high-spectral/high-spatial resolution features are then recombined, according to the linear-quadratic mixing model, to obtain an unobservable multisharpened high-spectral/high-spatial resolution hyperspectral image. In the first designed approach, hyperspectral and multispectral variables are independently optimized, once they have been coherently initialized. These variables are alternately updated in the second designed approach. In the third approach, the considered hyperspectral and multispectral variables are jointly updated. Experiments, using synthetic and real data, are conducted to assess the efficiency, in spatial and spectral domains, of the designed approaches and of linear NMF-based approaches from the literature. Experimental results show that the designed methods globally yield very satisfactory spectral and spatial fidelities for the multisharpened hyperspectral data. They also prove that these methods significantly outperform the used literature approaches.
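    For orientation, the core of such unmixing schemes is the multiplicative-update NMF sketched below (plain linear NMF in Python; the paper's linear-quadratic variant adds quadratic abundance terms and a specific initialization, which are not reproduced here):

        import numpy as np

        def nmf_multiplicative(X, r, n_iter=500, eps=1e-9):
            # X: pixels x bands matrix (nonnegative), r: number of endmembers.
            rng = np.random.default_rng(0)
            A = rng.random((X.shape[0], r))                 # abundances
            S = rng.random((r, X.shape[1]))                 # endmember spectra
            for _ in range(n_iter):
                A *= (X @ S.T) / (A @ S @ S.T + eps)        # Lee-Seung multiplicative updates
                S *= (A.T @ X) / (A.T @ A @ S + eps)
            return A, S

        X = np.abs(np.random.default_rng(1).normal(size=(1000, 50)))   # toy image matrix
        A, S = nmf_multiplicative(X, r=4)
        print(np.linalg.norm(X - A @ S) / np.linalg.norm(X))           # relative residual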

  3. Linear accelerator-breeder (LAB): a preliminary analysis and proposal

    International Nuclear Information System (INIS)

    1976-01-01

    The development and demonstration of a Linear Accelerator-Breeder (LAB) is proposed. This would be a machine which would use a powerful linear accelerator to produce an intense beam of protons or deuterons impinging on a target of a heavy element, to produce spallation neutrons. These neutrons would in turn be absorbed in fertile ²³⁸U or ²³²Th to produce fissile ²³⁹Pu or ²³³U. Though a Linear Accelerator-Breeder is not envisioned as competitive with a fast breeder such as the LMFBR, it would offer definite benefits in improved flexibility of options, and it could probably be developed more rapidly than the LMFBR if fuel cycle problems made this desirable. It is estimated that at a beam power of 300 MW a Linear Accelerator-Breeder could produce about 1100 kg/year of fissile ²³⁹Pu or ²³³U, which would be adequate to fuel from 2,650 to 15,000 MW(e) of fission reactor capacity depending on the fuel cycle used. A two-year design study is proposed, and various cost estimates are presented. The concept of the Linear Accelerator-Breeder is not new, having been the basis for a major AEC project (MTA) a number of years ago. It has also been pursued in Canada, starting from the proposal for an Intense Neutron Generator (ING) several years ago. The technical basis for a reasonable design has only recently been achieved. The concept offers an opportunity to fill an important gap that may develop between the short-term and long-term energy options for the energy security of the nation.

  4. An empirical study for ranking risk factors using linear assignment: A case study of road construction

    Directory of Open Access Journals (Sweden)

    Amin Foroughi

    2012-04-01

    Full Text Available Road construction projects are among the most important governmental undertakings, since they normally require heavy investments. There is also a shortage of financial resources in the governmental budget, which makes asset allocation more challenging. One primary step in reducing cost is to determine the different risks associated with the execution of such project activities. In this study, we present important risk factors associated with road construction at two levels, for a real-world case study of the rail-road industry located between the cities of Esfahan and Deligan. The first group of risk factors includes the probability and the effects for various attributes including cost, time, quality and performance. The second group includes socio-economic factors as well as political and managerial aspects. The study finds 21 main risk factors and 193 sub risk factors. The factors are ranked using a group decision-making method called linear assignment. The preliminary results indicate that road construction projects could finish faster with better outcomes if risk factors are carefully considered and efforts are made to reduce their impacts.
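    The ranking step can be pictured with the small Python sketch below: a generic linear-assignment ranking solved with the Hungarian algorithm via SciPy. The factor names and expert rankings are invented, not the study's 21 factors.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        factors = ["finance", "weather", "design change", "political"]
        # Rank position given to each factor by 3 hypothetical experts (0 = most important).
        rankings = np.array([[0, 2, 1, 3],
                             [0, 1, 2, 3],
                             [1, 2, 0, 3]])

        n = len(factors)
        gamma = np.zeros((n, n))                 # gamma[i, k]: times factor i received rank k
        for r in rankings:
            for i, k in enumerate(r):
                gamma[i, k] += 1

        _, col_ind = linear_sum_assignment(-gamma)   # assignment of factors to ranks maximizing agreement
        print("final ranking:", [factors[i] for i in np.argsort(col_ind)])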

  5. Linear Matrix Inequalities for Analysis and Control of Linear Vector Second-Order Systems

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2015-01-01

    SUMMARY Many dynamical systems are modeled as vector second-order differential equations. This paper presents analysis and synthesis conditions in terms of LMIs with explicit dependence on the coefficient matrices of vector second-order systems. These conditions benefit from the separation between the Lyapunov matrix and the system matrices by introducing matrix multipliers, which potentially reduce conservativeness in hard control problems. Multipliers facilitate the usage of parameter-dependent Lyapunov functions as certificates of stability of uncertain and time-varying vector second-order systems. The conditions introduced in this work have the potential to increase the practice of analyzing and controlling systems directly in vector second-order form. Copyright © 2014 John Wiley & Sons, Ltd.

  6. A STATISTICAL ANALYSIS OF GDP AND FINAL CONSUMPTION USING SIMPLE LINEAR REGRESSION. THE CASE OF ROMANIA 1990–2010

    OpenAIRE

    Aniela Balacescu; Marian Zaharia

    2011-01-01

    This paper aims to examine the causal relationship between GDP and final consumption. The authors used a linear regression model in which GDP is treated as the outcome variable and final consumption as the explanatory factor. In drafting the article we used the Excel software application, a modern tool for computing and statistical data analysis.

  7. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  8. Contact analysis and experimental investigation of a linear ultrasonic motor.

    Science.gov (United States)

    Lv, Qibao; Yao, Zhiyuan; Li, Xiang

    2017-11-01

    The effects of surface roughness are not considered in the traditional motor model, which therefore fails to reflect the actual contact mechanism between the stator and slider. An analytical model for calculating the tangential force of a linear ultrasonic motor is proposed in this article. The presented model differs from the previous spring contact model in that the asperities in contact between stator and slider are considered. The influences of preload and exciting voltage on the tangential force in the moving direction are analyzed. An experiment is performed to verify the feasibility of the proposed model by comparing the simulation results with the measured data. Moreover, the proposed model and the spring model are compared. The results reveal that the proposed model is more accurate than the spring model. The discussion is helpful for the design and modeling of linear ultrasonic motors. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Linear dynamical quantum systems analysis, synthesis, and control

    CERN Document Server

    Nurdin, Hendra I

    2017-01-01

    This monograph provides an in-depth treatment of the class of linear-dynamical quantum systems. The monograph presents a detailed account of the mathematical modeling of these systems using linear algebra and quantum stochastic calculus as the main tools for a treatment that emphasizes a system-theoretic point of view and the control-theoretic formulations of quantum versions of familiar problems from the classical (non-quantum) setting, including estimation and filtering, realization theory, and feedback control. Both measurement-based feedback control (i.e., feedback control by a classical system involving a continuous-time measurement process) and coherent feedback control (i.e., feedback control by another quantum system without the intervention of any measurements in the feedback loop) are treated. Researchers and graduates studying systems and control theory, quantum probability and stochastics or stochastic control whether from backgrounds in mechanical or electrical engineering or applied mathematics ...

  10. Stability analysis of switched linear systems defined by graphs

    OpenAIRE

    Athanasopoulos, Nikolaos; Lazar, Mircea

    2015-01-01

    We present necessary and sufficient conditions for global exponential stability for switched discrete-time linear systems, under arbitrary switching, which is constrained within a set of admissible transitions. The class of systems studied includes the family of systems under arbitrary switching, periodic systems, and systems with minimum and maximum dwell time specifications. To reach the result, we describe the set of rules that define the admissible transitions with a weighted directed gra...

  11. Communication: Symmetrical quasi-classical analysis of linear optical spectroscopy

    Science.gov (United States)

    Provazza, Justin; Coker, David F.

    2018-05-01

    The symmetrical quasi-classical approach for propagation of a many degree of freedom density matrix is explored in the context of computing linear spectra. Calculations on a simple two state model for which exact results are available suggest that the approach gives a qualitative description of peak positions, relative amplitudes, and line broadening. Short time details in the computed dipole autocorrelation function result in exaggerated tails in the spectrum.

  12. Analysis of photo linear elements, Laramie Mountains, Wyoming

    Science.gov (United States)

    Blackstone, D. L., Jr.

    1973-01-01

    The author has identified the following significant results. Photo linear features in the Precambrian rocks of the Laramie Mountains are delineated, and the azimuths plotted on rose diagrams. Three strike directions are dominant, two of which are in the northeast quadrant. Laramide folds in the Laramie basin to the west of the mountains appear to have the same trend, and apparently have been controlled by response of the basement along fractures such as have been measured from the imagery.

  13. Association between parental socio-demographic factors and declined linear growth of young children in Jakarta

    Directory of Open Access Journals (Sweden)

    Hartono Gunardi

    2018-02-01

    Full Text Available Background: In Indonesia, approximately 35.5% of children under five years old are stunted. Stunting is related to shorter adult stature, poor cognition and educational performance, low adult wages, lost productivity, and a higher risk of nutrition-related chronic disease. The aim of this study was to identify parental socio-demographic risk factors for declined linear growth in children younger than 2 years old. Methods: This was a prospective cohort study conducted between August 2012 and May 2014 at three primary community health care centers (Puskesmas) in Jakarta, Indonesia, namely Puskesmas Jatinegara, Mampang, and Tebet. Subjects were healthy children under 2 years old whose weight and height were measured serially (at 6–11 weeks old and 18–24 months old). The length-for-age based on those data was used to determine stature status, and the serial measurement was done to detect the growth pattern. Parental socio-demographic data were obtained from questionnaires. Results: Of the 160 subjects, 14 (8.7%) showed a growth pattern declining from normal to stunted and 10 (6.2%) to severely stunted. As many as 134 (83.8%) subjects showed a consistently normal growth pattern. Only 2 (1.2%) showed improvement in linear growth. Maternal education duration of less than 9 years (RR=2.60, 95% CI=1.23–5.46; p=0.02) showed a statistically significant association with declined linear growth in children. Conclusion: Maternal education duration of less than 9 years was the determining socio-demographic risk factor contributing to declined linear growth in children less than 2 years of age.

  14. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km², are analyzed chemically for the content of 41 elements.

  15. Comparative study between output factors obtained in a linear accelerator used for radiosurgery treatments

    International Nuclear Information System (INIS)

    Velázquez Trejo, J.J.; Olive, K.I.; Gutiérrez Castillo, J.G.; Hardy Pérez, A.E.

    2017-01-01

    Purpose: To compare the output factors obtained in a linear accelerator with conical collimators using five detector models, through three different methods: the ratio of detector readings, the "daisy chain" technique (for diodes), and application of the k_{Qclin,Qmsr}^{fclin,fmsr} factors based on the formalism proposed by the IAEA (the latter applied only to three detectors). Methods: A Varian iX linear accelerator was employed with BrainLab conical collimators (30 mm to 7.5 mm); the detectors used were: PTW PinPoint 31016 (×2), PTW type E 60017 (×2), PTW microLion 31018 (×2), EDGE (Sun Nuclear), and PTW Semiflex 31010. For the first three models, two detectors with different serial numbers were analyzed. The measurements were carried out in water at a depth of 1.5 cm and a source-to-surface distance of 98.5 cm. Results: With the readings-ratio method, all detectors showed differences from 3.5% to more than 15% for the smallest field sizes; for the diodes, the "daisy chain" method did not provide significant corrections. Applying the k_{Qclin,Qmsr}^{fclin,fmsr} factors, the detectors PTW 60017, PTW 31018 and EDGE showed differences of less than 3%. Conclusions: In small fields the readings-ratio method could introduce significant errors in the output factor determination; applying the k_{Qclin,Qmsr}^{fclin,fmsr} factors proved to be a viable option.

  16. Linear and nonlinear analysis of density wave instability phenomena

    International Nuclear Information System (INIS)

    Ambrosini, Walter

    1999-01-01

    In this paper the mechanism of density-wave oscillations in a boiling channel with uniform and constant heat flux is analysed by linear and nonlinear analytical tools. A model developed on the basis of a semi-implicit numerical discretization of governing partial differential equations is used to provide information on the transient distribution of relevant variables along the channel during instabilities. Furthermore, a lumped parameter model and a distributed parameter model developed in previous activities are also adopted for independent confirmation of the observed trends. The obtained results are finally put in relation with the picture of the phenomenon proposed in classical descriptions. (author)

  17. A quasi-linear control theory analysis of timesharing skills

    Science.gov (United States)

    Agarwal, G. C.; Gottlieb, G. L.

    1977-01-01

    The compliance of the human ankle joint is measured by applying 0 to 50 Hz band-limited Gaussian random torques to the foot of a seated human subject. These torques rotate the foot in a plantar-dorsal direction about a horizontal axis at the medial malleolus of the ankle. The applied torques and the resulting angular rotation of the foot are measured, digitized and recorded for off-line processing. Using a best-fit second-order model, the effective moment of inertia of the ankle joint, the angular viscosity and the stiffness are calculated. The ankle joint stiffness is shown to be a linear function of the level of tonic muscle contraction, increasing at a rate of 20 to 40 Nm/rad per kg·m of active torque. In terms of the muscle physiology, the more muscle fibers that are active, the greater the muscle stiffness. Joint viscosity also increases with activation. Joint stiffness is also a linear function of the joint angle, increasing at a rate of about 0.7 to 1.1 Nm/rad/deg from plantar flexion to dorsiflexion rotation.

  18. Design and analysis of tubular permanent magnet linear wave generator.

    Science.gov (United States)

    Si, Jikai; Feng, Haichao; Su, Peng; Zhang, Lufeng

    2014-01-01

    Due to the lack of a mature design program for the tubular permanent magnet linear wave generator (TPMLWG) and the poor sinusoidal characteristics of the air-gap flux density of the traditional surface-mounted TPMLWG, a design method and a new secondary structure for the TPMLWG are proposed. An equivalent mathematical model of the TPMLWG is established, adopting the transformation relationship between the linear velocity of a permanent magnet rotary generator and the operating speed of the TPMLWG to determine the structure parameters of the TPMLWG. The new secondary structure of the TPMLWG contains surface-mounted permanent magnets and interior permanent magnets, which form a series-parallel hybrid magnetic circuit, and their structure parameters are designed to obtain the optimum pole-arc coefficient. The electromagnetic field and temperature field of the TPMLWG are analyzed using the finite element method. It can be concluded that the sinusoidal characteristics of the air-gap flux density of the new secondary structure TPMLWG are improved, the cogging force and mechanical vibration are reduced during operation, and the steady-state temperature rise of the generator meets the design requirements when the new secondary structure is adopted.

  19. Design and Analysis of Tubular Permanent Magnet Linear Wave Generator

    Directory of Open Access Journals (Sweden)

    Jikai Si

    2014-01-01

    Full Text Available Due to the lack of a mature design program for the tubular permanent magnet linear wave generator (TPMLWG) and the poor sinusoidal characteristics of the air-gap flux density of the traditional surface-mounted TPMLWG, a design method and a new secondary structure for the TPMLWG are proposed. An equivalent mathematical model of the TPMLWG is established, adopting the transformation relationship between the linear velocity of a permanent magnet rotary generator and the operating speed of the TPMLWG to determine the structure parameters of the TPMLWG. The new secondary structure of the TPMLWG contains surface-mounted permanent magnets and interior permanent magnets, which form a series-parallel hybrid magnetic circuit, and their structure parameters are designed to obtain the optimum pole-arc coefficient. The electromagnetic field and temperature field of the TPMLWG are analyzed using the finite element method. It can be concluded that the sinusoidal characteristics of the air-gap flux density of the new secondary structure TPMLWG are improved, the cogging force and mechanical vibration are reduced during operation, and the steady-state temperature rise of the generator meets the design requirements when the new secondary structure is adopted.

  20. Design and Analysis of Tubular Permanent Magnet Linear Wave Generator

    Science.gov (United States)

    Si, Jikai; Feng, Haichao; Su, Peng; Zhang, Lufeng

    2014-01-01

    Due to the lack of a mature design program for the tubular permanent magnet linear wave generator (TPMLWG) and the poor sinusoidal characteristics of the air-gap flux density of the traditional surface-mounted TPMLWG, a design method and a new secondary structure for the TPMLWG are proposed. An equivalent mathematical model of the TPMLWG is established, adopting the transformation relationship between the linear velocity of a permanent magnet rotary generator and the operating speed of the TPMLWG to determine the structure parameters of the TPMLWG. The new secondary structure of the TPMLWG contains surface-mounted permanent magnets and interior permanent magnets, which form a series-parallel hybrid magnetic circuit, and their structure parameters are designed to obtain the optimum pole-arc coefficient. The electromagnetic field and temperature field of the TPMLWG are analyzed using the finite element method. It can be concluded that the sinusoidal characteristics of the air-gap flux density of the new secondary structure TPMLWG are improved, the cogging force and mechanical vibration are reduced during operation, and the steady-state temperature rise of the generator meets the design requirements when the new secondary structure is adopted. PMID:25050388

  1. BRGLM, Interactive Linear Regression Analysis by Least Square Fit

    International Nuclear Information System (INIS)

    Ringland, J.T.; Bohrer, R.E.; Sherman, M.E.

    1985-01-01

    1 - Description of program or function: BRGLM is an interactive program written to fit general linear regression models by least squares and to provide a variety of statistical diagnostic information about the fit. Stepwise and all-subsets regression can be carried out also. There are facilities for interactive data management (e.g. setting missing value flags, data transformations) and tools for constructing design matrices for the more commonly-used models such as factorials, cubic splines, and auto-regressions. 2 - Method of solution: The least squares computations are based on the orthogonal (QR) decomposition of the design matrix obtained using the modified Gram-Schmidt algorithm. 3 - Restrictions on the complexity of the problem: The current release of BRGLM allows maxima of 1000 observations, 99 variables, and 3000 words of main memory workspace. For a problem with N observations and P variables, the number of words of main memory storage required is MAX(N*(P+6), N*P+P*P+3*N, 3*P*P+6*N). Any linear model may be fit although the in-memory workspace will have to be increased for larger problems
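    A generic illustration of the computation described above (least squares via QR decomposition of the design matrix using modified Gram-Schmidt, written in Python; this is a sketch of the method, not BRGLM source code):

        import numpy as np

        def mgs_qr(A):
            # Modified Gram-Schmidt QR factorization of an m x n matrix A (m >= n, full rank).
            A = A.astype(float).copy()
            m, n = A.shape
            Q, R = np.zeros((m, n)), np.zeros((n, n))
            for k in range(n):
                R[k, k] = np.linalg.norm(A[:, k])
                Q[:, k] = A[:, k] / R[k, k]
                for j in range(k + 1, n):
                    R[k, j] = Q[:, k] @ A[:, j]
                    A[:, j] -= R[k, j] * Q[:, k]
            return Q, R

        def lstsq_qr(X, y):
            Q, R = mgs_qr(X)
            return np.linalg.solve(R, Q.T @ y)               # solve R b = Q'y

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])   # toy design matrix
        y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=20)
        print(lstsq_qr(X, y))                                # recovered coefficients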

  2. Linear and Nonlinear Analysis of Brain Dynamics in Children with Cerebral Palsy

    Science.gov (United States)

    Sajedi, Firoozeh; Ahmadlou, Mehran; Vameghi, Roshanak; Gharib, Masoud; Hemmati, Sahel

    2013-01-01

    This study was carried out to determine linear and nonlinear changes of brain dynamics and their relationships with the motor dysfunctions in CP children. For this purpose power of EEG frequency bands (as a linear analysis) and EEG fractality (as a nonlinear analysis) were computed in eyes-closed resting state and statistically compared between 26…

  3. Mathematical Methods in Wave Propagation: Part 2--Non-Linear Wave Front Analysis

    Science.gov (United States)

    Jeffrey, Alan

    1971-01-01

    The paper presents applications and methods of analysis for non-linear hyperbolic partial differential equations. The paper is concluded by an account of wave front analysis as applied to the piston problem of gas dynamics. (JG)

  4. Accurate Evaluation of Expected Shortfall for Linear Portfolios with Elliptically Distributed Risk Factors

    Directory of Open Access Journals (Sweden)

    Dobrislav Dobrev

    2017-02-01

    Full Text Available We provide an accurate closed-form expression for the expected shortfall of linear portfolios with elliptically distributed risk factors. Our results aim to correct inaccuracies that originate in Kamdem (2005) and are present also in at least thirty other papers referencing it, including the recent survey by Nadarajah et al. (2014) on estimation methods for expected shortfall. In particular, we show that the correction we provide in the popular multivariate Student t setting eliminates understatement of expected shortfall by a factor varying from at least four to more than 100 across different tail quantiles and degrees of freedom. As such, the resulting economic impact in financial risk management applications could be significant. We further correct such errors encountered also in closely related results in Kamdem (2007 and 2009) for mixtures of elliptical distributions. More generally, our findings point to the extra scrutiny required when deploying new methods for expected shortfall estimation in practice.
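    As a sanity-check companion to such closed-form results, expected shortfall for a linear portfolio with multivariate Student-t risk factors can be estimated by Monte Carlo, as in the Python sketch below; the weights, degrees of freedom and scale matrix are invented and the output reproduces no figure from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        nu, n_sims = 5, 1_000_000
        scale = np.array([[1.0, 0.3, 0.1],
                          [0.3, 1.0, 0.2],
                          [0.1, 0.2, 1.0]])
        w = np.array([0.5, 0.3, 0.2])                        # portfolio weights

        # Multivariate t draw: correlated normal divided by sqrt(chi2 / nu).
        z = rng.multivariate_normal(np.zeros(3), scale, size=n_sims)
        chi = rng.chisquare(nu, size=n_sims)
        losses = -(z / np.sqrt(chi / nu)[:, None]) @ w

        alpha = 0.99
        var = np.quantile(losses, alpha)                     # value-at-risk at 99%
        es = losses[losses >= var].mean()                    # expected shortfall at 99%
        print(f"VaR = {var:.3f}, ES = {es:.3f}")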

  5. Analysis of linear head accelerations from collegiate football impacts.

    Science.gov (United States)

    Brolinson, P Gunnar; Manoogian, Sarah; McNeely, David; Goforth, Mike; Greenwald, Richard; Duma, Stefan

    2006-02-01

    Sports-related concussions result in 300,000 brain injuries in the United States each year. We conducted a study utilizing an in-helmet system that measures and records linear head accelerations to analyze head impacts in collegiate football. The Head Impact Telemetry (HIT) System is an in-helmet system with six spring-mounted accelerometers and an antenna that transmits data via radio frequency to a sideline receiver and laptop computer system. A total of 11,604 head impacts were recorded from the Virginia Tech football team throughout the 2003 and 2004 football seasons, covering 22 games and 62 practices and a total of 52 players. Although injury incidence data are limited, this study presents an extremely large data set of human head impacts that provides valuable insight into the lower limits of head acceleration that cause mild traumatic brain injuries.

  6. The effect of zinc supplementation on linear growth, body composition, and growth factors in preterm infants.

    Science.gov (United States)

    Díaz-Gómez, N Marta; Doménech, Eduardo; Barroso, Flora; Castells, Silvia; Cortabarria, Carmen; Jiménez, Alejandro

    2003-05-01

    The aim of our study was to evaluate the effect of zinc supplementation on linear growth, body composition, and growth factors in premature infants. Thirty-six preterm infants (gestational age: 32.0 ± 2.1 weeks, birth weight: 1704 ± 364 g) participated in a longitudinal double-blind, randomized clinical trial. They were randomly allocated either to the supplemental (S) group, fed a standard term formula supplemented with zinc (final content 10 mg/L) and a small quantity of copper (final content 0.6 mg/L), or to the placebo group, fed the same formula without supplementation (final zinc content: 5 mg/L; copper: 0.4 mg/L), from 36 weeks postconceptional age until 6 months corrected postnatal age. At each evaluation, anthropometric variables and bioelectrical impedance were measured, a 3-day dietary record was collected, and a blood sample was taken. We analyzed serum levels of total alkaline phosphatase, skeletal alkaline phosphatase (sALP), insulin-like growth factor (IGF)-I, IGF binding protein-3, IGF binding protein-1, zinc and copper, and the concentration of zinc in erythrocytes. The S group had significantly higher zinc levels in serum and erythrocytes and lower serum copper levels with respect to the placebo group. We found that the S group had greater linear growth (from baseline to 3 months corrected age: Δ length standard deviation score 1.32 ± 0.8 vs 0.38 ± 0.8). The increases in total body water and in serum levels of sALP were also significantly higher in the S group (total body water: 3 months corrected age, 3.8 ± 0.5 vs 3.5 ± 0.4 kg; 6 months corrected age, 4.5 ± 0.5 vs 4.2 ± 0.4 kg; sALP: 3 months corrected age, 140.2 ± 28.7 vs 118.7 ± 18.8 µg/L). Zinc supplementation has a positive effect on linear growth in premature infants.

  7. Analysis of a 3-phase tubular permanent magnet linear generator

    Energy Technology Data Exchange (ETDEWEB)

    Nor, K.M.; Arof, H.; Wijono [Malaya Univ., Kuala Lumpur (Malaysia). Faculty of Engineering

    2005-07-01

    A 3-phase tubular permanent magnet linear generator design was described. The generator was designed to be driven by a single or a double 2-stroke linear combustion engine, in which combustion takes place alternately between 2 opposed chambers. In the single-combustion engine, one of the combustion chambers was replaced by a kickback mechanism: the force on the translator generated by the explosion in the combustion chamber was used to compress the air in the kickback chamber, and the compressed air then released the stored energy to push the translator back in the opposite direction. The generator was modelled as a 2D object. A parametric simulation was performed to give the series of discrete data required to calculate machine electrical parameters, flux distribution, coil flux linkage, and cogging force. Fringing flux was evaluated through the application of a magnetic boundary condition; the infinity boundary was used to impose zero electromagnetic potential on the surface boundary. A complete simulation was run for each step of the translator's motion, which was considered sinusoidal; this simplification was further corrected using the real engine speed curve. The EMF was derived from the flux linkage difference in the coils at every consecutive translator position. Force was calculated in the translator and stator using a virtual work method. Optimization was performed using a subproblem strategy. It was concluded that the generator can be used to supply electric power as a stand-alone system, an emergency power supply, or as part of an integrated system. 11 refs., 2 tabs., 10 figs.

  8. Influence of plant root morphology and tissue composition on phenanthrene uptake: Stepwise multiple linear regression analysis

    International Nuclear Information System (INIS)

    Zhan, Xinhua; Liang, Xiao; Xu, Guohua; Zhou, Lixiang

    2013-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are contaminants that reside mainly in surface soils. Dietary intake of plant-based foods can make a major contribution to total PAH exposure. Little information is available on the relationship between root morphology and plant uptake of PAHs. An understanding of plant root morphologic and compositional factors that affect root uptake of contaminants is important and can inform both agricultural (chemical contamination of crops) and engineering (phytoremediation) applications. Five crop plant species are grown hydroponically in solutions containing the PAH phenanthrene. Measurements are taken for 1) phenanthrene uptake, 2) root morphology – specific surface area, volume, surface area, tip number and total root length and 3) root tissue composition – water, lipid, protein and carbohydrate content. These factors are compared through Pearson's correlation and multiple linear regression analysis. The major factors which promote phenanthrene uptake are specific surface area and lipid content. -- Highlights: •There is no correlation between phenanthrene uptake and total root length, and water. •Specific surface area and lipid are the most crucial factors for phenanthrene uptake. •The contribution of specific surface area is greater than that of lipid. -- The contribution of specific surface area is greater than that of lipid in the two most important root morphological and compositional factors affecting phenanthrene uptake
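    The analysis workflow reported (Pearson correlation screening followed by multiple linear regression) can be outlined as in the Python sketch below, using statsmodels; the variable names and data are invented placeholders, not the study's measurements.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "specific_surface_area": rng.normal(10.0, 2.0, 30),
            "lipid_content": rng.normal(1.5, 0.3, 30),
        })
        # Synthetic uptake values for illustration only.
        df["phe_uptake"] = (0.8 * df["specific_surface_area"]
                            + 2.0 * df["lipid_content"]
                            + rng.normal(scale=1.0, size=30))

        print(df.corr(method="pearson")["phe_uptake"])       # bivariate Pearson screening
        model = smf.ols("phe_uptake ~ specific_surface_area + lipid_content", df).fit()
        print(model.params)                                  # multiple linear regression fit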

  9. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    Science.gov (United States)

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
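    The two recommended weighting schemes are simple to state in code. The sketch below (Python, made-up per-study effects, standard errors and effective sample sizes) shows both the effective-sample-size weighting of Z-scores and the inverse-variance weighting of log-odds effect sizes in a fixed-effects meta-analysis.

        import numpy as np
        from scipy import stats

        beta = np.array([0.12, 0.08, 0.15])          # per-study effects on the log-odds scale
        se = np.array([0.05, 0.04, 0.07])            # their standard errors
        n_eff = np.array([3200.0, 5400.0, 1800.0])   # effective sample sizes

        # (i) effective-sample-size weighting of Z-scores
        z = beta / se
        w = np.sqrt(n_eff)
        z_meta = (w * z).sum() / np.sqrt((w ** 2).sum())

        # (ii) inverse-variance weighting of effect sizes
        w_iv = 1.0 / se ** 2
        beta_meta = (w_iv * beta).sum() / w_iv.sum()
        se_meta = np.sqrt(1.0 / w_iv.sum())

        print("z_meta =", z_meta, "p =", 2 * stats.norm.sf(abs(z_meta)))
        print("beta_meta =", beta_meta, "se =", se_meta)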

  10. Hamiltonian analysis for linearly acceleration-dependent Lagrangians

    Energy Technology Data Exchange (ETDEWEB)

    Cruz, Miguel; Gómez-Cortés, Rosario; Rojas, Efraín (Facultad de Física, Universidad Veracruzana, 91000 Xalapa, Veracruz, México); Molgado, Alberto (Facultad de Ciencias, Universidad Autónoma de San Luis Potosí, Avenida Salvador Nava S/N Zona Universitaria, CP 78290 San Luis Potosí, SLP, México); E-mails: miguelcruz02@uv.mx, roussjgc@gmail.com, molgado@fc.uaslp.mx, efrojas@uv.mx

    2016-06-15

    We study the constrained Ostrogradski-Hamilton framework for the equations of motion provided by mechanical systems described by second-order derivative actions with a linear dependence on the accelerations. We point out the peculiar features provided by the surface terms arising for this type of theory and discuss some important properties of this kind of action in order to pave the way for the construction of a well-defined quantum counterpart by means of canonical methods. In particular, we analyse in detail the constraint structure for these theories and its relation to the inherent conserved quantities, where the associated energies together with a Noether charge may be identified. The constraint structure is fully analyzed without the introduction of auxiliary variables, as proposed in recent works involving higher-order Lagrangians. Finally, we also provide some examples where our approach is explicitly applied and emphasize the way in which our original arrangement proves propitious for the Hamiltonian formulation of covariant field theories.

  11. A Fresh Look at Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    Science.gov (United States)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as…

  12. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    Science.gov (United States)

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with narrower 95% CI (0.01 to 0.28D, p = 0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
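
    A minimal sketch of the random-intercept (mixed effects) approach described above, using statsmodels on synthetic two-eyes-per-patient data. The 0.15 D effect size is borrowed from the abstract; the column names, noise levels, and data-generating choices are assumptions made only for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic data: two eyes per patient; a shared patient effect induces inter-eye correlation
    rng = np.random.default_rng(0)
    n_patients = 200
    patient = np.repeat(np.arange(n_patients), 2)
    cnv = np.tile([1, 0], n_patients)                       # one CNV eye, one fellow eye
    patient_effect = rng.normal(0.0, 1.0, n_patients)[patient]
    refraction = 0.15 * cnv + patient_effect + rng.normal(0.0, 0.5, 2 * n_patients)
    df = pd.DataFrame({"patient": patient, "cnv": cnv, "refraction": refraction})

    # Random intercept per patient, with the eye as the unit of analysis
    fit = smf.mixedlm("refraction ~ cnv", data=df, groups=df["patient"]).fit()
    print(fit.summary())
    ```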

  13. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying...... and satellite positioning application examples. In these application areas we are typically interested in the parameters in the model typically 2- or 3-D positions and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...... the clock error) and to obtain estimates of the uncertainty with which the position is determined. Regression analysis is used in many other fields of application both in the natural, the technical and the social sciences. Examples may be curve fitting, calibration, establishing relationships between...

  14. Non-linear analysis in Light Water Reactor design

    International Nuclear Information System (INIS)

    Rashid, Y.R.; Sharabi, M.N.; Nickell, R.E.; Esztergar, E.P.; Jones, J.W.

    1980-03-01

    The results obtained from a scoping study sponsored by the US Department of Energy (DOE) under the Light Water Reactor (LWR) Safety Technology Program at Sandia National Laboratories are presented. Basically, this project calls for the examination of the hypothesis that the use of nonlinear analysis methods in the design of LWR systems is beneficial. The components of interest include such items as: the reactor vessel, vessel internals, nozzles and penetrations, component support structures, and containment structures. Piping systems are excluded because they are being addressed by a separate study. Essentially, the findings were that nonlinear analysis methods are beneficial to LWR design from a technical point of view. However, the costs needed to implement these methods are the roadblock to readily adopting them. In this sense, a cost-benefit type of analysis must be made on the various topics identified by these studies and priorities must be established. This document is the complete report by ANATECH International Corporation

  15. Free vibration analysis of linear particle chain impact damper

    Science.gov (United States)

    Gharib, Mohamed; Ghani, Saud

    2013-11-01

    Impact dampers have gained much research interest over the past decades, resulting in several analytical and experimental studies in this area. The main emphasis of such research was on developing and enhancing these popular passive control devices with the objective of decreasing contact forces, accelerations, and noise levels. To that end, the authors of this paper have developed a novel impact damper, called the Linear Particle Chain (LPC) impact damper, which mainly consists of a linear chain of spherical balls of varying sizes. The LPC impact damper was designed to utilize the kinetic energy of the primary system by placing, in the chain arrangement, a small-sized ball between each two large-sized balls. The concept of the LPC impact damper revolves around causing the small-sized ball to collide multiple times with the larger ones upon excitation of the primary system, which dissipates part of the kinetic energy at each collision with the large balls. This paper focuses on the free vibration of a single-degree-of-freedom system equipped with the LPC impact damper. The proposed LPC impact damper is validated by comparing the responses of a single-unit conventional impact damper with those resulting from the LPC impact damper. The results indicate that the latter is considerably more efficient than the former. To further investigate the effective number of balls and the efficient geometry of the LPC impact damper when used in a specific available space in the primary system, a parametric study was conducted and its results are also explained herein.

  16. Linear, Non-Linear and Alternative Algorithms in the Correlation of IEQ Factors with Global Comfort: A Case Study

    Directory of Open Access Journals (Sweden)

    Francesco Fassio

    2014-11-01

    Full Text Available Indoor environmental quality (IEQ) factors usually considered in engineering studies, i.e., thermal, acoustical, visual comfort and indoor air quality, are individually associated with the occupant satisfaction level on the basis of well-established relationships. On the other hand, the full understanding of how single IEQ factors contribute and interact to determine the overall occupant satisfaction (global comfort) is currently an open field of research. The lack of a shared approach to the subject depends on many aspects: the absence of established protocols for the collection of subjective and objective measurements, the number of variables to consider, and in general the complexity of the technical issues involved. This case study aims to compare some of the available models by studying the results of a survey conducted with objective and subjective methods in a classroom within the University of Roma TRE premises. Different models are fitted to the same measured values, allowing comparison between the weighting schemes of IEQ categories obtained with different methods. The critical issues identified during this small-scale comfort assessment study, such as differences in the weighting scheme obtained with different IEQ models and the variability of the weighting scheme with respect to the users' time of exposure in the building, provide the basis for a larger-scale survey activity and for the development of an improved IEQ assessment method.

  17. linear discriminant analysis of structure within african eggplant 'shum'

    African Journals Online (AJOL)

    ACSS

    observed clusters include petiole length, sepal length (or seed color), fruit calyx length, seeds per fruit, leaf fresh .... obtain means. A table of means per trait for each accession was then imported into R statistical software for UPGMA reordered hierarchical cluster analysis. ..... Mwale, S.E., Ssemakula, M.O., Sadik, K.,.

  18. Use of linear discriminant function analysis in seed morphotype ...

    African Journals Online (AJOL)

    Variation in seed morphology of the Lima bean in 31 accessions was studied. Data were collected on 100-seed weight, seed length and seed width. The differences among the accessions were significant, based on the three seed characteristics. K-means cluster analysis grouped the 31 accessions into four distinct groups, ...

  19. Use of Linear Discriminant Function Analysis in Five Yield Sub ...

    African Journals Online (AJOL)

    K-means cluster analysis grouped the 134 accessions into four distinct groups. Pairwise Mahalanobis distance (D²) among some of the groups was highly significant. From the study the yield sub-characters pod length, pod width, peduncle length and 100-seed weight contributed most to group separation in the cowpea ...

  20. Quantitative electron microscope autoradiography: application of multiple linear regression analysis

    International Nuclear Information System (INIS)

    Markov, D.V.

    1986-01-01

    A new method for the analysis of high resolution EM autoradiographs is described. It identifies labelled cell organelle profiles in sections on a strictly statistical basis and provides accurate estimates for their radioactivity without the need to make any assumptions about their size, shape and spatial arrangement. (author)

  1. Design and Characteristic Analysis of the Linear Homopolar Synchronous Motor

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seok Myeong; Jeong, Sang Sub; Lee, Soung Ho [Chungnam National University (Korea, Republic of); Park, Young Tae [KRISS (Korea, Republic of)

    1997-07-21

    The LHSM is the combined electromagnetic propulsion and levitation, braking and guidance system for Maglev. In this paper, the LHSM has figure-of-eight shaped three-phase armature windings, a field winding, and a segmented secondary having a transverse bar track. We treat the development - design and analysis - of a combined electromagnetic propulsion/levitation system, the LHSM. (author). 1 ref., 7 figs., 2 tabs.

  2. Engineering Mathematical Analysis Method for Productivity Rate in Linear Arrangement Serial Structure Automated Flow Assembly Line

    Directory of Open Access Journals (Sweden)

    Tan Chan Sin

    2015-01-01

    Full Text Available Productivity rate (Q), or production rate, is one of the important indicator criteria for industrial engineers to improve the system and the finished-goods output in a production or assembly line. Mathematical and statistical analysis methods are required for the productivity rate in industry to give visual overviews of the failure factors and of further improvement within the production line, especially for an automated flow line, since it is complicated. A mathematical model of the productivity rate in a linear arrangement serial structure automated flow line with different failure rate and bottleneck machining time parameters becomes the basic model for this productivity analysis. This paper presents the engineering mathematical analysis method applied in an automotive company in Malaysia that operates an automated flow assembly line in the final assembly line to produce motorcycles. The DCAS engineering and mathematical analysis method, which consists of four stages known as data collection, calculation and comparison, analysis, and sustainable improvement, is used to analyze productivity in the automated flow assembly line based on the particular mathematical model. The variety of failure rates that cause loss of productivity and the bottleneck machining time are shown specifically in mathematical figures, and a sustainable solution for productivity improvement of this final assembly automated flow line is presented.
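
    The sketch below shows one common way such a productivity-rate model is written: an ideal cycle rate derated by downtime accumulated from the summed station failure rates. The functional form, parameter names, and numbers are illustrative assumptions, not the exact model of the cited paper.

    ```python
    def productivity_rate(t_machining, t_auxiliary, failure_rates, mean_repair_time):
        """Availability-adjusted production rate of a serial automated line (generic form).

        Ideal cycle rate 1 / (t_machining + t_auxiliary), derated by downtime
        accumulated from the summed station failure rates. Illustrative only.
        """
        ideal_rate = 1.0 / (t_machining + t_auxiliary)
        availability = 1.0 / (1.0 + mean_repair_time * sum(failure_rates))
        return ideal_rate * availability

    # Example: bottleneck machining 0.8 min, auxiliary time 0.2 min, three stations
    print(productivity_rate(0.8, 0.2, [0.01, 0.02, 0.015], 5.0), "parts/min")
    ```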

  3. A fresh look at linear ordinary differential equations with constant coefficients. Revisiting the impulsive response method using factorization

    Science.gov (United States)

    Camporesi, Roberto

    2016-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations of any order based on the factorization of the differential operator. The approach is elementary, we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The approach presented here can be used in a first course on differential equations for science and engineering majors.

  4. High-throughput quantitative biochemical characterization of algal biomass by NIR spectroscopy; multiple linear regression and multivariate linear regression analysis.

    Science.gov (United States)

    Laurens, L M L; Wolfrum, E J

    2013-12-18

    One of the challenges associated with microalgal biomass characterization and the comparison of microalgal strains and conversion processes is the rapid determination of the composition of algae. We have developed and applied a high-throughput screening technology based on near-infrared (NIR) spectroscopy for the rapid and accurate determination of algal biomass composition. We show that NIR spectroscopy can accurately predict the full composition using multivariate linear regression analysis of varying lipid, protein, and carbohydrate content of algal biomass samples from three strains. We also demonstrate a high quality of predictions of an independent validation set. A high-throughput 96-well configuration for spectroscopy gives equally good prediction relative to a ring-cup configuration, and thus, spectra can be obtained from as little as 10-20 mg of material. We found that lipids exhibit a dominant, distinct, and unique fingerprint in the NIR spectrum that allows for the use of single and multiple linear regression of respective wavelengths for the prediction of the biomass lipid content. This is not the case for carbohydrate and protein content, and thus, the use of multivariate statistical modeling approaches remains necessary.
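
    A small sketch of the single/multiple linear regression idea described above for lipid content, using scikit-learn on synthetic stand-in spectra. Real NIR spectra and the actual dominant lipid band positions would replace the made-up wavelength indices assumed here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for NIR spectra (samples x wavelengths) and lipid content (wt%)
    rng = np.random.default_rng(1)
    X = rng.normal(size=(80, 200))
    lipid = 25.0 + 8.0 * X[:, 50] + 3.0 * X[:, 120] + rng.normal(0.0, 0.5, 80)

    # Multiple linear regression on a few selected wavelengths (hypothetical band indices)
    bands = [50, 120]
    model = LinearRegression()
    print("cross-validated R^2:", cross_val_score(model, X[:, bands], lipid, cv=5).mean())
    ```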

  5. A generalized linear factor model approach to the hierarchical framework for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J

    2015-05-01

    We show how the hierarchical model for responses and response times as developed by van der Linden (2007), Fox, Klein Entink, and van der Linden (2007), Klein Entink, Fox, and van der Linden (2009), and Glas and van der Linden (2010) can be simplified to a generalized linear factor model with only the mild restriction that there is no hierarchical model at the item side. This result is valuable as it enables all well-developed modelling tools and extensions that come with these methods. We show that the restriction we impose on the hierarchical model does not influence parameter recovery under realistic circumstances. In addition, we present two illustrative real data analyses to demonstrate the practical benefits of our approach. © 2014 The British Psychological Society.

  6. Determination Of Output Factor For Photon Beam Of The Mitsubishi EXL-14 Linear Accelerator

    International Nuclear Information System (INIS)

    Nurman R; Sri-Inang S; Dani

    2003-01-01

    This paper describes the determination of the output factor for the 6 MV photon beam of the Mitsubishi EXL-14 linear accelerator teletherapy unit. Determination of the percentage depth dose curve has been done using a Wallhofer dosemeter at a source-to-surface distance (SSD) of 100 cm and a field size of 10 cm x 10 cm. Measurement of output has been carried out using a 0.6 cc ionization chamber inside a water phantom at a depth of 5 cm with an SSD of 100 cm for square fields ranging in size from 4 cm x 4 cm up to 10 cm x 10 cm. Output for rectangular fields equal to the equivalent square fields was also determined. The results obtained show that the deviations of the output for the 12 cm x 3 cm and 19 cm x 7 cm fields were higher than ±2% relative to the output of the equivalent square fields. (author)

  7. On the efficacy of linear system analysis of renal autoregulation in rats

    DEFF Research Database (Denmark)

    Chon, K H; Chen, Y M; Holstein-Rathlou, N H

    1993-01-01

    In order to assess the linearity of the mechanisms subserving renal blood flow autoregulation, broad-band arterial pressure fluctuations at three different power levels were induced experimentally and the resulting renal blood flow responses were recorded. Linear system analysis methods were...

  8. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Science.gov (United States)

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  9. Sensitivity analysis of linear programming problem through a recurrent neural network

    Science.gov (United States)

    Das, Raja

    2017-11-01

    In this paper we study the recurrent neural network for solving linear programming problems. To achieve optimality in accuracy and also in computational effort, an algorithm is presented. We investigate the sensitivity analysis of linear programming problem through the neural network. A detailed example is also presented to demonstrate the performance of the recurrent neural network.
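
    For comparison with the recurrent-neural-network route above, sensitivity information for a linear program can also be read off the dual values (shadow prices) returned by a standard solver; this is a different technique than the paper's, shown only to illustrate what "sensitivity" means here. The toy LP below is an assumption chosen for illustration.

    ```python
    from scipy.optimize import linprog

    # Toy LP:  min c^T x  subject to  A_ub x <= b_ub,  x >= 0
    c = [-3.0, -5.0]
    A_ub = [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]
    b_ub = [4.0, 12.0, 18.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    print("optimum:", res.fun, "at x =", res.x)

    # Marginals (duals) of the inequality constraints give the local sensitivity of the
    # optimal value to a unit change in each right-hand side b_ub[i]
    print("shadow prices:", res.ineqlin.marginals)
    ```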

  10. Dynamic analysis of aircraft impact using the linear elastic finite element codes FINEL, SAP and STARDYNE

    International Nuclear Information System (INIS)

    Lundsager, P.; Krenk, S.

    1975-08-01

    The static and dynamic response of a cylindrical/ spherical containment to a Boeing 720 impact is computed using 3 different linear elastic computer codes: FINEL, SAP and STARDYNE. Stress and displacement fields are shown together with time histories for a point in the impact zone. The main conclusions from this study are: - In this case the maximum dynamic load factors for stress and displacements were close to 1, but a static analysis alone is not fully sufficient. - More realistic load time histories should be considered. - The main effects seem to be local. The present study does not indicate general collapse from elastic stresses alone. - Further study of material properties at high rates is needed. (author)

  11. Analysis of infant cry through weighted linear prediction cepstral coefficients and Probabilistic Neural Network.

    Science.gov (United States)

    Hariharan, M; Chee, Lim Sin; Yaacob, Sazali

    2012-06-01

    Acoustic analysis of infant cry signals has been proven to be an excellent tool in the area of automatic detection of pathological status of an infant. This paper investigates the application of parameter weighting for linear prediction cepstral coefficients (LPCCs) to provide the robust representation of infant cry signals. Three classes of infant cry signals were considered such as normal cry signals, cry signals from deaf babies and babies with asphyxia. A Probabilistic Neural Network (PNN) is suggested to classify the infant cry signals into normal and pathological cries. PNN is trained with different spread factor or smoothing parameter to obtain better classification accuracy. The experimental results demonstrate that the suggested features and classification algorithms give very promising classification accuracy of above 98% and it expounds that the suggested method can be used to help medical professionals for diagnosing pathological status of an infant from cry signals.
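
    A Probabilistic Neural Network of the kind mentioned above can be written directly as Gaussian-kernel density scoring per class, as in the minimal sketch below. The feature extraction (the weighted LPCCs) is assumed to have been done already, and the 2-D features and spread value are arbitrary placeholders.

    ```python
    import numpy as np

    def pnn_predict(X_train, y_train, X_test, spread=0.1):
        # PNN: one Gaussian kernel centred on each training pattern; a class's score is
        # the mean kernel activation, and the highest-scoring class is predicted.
        classes = np.unique(y_train)
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=-1)
            scores.append(np.exp(-d2 / (2.0 * spread ** 2)).mean(axis=1))
        return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

    # Tiny example with made-up 2-D "cepstral" features for three cry classes
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(m, 0.05, (20, 2)) for m in (0.0, 0.3, 0.6)])
    y = np.repeat([0, 1, 2], 20)
    print(pnn_predict(X, y, np.array([[0.31, 0.29], [0.02, -0.01]])))
    ```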

  12. Standardizing effect size from linear regression models with log-transformed variables for meta-analysis.

    Science.gov (United States)

    Rodríguez-Barranco, Miguel; Tobías, Aurelio; Redondo, Daniel; Molina-Portillo, Elena; Sánchez, María José

    2017-03-17

    Meta-analysis is very useful to summarize the effect of a treatment or a risk factor for a given disease. Often studies report results based on log-transformed variables in order to achieve the principal assumptions of a linear regression model. If this is the case for some, but not all studies, the effects need to be homogenized. We derived a set of formulae to transform absolute changes into relative ones, and vice versa, to allow including all results in a meta-analysis. We applied our procedure to all possible combinations of log-transformed independent or dependent variables. We also evaluated it in a simulation based on two variables either normally or asymmetrically distributed. In all the scenarios, and based on different change criteria, the effect size estimated by the derived set of formulae was equivalent to the real effect size. To avoid biased estimates of the effect, this procedure should be used with caution in the case of independent variables with asymmetric distributions that significantly differ from the normal distribution. We illustrate this procedure with an application to a meta-analysis on the potential effects on neurodevelopment in children exposed to arsenic and manganese. The procedure proposed has been shown to be valid and capable of expressing the effect size of a linear regression model based on different change criteria in the variables. Homogenizing the results from different studies beforehand allows them to be combined in a meta-analysis, independently of whether the transformations had been performed on the dependent and/or independent variables.
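
    The sketch below collects the textbook interpretations that such homogenization rests on (percent change when the outcome, the exposure, or both are log-transformed). These are the standard conversions, not necessarily the exact formulae derived in the paper, and the function names are made up.

    ```python
    import numpy as np

    def pct_change_outcome(beta, delta_x=1.0):
        # log(Y) ~ X: percent change in Y for a delta_x increase in X
        return 100.0 * (np.exp(beta * delta_x) - 1.0)

    def effect_for_pct_increase_x(beta, pct=10.0):
        # Y ~ log(X): absolute change in Y for a pct % increase in X
        return beta * np.log1p(pct / 100.0)

    def pct_change_both_logged(beta, pct=10.0):
        # log(Y) ~ log(X): percent change in Y for a pct % increase in X
        return 100.0 * ((1.0 + pct / 100.0) ** beta - 1.0)

    print(pct_change_outcome(0.05), effect_for_pct_increase_x(-1.2), pct_change_both_logged(0.3))
    ```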

  13. Finite elements for non-linear analysis of pipelines

    International Nuclear Information System (INIS)

    Benjamim, A.C.; Ebecken, N.F.F.

    1982-01-01

    The application of a three-dimensional Lagrangian formulation for the analysis of large displacements and large rotations of pipeline systems is studied. This formulation is derived from soil mechanics and takes into account shear stress effects. Two finite element models are implemented. The first, with a straight axis, uses as interpolation functions the conventional gantry functions, defined in relation to mobile coordinates. The second, with a curved axis and variable cross-sections, is obtained from the degeneration of the three-dimensional isoparametric element, and uses third-degree polynomials as interpolation functions. (E.G.) [pt

  14. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least square (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...... and considering all CV groups, the methods selected 36 % of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis....

  15. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a

  16. A simplified procedure of linear regression in a preliminary analysis

    Directory of Open Access Journals (Sweden)

    Silvia Facchinetti

    2013-05-01

    Full Text Available The analysis of a large statistical data-set can be guided by the study of a particularly interesting variable Y - the regressand - and an explicative variable X, chosen among the remaining variables, jointly observed. The study gives a simplified procedure to obtain the functional link of the variables y = y(x) by a partition of the data-set into m subsets, in which the observations are synthesized by location indices (mean or median) of X and Y. Polynomial models for y(x) of order r are considered to verify the characteristics of the given procedure; in particular we assume r = 1 and 2. The distributions of the parameter estimators are obtained by simulation, when the fitting is done for m = r + 1. Comparisons of the results, in terms of distribution and efficiency, are made with the results obtained by the ordinary least squares method. The study also gives some considerations on the consistency of the estimated parameters obtained by the given procedure.
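
    A rough sketch of the partition-and-summarize procedure described above, assuming the mean as the location index and m = r + 1 groups; details such as the grouping rule are guesses made for illustration.

    ```python
    import numpy as np

    def grouped_polyfit(x, y, r=1):
        # Split the data into m = r + 1 groups by increasing x, summarize each group by
        # the mean of x and y, and take the order-r polynomial through the m mean points.
        m = r + 1
        order = np.argsort(x)
        groups = np.array_split(order, m)
        gx = np.array([x[g].mean() for g in groups])
        gy = np.array([y[g].mean() for g in groups])
        return np.polyfit(gx, gy, r)          # exactly determined: m points, degree r

    rng = np.random.default_rng(4)
    x = rng.uniform(0, 10, 500)
    y = 2.0 + 0.7 * x + rng.normal(0.0, 1.0, 500)
    print("coefficients (slope, intercept):", grouped_polyfit(x, y, r=1))
    ```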

  17. The oscillatory behavior of heated channels: an analysis of the density effect. Part I. The mechanism (non linear analysis). Part II. The oscillations thresholds (linearized analysis)

    International Nuclear Information System (INIS)

    Boure, J.

    1967-01-01

    The problem of the oscillatory behavior of heated channels is presented in terms of delay-times and a density effect model is proposed to explain the behavior. The density effect is the consequence of the physical relationship between enthalpy and density of the fluid. In the first part non-linear equations are derived from the model in a dimensionless form. A description of the mechanism of oscillations is given, based on the analysis of the equations. An inventory of the governing parameters is established. At this point of the study, some facts in agreement with the experiments can be pointed out. In the second part the start of the oscillatory behavior of heated channels is studied in terms of the density effect. The threshold equations are derived, after linearization of the equations obtained in Part I. They can be solved rigorously by numerical methods to yield: (1) a relation between the describing parameters at the onset of oscillations, and (2) the frequency of the oscillations. By comparing the results predicted by the model to the experimental behavior of actual systems, the density effect is very often shown to be the actual cause of oscillatory behaviors. (author) [fr

  18. Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Hauberg, Søren

    2010-01-01

    , we present a comparison between the non-linear analog of Principal Component Analysis, Principal Geodesic Analysis, in its linearized form and its exact counterpart that uses true intrinsic distances. We give examples of datasets for which the linearized version provides good approximations...... and for which it does not. Indicators for the differences between the two versions are then developed and applied to two examples of manifold valued data: outlines of vertebrae from a study of vertebral fractures and spacial coordinates of human skeleton end-effectors acquired using a stereo camera and tracking...

  19. Transcription factors, coregulators, and epigenetic marks are linearly correlated and highly redundant.

    Directory of Open Access Journals (Sweden)

    Tobias Ahsendorf

    Full Text Available The DNA microstates that regulate transcription include sequence-specific transcription factors (TFs), coregulatory complexes, nucleosomes, histone modifications, DNA methylation, and parts of the three-dimensional architecture of genomes, which could create an enormous combinatorial complexity across the genome. However, many proteins and epigenetic marks are known to colocalize, suggesting that the information content encoded in these marks can be compressed. It has so far proved difficult to understand this compression in a systematic and quantitative manner. Here, we show that simple linear models can reliably predict the data generated by the ENCODE and Roadmap Epigenomics consortia. Further, we demonstrate that a small number of marks can predict all other marks with high average correlation across the genome, systematically revealing the substantial information compression that is present in different cell lines. We find that the linear models for activating marks are typically cell line-independent, while those for silencing marks are predominantly cell line-specific. Of particular note, a nuclear receptor corepressor, transducin beta-like 1 X-linked receptor 1 (TBLR1), was highly predictive of other marks in two hematopoietic cell lines. The methodology presented here shows how the potentially vast complexity of TFs, coregulators, and epigenetic marks at eukaryotic genes is highly redundant and that the information present can be compressed onto a much smaller subset of marks. These findings could be used to efficiently characterize cell lines and tissues based on a small number of diagnostic marks and suggest how the DNA microstates, which regulate the expression of individual genes, can be specified.

  20. Non-linear triangle-based polynomial expansion nodal method for hexagonal core analysis

    International Nuclear Information System (INIS)

    Cho, Jin Young; Cho, Byung Oh; Joo, Han Gyu; Zee, Sung Qunn; Park, Sang Yong

    2000-09-01

    This report is for the implementation of triangle-based polynomial expansion nodal (TPEN) method to MASTER code in conjunction with the coarse mesh finite difference(CMFD) framework for hexagonal core design and analysis. The TPEN method is a variation of the higher order polynomial expansion nodal (HOPEN) method that solves the multi-group neutron diffusion equation in the hexagonal-z geometry. In contrast with the HOPEN method, only two-dimensional intranodal expansion is considered in the TPEN method for a triangular domain. The axial dependence of the intranodal flux is incorporated separately here and it is determined by the nodal expansion method (NEM) for a hexagonal node. For the consistency of node geometry of the MASTER code which is based on hexagon, TPEN solver is coded to solve one hexagonal node which is composed of 6 triangular nodes directly with Gauss elimination scheme. To solve the CMFD linear system efficiently, stabilized bi-conjugate gradient(BiCG) algorithm and Wielandt eigenvalue shift method are adopted. And for the construction of the efficient preconditioner of BiCG algorithm, the incomplete LU(ILU) factorization scheme which has been widely used in two-dimensional problems is used. To apply the ILU factorization scheme to three-dimensional problem, a symmetric Gauss-Seidel Factorization scheme is used. In order to examine the accuracy of the TPEN solution, several eigenvalue benchmark problems and two transient problems, i.e., a realistic VVER1000 and VVER440 rod ejection benchmark problems, were solved and compared with respective references. The results of eigenvalue benchmark problems indicate that non-linear TPEN method is very accurate showing less than 15 pcm of eigenvalue errors and 1% of maximum power errors, and fast enough to solve the three-dimensional VVER-440 problem within 5 seconds on 733MHz PENTIUM-III. In the case of the transient problems, the non-linear TPEN method also shows good results within a few minute of

  1. Plastic limit analysis with non linear kinematic strain hardening for metalworking processes applications

    International Nuclear Information System (INIS)

    Chaaba, Ali; Aboussaleh, Mohamed; Bousshine, Lahbib; Boudaia, El Hassan

    2011-01-01

    Limit analysis approaches are widely used in the analysis of metalworking processes; however, they have been applied only to perfectly plastic materials and, recently, to isotropic hardening ones, excluding any kind of kinematic hardening. In the present work, using the Implicit Standard Materials concept, the sequential limit analysis approach and the finite element method, our objective is to extend the application of limit analysis to include linear and non-linear kinematic strain hardening. Because this plastic flow rule is non-associative, the Implicit Standard Materials concept is adopted as a framework for non-standard plasticity modeling. The sequential limit analysis procedure, which treats the plastic behavior with non-linear kinematic strain hardening as a succession of perfectly plastic behaviors with yield surfaces updated after each sequence of limit analysis and geometry updating, is applied. The standard kinematic finite element method together with a regularization approach is used to perform two large compression cases (cold forging) in plane strain and axisymmetric conditions

  2. Quantization of liver tissue in dual kVp computed tomography using linear discriminant analysis

    Science.gov (United States)

    Tkaczyk, J. Eric; Langan, David; Wu, Xiaoye; Xu, Daniel; Benson, Thomas; Pack, Jed D.; Schmitz, Andrea; Hara, Amy; Palicek, William; Licato, Paul; Leverentz, Jaynne

    2009-02-01

    Linear discriminant analysis (LDA) is applied to dual kVp CT and used for tissue characterization. The potential to quantitatively model both malignant and benign, hypo-intense liver lesions is evaluated by analysis of portal-phase, intravenous CT scan data obtained on human patients. Masses with an a priori classification are mapped to a distribution of points in basis material space. The degree of localization of tissue types in the material basis space is related to both quantum noise and real compositional differences. The density maps are analyzed with LDA and studied with system simulations to differentiate these factors. The discriminant analysis is formulated so as to incorporate the known statistical properties of the data. Effective kVp separation and mAs relate to the precision of tissue localization. Bias in the material position is related to the degree of X-ray scatter and the partial-volume effect. Experimental data and simulations demonstrate that for single-energy (HU) imaging or image-based decomposition, pixel values of water-like tissues depend on proximity to other iodine-filled bodies. Beam-hardening errors cause a shift in image value on the scale of the difference sought between cancerous and cystic lesions. In contrast, projection-based decomposition, or its equivalent when implemented on a carefully calibrated system, can provide accurate data. On such a system, LDA may provide novel quantitative capabilities for tissue characterization in dual energy CT.

  3. Development of non-linear vibration analysis code for CANDU fuelling machine

    International Nuclear Information System (INIS)

    Murakami, Hajime; Hirai, Takeshi; Horikoshi, Kiyomi; Mizukoshi, Kaoru; Takenaka, Yasuo; Suzuki, Norio.

    1988-01-01

    This paper describes the development of a non-linear, dynamic analysis code for the CANDU 600 fuelling machine (F-M), which includes a number of non-linearities such as gap with or without Coulomb friction, special multi-linear spring connections, etc. The capabilities and features of the code and the mathematical treatment for the non-linearities are explained. The modeling and numerical methodology for the non-linearities employed in the code are verified experimentally. Finally, the simulation analyses for the full-scale F-M vibration testing are carried out, and the applicability of the code to such multi-degree of freedom systems as F-M is demonstrated. (author)

  4. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach.

    Science.gov (United States)

    Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun

    2017-01-01

    The evidence literature suggests that physical therapy practitioners are subjected to a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among individual and group hospital-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners in Taiwan with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospital attributes, and outpatient data of physical therapy practitioners who sustained an occupational injury in 2013 were obtained with SAS 9.3. SPSS 20.0 and HLM 7.01 were used to conduct descriptive and hierarchical linear model analyses, respectively. The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with the per person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for the occupational injuries of physical therapy practitioners, we suggest strengthening related safety education and training and elevating the self-awareness of the risk of occupational injuries of physical therapy practitioners to reduce and prevent the occurrence of such injuries.

  5. STICAP: A linear circuit analysis program with stiff systems capability. Volume 1: Theory manual. [network analysis

    Science.gov (United States)

    Cooke, C. H.

    1975-01-01

    STICAP (Stiff Circuit Analysis Program) is a FORTRAN 4 computer program written for the CDC-6400-6600 computer series and SCOPE 3.0 operating system. It provides the circuit analyst a tool for automatically computing the transient responses and frequency responses of large linear time invariant networks, both stiff and nonstiff (algorithms and numerical integration techniques are described). The circuit description and user's program input language is engineer-oriented, making simple the task of using the program. Engineering theories underlying STICAP are examined. A user's manual is included which explains user interaction with the program and gives results of typical circuit design applications. Also, the program structure from a systems programmer's viewpoint is depicted and flow charts and other software documentation are given.

  6. Linear and non-linear stability analysis for finite difference discretizations of high-order Boussinesq equations

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Bingham, Harry B.; Madsen, Per A.

    2004-01-01

    of rotational and irrotational formulations in two horizontal dimensions provides evidence that the irrotational formulation has significantly better stability properties when the deep-water non-linearity is high, particularly on refined grids. Computation of matrix pseudospectra shows that the system is only...... insight into the numerical behaviour of this rather complicated system of non-linear PDEs....

  7. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  8. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  9. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues for the development of a brand. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors, and the sample has been chosen from two major auto makers in Iran called Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach alpha is calculated as 84%, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.
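
    Since the abstract reports a Cronbach alpha of 84%, the small sketch below shows the standard computation on a respondents-by-items matrix; the data are random placeholders and the item count is an assumption.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        # items: 2-D array, rows = respondents, columns = questionnaire items
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1.0) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(5)
    latent = rng.normal(size=(235, 1))                        # 235 respondents, as in the abstract
    items = latent + rng.normal(scale=0.8, size=(235, 10))    # 10 correlated Likert-style items
    print("alpha:", round(cronbach_alpha(items), 2))
    ```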

  10. Simulation and sensitivity analysis for heavy linear paraffins production in LAB production Plant

    Directory of Open Access Journals (Sweden)

    Karimi Hajir

    2014-12-01

    Full Text Available Linear alkyl benzene (LAB) is widely utilized for the production of biodegradable detergents and emulsifiers. The predistillation unit is the part of the LAB production plant that produces heavy linear paraffins (nC10-nC13). In this study, a mathematical model has been developed for heavy linear paraffins production in distillation columns, which has been solved using a commercial code. The models have been validated against actual plant data. The effects of process parameters such as reflux rate and reflux temperature have been investigated using a gradient search technique. The sensitivity analysis shows that the optimum reflux in the columns is achieved.

  11. Equivalent linearization method for limit cycle flutter analysis of plate-type structure in axial flow

    International Nuclear Information System (INIS)

    Lu Li; Yang Yiren

    2009-01-01

    The responses and limit cycle flutter of a plate-type structure with cubic stiffness in viscous flow were studied. The continuous system was discretized using the Galerkin method. The equivalent linearization concept was applied to predict the ranges of limit cycle flutter velocities. The coupled map of flutter amplitude-equivalent linear stiffness-critical velocity was used to analyze the stability of limit cycle flutter. The theoretical results agree well with the results of numerical integration, which indicates that the equivalent linearization concept is applicable to the analysis of limit cycle flutter of plate-type structures. (authors)

  12. COMPARATIVE STUDY OF THREE LINEAR SYSTEM SOLVER APPLIED TO FAST DECOUPLED LOAD FLOW METHOD FOR CONTINGENCY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Syafii

    2017-03-01

    Full Text Available This paper presents an assessment of fast decoupled load flow computation using three linear system solver schemes. The full matrix version of the fast decoupled load flow based on the XB method is used in this study. The numerical investigations are carried out on small and large test systems. The execution times for small systems such as IEEE 14, 30, and 57 bus are very short, therefore the computation times cannot be compared for these cases. The other cases, IEEE 118, 300 and TNB 664, produced significant execution speedups. The SuperLU factorization sparse matrix solver has the best performance and speedup for the load flow solution as well as in contingency analysis. The inverse full matrix solver could solve only the IEEE 118 bus test system, in 3.715 seconds, and takes too long for the other cases. The SuperLU factorization linear solver, however, solved all of the test systems, taking 7.832 seconds for the largest test system. Therefore the SuperLU factorization linear solver can be a viable alternative for contingency analysis.
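
    The kind of comparison reported above can be reproduced in miniature with SciPy, whose sparse direct solver is SuperLU. The matrix below is a random, diagonally dominant stand-in, not an actual power-system B'/B'' matrix, and the dimensions are arbitrary.

    ```python
    import numpy as np
    from scipy.sparse import identity, random as sparse_random
    from scipy.sparse.linalg import splu

    # Random sparse, diagonally dominant stand-in matrix (CSC format for SuperLU)
    n = 2000
    A = (sparse_random(n, n, density=0.001, random_state=0) + identity(n) * n).tocsc()
    b = np.ones(n)

    lu = splu(A)          # SuperLU factorization; reusable for repeated solves,
    x = lu.solve(b)       # e.g. across contingency cases that keep the same structure
    print("residual norm:", np.linalg.norm(A @ x - b))

    # The dense alternative, np.linalg.inv(A.toarray()) @ b, quickly becomes
    # impractical in time and memory as n grows.
    ```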

  13. Non-linear analytic and coanalytic problems (Lp-theory, Clifford analysis, examples)

    International Nuclear Information System (INIS)

    Dubinskii, Yu A; Osipenko, A S

    2000-01-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the 'orthogonal' sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented

  14. Non-linear analytic and coanalytic problems ( L_p-theory, Clifford analysis, examples)

    Science.gov (United States)

    Dubinskii, Yu A.; Osipenko, A. S.

    2000-02-01

    Two kinds of new mathematical model of variational type are put forward: non-linear analytic and coanalytic problems. The formulation of these non-linear boundary-value problems is based on a decomposition of the complete scale of Sobolev spaces into the "orthogonal" sum of analytic and coanalytic subspaces. A similar decomposition is considered in the framework of Clifford analysis. Explicit examples are presented.

  15. Noise analysis of fluid-valve system in a linear compressor using CAE

    International Nuclear Information System (INIS)

    Lee, Jun Ho; Jeong, Weui Bong; Kim, Dang Ju

    2009-01-01

    A linear compressor in a refrigerator uses piston motion to transfer refrigerant, so its efficiency is higher than that of a conventional reciprocating compressor. Because of the interaction between the refrigerant and the valve system in the linear compressor, however, noise has been a main issue. In spite of many experimental studies, there has been no way to predict the noise reliably. To overcome this limitation, CAE analysis is applied. To give credibility to the computational data, all of the data are validated experimentally.

  16. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  17. Simple estimating method of damages of concrete gravity dam based on linear dynamic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sasaki, T.; Kanenawa, K.; Yamaguchi, Y. [Public Works Research Institute, Tsukuba, Ibaraki (Japan). Hydraulic Engineering Research Group

    2004-07-01

    Due to the occurrence of large earthquakes like the Kobe Earthquake in 1995, there is a strong need to verify the seismic resistance of dams against much larger earthquake motions than those considered in the present design standard in Japan. Problems in using nonlinear analysis to evaluate the safety of dams include the large influence that the assumed material properties have on the results of nonlinear analysis, and the fact that the results differ greatly according to the damage estimation models or analysis programs. This paper reports evaluation indices based on a linear dynamic analysis method and the characteristics of the progress of cracks in concrete gravity dams with different shapes using a nonlinear dynamic analysis method. The study concludes that if simple linear dynamic analysis is appropriately conducted to estimate tensile stress at potential locations of crack initiation, the damage due to cracks can be predicted roughly. 4 refs., 1 tab., 13 figs.

  18. Identifying Plant Part Composition of Forest Logging Residue Using Infrared Spectral Data and Linear Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Gifty E. Acquah

    2016-08-01

    Full Text Available As new markets, technologies and economies evolve in the low carbon bioeconomy, forest logging residue, a largely untapped renewable resource, will play a vital role. The feedstock can however be variable depending on plant species and plant part component. This heterogeneity can influence the physical, chemical and thermochemical properties of the material, and thus the final yield and quality of products. Although it is challenging to control compositional variability of a batch of feedstock, it is feasible to monitor this heterogeneity and make the necessary changes in process parameters. Such a system will be a first step towards optimization, quality assurance and cost-effectiveness of processes in the emerging biofuel/chemical industry. The objective of this study was therefore to qualitatively classify forest logging residue made up of different plant parts using both near infrared spectroscopy (NIRS) and Fourier transform infrared spectroscopy (FTIRS) together with linear discriminant analysis (LDA). Forest logging residue harvested from several Pinus taeda (loblolly pine) plantations in Alabama, USA, were classified into three plant part components: clean wood, wood and bark, and slash (i.e., limbs and foliage). Five-fold cross-validated linear discriminant functions had classification accuracies of over 96% for both NIRS and FTIRS based models. An extra factor/principal component (PC) was however needed to achieve this in FTIRS modeling. Analysis of factor loadings of both NIR and FTIR spectra showed that the statistically different amount of cellulose in the three plant part components of logging residue contributed to their initial separation. This study demonstrated that NIR or FTIR spectroscopy coupled with PCA and LDA has the potential to be used as a high throughput tool in classifying the plant part makeup of a batch of forest logging residue feedstock. Thus, NIR/FTIR could be employed as a tool to rapidly probe/monitor the variability
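
    A compact sketch of the PCA + LDA pipeline described above using scikit-learn with five-fold cross-validation; the spectra are synthetic placeholders and the number of retained components is an arbitrary choice, not the paper's.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Synthetic spectra (samples x wavenumbers); classes 0 = clean wood, 1 = wood+bark, 2 = slash
    rng = np.random.default_rng(2)
    X = rng.normal(size=(90, 300)) + np.repeat(np.arange(3), 30)[:, None] * 0.2
    y = np.repeat(np.arange(3), 30)

    clf = make_pipeline(PCA(n_components=6), LinearDiscriminantAnalysis())
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```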

  19. Thermal radiation analysis for small satellites with single-node model using techniques of equivalent linearization

    International Nuclear Information System (INIS)

    Anh, N.D.; Hieu, N.N.; Chung, P.N.; Anh, N.T.

    2016-01-01

    Highlights: • Linearization criteria are presented for a single-node model of satellite thermal. • A nonlinear algebraic system for linearization coefficients is obtained. • The temperature evolutions obtained from different methods are explored. • The temperature mean and amplitudes versus the heat capacity are discussed. • The dual criterion approach yields smaller errors than other approximate methods. - Abstract: In this paper, the method of equivalent linearization is extended to the thermal analysis of satellite using both conventional and dual criteria of linearization. These criteria are applied to a differential nonlinear equation of single-node model of the heat transfer of a small satellite in the Low Earth Orbit. A system of nonlinear algebraic equations for linearization coefficients is obtained in the closed form and then solved by the iteration method. The temperature evolution, average values and amplitudes versus the heat capacity obtained by various approaches including Runge–Kutta algorithm, conventional and dual criteria of equivalent linearization, and Grande's approach are compared together. Numerical results reveal that temperature responses obtained from the method of linearization and Grande's approach are quite close to those obtained from the Runge–Kutta method. The dual criterion yields smaller errors than those of the remaining methods when the nonlinearity of the system increases, namely, when the heat capacity varies in the range [1.0, 3.0] × 10⁴ J K⁻¹.
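
    A generic single-node energy balance of the sort analysed above, integrated with a Runge-Kutta solver for reference. All parameter values and the crude eclipse model are assumptions for illustration, and the equivalent-linearization step itself is not reproduced here.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Generic single-node satellite thermal balance (illustrative parameters, not the paper's):
    #   C dT/dt = Q_sun(t) + Q_int - eps * sigma * A * T**4
    sigma, eps, A, C, Q_int = 5.670e-8, 0.8, 1.0, 1.0e4, 50.0
    period = 5400.0                                   # roughly one LEO orbital period [s]

    def q_sun(t):
        # Crude eclipse model: solar input only over half of each orbit
        return 1000.0 if (t % period) < period / 2 else 0.0

    def rhs(t, T):
        return [(q_sun(t) + Q_int - eps * sigma * A * T[0] ** 4) / C]

    sol = solve_ivp(rhs, (0.0, 5 * period), [290.0], max_step=10.0)   # RK45 reference solution
    print("mean T [K]:", sol.y[0].mean(), " amplitude [K]:", sol.y[0].ptp() / 2)
    ```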

  20. Treating experimental data of inverse kinetic method by unitary linear regression analysis

    International Nuclear Information System (INIS)

    Zhao Yusen; Chen Xiaoliang

    2009-01-01

    The theory of treating experimental data of the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity, but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. The data of the zero-power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data of the inverse kinetic method using unitary linear regression analysis, and the precision of reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The results also show that the effect on reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)

  1. Non-Linear Multi-Physics Analysis and Multi-Objective Optimization in Electroheating Applications

    Czech Academy of Sciences Publication Activity Database

    di Barba, P.; Doležel, Ivo; Mognaschi, M. E.; Savini, A.; Karban, P.

    2014-01-01

    Roč. 50, č. 2 (2014), s. 7016604-7016604 ISSN 0018-9464 Institutional support: RVO:61388998 Keywords : coupled multi-physics problems * finite element method * non-linear equations Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.386, year: 2014

  2. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  3. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    Science.gov (United States)

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  4. A parametric FE modeling of brake for non-linear analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, Ibrahim; Fatouh, Yasser [Automotive and Tractors Technology Department, Faculty of Industrial Education, Helwan University, Cairo (Egypt)]; Aly, Wael [Refrigeration and Air-Conditioning Technology Department, Faculty of Industrial Education, Helwan University, Cairo (Egypt)]

    2013-07-01

    A parametric modeling of a drum brake based on 3-D Finite Element Methods (FEM) for non-contact analysis is presented. Many parameters are examined during this study, such as the effect of drum-lining interface stiffness, coefficient of friction, and line pressure on the interface contact. Firstly, the modal analysis of the drum brake is studied to obtain the natural frequency and instability of the drum and to facilitate transforming the modal elements to non-contact elements. It is shown that the Unsymmetric solver of the modal analysis is efficient enough to solve this linear problem after transforming the non-linear behavior of the contact between the drum and the lining to a linear behavior. A SOLID45 element, which is a linear element, is used in the modal analysis and then transferred to the non-linear elements Targe170 and Conta173, which represent the drum and lining for the contact analysis study. Contact analysis problems are highly non-linear and require significant computer resources to solve; moreover, the contact problem gives rise to two significant difficulties. Firstly, the region of contact is not known in advance and depends on the boundary conditions such as line pressure and the drum and friction material specifications. Secondly, these contact problems need to take friction into consideration. Finally, the analysis showed a good distribution of the nodal reaction forces on the slotted lining contact surface, and the existence of the slot in the middle of the lining can help in wear removal due to the friction between the lining and the drum. Accurate contact stiffness can give a good representation of the pressure distribution between the lining and the drum. However, full contact of the front part of the slotted lining could occur at 20, 40, 60 and 80 bar of piston pressure, and partial contact between the drum and lining can occur in the rear part of the slotted lining.

  5. A Nutritional Analysis of the Food Basket in BIH: A Linear Programming Approach

    Directory of Open Access Journals (Sweden)

    Arnaut-Berilo Almira

    2017-04-01

    Full Text Available This paper presents linear and goal programming optimization models for determining and analyzing the food basket in Bosnia and Herzegovina (BiH) in terms of adequate nutritional needs according to World Health Organization (WHO) standards and World Bank (WB) recommendations. A linear programming (LP) model and a goal linear programming (GLP) model are adequate since price and nutrient contents are linearly related to food weight. The LP model provides information about the minimal value and the structure of the food basket for an average person in BiH based on nutrient needs. GLP models are designed to give us information on minimal deviations from nutrient needs if the budget is fixed. Based on these results, poverty analysis can be performed. The data used for the models consisted of 158 food items from the general consumption of the population of BiH according to COICOP classifications, with average prices in 2015 for these products.
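
    The LP formulation described here is the classical least-cost diet problem: minimize the cost of the basket subject to lower bounds on nutrient intake. The toy version below uses invented prices and nutrient contents, not the 158 COICOP items or WHO/WB requirements used in the study.

        # Minimal sketch of a least-cost food basket LP (the classical diet problem).
        # Prices, nutrient contents and requirements are invented placeholders.
        import numpy as np
        from scipy.optimize import linprog

        prices = np.array([0.8, 2.5, 1.2, 0.6])              # cost per 100 g of each food
        # rows: energy (kcal), protein (g); columns: foods
        nutrients = np.array([[350.0, 120.0, 250.0, 80.0],
                              [  8.0,  20.0,   9.0,  2.0]])
        daily_need = np.array([2100.0, 60.0])                 # daily requirements

        # minimize prices @ x  subject to  nutrients @ x >= daily_need,  x >= 0
        res = linprog(c=prices, A_ub=-nutrients, b_ub=-daily_need,
                      bounds=[(0, None)] * len(prices), method="highs")
        print("minimal daily cost:", round(res.fun, 2))
        print("basket (units of 100 g per food):", np.round(res.x, 2))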

  6. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    Science.gov (United States)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower-dimensional equations for the stability and performance analysis of this type of system than are currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.
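
    For an i.i.d. jump linear system x(k+1) = A_i x(k) with mode probabilities p_i, a standard mean-square stability test checks whether the spectral radius of the matrix sum_i p_i (A_i ⊗ A_i) is below one. The sketch below implements this plain Kronecker test with invented example matrices; the lower-dimensional symmetric Kronecker formulation that is the paper's contribution is not reproduced here.

        # Minimal sketch: mean-square stability test for an i.i.d. jump linear system
        #   x_{k+1} = A_i x_k with probability p_i.
        # Stability holds iff the spectral radius of sum_i p_i * kron(A_i, A_i) is < 1.
        # The example matrices and probabilities are illustrative placeholders.
        import numpy as np

        A = [np.array([[0.9, 0.2], [0.0, 0.7]]),   # nominal mode
             np.array([[1.1, 0.0], [0.3, 0.5]])]   # degraded (e.g. link-failure) mode
        p = [0.95, 0.05]                            # i.i.d. mode probabilities

        M = sum(pi * np.kron(Ai, Ai) for pi, Ai in zip(p, A))
        rho = max(abs(np.linalg.eigvals(M)))
        print(f"spectral radius = {rho:.4f} ->",
              "mean-square stable" if rho < 1 else "not mean-square stable")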

  7. Analysis of an inventory model for both linearly decreasing demand and holding cost

    Science.gov (United States)

    Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.

    2016-03-01

    This study proposes the analysis of an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost does not remain uniform over time because of variation in the time value of money; here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated through a numerical example. A sensitivity analysis is also included.

  8. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry; Kasimov, Aslan R.

    2018-01-01

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
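
    The dynamic mode decomposition step mentioned in this and the following two records extracts growth rates and frequencies from a sequence of linearized solution snapshots. A bare-bones "exact DMD" on a snapshot matrix is sketched below with synthetic data; generating the snapshots themselves (the shock-fitted integration of the linearized reactive Euler equations) is outside the scope of this sketch.

        # Minimal sketch: exact dynamic mode decomposition (DMD) eigenvalues from snapshots.
        # X is a matrix whose columns are successive snapshots separated by a time step dt.
        import numpy as np

        def dmd_eigenvalues(X, dt, rank):
            X1, X2 = X[:, :-1], X[:, 1:]
            U, s, Vh = np.linalg.svd(X1, full_matrices=False)
            U, s, V = U[:, :rank], s[:rank], Vh[:rank].conj().T
            # low-rank approximation of the linear propagator mapping X1 -> X2
            A_tilde = U.conj().T @ X2 @ V / s
            mu = np.linalg.eigvals(A_tilde)    # discrete-time eigenvalues
            return np.log(mu) / dt             # growth rates + i * angular frequencies

        # Toy usage: a single damped oscillation, whose eigenvalues should be -0.1 +/- 2i.
        t = np.linspace(0.0, 10.0, 201)
        x = np.linspace(0.0, 1.0, 32)
        snapshots = (np.outer(np.sin(np.pi * x), np.exp(-0.1 * t) * np.cos(2.0 * t))
                     + np.outer(np.cos(np.pi * x), np.exp(-0.1 * t) * np.sin(2.0 * t)))
        print(dmd_eigenvalues(snapshots, dt=t[1] - t[0], rank=2))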

  9. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry I.

    2017-12-08

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  10. Linear stability analysis of detonations via numerical computation and dynamic mode decomposition

    KAUST Repository

    Kabanov, Dmitry

    2018-03-20

    We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.

  11. An analysis of the electromagnetic field in multi-polar linear induction system

    International Nuclear Information System (INIS)

    Chervenkova, Todorka; Chervenkov, Atanas

    2002-01-01

    In this paper a new method for determination of the electromagnetic field vectors in a multi-polar linear induction system (LIS) is described. The analysis of the electromagnetic field has been done using four-dimensional electromagnetic potentials in conjunction with the theory of magnetic loops. The electromagnetic field vectors are determined in Minkowski space as elements of the Maxwell tensor. The results obtained are compared with those obtained from the analysis made by the finite element method (FEM). With the method presented in this paper one can determine the electromagnetic field vectors in the multi-polar linear induction system using a four-dimensional potential. An advantage of this method is that it yields analytical results for the electromagnetic field vectors. These results are also valid for linear media. The dependencies remain valid at high speeds of movement. The results for the investigated linear induction system are comparable to those obtained by the finite element method. The investigations may be continued with the determination of other characteristics such as drag force, levitation force, etc. The method proposed in this paper for the analysis of a linear induction system can be used for optimization calculations. (Author)

  12. Possible factors determining the non-linearity in the VO2-power output relationship in humans: theoretical studies.

    Science.gov (United States)

    Korzeniewski, Bernard; Zoladz, Jerzy A

    2003-08-01

    At low power output exercise (below the lactate threshold), oxygen uptake increases linearly with power output, but at high power output exercise (above the lactate threshold) some additional oxygen consumption causes a non-linearity in the overall VO(2) (oxygen uptake rate)-power output relationship. The functional significance of this phenomenon for human exercise tolerance is very important, but the mechanisms underlying it remain unknown. In the present work, a computer model of oxidative phosphorylation in intact skeletal muscle developed previously is used to examine the background of this relationship in different modes of exercise. Our simulations demonstrate that the non-linearity in the VO(2)-power output relationship and the difference in the magnitude of this non-linearity between the incremental exercise mode and the square-wave exercise mode (constant power output exercise) can be generated by introducing into the model some hypothetical factor F (a group of associated factors) that accumulates in time during exercise. The performed computer simulations, based on this assumption, give proper time courses of changes in VO(2) and [PCr] after the onset of work of different intensities, including the slow component in VO(2), matching the experimental results well. Moreover, if it is assumed that the exercise terminates because of fatigue when the amount/intensity of F exceeds some threshold value, the model allows the generation of a proper shape of the well-known power-duration curve. This fact suggests that the phenomenon of the non-linearity of the VO(2)-power output relationship and the magnitude of this non-linearity in different modes of exercise are determined by some factor(s) responsible for muscle fatigue.

  13. Analysis of Bernstein's factorization circuit

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Tomlinson, J.; Tromer, E.; Zheng, Y.

    2002-01-01

    In [1], Bernstein proposed a circuit-based implementation of the matrix step of the number field sieve factorization algorithm. These circuits offer an asymptotic cost reduction under the measure "construction cost x run time". We evaluate the cost of these circuits, in agreement with [1], but argue

  14. On the analysis of clonogenic survival data: Statistical alternatives to the linear-quadratic model

    International Nuclear Information System (INIS)

    Unkel, Steffen; Belka, Claus; Lauber, Kirsten

    2016-01-01

    The most frequently used method to quantitatively describe the response to ionizing irradiation in terms of clonogenic survival is the linear-quadratic (LQ) model. In the LQ model, the logarithm of the surviving fraction is regressed linearly on the radiation dose by means of a second-degree polynomial. The ratio of the estimated parameters for the linear and quadratic term, respectively, represents the dose at which both terms have the same weight in the abrogation of clonogenic survival. This ratio is known as the α/β ratio. However, there are plausible scenarios in which the α/β ratio fails to sufficiently reflect differences between dose-response curves, for example when curves with similar α/β ratio but different overall steepness are being compared. In such situations, the interpretation of the LQ model is severely limited. Colony formation assays were performed in order to measure the clonogenic survival of nine human pancreatic cancer cell lines and immortalized human pancreatic ductal epithelial cells upon irradiation at 0-10 Gy. The resulting dataset was subjected to LQ regression and non-linear log-logistic regression. Dimensionality reduction of the data was performed by cluster analysis and principal component analysis. Both the LQ model and the non-linear log-logistic regression model resulted in accurate approximations of the observed dose-response relationships in the dataset of clonogenic survival. However, in contrast to the LQ model the non-linear regression model allowed the discrimination of curves with different overall steepness but similar α/β ratio and revealed an improved goodness-of-fit. Additionally, the estimated parameters in the non-linear model exhibit a more direct interpretation than the α/β ratio. Dimensionality reduction of clonogenic survival data by means of cluster analysis was shown to be a useful tool for classifying radioresistant and sensitive cell lines. More quantitatively, principal component analysis allowed

  15. Factor-of-safety formulations for linear and parabolic failure envelopes for rock. Technical memorandum report RSI-0038

    International Nuclear Information System (INIS)

    Gnirk, P.F.

    1975-01-01

    This report presents documentation of the basic formulation of the factor-of-safety relationships for linear and parabolic failure criteria for rock with an example application for a candidate room-and-pillar configuration at the proposed Alpha repository site in New Mexico. 8 figures, 4 tables

  16. Study on non-linear bistable dynamics model based EEG signal discrimination analysis method.

    Science.gov (United States)

    Ying, Xiaoguo; Lin, Han; Hui, Guohua

    2015-01-01

    Electroencephalogram (EEG) is the recording of electrical activity along the scalp. EEG measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. The EEG signal is regarded as one of the most important signals to be studied in the next 20 years. In this paper, EEG signal discrimination based on a non-linear bistable dynamical model is proposed. EEG signals were processed by the non-linear bistable dynamical model, and features of EEG signals were characterized by a coherence index. Experimental results showed that the proposed method could properly extract the features of different EEG signals.

  17. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data in which interdependent sources and confounding factors exist. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its utilization as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulation on computer data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a pure model-based method when estimating activation induced by each task as well as by both tasks.

  18. Design Analysis of Taper Width Variations in Magnetless Linear Machine for Traction Applications

    Directory of Open Access Journals (Sweden)

    Saadha Aminath

    2018-01-01

    Full Text Available Linear motors are used in a variety of applications and have become especially popular in the transport industry. With the advent of maglev and other high-speed trains, linear motors are being used for propulsion and braking in these systems. However, major drawbacks of the linear motor design are cogging force, low thrust values, and voltage ripple. This paper studies the force response to changes in the taper/teeth width of the motor stator and mover, in order to determine the teeth ratio that yields a high flux density and a high thrust. The analysis is conducted with the JMAG software, and it is found that the optimum teeth ratio for the stator and mover gives an increase of 94.4% compared to the 0.5 mm stator and mover width.

  19. Apatite fission track analysis: geological thermal history analysis based on a three-dimensional random process of linear radiation damage

    International Nuclear Information System (INIS)

    Galbraith, R.F.; Laslett, G.M.; Green, P.F.; Duddy, I.R.

    1990-01-01

    Spontaneous fission of uranium atoms over geological time creates a random process of linearly shaped features (fission tracks) inside an apatite crystal. The theoretical distributions associated with this process are governed by the elapsed time and temperature history, but other factors are also reflected in empirical measurements as consequences of sampling by plane section and chemical etching. These include geometrical biases leading to over-representation of long tracks, the shape and orientation of host features when sampling totally confined tracks, and 'gaps' in heavily annealed tracks. We study the estimation of geological parameters in the presence of these factors using measurements on both confined tracks and projected semi-tracks. Of particular interest is a history of sedimentation, uplift and erosion giving rise to a two-component mixture of tracks in which the parameters reflect the current temperature, the maximum temperature and the timing of uplift. A full likelihood analysis based on all measured densities, lengths and orientations is feasible, but because some geometrical biases and measurement limitations are only partly understood it seems preferable to use conditional likelihoods given numbers and orientations of confined tracks. (author)

  20. A Homotopy-Perturbation analysis of the non-linear contaminant ...

    African Journals Online (AJOL)

    In this research work, a Homotopy-perturbation analysis of a non-linear contaminant flow equation with an initial continuous point source is provided. The equation is characterized by advection, diffusion and adsorption. We assume that the adsorption term is modeled by the Freundlich isotherm. We provide an approximation of ...

  1. Microsoft Excel Sensitivity Analysis for Linear and Stochastic Program Feed Formulation

    Science.gov (United States)

    Sensitivity analysis is a part of mathematical programming solutions and is used in making nutritional and economic decisions for a given feed formulation problem. The terms, shadow price and reduced cost, are familiar linear program (LP) terms to feed formulators. Because of the nonlinear nature of...

  2. Painlevé analysis and integrability of two-coupled non-linear ...

    Indian Academy of Sciences (India)

    the Painlevé property. In this case the system is expected to be integrable. In recent years more attention is paid to the study of coupled non-linear oscilla- ... Painlevé analysis. To be self-contained, in §2 we briefly outline the salient features.

  3. Fourier two-level analysis for discontinuous Galerkin discretization with linear elements

    NARCIS (Netherlands)

    P.W. Hemker (Piet); W. Hoffmann; M.H. van Raalte (Marc)

    2002-01-01

    In this paper we study the convergence of a multigrid method for the solution of a linear second order elliptic equation, discretized by discontinuous Galerkin (DG) methods, and we give a detailed analysis of the convergence for different block-relaxation strategies. In addition to an

  4. Application of range-test in multiple linear regression analysis in ...

    African Journals Online (AJOL)

    Application of range-test in multiple linear regression analysis in the presence of outliers is studied in this paper. First, the plot of the explanatory variables (i.e. Administration, Social/Commercial, Economic services and Transfer) on the dependent variable (i.e. GDP) was done to identify the statistical trend over the years.

  5. Principal Component Analysis: Resources for an Essential Application of Linear Algebra

    Science.gov (United States)

    Pankavich, Stephen; Swanson, Rebecca

    2015-01-01

    Principal Component Analysis (PCA) is a highly useful topic within an introductory Linear Algebra course, especially since it can be used to incorporate a number of applied projects. This method represents an essential application and extension of the Spectral Theorem and is commonly used within a variety of fields, including statistics,…
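
    A compact way to present PCA in this linear algebra setting is through the singular value decomposition of the centered data matrix, which ties the method directly to the Spectral Theorem for the sample covariance matrix. The sketch below is generic and uses random placeholder data.

        # Minimal sketch: principal component analysis via the SVD of the centered data matrix.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))   # placeholder data, 200 x 5

        Xc = X - X.mean(axis=0)                 # center each variable
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained_var = s**2 / (len(X) - 1)     # eigenvalues of the sample covariance matrix
        components = Vt                         # rows are principal directions (loadings)
        scores = Xc @ Vt.T                      # coordinates of the samples on the components

        print("variance explained by each component:", np.round(explained_var, 3))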

  6. Factors Predictive of Symptomatic Radiation Injury After Linear Accelerator-Based Stereotactic Radiosurgery for Intracerebral Arteriovenous Malformations

    Energy Technology Data Exchange (ETDEWEB)

    Herbert, Christopher, E-mail: cherbert@bccancer.bc.ca [Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, BC (Canada); Moiseenko, Vitali [Department of Medical Physics, British Columbia Cancer Agency, Vancouver, BC (Canada); McKenzie, Michael [Department of Radiation Oncology, British Columbia Cancer Agency, Vancouver, BC (Canada); Redekop, Gary [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Hsu, Fred [Department of Radiation Oncology, British Columbia Cancer Agency, Abbotsford, BC (Canada); Gete, Ermias; Gill, Brad; Lee, Richard; Luchka, Kurt [Department of Medical Physics, British Columbia Cancer Agency, Vancouver, BC (Canada); Haw, Charles [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Lee, Andrew [Department of Neurosurgery, Royal Columbian Hospital, New Westminster, BC (Canada); Toyota, Brian [Division of Neurosurgery, Vancouver General Hospital, University of British Columbia, Vancouver, BC (Canada); Martin, Montgomery [Department of Medical Imaging, British Columbia Cancer Agency, Vancouver, BC (Canada)

    2012-07-01

    Purpose: To investigate predictive factors in the development of symptomatic radiation injury after treatment with linear accelerator-based stereotactic radiosurgery for intracerebral arteriovenous malformations and relate the findings to the conclusions drawn by Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC). Methods and Materials: Archived plans for 73 patients who were treated at the British Columbia Cancer Agency were studied. Actuarial estimates of freedom from radiation injury were calculated using the Kaplan-Meier method. Univariate and multivariate Cox proportional hazards models were used for analysis of incidence of radiation injury. Log-rank test was used to search for dosimetric parameters associated with freedom from radiation injury. Results: Symptomatic radiation injury was exhibited by 14 of 73 patients (19.2%). Actuarial rate of symptomatic radiation injury was 23.0% at 4 years. Most patients (78.5%) had mild to moderate deficits according to Common Terminology Criteria for Adverse Events, version 4.0. On univariate analysis, lesion volume and diameter, dose to isocenter, and a Vx for doses ≥8 Gy showed statistical significance. Only lesion diameter showed statistical significance (p < 0.05) in a multivariate model. According to the log-rank test, AVM volumes >5 cm³ and diameters >30 mm were significantly associated with the risk of radiation injury (p < 0.01). The V12 also showed strong association with the incidence of radiation injury. Actuarial incidence of radiation injury was 16.8% if V12 was <28 cm³ and 53.2% if >28 cm³ (log-rank test, p = 0.001). Conclusions: This study confirms that the risk of developing symptomatic radiation injury after radiosurgery is related to lesion diameter and volume and irradiated volume. Results suggest a higher tolerance than proposed by QUANTEC. The widely differing findings reported in the literature, however, raise considerable uncertainties.

  7. Factors Predictive of Symptomatic Radiation Injury After Linear Accelerator-Based Stereotactic Radiosurgery for Intracerebral Arteriovenous Malformations

    International Nuclear Information System (INIS)

    Herbert, Christopher; Moiseenko, Vitali; McKenzie, Michael; Redekop, Gary; Hsu, Fred; Gete, Ermias; Gill, Brad; Lee, Richard; Luchka, Kurt; Haw, Charles; Lee, Andrew; Toyota, Brian; Martin, Montgomery

    2012-01-01

    Purpose: To investigate predictive factors in the development of symptomatic radiation injury after treatment with linear accelerator-based stereotactic radiosurgery for intracerebral arteriovenous malformations and relate the findings to the conclusions drawn by Quantitative Analysis of Normal Tissue Effects in the Clinic (QUANTEC). Methods and Materials: Archived plans for 73 patients who were treated at the British Columbia Cancer Agency were studied. Actuarial estimates of freedom from radiation injury were calculated using the Kaplan-Meier method. Univariate and multivariate Cox proportional hazards models were used for analysis of incidence of radiation injury. Log-rank test was used to search for dosimetric parameters associated with freedom from radiation injury. Results: Symptomatic radiation injury was exhibited by 14 of 73 patients (19.2%). Actuarial rate of symptomatic radiation injury was 23.0% at 4 years. Most patients (78.5%) had mild to moderate deficits according to Common Terminology Criteria for Adverse Events, version 4.0. On univariate analysis, lesion volume and diameter, dose to isocenter, and a Vx for doses ≥8 Gy showed statistical significance. Only lesion diameter showed statistical significance (p < 0.05) in a multivariate model. According to the log-rank test, AVM volumes >5 cm³ and diameters >30 mm were significantly associated with the risk of radiation injury (p < 0.01). The V12 also showed strong association with the incidence of radiation injury. Actuarial incidence of radiation injury was 16.8% if V12 was <28 cm³ and 53.2% if >28 cm³ (log-rank test, p = 0.001). Conclusions: This study confirms that the risk of developing symptomatic radiation injury after radiosurgery is related to lesion diameter and volume and irradiated volume. Results suggest a higher tolerance than proposed by QUANTEC. The widely differing findings reported in the literature, however, raise considerable uncertainties.

  8. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  9. Selection and optimization of spectrometric amplifiers for gamma spectrometry: part II - linearity, live time correction factors and software

    International Nuclear Information System (INIS)

    Moraes, Marco Antonio Proenca Vieira de; Pugliesi, Reinaldo

    1996-01-01

    The objective of the present work was to establish simple criteria to choose the best combination of electronic modules to achieve an adequate high resolution gamma spectrometer. Linearity, live time correction factors and software of a gamma spectrometric system composed of an HPGe detector have been studied using several kinds of spectrometric amplifiers (Canberra 2021, Canberra 2025, Ortec 673 and Tennelec 244) and the Ortec and Nucleus MCA cards. The results showed low values of integral non-linearity for all spectrometric amplifiers connected to the Ortec and Nucleus boards. The MCA card should be able to correct amplifier dead time for count rates of 17 kcps. (author)

  10. A Simple Linear Regression Method for Quantitative Trait Loci Linkage Analysis With Censored Observations

    OpenAIRE

    Anderson, Carl A.; McRae, Allan F.; Visscher, Peter M.

    2006-01-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using...

  11. Use of multivariate extensions of generalized linear models in the analysis of data from clinical trials

    OpenAIRE

    ALONSO ABAD, Ariel; Rodriguez, O.; TIBALDI, Fabian; CORTINAS ABRAHANTES, Jose

    2002-01-01

    In medical studies categorical endpoints are quite common. Even though some models for handling these multicategorical variables have been developed, their use is not yet widespread. This work shows an application of multivariate generalized linear models to the analysis of clinical trials data. After a theoretical introduction, models for ordinal and nominal responses are applied and the main results are discussed. multivariate analysis; multivariate logistic regression; multicategor...

  12. Worry About Caregiving Performance: A Confirmatory Factor Analysis

    Directory of Open Access Journals (Sweden)

    Ruijie Li

    2018-03-01

    Full Text Available Recent studies on the Zarit Burden Interview (ZBI) support the existence of a unique factor, worry about caregiving performance (WaP), beyond role and personal strain. Our current study aims to confirm the existence of WaP within the multidimensionality of the ZBI and to determine whether the predictors of WaP differ from those of role and personal strain. We performed confirmatory factor analysis (CFA) on 466 caregiver-patient dyads to compare one-factor (total score), two-factor (role/personal strain), three-factor (role/personal strain and WaP), and four-factor models (role strain split into two factors). We conducted linear regression analyses to explore the relationships between the different ZBI factors and socio-demographic and disease characteristics, and investigated the stage-dependent differences between WaP and role and personal strain by dyadic relationship. The four-factor structure that incorporated WaP and split role strain into two factors yielded the best fit. Linear regression analyses reveal that the variables that significantly predict WaP (adult child caregiver and Neuropsychiatric Inventory Questionnaire (NPI-Q) severity) differ from those that predict role/personal strain (adult child caregiver, instrumental activities of daily living, and NPI-Q distress). Unlike the other factors, WaP was significantly endorsed in early cognitive impairment. Among spouses, WaP remained low across Clinical Dementia Rating (CDR) stages until a sharp rise at CDR 3; adult child and sibling caregivers experience a gradual rise throughout the stages. Our results affirm the existence of WaP as a unique factor. Future research should explore the potential of WaP as a possible intervention target to improve self-efficacy in the milder stages of burden.

  13. voom: Precision weights unlock linear model analysis tools for RNA-seq read counts.

    Science.gov (United States)

    Law, Charity W; Chen, Yunshun; Shi, Wei; Smyth, Gordon K

    2014-02-03

    New normal linear modeling strategies are presented for analyzing read counts from RNA-seq experiments. The voom method estimates the mean-variance relationship of the log-counts, generates a precision weight for each observation and enters these into the limma empirical Bayes analysis pipeline. This opens access for RNA-seq analysts to a large body of methodology developed for microarrays. Simulation studies show that voom performs as well or better than count-based RNA-seq methods even when the data are generated according to the assumptions of the earlier methods. Two case studies illustrate the use of linear modeling and gene set testing methods.

  14. Three dimensional non-linear cracking analysis of prestressed concrete containment vessel

    International Nuclear Information System (INIS)

    Al-Obaid, Y.F.

    2001-01-01

    The paper gives a full development of three-dimensional cracking matrices. These matrices are simulated in a three-dimensional non-linear finite element analysis adopted for concrete containment vessels. The analysis includes a combination of conventional steel, the steel liner and prestressing tendons, and the anisotropic stress relations for concrete and concrete aggregate interlocking. The analysis is then extended and linked to cracking analysis within the global finite element program OBAID. The analytical results compare well with those available from a model test. (author)

  15. Factors affecting the HIV/AIDS epidemic: An ecological analysis of ...

    African Journals Online (AJOL)

    Factors affecting the HIV/AIDS epidemic: An ecological analysis of global data. ... Backward multiple linear regression analysis identified the proportion of Muslims, physician density, and adolescent fertility rate as the three most prominent factors linked with the national HIV epidemic. Conclusions: The findings support ...

  16. Application of the weak-field asymptotic theory to the analysis of tunneling ionization of linear molecules

    DEFF Research Database (Denmark)

    Madsen, Lars Bojer; Tolstikhin, Oleg I.; Morishita, Toru

    2012-01-01

    The recently developed weak-field asymptotic theory [Phys. Rev. A 84 053423 (2011)] is applied to the analysis of tunneling ionization of a molecular ion (H2+), several homonuclear (H2, N2, O2) and heteronuclear (CO, HF) diatomic molecules, and a linear triatomic molecule (CO2) in a static electric field. The dependence of the ionization rate on the angle between the molecular axis and the field is determined by a structure factor for the highest occupied molecular orbital. This factor is calculated using a virtually exact discrete variable representation wave function for H2+, very accurate Hartree-Fock wave functions for the diatomics, and a Hartree-Fock quantum chemistry wave function for CO2. The structure factors are expanded in terms of standard functions and the associated structure coefficients, allowing the determination of the ionization rate for any orientation of the molecule...

  17. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance

    Energy Technology Data Exchange (ETDEWEB)

    O'Daniel, Jennifer C., E-mail: jennifer.odaniel@duke.edu; Yin, Fang-Fang

    2017-05-01

    Purpose: To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. Methods and Materials: A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Results: Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Conclusions: Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number.

  18. Quantitative Approach to Failure Mode and Effect Analysis for Linear Accelerator Quality Assurance.

    Science.gov (United States)

    O'Daniel, Jennifer C; Yin, Fang-Fang

    2017-05-01

    To determine clinic-specific linear accelerator quality assurance (QA) TG-142 test frequencies, to maximize physicist time efficiency and patient treatment quality. A novel quantitative approach to failure mode and effect analysis is proposed. Nine linear accelerator-years of QA records provided data on failure occurrence rates. The severity of test failure was modeled by introducing corresponding errors into head and neck intensity modulated radiation therapy treatment plans. The relative risk of daily linear accelerator QA was calculated as a function of frequency of test performance. Although the failure severity was greatest for daily imaging QA (imaging vs treatment isocenter and imaging positioning/repositioning), the failure occurrence rate was greatest for output and laser testing. The composite ranking results suggest that performing output and lasers tests daily, imaging versus treatment isocenter and imaging positioning/repositioning tests weekly, and optical distance indicator and jaws versus light field tests biweekly would be acceptable for non-stereotactic radiosurgery/stereotactic body radiation therapy linear accelerators. Failure mode and effect analysis is a useful tool to determine the relative importance of QA tests from TG-142. Because there are practical time limitations on how many QA tests can be performed, this analysis highlights which tests are the most important and suggests the frequency of testing based on each test's risk priority number. Copyright © 2017 Elsevier Inc. All rights reserved.
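
    The risk priority number used in these two records is simply the product RPN = O × S × D, which is then used to rank QA tests by relative importance. The tiny sketch below uses invented occurrence/severity/detectability scores, not the ratings from the paper.

        # Minimal sketch: ranking linac QA failure modes by risk priority number (RPN = O * S * D).
        # Occurrence, severity and detectability scores below are invented placeholders.
        failure_modes = {
            # name: (occurrence, severity, detectability), each on a 1-10 scale
            "output drift":                   (6, 7, 3),
            "laser misalignment":             (5, 4, 2),
            "imaging vs treatment isocenter": (2, 9, 4),
            "ODI out of tolerance":           (3, 3, 2),
        }

        rpn = {name: o * s * d for name, (o, s, d) in failure_modes.items()}
        for name, score in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
            print(f"{name:32s} RPN = {score}")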

  19. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    Science.gov (United States)

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend of the acute myocardial infarction (AMI) incidence rate in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and the results were compared. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value < linear regression P value). The statistical power of the CAT test decreased, while the result of linear regression analysis remained the same, when the population size was reduced by 100 times and the AMI incidence rate remained unchanged. The two statistical methods have their advantages and disadvantages. It is necessary to choose the statistical method according to the fitting degree of the data, or to comprehensively analyze the results of the two methods.
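
    The two approaches compared in this record can be sketched side by side: a Cochran-Armitage trend test on event counts against ordered years, and an ordinary linear regression on the crude rates. The data below are invented placeholders, not the Tianjin AMI series.

        # Minimal sketch: Cochran-Armitage trend test for a rate across ordered groups,
        # compared with plain linear regression on the crude rates. Data are placeholders.
        import numpy as np
        from scipy import stats

        years = np.arange(1999, 2014)                        # ordered scores
        cases = np.array([310, 330, 360, 355, 400, 420, 450, 455, 500,
                          520, 540, 560, 600, 620, 660])
        population = np.full(years.size, 1_000_000)          # person-years at risk

        def cochran_armitage(events, totals, scores):
            p_bar = events.sum() / totals.sum()
            t = scores - np.average(scores, weights=totals)  # centered scores
            T = np.sum(t * events)
            var = p_bar * (1 - p_bar) * np.sum(totals * t**2)
            z = T / np.sqrt(var)
            return z, 2 * stats.norm.sf(abs(z))              # two-sided p-value

        z, p_trend = cochran_armitage(cases, population, years)
        slope, _, _, p_lin, _ = stats.linregress(years, cases / population)
        print(f"Cochran-Armitage: z = {z:.2f}, p = {p_trend:.2e}")
        print(f"linear regression on rates: slope = {slope:.2e}, p = {p_lin:.2e}")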

  20. Analysis of Factors Affecting Inflation in Indonesia: an Islamic Perspective

    Directory of Open Access Journals (Sweden)

    Elis Ratna Wulan

    2015-04-01

    Full Text Available This study aims to determine the factors affecting inflation. The research is descriptive and quantitative in nature. The data used are the reported exchange rate, interest rate, money supply and inflation during 2008-2012. The research data were analyzed using multiple linear regression analysis. The results for 2008-2012 showed that (1) the rate of inflation has a negative trend, (2) the interest rate has a negative trend, (3) the money supply has a positive trend, and (4) the exchange rate has a positive trend. The multiple linear regression analysis shows that the interest rate, the money supply and the rupiah exchange rate have a significant effect on the rate of inflation.
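
    The regression described here fits inflation against the interest rate, money supply and exchange rate. A minimal sketch with statsmodels is shown below; the data frame is a random placeholder standing in for the 2008-2012 series, and statsmodels itself is an assumed tool, not the one used in the study.

        # Minimal sketch: multiple linear regression of inflation on macroeconomic factors.
        # The data frame below is a random placeholder, not the 2008-2012 series.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "interest_rate": rng.normal(6.5, 0.5, 60),
            "money_supply":  rng.normal(2500, 200, 60),
            "exchange_rate": rng.normal(9500, 300, 60),
        })
        df["inflation"] = (8 - 0.4 * df["interest_rate"] + 0.001 * df["money_supply"]
                           + 0.0002 * df["exchange_rate"] + rng.normal(0, 0.3, 60))

        X = sm.add_constant(df[["interest_rate", "money_supply", "exchange_rate"]])
        model = sm.OLS(df["inflation"], X).fit()
        print(model.summary().tables[1])   # coefficients, t-statistics, p-values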

  1. Analysis of blood pressure signal in patients with different ventricular ejection fraction using linear and non-linear methods.

    Science.gov (United States)

    Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F

    2016-08-01

    Changes in left ventricle function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal, and were classified according to the left ventricular ejection fraction (LVEF) into low-risk (LR: LVEF>35%, 17 patients) and high-risk (HR: LVEF≤35%, 32 patients) groups. We propose to characterize these patients using a linear and a non-linear method, based on spectral estimation and the recurrence plot, respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Afterwards, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and non-linear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the RP, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and non-linear parameters could improve the risk stratification of cardiomyopathy patients.

  2. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The

  3. A solution approach for non-linear analysis of concrete members

    International Nuclear Information System (INIS)

    Hadi, N. M.; Das, S.

    1999-01-01

    Non-linear solution of reinforced concrete structural members, at and beyond their maximum strength, poses complex numerical problems. This is due to the fact that concrete exhibits strain-softening behaviour once it reaches its maximum strength. This paper introduces an improved non-linear solution capable of overcoming these numerical problems efficiently. The paper also presents a new concept for modeling discrete cracks in concrete members by using gap elements. Gap elements are placed between two adjacent concrete elements in the tensile zone. The magnitude of elongation of the gap elements, which represents the width of the crack in the concrete, increases with the increase of tensile stress in those elements. As a result, the transfer of load from one concrete element to adjacent elements is reduced. Results of non-linear finite element analysis of three concrete beams using this new solution strategy are compared with those obtained by other researchers, and a good agreement is achieved. (authors). 13 refs. 9 figs.,

  4. PERFORMANCE OPTIMIZATION OF LINEAR INDUCTION MOTOR BY EDDY CURRENT AND FLUX DENSITY DISTRIBUTION ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. S. MANNA

    2011-12-01

    Full Text Available The development of electromagnetic devices such as machines, transformers and heating devices confronts engineers with several problems. For the design of an optimized geometry and the prediction of the operational behaviour, an accurate knowledge of the dependencies of the field quantities inside the magnetic circuits is necessary. This paper provides the eddy current and core flux density distribution analysis in a linear induction motor. Magnetic flux in the air gap of the Linear Induction Motor (LIM) is reduced by various losses such as end effects, fringing effects, skin effects, etc. The finite element based software package COMSOL Multiphysics (COMSOL Inc., USA) is used to obtain reliable and accurate computational results for optimizing the performance of the LIM. The geometrical characteristics of the LIM are varied to find the optimal point of thrust and minimum flux leakage under static and dynamic conditions.

  5. Econometrics analysis of consumer behaviour: a linear expenditure system applied to energy

    International Nuclear Information System (INIS)

    Giansante, C.; Ferrari, V.

    1996-12-01

    In the economics literature the specification of expenditure systems is a well known subject. The problem is to define a coherent representation of consumer behaviour through functional forms that are easy to calculate. In this work the Stone-Geary Linear Expenditure System and its multi-level decision process version are used. The Linear Expenditure System is characterized by an easily computed estimation procedure, and its multi-level specification allows substitution and complementarity relations between goods. Moreover, the utility function separability condition on which the Utility Tree Approach is based justifies the use of an estimation procedure in two or more steps. This allows a high degree of disaggregation of expenditure categories, impossible to reach with the Linear Expenditure System alone. The analysis is applied to the energy sectors

  6. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Slattery, S. R.; Wilson, P. P. H. [Engineering Physics Department, University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States); Evans, T. M. [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37830 (United States)

    2013-07-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)

  7. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    International Nuclear Information System (INIS)

    Slattery, S. R.; Wilson, P. P. H.; Evans, T. M.

    2013-01-01

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of stochastic histories from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem to test the models for symmetric operators. In general, the derived approximations show good agreement with measured computational results. (authors)

  8. Linearized method: A new approach for kinetic analysis of central dopamine D2 receptor specific binding

    International Nuclear Information System (INIS)

    Watabe, Hiroshi; Hatazawa, Jun; Ishiwata, Kiichi; Ido, Tatsuo; Itoh, Masatoshi; Iwata, Ren; Nakamura, Takashi; Takahashi, Toshihiro; Hatano, Kentaro

    1995-01-01

    The authors proposed a new method (Linearized method) to analyze neuroleptic ligand-receptor specific binding in the human brain using positron emission tomography (PET). They derived the linear equations to solve the four rate constants k3, k4, k5, k6 from PET data. This method does not demand the radioactivity curve in plasma as an input function to the brain, and allows fast calculations to determine the rate constants. They also tested the Nonlinearized method, involving nonlinear equations, which is the conventional analysis using plasma radioactivity corrected for ligand metabolites as an input function. The authors applied these methods to evaluate dopamine D2 receptor specific binding of [11C]YM-09151-2. The value of Bmax/Kd = k3/k4 obtained by the Linearized method was 5.72 ± 3.1, which was consistent with the value of 5.78 ± 3.4 obtained by the Nonlinearized method

  9. Characterising non-linear dynamics in nocturnal breathing patterns of healthy infants using recurrence quantification analysis.

    Science.gov (United States)

    Terrill, Philip I; Wilson, Stephen J; Suresh, Sadasivam; Cooper, David M; Dakin, Carolyn

    2013-05-01

    Breathing dynamics vary between infant sleep states and are likely to exhibit non-linear behaviour. This study applied the non-linear analytical tool recurrence quantification analysis (RQA) to 400-breath interval periods of REM and N-REM sleep, and then to an overlapping moving window. The RQA variables were different between sleep states, with REM radius 150% greater than N-REM radius, and REM laminarity 79% greater than N-REM laminarity. RQA allowed the observation of temporal variations in non-linear breathing dynamics across a night's sleep at 30 s resolution, and provides a basis for quantifying changes in complex breathing dynamics with physiology and pathology. Copyright © 2013 Elsevier Ltd. All rights reserved.
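
    The core object behind the RQA variables quoted above is the recurrence matrix of the breath-interval series: R(i,j) = 1 whenever two delay-embedded states lie within a chosen radius of each other. The sketch below builds that matrix and computes the recurrence rate for a synthetic placeholder signal; laminarity and the other RQA measures require counting vertical and diagonal line structures and are omitted here, and the embedding parameters are illustrative assumptions.

        # Minimal sketch: recurrence matrix and recurrence rate for a breath-interval series.
        # The signal is a synthetic placeholder; embedding parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        ibi = 2.0 + 0.2 * np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.05 * rng.normal(size=400)

        def recurrence_matrix(x, dim=3, delay=1, radius=0.1):
            # delay-embed the series into 'dim'-dimensional state vectors
            n = len(x) - (dim - 1) * delay
            emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
            # pairwise Euclidean distances between embedded states
            d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
            return (d <= radius).astype(int)

        R = recurrence_matrix(ibi)
        recurrence_rate = R.sum() / R.size
        print(f"recurrence rate = {recurrence_rate:.3f}")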

  10. Flutter analysis of an airfoil with nonlinear damping using equivalent linearization

    Directory of Open Access Journals (Sweden)

    Chen Feixin

    2014-02-01

    Full Text Available The equivalent linearization method (ELM) is modified to investigate the nonlinear flutter system of an airfoil with a cubic damping. After obtaining the linearization quantity of the cubic nonlinearity by the ELM, an equivalent system can be deduced and then investigated by linear flutter analysis methods. Different from the routine procedures of the ELM, the frequency rather than the amplitude of limit cycle oscillation (LCO) is chosen as an active increment to produce bifurcation charts. Numerical examples show that this modification makes the ELM much more efficient. Meanwhile, the LCOs obtained by the ELM are in good agreement with numerical solutions. The nonlinear damping can delay the occurrence of secondary bifurcation. On the other hand, it has marginal influence on bifurcation characteristics or LCOs.

  11. SU-E-T-627: Failure Modes and Effect Analysis for Monthly Quality Assurance of Linear Accelerator

    International Nuclear Information System (INIS)

    Xie, J; Xiao, Y; Wang, J; Peng, J; Lu, S; Hu, W

    2014-01-01

    Purpose: To develop and implement a failure mode and effect analysis (FMEA) for the routine monthly quality assurance (QA) tests (physical tests part) of a linear accelerator. Methods: A systematic failure mode and effect analysis was performed for the monthly QA procedures. A detailed process tree of the monthly QA was created and potential failure modes were defined. Each failure mode may have many influencing factors. For each factor, a risk priority number (RPN) was calculated as the product of the probability of occurrence (O), the severity of effect (S), and the detectability of the failure (D). The RPN scores range from 1 to 1000, with higher scores indicating a stronger correlation of a given influencing factor with a failure mode. Five medical physicists in our institution were responsible for discussing and defining the O, S and D values. Results: 15 possible failure modes were identified, the RPN scores of all influencing factors of these 15 failure modes ranged from 8 to 150, and a checklist of FMEA in monthly QA was drawn up. The system showed consistent and accurate response to erroneous conditions. Conclusion: Influencing factors with an RPN greater than 50 were considered highly correlated factors of a given out-of-tolerance monthly QA test. FMEA is a fast and flexible tool to develop and implement a quality management (QM) framework for monthly QA, which improved the efficiency of our QA team. The FMEA work may incorporate more quantification and monitoring functions in the future

  12. Numerical linear analysis of the effects of diamagnetic and shear flow on ballooning modes

    Science.gov (United States)

    Yanqing, HUANG; Tianyang, XIA; Bin, GUI

    2018-04-01

    The linear analysis of the influence of diamagnetic effect and toroidal rotation at the edge of tokamak plasmas with BOUT++ is discussed in this paper. This analysis is done by solving the dispersion relation, which is calculated through the numerical integration of the terms with different physics. This method is able to reveal the contributions of the different terms to the total growth rate. The diamagnetic effect stabilizes the ideal ballooning modes through inhibiting the contribution of curvature. The toroidal rotation effect is also able to suppress the curvature-driving term, and the stronger shearing rate leads to a stronger stabilization effect. In addition, through linear analysis using the energy form, the curvature-driving term provides the free energy absorbed by the line-bending term, diamagnetic term and convective term.

  13. Performance of an Axisymmetric Rocket Based Combined Cycle Engine During Rocket Only Operation Using Linear Regression Analysis

    Science.gov (United States)

    Smith, Timothy D.; Steffen, Christopher J., Jr.; Yungster, Shaye; Keller, Dennis J.

    1998-01-01

    The all rocket mode of operation is shown to be a critical factor in the overall performance of a rocket based combined cycle (RBCC) vehicle. An axisymmetric RBCC engine was used to determine specific impulse efficiency values based upon both full flow and gas generator configurations. Design of experiments methodology was used to construct a test matrix and multiple linear regression analysis was used to build parametric models. The main parameters investigated in this study were: rocket chamber pressure, rocket exit area ratio, injected secondary flow, mixer-ejector inlet area, mixer-ejector area ratio, and mixer-ejector length-to-inlet diameter ratio. A perfect gas computational fluid dynamics analysis, using both the Spalart-Allmaras and k-omega turbulence models, was performed with the NPARC code to obtain values of vacuum specific impulse. Results from the multiple linear regression analysis showed that for both the full flow and gas generator configurations increasing mixer-ejector area ratio and rocket area ratio increase performance, while increasing mixer-ejector inlet area ratio and mixer-ejector length-to-diameter ratio decrease performance. Increasing injected secondary flow increased performance for the gas generator analysis, but was not statistically significant for the full flow analysis. Chamber pressure was found to be not statistically significant.

  14. Spherically symmetric analysis on open FLRW solution in non-linear massive gravity

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Chien-I; Izumi, Keisuke; Chen, Pisin, E-mail: chienichiang@berkeley.edu, E-mail: izumi@phys.ntu.edu.tw, E-mail: chen@slac.stanford.edu [Leung Center for Cosmology and Particle Astrophysics, National Taiwan University, Taipei 10617, Taiwan (China)

    2012-12-01

    We study non-linear massive gravity in the spherically symmetric context. Our main motivation is to investigate the effect of the helicity-0 mode, which remains elusive after analysis of cosmological perturbations around an open Friedmann-Lemaitre-Robertson-Walker (FLRW) universe. The non-linear form of the effective energy-momentum tensor stemming from the mass term is derived for the spherically symmetric case. Only in the special case where the area of the two-sphere does not deviate from that of the FLRW universe does the effective energy-momentum tensor become identical to that of a cosmological constant. This opens a window for discriminating the non-linear massive gravity from general relativity (GR). Indeed, by further solving these spherically symmetric gravitational equations of motion in vacuum to the linear order, we obtain a solution which has an arbitrary time-dependent parameter. In GR, this parameter is a constant and corresponds to the mass of a star. Our result means that Birkhoff's theorem no longer holds in the non-linear massive gravity and suggests that energy can probably be emitted superluminously (with infinite speed) on the self-accelerating background by the helicity-0 mode, which could be a potential plague of this theory.

  15. A primer for biomedical scientists on how to execute model II linear regression analysis.

    Science.gov (United States)

    Ludbrook, John

    2012-04-01

    1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
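
    As a rough illustration of the least products idea, the Python sketch below computes ordinary least products (geometric mean) regression coefficients, with 95% confidence intervals from a percentile bootstrap loosely mirroring what smatr does. The data, sample size and bootstrap settings are invented for the example.

```python
import numpy as np

def least_products_regression(x, y, n_boot=2000, seed=1):
    """Ordinary least products (geometric mean) regression: a minimal sketch."""
    x, y = np.asarray(x, float), np.asarray(y, float)

    def fit(xs, ys):
        r = np.corrcoef(xs, ys)[0, 1]
        slope = np.sign(r) * ys.std(ddof=1) / xs.std(ddof=1)   # OLP slope
        intercept = ys.mean() - slope * xs.mean()
        return slope, intercept

    slope, intercept = fit(x, y)
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):                                    # percentile bootstrap CI
        idx = rng.integers(0, len(x), len(x))
        boots.append(fit(x[idx], y[idx]))
    ci = np.percentile(np.array(boots), [2.5, 97.5], axis=0)
    return slope, intercept, ci

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 20)
y = 1.8 * x + 0.5 + rng.normal(0.0, 1.0, 20)
print(least_products_regression(x, y))
```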

  16. Linear stability analysis of the gas injection augmented natural circulation of STAR-LM

    International Nuclear Information System (INIS)

    Yeon-Jong Yoo; Qiao Wu; James J Sienicki

    2005-01-01

    Full text of publication follows: A linear stability analysis has been performed for the gas injection augmented natural circulation of the Secure Transportable Autonomous Reactor - Liquid Metal (STAR-LM). Natural circulation is of great interest for the development of Generation-IV nuclear energy systems due to its vital role in the area of passive safety and reliability. One of such systems is STAR-LM under development by Argonne National Laboratory. STAR-LM is a 400 MWt class modular, proliferation-resistant, and passively safe liquid metal-cooled fast reactor system that uses inert lead (Pb) coolant and the advanced power conversion system that consists of a gas turbine Brayton cycle utilizing supercritical carbon dioxide (CO2) to obtain higher plant efficiency. The primary loop of STAR-LM relies only on the natural circulation to eliminate the use of circulation pumps for passive safety consideration. To enhance the natural circulation of the primary coolant, STAR-LM optionally incorporates the additional driving force provided by the injection of noncondensable gas into the primary coolant above the reactor core, which is effective in removing heat from the core and transferring it to the secondary working fluid without the attainment of excessive coolant temperature at nominal operating power. Therefore, it naturally raises the concern about the natural circulation instability due to the relatively high temperature change in the core and the two-phase flow condition in the hot leg above the core. For the ease of analysis, the flow path of the loop was partitioned into five thermal-hydraulically distinct sections, i.e., heated core, unheated core, hot leg, heat exchanger, and cold leg. The one-dimensional single-phase flow field equations governing the natural circulation, i.e., continuity, momentum, and energy equations, were used for each section except the hot leg. For the hot leg, the one-dimensional homogeneous equilibrium two-phase flow field

  17. Non-linear analysis of skew thin plate by finite difference method

    International Nuclear Information System (INIS)

    Kim, Chi Kyung; Hwang, Myung Hwan

    2012-01-01

    This paper deals with a discrete analysis capability for predicting the geometrically nonlinear behavior of a skew thin plate subjected to uniform pressure. The differential equations, which are used to determine the deflections and the in-plane stress functions of the plate, are discretized by means of the finite difference method and reduced to several sets of linear algebraic simultaneous equations. For the geometrically non-linear, large deflection behavior of the plate, the non-linear plate theory is used for the analysis. An iterative scheme is employed to solve these quasi-linear algebraic equations. Several problems are solved which illustrate the potential of the method for predicting the finite deflection and stress. For increasing lateral pressures, the maximum principal tensile stress occurs at the center of the plate and migrates toward the corners as the load increases. It was deemed important to describe the locations of the maximum principal tensile stress as it occurs. The load-deflection relations and the maximum bending and membrane stresses for each case are presented and discussed.

  18. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    Science.gov (United States)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  19. Theoretical foundations of functional data analysis, with an introduction to linear operators

    CERN Document Server

    Hsing, Tailen

    2015-01-01

    Theoretical Foundations of Functional Data Analysis, with an Introduction to Linear Operators provides a uniquely broad compendium of the key mathematical concepts and results that are relevant for the theoretical development of functional data analysis (FDA). The self-contained treatment of selected topics of functional analysis and operator theory includes reproducing kernel Hilbert spaces, singular value decomposition of compact operators on Hilbert spaces and perturbation theory for both self-adjoint and non-self-adjoint operators. The probabilistic foundation for FDA is described from the

  20. Linearization effect in multifractal analysis: Insights from the Random Energy Model

    Science.gov (United States)

    Angeletti, Florian; Mézard, Marc; Bertin, Eric; Abry, Patrice

    2011-08-01

    The analysis of the linearization effect in multifractal analysis, and hence of the estimation of moments for multifractal processes, is revisited borrowing concepts from the statistical physics of disordered systems, notably from the analysis of the so-called Random Energy Model. Considering a standard multifractal process (compound Poisson motion), chosen as a simple representative example, we show the following: (i) the existence of a critical order q∗ beyond which moments, though finite, cannot be estimated through empirical averages, irrespective of the sample size of the observation; (ii) multifractal exponents necessarily behave linearly in q, for q>q∗. Tailoring the analysis conducted for the Random Energy Model to that of compound Poisson motion, we provide explanatory and quantitative predictions for the values of q∗ and for the slope controlling the linear behavior of the multifractal exponents. These quantities are shown to be related only to the definition of the multifractal process and not to depend on the sample size of the observation. Monte Carlo simulations, conducted over a large number of large sample size realizations of compound Poisson motion, confirm and extend these analyses.

  1. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner

    Directory of Open Access Journals (Sweden)

    Yubo Wang

    2017-06-01

    Full Text Available It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model to a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance and results show that the proposed method Sparse-BMFLC has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous Wavelet transform (CWT) and the BMFLC Kalman Smoother. Furthermore, the proposed method provides an overall 6.22% in reconstruction error.

  2. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    Science.gov (United States)

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics. This necessitates the usage of time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyze the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to the earlier designs, we first identified the sparsity imposed on the signal model in order to reformulate the model to a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy ratio metric is employed to quantify the spectral performance and results show that the proposed method Sparse-BMFLC has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), continuous Wavelet transform (CWT) and BMFLC Kalman Smoother. Furthermore, the proposed method provides an overall 6.22% in reconstruction error.
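
    The general idea of a band-limited Fourier dictionary fitted by sparse (L1-penalized) linear regression can be sketched as below. The frequency grid, penalty and test signal are illustrative assumptions and the Lasso solver stands in for whichever convex optimizer the authors used; this is not their implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
signal += 0.1 * np.random.default_rng(1).standard_normal(t.size)

freqs = np.arange(5.0, 15.5, 0.5)            # band-limited frequency grid
X = np.hstack([np.sin(2 * np.pi * np.outer(t, freqs)),
               np.cos(2 * np.pi * np.outer(t, freqs))])   # Fourier dictionary

model = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000).fit(X, signal)
reconstruction = X @ model.coef_
active = freqs[np.abs(model.coef_[:freqs.size])
               + np.abs(model.coef_[freqs.size:]) > 1e-3]
print("active frequencies (Hz):", active)
print("relative reconstruction error:",
      np.linalg.norm(signal - reconstruction) / np.linalg.norm(signal))
```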

  3. Analysis of technological, institutional and socioeconomic factors ...

    African Journals Online (AJOL)

    Analysis of technological, institutional and socioeconomic factors that influences poor reading culture among secondary school students in Nigeria. ... Proliferation and availability of smart phones, chatting culture and social media were identified as technological factors influencing poor reading culture among secondary ...

  4. Estimate the contribution of incubation parameters influence egg hatchability using multiple linear regression analysis.

    Science.gov (United States)

    Khalil, Mohamed H; Shebl, Mostafa K; Kosba, Mohamed A; El-Sabrout, Karim; Zaki, Nesma

    2016-08-01

    This research was conducted to determine the most affecting parameters on hatchability of indigenous and improved local chickens' eggs. Five parameters were studied (fertility, early and late embryonic mortalities, shape index, egg weight, and egg weight loss) on four strains, namely Fayoumi, Alexandria, Matrouh, and Montazah. Multiple linear regression was performed on the studied parameters to determine the most influencing one on hatchability. The results showed significant differences in commercial and scientific hatchability among strains. Alexandria strain has the highest significant commercial hatchability (80.70%). Regarding the studied strains, highly significant differences in hatching chick weight among strains were observed. Using multiple linear regression analysis, fertility made the greatest percent contribution (71.31%) to hatchability, and the lowest percent contributions were made by shape index and egg weight loss. A prediction of hatchability using multiple regression analysis could be a good tool to improve hatchability percentage in chickens.

  5. Z-score linear discriminant analysis for EEG based brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    Full Text Available Linear discriminant analysis (LDA) is one of the most popular classification algorithms for brain-computer interfaces (BCI). LDA assumes a Gaussian distribution of the data, with equal covariance matrices for the concerned classes; however, the assumption is not usually held in actual BCI applications, where heteroscedastic class distributions are usually observed. This paper proposes an enhanced version of LDA, namely z-score linear discriminant analysis (Z-LDA), which introduces a new decision boundary definition strategy to handle heteroscedastic class distributions. Z-LDA defines the decision boundary through a z-score utilizing both the mean and standard deviation information of the projected data, which can adaptively adjust the decision boundary to fit the heteroscedastic distribution situation. Results derived from both a simulation dataset and two actual BCI datasets consistently show that Z-LDA achieves significantly higher average classification accuracies than conventional LDA, indicating the superiority of the new proposed decision boundary definition strategy.
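
    A minimal sketch of the z-score decision idea is given below: project the data onto a Fisher discriminant direction, then assign a new sample to the class whose projected distribution yields the smallest absolute z-score. The simulated heteroscedastic data and the plain Fisher projection are assumptions for illustration, not the paper's datasets or exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, (200, 2))          # class 0: small variance
X1 = rng.normal(2.5, 3.0, (200, 2))          # class 1: large variance
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# Fisher direction w ~ Sw^{-1} (m1 - m0)
m0, m1 = X0.mean(0), X1.mean(0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, m1 - m0)

p = X @ w                                    # projections onto w
mu = [p[y == k].mean() for k in (0, 1)]
sd = [p[y == k].std(ddof=1) for k in (0, 1)]

def z_lda_predict(x_new):
    # Class with the smallest absolute z-score of the projected sample.
    z = [abs((x_new @ w - mu[k]) / sd[k]) for k in (0, 1)]
    return int(np.argmin(z))

print(z_lda_predict(np.array([0.2, -0.1])), z_lda_predict(np.array([4.0, 5.0])))
```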

  6. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    Science.gov (United States)

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
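
    The contrast between a factored and an unfactored exposure can be illustrated with a small Poisson rate model, as sketched below; the simulated dose-response, person-time offset and column names are hypothetical and do not reproduce the paper's simulation design.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
dose = rng.integers(0, 4, n)                      # ordered exposure: 0..3
persontime = rng.uniform(1.0, 5.0, n)
rate = 0.02 * np.exp(0.4 * np.sqrt(dose))         # a mildly non-linear truth
events = rng.poisson(rate * persontime)
df = pd.DataFrame({"events": events, "dose": dose,
                   "log_pt": np.log(persontime)})

# Unfactored: dose enters as a single linear term.
linear = smf.glm("events ~ dose", data=df, family=sm.families.Poisson(),
                 offset=df["log_pt"]).fit()
# Factored: dose is turned into dichotomous indicator variables.
factored = smf.glm("events ~ C(dose)", data=df, family=sm.families.Poisson(),
                   offset=df["log_pt"]).fit()
print(linear.params, factored.params, sep="\n")
```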

  7. MULTIPLE LINEAR REGRESSION ANALYSIS FOR PREDICTION OF BOILER LOSSES AND BOILER EFFICIENCY

    OpenAIRE

    Chayalakshmi C.L

    2018-01-01

    Calculation of boiler efficiency is essential if its parameters need to be controlled for either maintaining or enhancing its efficiency. But determination of boiler efficiency using the conventional method is time consuming and very expensive. Hence, it is not recommended to find boiler efficiency frequently. The work presented in this paper deals with establishing the statistical mo...

  8. Coupled Analytical-Finite Element Methods for Linear Electromagnetic Actuator Analysis

    Directory of Open Access Journals (Sweden)

    K. Srairi

    2005-09-01

    Full Text Available In this paper, a linear electromagnetic actuator with moving parts is analyzed. The movement is considered through the modification of boundary conditions only using coupled analytical and finite element analysis. In order to evaluate the dynamic performance of the device, the coupling between electric, magnetic and mechanical phenomena is established. The displacement of the moving parts and the inductor current are determined when the device is supplied by capacitor discharge voltage.

  9. Stability Analysis of Periodic Orbits in a Class of Duffing-Like Piecewise Linear Vibrators

    KAUST Repository

    El Aroudi, A.

    2014-09-01

    In this paper, we study the dynamical behavior of a Duffing-like piecewise linear (PWL) spring-mass-damper system for vibration-based energy harvesting applications. First, we present a continuous time single degree of freedom PWL dynamical model of the system. From this PWL model, numerical simulations are carried out by computing the frequency response and bifurcation diagram under a deterministic harmonic excitation for different sets of system parameter values. Stability analysis is performed using Floquet theory combined with the Filippov method.

  10. Stability Analysis of Periodic Orbits in a Class of Duffing-Like Piecewise Linear Vibrators

    KAUST Repository

    El Aroudi, A.; Benadero, L.; Ouakad, H.; Younis, Mohammad I.

    2014-01-01

    In this paper, we study the dynamical behavior of a Duffing-like piecewise linear (PWL) spring-mass-damper system for vibration-based energy harvesting applications. First, we present a continuous time single degree of freedom PWL dynamical model of the system. From this PWL model, numerical simulations are carried out by computing the frequency response and bifurcation diagram under a deterministic harmonic excitation for different sets of system parameter values. Stability analysis is performed using Floquet theory combined with the Filippov method.

  11. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens Christian; Norsker, Merete

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating... that could only be seen in two-dimensional plots, and also less effective predictions. PLS was the best method in the linear case treated, with fewer components and a better prediction than CA-PLS.

  12. Development of an efficient iterative solver for linear systems in FE structural analysis

    International Nuclear Information System (INIS)

    Saint-Georges, P.; Warzee, G.; Beauwens, R.; Notay, Y.

    1993-01-01

    The preconditioned conjugate gradient is a well-known and powerful method to solve sparse symmetric positive definite systems of linear equations. Such systems are generated by the finite element discretization in structural analysis but users of finite element in this context generally still rely on direct methods. It is our purpose in the present paper to highlight the improvement brought forward by some new preconditioning techniques and show that the preconditioned conjugate gradient method is more performant than any direct method. (author)

  13. Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report

    Science.gov (United States)

    Ahmad, Shahid

    1991-01-01

    An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration and linear and nonlinear transient dynamic problems involving two and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of the complexity of the formulation it has never been implemented in exact form. In the present work, linear and nonlinear time domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all the existing formulations of BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space. In addition, higher order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, such that not only problems of the layered media and the soil-structure interaction can be analyzed but also a large problem can be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program. Some numerical problems are solved and, through comparisons

  14. Tools to identify linear combination of prognostic factors which maximizes area under receiver operator curve.

    Science.gov (United States)

    Todor, Nicolae; Todor, Irina; Săplăcan, Gavril

    2014-01-01

    The linear combination of variables is an attractive method in many medical analyses targeting a score to classify patients. In the case of ROC curves the most popular problem is to identify the linear combination which maximizes the area under the curve (AUC). This problem is completely closed when normality assumptions are met. With no assumption of normality, search algorithms are avoided because it is accepted that we have to evaluate the AUC n^d times, where n is the number of distinct observations and d is the number of variables. For d = 2, using particularities of the AUC formula, we described an algorithm which lowered the number of evaluations of the AUC from n^2 to n(n-1) + 1. For d > 2 our proposed solution is an approximate method that considers equidistant points on the unit sphere in R^d at which we evaluate the AUC. The algorithms were applied to data from our lab to predict response to treatment by a set of molecular markers in cervical cancer patients. In order to evaluate the strength of our algorithms a simulation was added. In the case of non-normality the presented algorithms are feasible. For many variables the computation time could be increased but remains acceptable.
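
    For the two-variable case, the search for an AUC-maximizing combination can be sketched as below by scanning directions on the unit circle and scoring each with the Mann-Whitney AUC estimate. The angular grid is a simple stand-in for the exact enumeration described in the record, and the simulated data are illustrative only.

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney estimate of the area under the ROC curve."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(4)
x = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal([1.0, 0.5], 1.0, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]

# Scan combinations w = (cos a, sin a) and keep the direction with maximal AUC.
best_auc, best_angle = max(
    ((auc(x @ np.array([np.cos(a), np.sin(a)]), y), a)
     for a in np.linspace(0.0, 2.0 * np.pi, 1441)),
    key=lambda pair: pair[0],
)
print(f"best AUC = {best_auc:.3f} at angle {np.degrees(best_angle):.1f} degrees")
```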

  15. Hand function evaluation: a factor analysis study.

    Science.gov (United States)

    Jarus, T; Poremba, R

    1993-05-01

    The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation that can be used for current clinical purposes is provided.

  16. MetabR: an R script for linear model analysis of quantitative metabolomic data

    Directory of Open Access Journals (Sweden)

    Ernest Ben

    2012-10-01

    Full Text Available Abstract Background Metabolomics is an emerging high-throughput approach to systems biology, but data analysis tools are lacking compared to other systems level disciplines such as transcriptomics and proteomics. Metabolomic data analysis requires a normalization step to remove systematic effects of confounding variables on metabolite measurements. Current tools may not correctly normalize every metabolite when the relationships between each metabolite quantity and fixed-effect confounding variables are different, or for the effects of random-effect confounding variables. Linear mixed models, an established methodology in the microarray literature, offer a standardized and flexible approach for removing the effects of fixed- and random-effect confounding variables from metabolomic data. Findings Here we present a simple menu-driven program, “MetabR”, designed to aid researchers with no programming background in statistical analysis of metabolomic data. Written in the open-source statistical programming language R, MetabR implements linear mixed models to normalize metabolomic data and analysis of variance (ANOVA) to test treatment differences. MetabR exports normalized data, checks statistical model assumptions, identifies differentially abundant metabolites, and produces output files to help with data interpretation. Example data are provided to illustrate normalization for common confounding variables and to demonstrate the utility of the MetabR program. Conclusions We developed MetabR as a simple and user-friendly tool for implementing linear mixed model-based normalization and statistical analysis of targeted metabolomic data, which helps to fill a lack of available data analysis tools in this field. The program, user guide, example data, and any future news or updates related to the program may be found at http://metabr.r-forge.r-project.org/.
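
    MetabR itself is an R program; purely as an illustration of the underlying idea, the Python sketch below fits a linear mixed model with a fixed treatment effect and a random batch (run) effect to a simulated metabolite, then subtracts the estimated batch effects as a normalization step. All column names, values and the choice of statsmodels are assumptions, not MetabR's implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
batches = np.repeat(["run1", "run2", "run3"], 20)
treatment = np.tile(["control", "treated"], 30)
batch_effect = {"run1": 0.0, "run2": 0.4, "run3": -0.3}
intensity = (10.0 + 0.8 * (treatment == "treated")
             + np.array([batch_effect[b] for b in batches])
             + rng.normal(0.0, 0.3, 60))
df = pd.DataFrame({"intensity": intensity, "treatment": treatment,
                   "batch": batches})

# Random-intercept mixed model: fixed treatment effect, random batch effect.
model = smf.mixedlm("intensity ~ treatment", data=df, groups=df["batch"]).fit()
print(model.summary())

# Normalization step: remove the estimated random batch effects.
df["normalized"] = df["intensity"] - df["batch"].map(
    {k: v.iloc[0] for k, v in model.random_effects.items()})
```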

  17. Integrating human factors into process hazard analysis

    International Nuclear Information System (INIS)

    Kariuki, S.G.; Loewe, K.

    2007-01-01

    A comprehensive process hazard analysis (PHA) needs to address human factors. This paper describes an approach that systematically identifies human error in process design and the human factors that influence its production and propagation. It is deductive in nature and therefore considers human error as a top event. The combinations of different factors that may lead to this top event are analysed. It is qualitative in nature and is used in combination with other PHA methods. The method has an advantage because it does not look at the operator error as the sole contributor to the human failure within a system but a combination of all underlying factors

  18. Simplified non-linear time-history analysis based on the Theory of Plasticity

    DEFF Research Database (Denmark)

    Costa, Joao Domingues

    2005-01-01

    This paper aims at giving a contribution to the problem of developing simplified non-linear time-history (NLTH) analysis of structures whose dynamical response is mainly governed by plastic deformations, able to provide designers with sufficiently accurate results. The method to be presented is based on the Theory of Plasticity. Firstly, the formulation and the computational procedure to perform time-history analysis of a rigid-plastic single degree of freedom (SDOF) system are presented. The necessary conditions for the method to incorporate pinching as well as strength degradation...

  19. Experimental Analysis of Linear Induction Motor under Variable Voltage Variable Frequency (VVVF) Power Supply

    Directory of Open Access Journals (Sweden)

    Prasenjit D. Wakode

    2016-07-01

    Full Text Available This paper presents the complete analysis of a Linear Induction Motor (LIM) under VVVF. The complete variation of the LIM air gap flux under the ‘blocked Linor’ condition and the starting force are analyzed and presented when the LIM is given a VVVF supply. The analysis of these data is important in further understanding the equivalent circuit parameters of the LIM and in studying its magnetic circuit. The variation of these parameters is important to know the LIM response at different frequencies. The simulation and application of different control strategies such as vector control thus become quite easy to apply, and the motor's response under such a control strategy easy to understand.

  20. An improved multiple linear regression and data analysis computer program package

    Science.gov (United States)

    Sidik, S. M.

    1972-01-01

    NEWRAP, an improved version of a previous multiple linear regression program called RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis of variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

  1. Classical linear-control analysis applied to business-cycle dynamics and stability

    Science.gov (United States)

    Wingrove, R. C.

    1983-01-01

    Linear control analysis is applied as an aid in understanding the fluctuations of business cycles in the past, and to examine monetary policies that might improve stabilization. The analysis shows how different policies change the frequency and damping of the economic system dynamics, and how they modify the amplitude of the fluctuations that are caused by random disturbances. Examples are used to show how policy feedbacks and policy lags can be incorporated, and how different monetary strategies for stabilization can be analytically compared. Representative numerical results are used to illustrate the main points.

  2. Quantifying the metabolic capabilities of engineered Zymomonas mobilis using linear programming analysis

    Directory of Open Access Journals (Sweden)

    Tsantili Ivi C

    2007-03-01

    Full Text Available Abstract Background The need for discovery of alternative, renewable, environmentally friendly energy sources and the development of cost-efficient, "clean" methods for their conversion into higher fuels becomes imperative. Ethanol, whose significance as a fuel has dramatically increased in the last decade, can be produced from hexoses and pentoses through microbial fermentation. Importantly, plant biomass, if appropriately and effectively decomposed, is a potential inexpensive and highly renewable source of the hexose and pentose mixture. Recently, the engineered (to also catabolize pentoses) anaerobic bacterium Zymomonas mobilis has been widely discussed among the most promising microorganisms for the microbial production of ethanol fuel. However, although the Z. mobilis genome was fully sequenced in 2005, there is still a small number of published studies of its in vivo physiology and limited use of the metabolic engineering experimental and computational toolboxes to understand its metabolic pathway interconnectivity and regulation towards the optimization of its hexose and pentose fermentation into ethanol. Results In this paper, we reconstructed the metabolic network of the engineered Z. mobilis to a level at which it could be modelled using metabolic engineering methodologies. We then used linear programming (LP) analysis and identified the Z. mobilis metabolic boundaries with respect to various biological objectives, these boundaries being determined only by the Z. mobilis network's stoichiometric connectivity. This study revealed the reactions essential for bacterial growth and elucidated the association between the metabolic pathways, especially regarding main product and byproduct formation. More specifically, the study indicated that ethanol and biomass production depend directly on anaerobic respiration stoichiometry and activity. Thus, enhanced understanding and improved means for analyzing anaerobic respiration and redox potential in vivo are
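
    The LP formulation described here (maximize a biological objective subject to steady-state stoichiometry and flux bounds) can be sketched on a toy network as below. The three-reaction "metabolism", the bounds and the biomass objective are made-up illustrations, not the reconstructed Z. mobilis model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance sketch: metabolites A and B, reactions
# v0: uptake -> A, v1: A -> B, v2: B -> ethanol (secreted), v3: A -> biomass.
S = np.array([
    [1, -1, 0, -1],   # balance on A
    [0,  1, -1, 0],   # balance on B
])
bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10

# Maximize the biomass flux v3; linprog minimizes, so negate its coefficient.
c = np.array([0.0, 0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", -res.fun, "fluxes:", res.x)
```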

  3. Application of perturbation theory to the non-linear vibration analysis of a string including the bending moment effects

    International Nuclear Information System (INIS)

    Esmaeilzadeh Khadem, S.; Rezaee, M.

    2001-01-01

    In this paper the large amplitude and non-linear vibration of a string is considered. The initial tension, lateral vibration amplitude, diameter and the modulus of elasticity of the string have the main effects on its natural frequencies. Increasing the lateral vibration amplitude makes the assumption of constant initial tension invalid. In this case, therefore, it is impossible to use the classical equation of a string with the small amplitude transverse motion assumption. On the other hand, by increasing the string diameter, the bending moment effect will increase dramatically and act as an appreciable restoring moment. Considering the effects of the bending moments, the nonlinear equation governing the large amplitude transverse vibration of a string is derived. The time dependent portion of the governing equation, which has the form of the Duffing equation, is solved using perturbation theory. The results of the analysis are shown in appropriate graphs, and the natural frequencies of the string due to the non-linear factors are compared with the natural frequencies of the linear vibration of a string without bending moment effects.

  4. Quasi-likelihood generalized linear regression analysis of fatality risk data.

    Science.gov (United States)

    2009-01-01

    Transportation-related fatality risk is a function of many interacting human, vehicle, and environmental factors. Statistically valid analysis of such data is challenged both by the complexity of plausible structural models relating fatality rates t...

  5. Analysis by numerical simulations of non-linear phenomenons in vertical pump rotor dynamic

    International Nuclear Information System (INIS)

    Bediou, J.; Pasqualini, G.

    1992-01-01

    Controlling the dynamical behavior of main coolant pump shaftlines is an interesting subject for both the user and the constructor. The first is mainly concerned with the interpretation of behavior observed in the field, monitoring, reliability and preventive maintenance of his machines. The second must in addition manage sometimes contradictory requirements related to mechanical design and performance optimization (shaft diameter reduction, clearance,...). The use of numerical modeling is now a classical technique for simple analysis (rough prediction of critical speeds for instance) but is still limited, in particular for vertical shaftlines, especially when equipped with hydrodynamic bearings, due to the complexity of the phenomena encountered in that type of machine. The vertical position of the shaftline seems to be the origin of non-linear dynamical behavior, the analysis of which, as presented in the following discussion, requires specific modelization of the fluid film, particularly for hydrodynamic bearings. The low static load generally no longer allows the use of stiffness and damping coefficients classically calculated by linearizing the fluid film equations near a stable static equilibrium position. For the analysis of such machines, specific numerical models have been developed at Electricite de France in a package for general rotordynamics analysis. Numerical models are briefly described. Then an example is presented and discussed in detail to illustrate some of the phenomena considered and their consequences on machine behavior. In this example, the authors interpret the observed behavior by using numerical models, and demonstrate the advantage of such analysis for better understanding of vertical pump rotordynamics

  6. ANALYSIS OF FACTORS WHICH AFFECTING THE ECONOMIC GROWTH

    Directory of Open Access Journals (Sweden)

    Suparna Wijaya

    2017-03-01

    Full Text Available High economic growth and a sustainable process are the main conditions for the sustainability of a country's economic development. They also serve as measures of the success of the country's economy. The factors tested in this study are economic and non-economic factors impacting economic development. This study aims to explain the factors that influence Indonesia's macroeconomy. It used a linear regression modeling approach. The analysis showed that tax amnesty, the exchange rate, inflation, and the interest rate jointly account for 77.6% of the variation in economic growth, whereas the remaining 22.4% is influenced by other variables not observed in this study. Keywords: tax amnesty, exchange rates, inflation, SBI and economic growth

  7. Pleiotropy analysis of quantitative traits at gene level by multivariate functional linear models.

    Science.gov (United States)

    Wang, Yifan; Liu, Aiyi; Mills, James L; Boehnke, Michael; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao; Wu, Colin O; Fan, Ruzong

    2015-05-01

    In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai-Bartlett trace, Hotelling-Lawley trace, and Wilks's Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. © 2015 WILEY PERIODICALS, INC.

  8. Real time computer control of a nonlinear Multivariable System via Linearization and Stability Analysis

    International Nuclear Information System (INIS)

    Raza, K.S.M.

    2004-01-01

    This paper demonstrates that if a complicated nonlinear, non-square, state-coupled multivariable system is smartly linearized and subjected to a thorough stability analysis, then we can achieve our design objectives via a controller which will be quite simple (in terms of resource usage and execution time) and very efficient (in terms of robustness). Further, the aim is to implement this controller via computer in a real time environment. Therefore, first a nonlinear mathematical model of the system is obtained. Work is then done to decouple the multivariable system. Linearization and stability analysis techniques are employed for the development of a linearized and mathematically sound control law. Nonlinearities such as saturation in the actuators are also catered for. The controller is then discretized using Runge-Kutta integration. Finally the discretized control law is programmed in a computer in a real time environment. The programming is done in RT-Linux using GNU C for the real time realization of the control scheme. The real time processes, like sampling and controlled actuation, and the non real time processes, like the graphical user interface and display, are programmed as different tasks. The issue of inter-process communication between real time and non real time tasks is addressed quite carefully. The results of this research pursuit are presented graphically. (author)

  9. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    Science.gov (United States)

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

    The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare performances of SMA, linear mixed model (LMM), and unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and mean of response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of SMA vs. LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, it was found that the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was not often robust and led to non-sensible results when the covariance structure for errors was misspecified. The results emphasized discarding the UMA which often yielded extremely conservative inferences as to such data. It was shown that summary measure is a simple, safe and powerful approach in which the loss of efficiency compared to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice to reliably analyze the linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
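
    The slope-based summary measure described here reduces to a very small computation: fit a least-squares slope of response over time for each subject, then compare the slopes between groups. The Python sketch below illustrates this on simulated data; the group sizes, time points and effect sizes are assumptions for the example only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
times = np.arange(6)                      # six repeated measurements per subject

def simulate(n_subjects, slope):
    return np.array([10.0 + slope * times + rng.normal(0.0, 1.0, times.size)
                     for _ in range(n_subjects)])

group_a = simulate(15, slope=0.5)
group_b = simulate(15, slope=1.0)

def subject_slopes(data):
    # Least-squares slope of response over time, one per subject.
    return np.array([np.polyfit(times, y, 1)[0] for y in data])

t, p = stats.ttest_ind(subject_slopes(group_a), subject_slopes(group_b))
print(f"t = {t:.2f}, p = {p:.4f}")        # tests the group-by-time interaction
```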

  10. Steady state and linear stability analysis of a supercritical water natural circulation loop

    International Nuclear Information System (INIS)

    Sharma, Manish; Pilkhwal, D.S.; Vijayan, P.K.; Saha, D.; Sinha, R.K.

    2010-01-01

    Supercritical water (SCW) has excellent heat transfer characteristics as a coolant for nuclear reactors. Besides it results in high thermal efficiency of the plant. However, the flow can experience instabilities in supercritical water reactors, as the density change is very large for the supercritical fluids. A computer code SUCLIN using supercritical water properties has been developed to carry out the steady state and linear stability analysis of a SCW natural circulation loop. The conservation equations of mass, momentum and energy have been linearized by imposing small perturbation in flow rate, enthalpy, pressure and specific volume. The equations have been solved analytically to generate the characteristic equation. The roots of the equation determine the stability of the system. The code has been qualitatively assessed with published results and has been extensively used for studying the effect of diameter, height, heater inlet temperature, pressure and local loss coefficients on steady state and stability behavior of a Supercritical Water Natural Circulation Loop (SCWNCL). The present paper describes the linear stability analysis model and the results obtained in detail.

  11. Weighted functional linear regression models for gene-based association analysis.

    Science.gov (United States)

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10⁻⁶), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
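
    The allele-frequency weighting mentioned here can be illustrated as below: each variant is weighted by a beta density evaluated at its minor allele frequency, so rarer variants receive larger weights before entering the regression. The Beta(1, 25) parameters are a common SKAT-style choice used here only as an assumed example; the genotypes are simulated.

```python
import numpy as np
from scipy import stats

maf = np.array([0.001, 0.005, 0.01, 0.05, 0.20, 0.45])   # minor allele frequencies
weights = stats.beta.pdf(maf, 1, 25)                      # rare variants weighted up

genotypes = np.random.default_rng(7).integers(0, 3, size=(8, maf.size))
weighted_genotypes = genotypes * weights                  # columns scaled before the model
print(np.round(weights, 2))
```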

  12. Experimental and numerical analysis of behavior of electromagnetic annular linear induction pump

    International Nuclear Information System (INIS)

    Goldsteins, Linards

    2015-01-01

    The research explores the issue of magnetohydrodynamic (MHD) instability in electromagnetic induction pumps, with focus on regimes of high slip Reynolds magnetic number (Rm_s) in Annular Linear Induction Pumps (ALIP) operating with liquid sodium. The context of the thesis is the French GEN IV Sodium Fast Reactor research and development program for ASTRID, in the framework of which the use of high-discharge ALIPs in the secondary cooling loops is being studied. CEA has designed, realized and will exploit the PEMDYN facility, able to represent MHD instability in high-discharge ALIPs. In the thesis, the stability of an ideal ALIP is elaborated theoretically using linear stability analysis. The analysis revealed that strong amplification of perturbations is expected after the convective stability threshold is reached. The theory is supported with numerical results and experiments reported in the literature. Stable operation and a stabilization technique operating with two frequencies in the case of an ideal ALIP are discussed and the necessary conditions derived. Detailed numerical models of a flat linear induction pump (FLIP), taking into account effects of a real pump, are developed. A new technique of magnetic field measurements has been introduced and the experimental results demonstrate a qualitative agreement with the numerical models, capturing all principal phenomena such as oscillation of the magnetic field and perturbed velocity profiles. These results give significantly more profound insight into the phenomenon of MHD instability and can be used as a reference in further studies. (author)

  13. Linearity assumption in soil-to-plant transfer factors of natural uranium and radium in Helianthus annuus L

    International Nuclear Information System (INIS)

    Rodriguez, P. Blanco; Tome, F. Vera; Fernandez, M. Perez; Lozano, J.C.

    2006-01-01

    The linearity assumption of the validation of soil-to-plant transfer factors of natural uranium and 226Ra was tested using Helianthus annuus L. (sunflower) grown in a hydroponic medium. Transfer of natural uranium and 226Ra was tested in both the aerial fraction of the plants and in the overall seedlings (roots and shoots). The results show that the linearity assumption can be considered valid in the hydroponic growth of sunflowers for the radionuclides studied. The ability of sunflowers to translocate uranium and 226Ra was also investigated, as well as the feasibility of using sunflower plants to remove uranium and radium from contaminated water, and by extension, their potential for phytoextraction. In this sense, the removal percentages obtained for natural uranium and 226Ra were 24% and 42%, respectively. Practically all the uranium is accumulated in the roots. However, 86% of the 226Ra activity concentration in roots was translocated to the aerial part.

  14. Linearity assumption in soil-to-plant transfer factors of natural uranium and radium in Helianthus annuus L

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, P. Blanco [Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz (Spain); Tome, F. Vera [Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz (Spain)]. E-mail: fvt@unex.es; Fernandez, M. Perez [Area de Ecologia, Departamento de Fisica, Facultad de Ciencias, Universidad de Extremadura, 06071 Badajoz (Spain); Lozano, J.C. [Laboratorio de Radiactividad Ambiental, Facultad de Ciencias, Universidad de Salamanca, 37008 Salamanca (Spain)

    2006-05-15

    The linearity assumption of the validation of soil-to-plant transfer factors of natural uranium and 226Ra was tested using Helianthus annuus L. (sunflower) grown in a hydroponic medium. Transfer of natural uranium and 226Ra was tested in both the aerial fraction of plants and in the overall seedlings (roots and shoots). The results show that the linearity assumption can be considered valid in the hydroponic growth of sunflowers for the radionuclides studied. The ability of sunflowers to translocate uranium and 226Ra was also investigated, as well as the feasibility of using sunflower plants to remove uranium and radium from contaminated water, and by extension, their potential for phytoextraction. In this sense, the removal percentages obtained for natural uranium and 226Ra were 24% and 42%, respectively. Practically all the uranium is accumulated in the roots. However, 86% of the 226Ra activity concentration in roots was translocated to the aerial part.

  15. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere.

    Science.gov (United States)

    Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K

    2013-12-01

    Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.

  16. Near-infrared reflectance analysis by Gauss-Jordan linear algebra

    International Nuclear Information System (INIS)

    Honigs, D.E.; Freelin, J.M.; Hieftje, G.M.; Hirschfeld, T.B.

    1983-01-01

    Near-infrared reflectance analysis is an analytical technique that uses the near-infrared diffuse reflectance of a sample at several discrete wavelengths to predict the concentration of one or more of the chemical species in that sample. However, because near-infrared bands from solid samples are both abundant and broad, the reflectance at a given wavelength usually contains contributions from several sample components, requiring extensive calculations on overlapped bands. In the present study, these calculations have been performed using an approach similar to that employed in multi-component spectrophotometry, but with Gauss-Jordan linear algebra serving as the computational vehicle. Using this approach, correlations for percent protein in wheat flour and percent benzene in hydrocarbons have been obtained and are evaluated. The advantages of a linear-algebra approach over the common one employing stepwise regression are explored
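
    The record above describes concentrations being recovered from overlapped bands by writing the multi-wavelength reflectance readings as linear simultaneous equations. The sketch below is not the authors' code; it only illustrates that idea on a hypothetical 3-wavelength, 3-component calibration matrix, solved by Gauss-Jordan elimination.

```python
# Minimal sketch of multi-component analysis via Gauss-Jordan elimination.
# The calibration matrix K and response vector r below are hypothetical.
import numpy as np

def gauss_jordan_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = A.shape[0]
    aug = np.hstack([A, b.reshape(-1, 1)])
    for col in range(n):
        pivot = np.argmax(np.abs(aug[col:, col])) + col   # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]              # row swap
        aug[col] /= aug[col, col]                          # normalise pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]       # eliminate column entry
    return aug[:, -1]

# Hypothetical calibration matrix: rows = wavelengths, columns = components.
K = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.05, 0.25, 0.60]])
r = np.array([0.46, 0.41, 0.33])      # reflectance-derived responses at 3 wavelengths
c = gauss_jordan_solve(K, r)          # estimated component concentrations
print(c)
```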

  17. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, and the least-square estimation method is employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted on each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is built on these features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results show that the proposed technique is comparable to other existing techniques.
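
    As a rough illustration of the retrieval pipeline sketched above (least-squares estimation of a multiple linear regression per image region, regression coefficients used as the feature vector, Canberra distance for matching), the following fragment runs the same steps on synthetic arrays. The data shapes and the choice of regressing one feature set on the other are assumptions made only for the example.

```python
# Schematic feature fusion by multiple linear regression, with synthetic data.
import numpy as np
from scipy.spatial.distance import canberra

def feature_vector(color_feats, texture_feats):
    """Fuse features by least-squares multiple linear regression.
    color_feats: (n_regions, p) predictors; texture_feats: (n_regions,) response."""
    X = np.hstack([np.ones((color_feats.shape[0], 1)), color_feats])  # add intercept
    beta, *_ = np.linalg.lstsq(X, texture_feats, rcond=None)          # LS estimate
    return beta

rng = np.random.default_rng(0)
query = feature_vector(rng.random((8, 4)), rng.random(8))
target = feature_vector(rng.random((8, 4)), rng.random(8))
print(canberra(query, target))   # smaller distance -> more similar images
```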

  18. Typological analysis of social linear blocks: Spain 1950-1983. The case study of western Andalusia

    Directory of Open Access Journals (Sweden)

    A. Guajardo

    2017-04-01

    Full Text Available A main challenge that cities will need to face in the next few years is the regeneration of the social housing estates built during the decades of the 1950s, 1960s and 1970s. One of the causes of their obsolescence is the mismatch between their housing typologies and contemporary needs. The main aim of this study is to contribute a step forward in the understanding of these typologies in order to be able to intervene on them efficiently. With this purpose, a study of 42 linear blocks built in Spain between 1950 and 1983 in western Andalusia has been carried out. The analysis includes three stages: 1) classification of the houses into recognizable groups; 2) identification of the most used spatial configurations; and 3) definition of their programmatic and size characteristics. As a result, a characterization of linear blocks is proposed as a reference model for future regenerative interventions.

  19. A Linear Analysis of a Blended Wing Body (BWB) Aircraft Model

    Directory of Open Access Journals (Sweden)

    Claudia Alice STATE

    2011-09-01

    Full Text Available In this article a linear analysis of a Blended Wing Body (BWB) aircraft model is performed. The BWB concept has attracted the attention of both the military and civil sectors because it has a reduced radar signature (in the absence of a conventional tail) and can carry more people. The trim values are computed, and the eigenvalues and the Jacobian matrix evaluated at the trim point are analyzed. A linear simulation in the MatLab environment is presented in order to express numerically the symbolic computations. The initial system is corrected to increase the consistency and coherence of the modeled type of motion, and suggestions are made for future work.

  20. Refining and end use study of coal liquids II - linear programming analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, C.; Tam, S.

    1995-12-31

    A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.

  1. Depth dose factors for lymphoma's radiotherapy using a 4 MV linear accelerator

    International Nuclear Information System (INIS)

    Scaff, L.A.M.

    1976-01-01

    In the routine treatment of lymphomas using the mantle technique, the daily doses at the midpoints of five anatomical regions are different because their thicknesses are not equal. A set of tables of depth dose factors with good precision is presented [pt

  2. Uniqueness conditions for constrained three-way factor decompositions with linearly dependent loadings

    NARCIS (Netherlands)

    Stegeman, Alwin; De Almeida, Andre L. F.

    2009-01-01

    In this paper, we derive uniqueness conditions for a constrained version of the parallel factor (Parafac) decomposition, also known as canonical decomposition (Candecomp). Candecomp/Parafac (CP) decomposes a three-way array into a prespecified number of outer product arrays. The constraint is that

  3. Materials analysis using x-ray linear attenuation coefficient measurements at four photon energies

    International Nuclear Information System (INIS)

    Midgley, S M

    2005-01-01

    The analytical properties of an accurate parameterization scheme for the x-ray linear attenuation coefficient are examined. The parameterization utilizes an additive combination of N compositional- and energy-dependent coefficients. The former were derived from a parameterization of elemental cross-sections using a polynomial in atomic number. The compositional-dependent coefficients are referred to as the mixture parameters, representing the electron density and higher order statistical moments describing the elemental distribution. Additivity is an important property of the parameterization, allowing measured x-ray linear attenuation coefficients to be written as linear simultaneous equations and then solved for the unknown coefficients. The energy-dependent coefficients can be determined by calibration from measurements with materials of known composition. The inverse problem may be utilized for materials analysis, whereby the simultaneous equations represent multi-energy linear attenuation coefficient measurements and are solved for the mixture parameters. For in vivo studies, the choice of measurement energies is restricted to the diagnostic region (approximately 20 keV to 150 keV), where the parameterization requires N ≥ 4 energies. We identify a mathematical pathology that must be overcome in order to solve the inverse problem in this energy regime. An iterative inversion strategy is presented for materials analysis using four or more measurements, and then tested against real data obtained at energies from 32 keV to 66 keV. The results demonstrate that it is possible to recover the electron density to within ±4% and the fourth mixture parameter. It is also a key finding that the second and third mixture parameters cannot be recovered, as they are of minor importance in the parameterization at diagnostic x-ray energies
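
    The inverse problem described above amounts to solving a small linear system whose rows are the multi-energy attenuation measurements and whose unknowns are the mixture parameters. The following fragment shows that step schematically with made-up calibration coefficients and measurements; it does not reproduce the iterative inversion strategy or the conditioning issues discussed in the record.

```python
# Schematic illustration of the multi-energy inverse problem (all numbers invented).
import numpy as np

# Hypothetical energy-dependent basis coefficients (from calibration at four
# diagnostic energies), one row per energy and one column per mixture parameter.
F = np.array([[1.00, 0.80, 0.30, 0.10],
              [1.00, 0.55, 0.18, 0.05],
              [1.00, 0.42, 0.12, 0.03],
              [1.00, 0.33, 0.08, 0.02]])
mu = np.array([0.268, 0.221, 0.205, 0.193])       # measured attenuation coefficients, 1/cm

params, *_ = np.linalg.lstsq(F, mu, rcond=None)   # electron density + higher moments
print(params)
```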

  4. A generic approach for a linear elastic fracture mechanics analysis of components containing residual stress

    International Nuclear Information System (INIS)

    Lee, Hyeong Y.; Nikbin, Kamran M.; O'Dowd, Noel P.

    2005-01-01

    A review of through-thickness transverse residual stress distribution measurements in a number of components, manufactured from a range of steels, has been carried out. Residual stresses introduced by welding and mechanical deformation have been considered. The geometries consisted of welded T-plate joints, pipe butt joints, tube-on-plate joints, tubular Y-joints and tubular T-joints as well as cold bent tubes and repair welds. In addition, the collected data cover a range of engineering steels including ferritic, austenitic, C-Mn and Cr-Mo steels. The methods used to measure the residual stresses also varied. These included neutron diffraction, X-ray diffraction and deep hole drilling techniques. Measured residual stress data, normalised by their respective yield stress, have shown an inverse linear correlation versus the normalised depth of the region containing the residual stress (up to 0.5 of the component thickness). A simplified generic residual stress profile based on a linear fit to the data is proposed for the case of a transverse residual tensile stress field. Whereas the profiles in assessment procedures are case specific, the proposed linear profile can be varied to produce a combination of membrane and bending stress distributions to give lower or higher levels of conservatism on stress intensity factors, depending on the amount of case-specific data available or the degree of safety required

  5. Feature-space-based FMRI analysis using the optimal linear transformation.

    Science.gov (United States)

    Sun, Fengrong; Morris, Drew; Lee, Wayne; Taylor, Margot J; Mills, Travis; Babyn, Paul S

    2010-09-01

    The optimal linear transformation (OLT), an image analysis technique of feature space, was first presented in the field of MRI. This paper proposes a method of extending OLT from MRI to functional MRI (fMRI) to improve the activation-detection performance over conventional approaches of fMRI analysis. In this method, first, ideal hemodynamic response time series for different stimuli were generated by convolving the theoretical hemodynamic response model with the stimulus timing. Second, constructing hypothetical signature vectors for different activity patterns of interest by virtue of the ideal hemodynamic responses, OLT was used to extract features of fMRI data. The resultant feature space had particular geometric clustering properties. It was then classified into different groups, each pertaining to an activity pattern of interest; the applied signature vector for each group was obtained by averaging. Third, using the applied signature vectors, OLT was applied again to generate fMRI composite images with high SNRs for the desired activity patterns. Simulations and a blocked fMRI experiment were employed to verify the method and compare it with the general linear model (GLM)-based analysis. The simulation studies and the experimental results indicated the superiority of the proposed method over the GLM-based analysis in detecting brain activities.

  6. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints

    Science.gov (United States)

    Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan

    2017-01-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903

  7. Focal spot motion of linear accelerators and its effect on portal image analysis

    International Nuclear Information System (INIS)

    Sonke, Jan-Jakob; Brand, Bob; Herk, Marcel van

    2003-01-01

    The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of irradiation. When acquiring portal images, this motion will affect the projected position of anatomy and field edges, especially when low exposures are used. In this paper, the motion of the focal spot and the effect of this motion on portal image analysis are quantified. A slightly tilted narrow slit phantom was placed at the isocenter of several linear accelerators and images were acquired (3.5 frames per second) by means of an amorphous silicon flat panel imager positioned ∼0.7 m below the isocenter. The motion of the focal spot was determined by converting the tilted slit images to subpixel-accurate line spread functions. The error in portal image analysis due to focal spot motion was estimated by subtracting the relative displacement of the projected slit from the relative displacement of the field edges. It was found that the motion of the focal spot depends on the control system and design of the accelerator. The shift of the focal spot at the start of irradiation ranges from 0.05 to 0.7 mm in the gun-target (GT) direction. In the left-right (AB) direction the shift is generally smaller. The resulting error in portal image analysis due to focal spot motion ranges from 0.05 to 1.1 mm for a dose corresponding to two monitor units (MUs). For 20 MUs, the effect of the focal spot motion reduces to 0.01-0.3 mm. The error in portal image analysis due to focal spot motion can be reduced by reducing the applied dose rate

  8. Analysis of Economic Factors Affecting Stock Market

    OpenAIRE

    Xie, Linyin

    2010-01-01

    This dissertation concentrates on the analysis of economic factors affecting the Chinese stock market by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield and real estate total value. The stock market comprises fixed-interest stocks and equity shares. In this dissertation, the stock market is restricted to the equity market. The stock price in thi...

  9. Isotherms and thermodynamics by linear and non-linear regression analysis for the sorption of methylene blue onto activated carbon: Comparison of various error functions

    International Nuclear Information System (INIS)

    Kumar, K. Vasanth; Porkodi, K.; Rocha, F.

    2008-01-01

    A comparison of the linear and non-linear regression methods in selecting the optimum isotherm was made for the experimental equilibrium data of methylene blue sorption by activated carbon. The r{sup 2} was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r{sup 2}), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ) and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r{sup 2} was found to be the best error function to minimize the error distribution structure between the experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of the experimental data while selecting the optimum isotherm. A coefficient of non-determination, K{sup 2}, was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm
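
    To make the comparison of error functions concrete, the sketch below fits a two-parameter Langmuir isotherm to synthetic equilibrium data by non-linear regression and evaluates two of the error functions named above (r{sup 2} and MPSD). The data values and starting guesses are invented for illustration only.

```python
# Non-linear regression fit of a two-parameter Langmuir isotherm (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])       # equilibrium conc., mg/L
qe = np.array([45.0, 78.0, 120.0, 160.0, 190.0, 205.0])   # sorbed amount, mg/g

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[200.0, 0.01])
pred = langmuir(Ce, qm, KL)

ss_res = np.sum((qe - pred) ** 2)
r2 = 1.0 - ss_res / np.sum((qe - qe.mean()) ** 2)          # coefficient of determination
n, p = len(qe), 2
mpsd = 100.0 * np.sqrt(np.sum(((qe - pred) / qe) ** 2) / (n - p))  # Marquardt's % std. dev.
print(qm, KL, r2, mpsd)
```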

  10. Analysis and Design of a Maglev Permanent Magnet Synchronous Linear Motor to Reduce Additional Torque in dq Current Control

    Directory of Open Access Journals (Sweden)

    Feng Xing

    2018-03-01

    Full Text Available The maglev linear motor has three degrees of freedom of motion, which are realized respectively by the thrust force along the x-axis, the levitation force along the z-axis and the torque around the y-axis. Both the thrust force and the levitation force can be seen as the sum of the forces on the three windings. The resultant thrust force and resultant levitation force are independently controlled by the d-axis current and the q-axis current respectively. Thus, the commonly used dq transformation control strategy is suitable for realizing the control of the resultant force, either thrust force or levitation force. However, the forces on the three windings also generate an additional torque because they do not pass through the mover's mass center. To realize high-precision control of the maglev system, a maglev linear motor with a new structure is proposed in this paper to decrease this torque. First, the electromagnetic model of the motor is deduced through the Lorentz force formula. Second, the analytic method and the finite element method are used to explore the origin of this additional torque and the factors that affect its trend. Furthermore, a maglev linear motor with a new structure is proposed, with two sets of windings shifted by 90 degrees designed on the mover. Under such a structure, the mover-position-dependent periodic part of the additional torque can be offset. Finally, the theoretical analysis is validated by the simulation result that the additionally generated rotating torque can be offset with little fluctuation in the proposed new-structure maglev linear motor. Moreover, the control system is built in MATLAB/Simulink, which shows that it has small thrust ripple and high-precision performance.
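
    The dq current control mentioned above rests on the standard Park transformation of the three winding currents; a minimal, textbook-style implementation is sketched below (it is not the authors' controller, and the numbers are arbitrary).

```python
# Standard amplitude-invariant Park (abc -> dq) transformation, for illustration only.
import numpy as np

def abc_to_dq(ia, ib, ic, theta):
    """Transform three-phase currents to d- and q-axis components at angle theta."""
    d = (2.0 / 3.0) * (ia * np.cos(theta)
                       + ib * np.cos(theta - 2.0 * np.pi / 3.0)
                       + ic * np.cos(theta + 2.0 * np.pi / 3.0))
    q = -(2.0 / 3.0) * (ia * np.sin(theta)
                        + ib * np.sin(theta - 2.0 * np.pi / 3.0)
                        + ic * np.sin(theta + 2.0 * np.pi / 3.0))
    return d, q

theta = np.deg2rad(30.0)                       # electrical angle of the mover (arbitrary)
print(abc_to_dq(1.0, -0.5, -0.5, theta))       # d- and q-axis current components
```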

  11. Non-linear models for the relation between cardiovascular risk factors and intake of wine, beer and spirits.

    Science.gov (United States)

    Ambler, Gareth; Royston, Patrick; Head, Jenny

    2003-02-15

    It is generally accepted that moderate consumption of alcohol is associated with a reduced risk of coronary heart disease (CHD). It is not clear however whether this benefit is derived through the consumption of a specific beverage type, for example, wine. In this paper the associations between known CHD risk factors and different beverage types are investigated using a novel approach with non-linear modelling. Two types of model are proposed which are designed to detect differential effects of beverage type. These may be viewed as extensions of Box and Tidwell's power-linear model. The risk factors high density lipoprotein cholesterol, fibrinogen and systolic blood pressure are considered using data from a large longitudinal study of British civil servants (Whitehall II). The results for males suggest that gram for gram of alcohol, the effect of wine differs from that of beer and spirits, particularly for systolic blood pressure. In particular increasing wine consumption is associated with slightly more favourable levels of all three risk factors studied. For females there is evidence of a differential relationship only for systolic blood pressure. These findings are tentative but suggest that further research is required to clarify the similarities and differences between the results for males and females and to establish whether either of the models is the more appropriate. However, having clarified these issues, the apparent benefit of consuming wine instead of other alcoholic beverages may be relatively small. Copyright 2003 John Wiley & Sons, Ltd.

  12. Factors that Determine the Non-Linear Amygdala Influence on Hippocampus-Dependent Memory

    OpenAIRE

    Akirav, Irit; Richter-Levin, Gal

    2006-01-01

    Stressful experiences are known to either improve or impair hippocampal-dependent memory tasks and synaptic plasticity. These positive and negative effects of stress on the hippocampus have been widely documented; however, little is known about the mechanism involved in the twofold influence of stress on hippocampal functioning and about what factors define an enhancing or inhibitory outcome. We have recently demonstrated that activation of the basolateral amygdala can produce a biphasic effe...

  13. Analysis of the Covered Electrode Welding Process Stability on the Basis of Linear Regression Equation

    Directory of Open Access Journals (Sweden)

    Słania J.

    2014-10-01

    Full Text Available The article presents the process of production of coated electrodes and their welding properties. The factors concerning the welding properties and the currently applied method of assessing them are given. The methodology of the testing, based on the measuring and recording of instantaneous values of welding current and welding arc voltage, is discussed. An algorithm for creating the reference database of the expert system, aiding the assessment of covered electrode welding properties, is shown. The stability of the voltage–current characteristics is discussed. Statistical factors of the instantaneous values of welding current and welding arc voltage waveforms used for determining welding process stability are presented. The welding properties of the coated electrodes are compared. The article presents the results of linear regression as well as the impact of the independent variables on the welding process performance. Finally the conclusions drawn from the research are given.

  14. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    Directory of Open Access Journals (Sweden)

    Mingwu Jin

    2012-01-01

    Full Text Available Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
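
    For readers unfamiliar with the underlying operation, the fragment below runs a plain canonical correlation analysis on synthetic two-block data; it only illustrates the basic CCA step that the directional contrast statistic described above builds on, not the extension itself.

```python
# Plain CCA on synthetic data sharing one latent signal (illustration only).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
shared = rng.normal(size=(200, 1))                   # common latent signal
X = shared @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(200, 5))
Y = shared @ rng.normal(size=(1, 3)) + 0.5 * rng.normal(size=(200, 3))

cca = CCA(n_components=1).fit(X, Y)
U, V = cca.transform(X, Y)
print(np.corrcoef(U[:, 0], V[:, 0])[0, 1])           # first canonical correlation
```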

  15. EABOT - Energetic analysis as a basis for robust optimization of trigeneration systems by linear programming

    International Nuclear Information System (INIS)

    Piacentino, A.; Cardona, F.

    2008-01-01

    The optimization of synthesis, design and operation in trigeneration systems for building applications is a quite complex task, due to the high number of decision variables, the presence of irregular heat, cooling and electric load profiles and the variable electricity price. Consequently, computer-aided techniques are usually adopted to achieve the optimal solution, based either on iterative techniques, linear or non-linear programming, or evolutionary search. Large efforts have been made in improving algorithm efficiency, which have resulted in an increasingly rapid convergence to the optimal solution and in reduced calculation time; robust algorithms have also been formulated, assuming stochastic behaviour for energy loads and prices. This paper is based on the assumption that margins for improvement in the optimization of trigeneration systems still exist, which require an in-depth understanding of the plant's energetic behaviour. Robustness in the optimization of trigeneration systems has more to do with 'correct and comprehensive' than with 'efficient' modelling, with larger efforts required from energy specialists rather than from experts in efficient algorithms. With reference to a mixed integer linear programming model implemented in MatLab for a trigeneration system including a pressurized (medium temperature) heat storage, the relevant contributions of thermoeconomics and energo-environmental analysis in the phases of mathematical modelling and code testing are shown

  16. Time-Frequency (Wigner) Analysis of Linear and Nonlinear Pulse Propagation in Optical Fibers

    Directory of Open Access Journals (Sweden)

    José Azaña

    2005-06-01

    Full Text Available Time-frequency analysis, and, in particular, Wigner analysis, is applied to the study of picosecond pulse propagation through optical fibers in both the linear and nonlinear regimes. The effects of first- and second-order group velocity dispersion (GVD) and self-phase modulation (SPM) are first analyzed separately. The phenomena resulting from the interplay between GVD and SPM in fibers (e.g., soliton formation or optical wave breaking) are also investigated in detail. Wigner analysis is demonstrated to be an extremely powerful tool for investigating pulse propagation dynamics in nonlinear dispersive systems (e.g., optical fibers), providing a clearer and deeper insight into the physical phenomena that determine the behavior of these systems.

  17. Hybrid System Modeling and Full Cycle Operation Analysis of a Two-Stroke Free-Piston Linear Generator

    Directory of Open Access Journals (Sweden)

    Peng Sun

    2017-02-01

    Full Text Available Free-piston linear generators (FPLGs) have attractive application prospects for hybrid electric vehicles (HEVs) owing to their high efficiency, low emissions and multi-fuel flexibility. In order to achieve long-term stable operation, the hybrid system design and the full-cycle operation strategy are essential factors that should be considered. A 25 kW FPLG consisting of an internal combustion engine (ICE), a linear electric machine (LEM) and a gas spring (GS) is designed. To improve the power density and generating efficiency, the LEM is assembled from two modular flat-type double-sided PM LEM units, which sandwich a common moving-magnet plate supported by a middle keel beam and bilateral slide guide rails to enhance the stiffness of the moving plate. For convenience of analysing the operation processes, the coupled hybrid system is modeled mathematically and a full-cycle simulation model is established. Top-level systemic control strategies including the starting, stable operating, fault recovering and stopping strategies are analyzed and discussed. The analysis results validate that the system can run stably and robustly with the proposed full-cycle operation strategy. The effective electric output power can reach 26.36 kW with an overall system efficiency of 36.32%.

  18. Linear degrees of freedom in speech production: analysis of cineradio- and labio-film data and articulatory-acoustic modeling.

    Science.gov (United States)

    Beautemps, D; Badin, P; Bailly, G

    2001-05-01

    The following contribution addresses several issues concerning speech degrees of freedom in French oral vowels, stop, and fricative consonants based on an analysis of tongue and lip shapes extracted from cineradio- and labio-films. The midsagittal tongue shapes have been submitted to a linear decomposition where some of the loading factors were selected such as jaw and larynx position while four other components were derived from principal component analysis (PCA). For the lips, in addition to the more traditional protrusion and opening components, a supplementary component was extracted to explain the upward movement of both the upper and lower lips in [v] production. A linear articulatory model was developed; the six tongue degrees of freedom were used as the articulatory control parameters of the midsagittal tongue contours and explained 96% of the tongue data variance. These control parameters were also used to specify the frontal lip width dimension derived from the labio-film front views. Finally, this model was complemented by a conversion model going from the midsagittal to the area function, based on a fitting of the midsagittal distances and the formant frequencies for both vowels and consonants.
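
    The linear decomposition described above combines directly selected articulatory factors with components obtained by principal component analysis. A toy version of that two-step procedure, on random stand-in data, might look as follows (the regression-then-PCA split is an assumption made for the illustration).

```python
# Toy linear decomposition: remove a selected factor by regression, then PCA on the residual.
import numpy as np

rng = np.random.default_rng(1)
contours = rng.random((50, 30))     # 50 frames x 30 contour coordinates (synthetic)
jaw = rng.random((50, 1))           # jaw-position loading factor (synthetic)

# Remove the jaw contribution by least squares, then run PCA on what is left.
X = np.hstack([np.ones((50, 1)), jaw])
B, *_ = np.linalg.lstsq(X, contours, rcond=None)
residual = contours - X @ B
residual -= residual.mean(axis=0)
U, S, Vt = np.linalg.svd(residual, full_matrices=False)
explained = (S ** 2) / np.sum(S ** 2)
print(explained[:4])                # variance explained by the first 4 components
```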

  19. A heteroscedastic generalized linear model with a non-normal speed factor for responses and response times.

    Science.gov (United States)

    Molenaar, Dylan; Bolsinova, Maria

    2017-05-01

    In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  20. Linear algebra

    CERN Document Server

    Said-Houari, Belkacem

    2017-01-01

    This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation to systems of linear equations, introducing the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs of all the main results, and linear transformations: areas that are ignored or poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformations, which is easier to understand than the usual theory-of-matrices approach. The final two chapters are more advanced, introducing t...

  1. Multidisciplinary Inverse Reliability Analysis Based on Collaborative Optimization with Combination of Linear Approximations

    Directory of Open Access Journals (Sweden)

    Xin-Jia Meng

    2015-01-01

    Full Text Available Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of calculation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed based on the framework of CLA-CO. This method improves the MPP searching process through two elements. One is treating the discipline analyses as the equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to the subsystem responses as the replacement of the consistency equality constraint in the system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline and also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with non-normally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.

  2. Integrated structural analysis tool using the linear matching method part 1 – Software development

    International Nuclear Information System (INIS)

    Ure, James; Chen, Haofeng; Tipping, David

    2014-01-01

    A number of direct methods based upon the Linear Matching Method (LMM) framework have been developed to address structural integrity issues for components subjected to cyclic thermal and mechanical load conditions. This paper presents a new integrated structural analysis tool using the LMM framework for the assessment of load carrying capacity, shakedown limit, ratchet limit and steady state cyclic response of structures. First, the development of the LMM for the evaluation of design limits in plasticity is introduced. Second, preliminary considerations for the development of the LMM into a tool which can be used on a regular basis by engineers are discussed. After the re-structuring of the LMM subroutines for multiple central processing unit (CPU) solution, the LMM software tool for the assessment of design limits in plasticity is implemented by developing an Abaqus CAE plug-in with graphical user interfaces. Further demonstration of this new LMM analysis tool including practical application and verification is presented in an accompanying paper. - Highlights: • A new structural analysis tool using the Linear Matching Method (LMM) is developed. • The software tool is able to evaluate the design limits in plasticity. • Able to assess limit load, shakedown, ratchet limit and steady state cyclic response. • Re-structuring of the LMM subroutines for multiple CPU solution is conducted. • The software tool is implemented by developing an Abaqus CAE plug-in with GUI

  3. Describing three-class task performance: three-class linear discriminant analysis and three-class ROC analysis

    Science.gov (United States)

    He, Xin; Frey, Eric C.

    2007-03-01

    Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal to noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision theoretic point of view.

  4. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan

    2014-11-11

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.

  5. Analysis of Known Linear Distributed Average Consensus Algorithms on Cycles and Paths

    Directory of Open Access Journals (Sweden)

    Jesús Gutiérrez-Gutiérrez

    2018-03-01

    Full Text Available In this paper, we compare six known linear distributed average consensus algorithms on a sensor network in terms of convergence time (and therefore, in terms of the number of transmissions required). The selected network topologies for the analysis (comparison) are the cycle and the path. Specifically, in the present paper, we compute closed-form expressions for the convergence time of four known deterministic algorithms and closed-form bounds for the convergence time of two known randomized algorithms on cycles and paths. Moreover, we also compute a closed-form expression for the convergence time of the fastest deterministic algorithm considered on grids.
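
    A constant-weight linear consensus iteration on a cycle, of the kind compared above, can be written in a few lines; the sketch below uses an arbitrary neighbour weight of 1/3 and simply counts iterations until the node values agree with the average.

```python
# Linear distributed average consensus on a cycle of n nodes (illustrative weights).
import numpy as np

n, w = 8, 1.0 / 3.0
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1.0 - 2.0 * w
    W[i, (i - 1) % n] = w          # left neighbour on the cycle
    W[i, (i + 1) % n] = w          # right neighbour on the cycle

x = np.arange(n, dtype=float)      # initial node values
target = x.mean()                  # doubly stochastic W preserves the average
for k in range(200):
    x = W @ x                      # one consensus iteration
    if np.max(np.abs(x - target)) < 1e-6:
        print("converged after", k + 1, "iterations")
        break
```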

  6. Statistical mechanical analysis of the linear vector channel in digital communication

    International Nuclear Information System (INIS)

    Takeda, Koujin; Hatabu, Atsushi; Kabashima, Yoshiyuki

    2007-01-01

    A statistical mechanical framework to analyze linear vector channel models in digital wireless communication is proposed for a large system. The framework is a generalization of that proposed for code-division multiple-access systems in Takeda et al (2006 Europhys. Lett. 76 1193) and enables the analysis of the system in which the elements of the channel transfer matrix are statistically correlated with each other. The significance of the proposed scheme is demonstrated by assessing the performance of an existing model of multi-input multi-output communication systems

  7. Krylov subspace method with communication avoiding technique for linear system obtained from electromagnetic analysis

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Chen, Gong; Yamamoto, Susumu; Itoh, Taku; Abe, Kuniyoshi; Nakamura, Hiroaki

    2016-01-01

    The Krylov subspace method and the variable preconditioned Krylov subspace method with a communication-avoiding technique for a linear system obtained from electromagnetic analysis are numerically investigated. In the k-skip Krylov method, the inner product calculations are expanded in the Krylov basis, and the inner product calculations are transformed into scalar operations. The k-skip CG method is applied as the inner-loop solver of the variable preconditioned Krylov subspace methods, and the converged solution of the electromagnetic problem is obtained using the method. (author)
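
    For context, the fragment below is a plain conjugate gradient loop on a tiny symmetric positive definite system; the two inner products per iteration are exactly the operations that the k-skip (communication-avoiding) variants mentioned above restructure. It is a generic textbook sketch, not the authors' solver.

```python
# Plain conjugate gradient (CG) iteration for a small SPD system.
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # step length (first inner product)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r                 # second inner product per iteration
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # symmetric positive definite test matrix
b = np.array([1.0, 2.0])
print(cg(A, b))
```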

  8. Noise analysis and performance of a selfscanned linear InSb detector array

    International Nuclear Information System (INIS)

    Finger, G.; Meyer, M.; Moorwood, A.F.M.

    1987-01-01

    A noise model for detectors operated in the capacitive discharge mode is presented. It is used to analyze the noise performance of the ESO nested timing readout technique applied to a linear 32-element InSb array which is multiplexed by a silicon switched-FET shift register. Analysis shows that KTC noise of the videoline is the major noise contribution; it can be eliminated by weighted double-correlated sampling. Best noise performance of this array is achieved at the smallest possible reverse bias voltage (not more than 20 mV) whereas excess noise is observed at higher reverse bias voltages. 5 references

  9. A convergence analysis for a sweeping preconditioner for block tridiagonal systems of linear equations

    KAUST Repository

    Bagci, Hakan; Pasciak, Joseph E.; Sirenko, Kostyantyn

    2014-01-01

    We study sweeping preconditioners for symmetric and positive definite block tridiagonal systems of linear equations. The algorithm provides an approximate inverse that can be used directly or in a preconditioned iterative scheme. These algorithms are based on replacing the Schur complements appearing in a block Gaussian elimination direct solve by hierarchical matrix approximations with reduced off-diagonal ranks. This involves developing low rank hierarchical approximations to inverses. We first provide a convergence analysis for the algorithm for reduced rank hierarchical inverse approximation. These results are then used to prove convergence and preconditioning estimates for the resulting sweeping preconditioner.

  10. Non-linear canonical correlation for joint analysis of MEG signals from two subjects

    Directory of Open Access Journals (Sweden)

    Cristina eCampi

    2013-06-01

    Full Text Available We consider the problem of analysing magnetoencephalography (MEG) data measured from two persons undergoing the same experiment, and we propose a method that searches for sources with maximally correlated energies. Our method is based on canonical correlation analysis (CCA), which provides linear transformations, one for each subject, such that the correlation between the transformed MEG signals is maximized. Here, we present a nonlinear version of CCA which measures the correlation of energies. Furthermore, we introduce a delay parameter in the model to analyse, e.g., leader-follower changes in experiments where the two subjects are engaged in social interaction.

  11. Stability Analysis of Periodic Orbits in a Class of Duffing-Like Piecewise Linear Vibrators

    Directory of Open Access Journals (Sweden)

    El Aroudi A.

    2014-01-01

    Full Text Available In this paper, we study the dynamical behavior of a Duffing-like piecewise linear (PWL) spring-mass-damper system for vibration-based energy harvesting applications. First, we present a continuous-time single degree of freedom PWL dynamical model of the system. From this PWL model, numerical simulations are carried out by computing the frequency response and bifurcation diagram under a deterministic harmonic excitation for different sets of system parameter values. Stability analysis is performed using Floquet theory combined with the Filippov method.

  12. Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice

    Directory of Open Access Journals (Sweden)

    Federico Chella

    2017-05-01

    Full Text Available Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study aims at investigating the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of the electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of the head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes-open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST providing superior performance to all the other references in approximating the ideal neutral reference. In conclusion, this study highlights the importance of

  13. Solid state linear dichroic infrared spectral analysis of benzimidazoles and their N 1-protonated salts

    Science.gov (United States)

    Ivanova, B. B.

    2005-11-01

    A stereo-structural characterization of 2,5,6-trimethylbenzimidazole (MBIZ) and 2-aminobenzimidazole (2-NH 2-BI) and their N 1-protonated salts was carried out using polarized solid-state linear dichroic infrared spectral (IR-LD) analysis in a nematic liquid crystal suspension. All experimentally predicted structures were compared with the theoretical ones obtained by ab initio calculations. The Cs to C2v* symmetry transformation resulting from the protonation processes, and its reflection in the infrared spectral characteristics, is described.

  14. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    Full Text Available The article studies the importance of economic analysis based on the results of research in the scientific works of domestic and foreign scientists. The calculation of the influence of factors on the change in the cost of harvested timber products by cost item has been performed. The results of the calculation of the influence of factors on the change of costs per 1 UAH are determined using the full cost of sold products. Variable and fixed costs and their distribution are identified, which influences the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper singles out the general results of calculating the influence of factors on cost changes per 1 UAH of sold products. According to the results of the analysis, a list of reserves for reducing the cost of production at forestry enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forestry enterprises are investigated based on the conducted factor analysis.

  15. An SPSS R-Menu for Ordinal Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mario Basto

    2012-01-01

    Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.
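
    The following fragment is not the SPSS/R dialog described above; it simply runs an exploratory factor analysis in Python on synthetic Likert-style responses to show the kind of loading matrix such a tool reports. For genuinely ordinal data one would normally factor a polychoric correlation matrix, which this sketch does not do.

```python
# Exploratory factor analysis on synthetic Likert-style data (illustration only).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))                     # two underlying factors
loadings = rng.uniform(0.4, 0.9, size=(2, 6))          # hypothetical item loadings
items = latent @ loadings + 0.5 * rng.normal(size=(300, 6))
items = np.clip(np.round(items + 3), 1, 5)             # fake 1-5 Likert responses

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.T)                                # estimated loadings per item
```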

  16. Axial displacement of external and internal implant-abutment connection evaluated by linear mixed model analysis.

    Science.gov (United States)

    Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo

    2015-01-01

    To analyze the axial displacement of external and internal implant-abutment connections after cyclic loading. Three groups were prepared: external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group). Cyclic loading was applied to the implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated-measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.

  17. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factor affecting it is conducted by means of the u-substitution method.

  18. Assessment of non-linear analysis finite element program (NONSAP) for inelastic analysis

    International Nuclear Information System (INIS)

    Chang, T.Y.; Prachuktam, S.; Reich, M.

    1976-11-01

    An assessment of a nonlinear structural analysis finite element program called NONSAP is given with respect to its inelastic analysis capability for pressure vessels and components. The assessment was made from a review of its theoretical basis and benchmark problem runs. It was found that NONSAP has only limited capability for inelastic analysis. However, the program is written flexibly enough that it can be easily extended or modified to suit the user's need. Moreover, some of the numerical difficulties in using NONSAP are pointed out

  19. Sub-regional linear programming models in land use analysis: a case study of the Neguev settlement, Costa Rica.

    NARCIS (Netherlands)

    Schipper, R.A.; Stoorvogel, J.J.; Jansen, D.M.

    1995-01-01

    The paper deals with linear programming as a tool for land use analysis at the sub-regional level. A linear programming model of a case study area, the Neguev settlement in the Atlantic zone of Costa Rica, is presented. The matrix of the model includes five submatrices each encompassing a different

  20. Numerical Modal Analysis of Vibrations in a Three-Phase Linear Switched Reluctance Actuator

    Directory of Open Access Journals (Sweden)

    José Salvado

    2017-01-01

    Full Text Available This paper addresses the problem of vibrations produced by switched reluctance actuators, focusing on the linear configuration of this type of machine, aiming at its characterization regarding structural vibrations. The complexity of the mechanical system and the number of parts used put serious restrictions on the effectiveness of analytical approaches. We build the 3D model of the actuator and use the finite element method (FEM) to find its natural frequencies. The focus is on frequencies within the range up to nearly 1.2 kHz, which is considered relevant based on preliminary simulations and experiments. Spectral analysis results of audio signals from experimental modal excitation are also shown and discussed. The obtained data support the characterization of the linear actuator regarding the excited modes, its vibration frequencies, and mode shapes, with high potential of excitation due to the regular operation regimes of the machine. The results reveal abundant modes and harmonics and the symmetry characteristics of the actuator, showing that the vibration modes can be excited for different configurations of the actuator. The identification of the most critical modes is of great significance for the actuator's control strategies. This analysis also provides significant information for adopting solutions to reduce the vibrations at the design stage.

  1. [Multiple linear regression analysis of X-ray measurement and WOMAC scores of knee osteoarthritis].

    Science.gov (United States)

    Ma, Yu-Feng; Wang, Qing-Fu; Chen, Zhao-Jun; Du, Chun-Lin; Li, Jun-Hai; Huang, Hu; Shi, Zong-Ting; Yin, Yue-Shan; Zhang, Lei; A-Di, Li-Jiang; Dong, Shi-Yu; Wu, Ji

    2012-05-01

    To perform multiple linear regression analysis of X-ray measurements and WOMAC scores of knee osteoarthritis, and to analyze their relationship with clinical and biomechanical concepts. From March 2011 to July 2011, 140 patients (250 knees) were reviewed, including 132 left knees and 118 right knees; the patients ranged in age from 40 to 71 years, with an average of 54.68 years. The MB-RULER measurement software was applied to measure the femoral angle, tibial angle, femorotibial angle, and joint gap angle from antero-posterior and lateral X-rays. The WOMAC scores were also collected. Multiple regression equations were then applied for the linear regression analysis of the correlation between the X-ray measurements and WOMAC scores. There was statistical significance in the regression equation of the antero-posterior X-ray values and WOMAC scores (P<0.05), but no statistical significance in the regression equation of the lateral X-ray values and WOMAC scores (P>0.05). 1) X-ray measurement of the knee joint can reflect the WOMAC scores to a certain extent. 2) It is necessary to measure the X-ray mechanical axis of the knee, which is important for the diagnosis and treatment of osteoarthritis. 3) The correlation between the tibial angle and joint gap angle on the antero-posterior X-ray and the WOMAC scores is significant, which can be used to assess the functional recovery of patients before and after treatment.
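
    A schematic version of the regression used above, with entirely synthetic numbers, is shown below: a WOMAC-like score is regressed on two radiographic angles and the coefficient of determination is reported.

```python
# Multiple linear regression of a synthetic WOMAC-like score on two X-ray angles.
import numpy as np

rng = np.random.default_rng(2)
n = 40
tibial_angle = rng.normal(90, 3, n)                   # synthetic tibial angles, degrees
joint_gap_angle = rng.normal(2, 1, n)                 # synthetic joint gap angles, degrees
womac = 120 - 0.8 * tibial_angle + 4.0 * joint_gap_angle + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), tibial_angle, joint_gap_angle])
beta, *_ = np.linalg.lstsq(X, womac, rcond=None)      # intercept and coefficients
pred = X @ beta
r2 = 1 - np.sum((womac - pred) ** 2) / np.sum((womac - womac.mean()) ** 2)
print(beta, r2)
```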

  2. Linear least-squares method for global luminescent oil film skin friction field analysis

    Science.gov (United States)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
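
    The record formulates the skin-friction extraction as a discrete linear system solved by least squares. The sketch below shows only that generic step, assembling an overdetermined system A x = b and solving it with numpy; A and b are random placeholders rather than the thin-oil-film operator and image data.

        # Generic linear least-squares step: stack the discretized equations
        # from many snapshots into one overdetermined system A x = b and solve
        # for the unknown field. A and b are random stand-ins.
        import numpy as np

        rng = np.random.default_rng(1)
        n_unknowns = 50                    # e.g. skin-friction values on a grid
        n_equations = 400                  # equations accumulated over snapshots
        A = rng.normal(size=(n_equations, n_unknowns))
        x_true = rng.normal(size=n_unknowns)
        b = A @ x_true + rng.normal(scale=0.01, size=n_equations)  # noisy "data"

        x_ls, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
        print("max error vs. ground truth:", np.max(np.abs(x_ls - x_true)))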

  3. Analysis of bifurcation behavior of a piecewise linear vibrator with electromagnetic coupling for energy harvesting applications

    KAUST Repository

    El Aroudi, Abdelali

    2014-05-01

    Recently, nonlinearities have been shown to play an important role in increasing the extracted energy of vibration-based energy harvesting systems. In this paper, we study the dynamical behavior of a piecewise linear (PWL) spring-mass-damper system for vibration-based energy harvesting applications. First, we present a continuous time single degree of freedom PWL dynamical model of the system. Different configurations of the PWL model and their corresponding state-space regions are derived. Then, from this PWL model, extensive numerical simulations are carried out by computing time-domain waveforms, state-space trajectories and frequency responses under a deterministic harmonic excitation for different sets of system parameter values. Stability analysis is performed using Floquet theory combined with Filippov method, Poincaré map modeling and finite difference method (FDM). The Floquet multipliers are calculated using these three approaches and a good concordance is obtained among them. The performance of the system in terms of the harvested energy is studied by considering both purely harmonic excitation and a noisy vibrational source. A frequency-domain analysis shows that the harvested energy could be larger at low frequencies as compared to an equivalent linear system, in particular, for relatively low excitation intensities. This could be an advantage for potential use of this system in low frequency ambient vibrational-based energy harvesting applications. © 2014 World Scientific Publishing Company.
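
    As a hedged illustration of the piecewise linear (PWL) oscillator class studied above, the sketch below integrates a single-degree-of-freedom spring-mass-damper whose stiffness switches from k1 to k2 beyond a gap d, under harmonic forcing. All parameter values are invented, and the electromagnetic coupling and the Floquet/Filippov stability analysis of the paper are not reproduced.

        # Single-DOF piecewise-linear oscillator under harmonic forcing: the
        # spring stiffens from k1 to k2 when |x| exceeds the gap d.
        # Parameters are illustrative only.
        import numpy as np
        from scipy.integrate import solve_ivp

        m, c, k1, k2, d = 0.01, 0.05, 50.0, 400.0, 1e-3   # kg, N*s/m, N/m, N/m, m
        F0, w = 2.0, 2 * np.pi * 20.0                     # forcing amplitude (N), rad/s

        def restoring(x):
            if abs(x) <= d:
                return k1 * x
            return k1 * np.sign(x) * d + k2 * (x - np.sign(x) * d)

        def rhs(t, y):
            x, v = y
            return [v, (F0 * np.cos(w * t) - c * v - restoring(x)) / m]

        sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=1e-4)
        x = sol.y[0]
        print("late-time displacement amplitude ~", np.max(np.abs(x[x.size // 2:])), "m")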

  4. Non-linear analysis and the design of Pumpkin Balloons: stress, stability and viscoelasticity

    Science.gov (United States)

    Rand, J. L.; Wakefield, D. S.

    Tensys have a long-established background in the shape generation and load analysis of architectural stressed membrane structures. Founded upon their inTENS finite element analysis suite, these activities have broadened to encompass lighter-than-air structures such as aerostats, hybrid air-vehicles and stratospheric balloons. Winzen Engineering couple many years of practical balloon design and fabrication experience with both academic and practical knowledge of the characterisation of the non-linear viscoelastic response of the polymeric films typically used for high-altitude scientific balloons. Both companies have provided consulting services to the NASA Ultra Long Duration Balloon (ULDB) Program. Early implementations of pumpkin balloons have shown problems of geometric instability, characterised by improper deployment, and these difficulties have been reproduced numerically using inTENS. The solution lies both in the shapes of the membrane lobes and in the need to generate a biaxial stress field in order to mobilise in-plane shear stiffness. Balloons undergo significant temperature and pressure variations in flight. The different thermal characteristics between tendons and film can lead to significant meridional stress. Fabrication tolerances can lead to significant local hoop stress concentrations, particularly adjacent to the base and apex end fittings. The non-linear viscoelastic response of the envelope film acts positively to help dissipate stress concentrations. However, creep over time may produce lobe geometry variations that may

  5. Detection of non-milk fat in milk fat by gas chromatography and linear discriminant analysis.

    Science.gov (United States)

    Gutiérrez, R; Vega, S; Díaz, G; Sánchez, J; Coronado, M; Ramírez, A; Pérez, J; González, M; Schettino, B

    2009-05-01

    Gas chromatography was utilized to determine triacylglycerol profiles in milk and non-milk fat. The values of triacylglycerol were subjected to linear discriminant analysis to detect and quantify non-milk fat in milk fat. Two groups of milk fat were analyzed: A) raw milk fat from the central region of Mexico (n = 216) and B) ultrapasteurized milk fat from 3 industries (n = 36), as well as pork lard (n = 2), bovine tallow (n = 2), fish oil (n = 2), peanut (n = 2), corn (n = 2), olive (n = 2), and soy (n = 2). The samples of raw milk fat were adulterated with non-milk fats in proportions of 0, 5, 10, 15, and 20% to form 5 groups. The first function obtained from the linear discriminant analysis allowed the correct classification of 94.4% of the samples with levels <10% of adulteration. The triacylglycerol values of the ultrapasteurized milk fats were evaluated with the discriminant function, demonstrating that one industry added non-milk fat to its product in 80% of the samples analyzed.
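
    A hedged sketch of the classification step described above: linear discriminant analysis applied to triacylglycerol-like feature vectors with five adulteration levels as classes. The feature matrix is synthetic and merely mimics the structure of chromatographic profiles; it is not the study's data.

        # Linear discriminant analysis on synthetic triacylglycerol (TAG)
        # profiles; labels are adulteration levels (% non-milk fat).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        levels = np.repeat([0, 5, 10, 15, 20], 40)      # 5 groups x 40 samples
        n_tags = 12                                     # number of TAG fractions
        shift = rng.normal(size=n_tags)                 # direction of adulteration effect
        X = rng.normal(size=(levels.size, n_tags)) + np.outer(levels / 20.0, shift)

        lda = LinearDiscriminantAnalysis()
        scores = cross_val_score(lda, X, levels, cv=5)
        print("cross-validated classification accuracy:", scores.mean())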

  6. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose

  7. Design and analysis of an unconventional permanent magnet linear machine for energy harvesting

    Science.gov (United States)

    Zeng, Peng

    This Ph.D. dissertation proposes an unconventional high power density linear electromagnetic kinetic energy harvester, and a high-performance two-stage interface power electronics to maintain maximum power abstraction from the energy source and charge the Li-ion battery load with constant current. The proposed machine architecture is composed of a double-sided flat type silicon steel stator with winding slots, a permanent magnet mover, coil windings, a linear motion guide and an adjustable spring bearing. The unconventional design of the machine is that NdFeB magnet bars in the mover are placed with magnetic fields in horizontal direction instead of vertical direction and the same magnetic poles are facing each other. The derived magnetic equivalent circuit model proves the average air-gap flux density of the novel topology is as high as 0.73 T with 17.7% improvement over that of the conventional topology at the given geometric dimensions of the proof-of-concept machine. Subsequently, the improved output voltage and power are achieved. The dynamic model of the linear generator is also developed, and the analytical equations of output maximum power are derived for the case of driving vibration with amplitude that is equal, smaller and larger than the relative displacement between the mover and the stator of the machine respectively. Furthermore, the finite element analysis (FEA) model has been simulated to prove the derived analytical results and the improved power generation capability. Also, an optimization framework is explored to extend to the multi-Degree-of-Freedom (n-DOF) vibration based linear energy harvesting devices. Moreover, a boost-buck cascaded switch mode converter with current controller is designed to extract the maximum power from the harvester and charge the Li-ion battery with trickle current. Meanwhile, a maximum power point tracking (MPPT) algorithm is proposed and optimized for low frequency driving vibrations. Finally, a proof

  8. Diagnosis and prognosis of Osteoarthritis by texture analysis using sparse linear models

    DEFF Research Database (Denmark)

    Marques, Joselene; Clemmensen, Line Katrine Harder; Dam, Erik

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and sparse feature transformation methods in a fully automatic framework. We compare the performances of a partial least squares (PLS) forward feature selection strategy to a hard threshold sparse PLS...... algorithm and a sparse linear discriminant model. The texture analysis framework was applied to diagnosis of knee osteoarthritis (OA) and prognosis of cartilage loss. For this investigation, a generic texture feature bank was extracted from magnetic resonance images of tibial knee bone. The features were...... used as input to the sparse algorithms, which defined the best features to retain in the model. To cope with the limited number of samples, the data was evaluated using 10 fold cross validation (CV). The diagnosis evaluation using sparse PLS reached a generalization area-under-the-ROC curve (AUC) of 0...

  9. Normal form analysis of linear beam dynamics in a coupled storage ring

    International Nuclear Information System (INIS)

    Wolski, Andrzej; Woodley, Mark D.

    2004-01-01

    The techniques of normal form analysis, well known in the literature, can be used to provide a straightforward characterization of linear betatron dynamics in a coupled lattice. Here, we consider both the beam distribution and the betatron oscillations in a storage ring. We find that the beta functions for uncoupled motion generalize in a simple way to the coupled case. Defined in the way that we propose, the beta functions remain well behaved (positive and finite) under all circumstances, and have essentially the same physical significance for the beam size and betatron oscillation amplitude as in the uncoupled case. Application of this analysis to the online modeling of the PEP-II rings is also discussed

  10. Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method

    Science.gov (United States)

    De Waal, Sybrand A.

    1996-07-01

    A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in either one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959-summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.

  11. Non-linear thermal and structural analysis of a typical spent fuel silo

    International Nuclear Information System (INIS)

    Alvarez, L.M.; Mancini, G.R.; Spina, O.A.F.; Sala, G.; Paglia, F.

    1993-01-01

    A numerical method for the non-linear structural analysis of a typical reinforced concrete spent fuel silo under thermal loads is proposed. The numerical time integration was performed by means of a time explicit axisymmetric finite-difference numerical operator. An analysis was made of influences by heat, viscoelasticity and cracking upon the concrete behaviour between concrete pouring stage and the first period of the silo's normal operation. The following parameters were considered for the heat generation and transmission process: Heat generated during the concrete's hardening stage, Solar radiation effects, Natural convection, Spent-fuel heat generation. For the modelling of the reinforced concrete behaviour, use was made of a simplified formulation of: Visco-elastic effects, Thermal cracking, Steel reinforcement. A comparison between some experimental temperature characteristic values obtained from the numerical integration process and empirical data obtained from a 1:1 scaled prototype was also carried out. (author)

  12. Finite element historical deformation analysis in piecewise linear plasticity by mathematical programming

    International Nuclear Information System (INIS)

    De Donato, O.; Parisi, M.A.

    1977-01-01

    When loads increase proportionally beyond the elastic limit in the presence of elastic-plastic piecewise-linear constitutive laws, the problem of finding the whole evolution of the plastic strain and displacements of structures was recently shown to be amenable to a parametric linear complementarity problem (PLCP) in which the parameter is represented by the load factor, the matrix is symmetric positive definite or at least semi-definite (for perfect plasticity) and the variables with a direct mechanical meaning are the plastic multipliers. With reference to plane trusses and frames with elastic-plastic linear work-hardening material behaviour, numerical solutions were also fairly efficiently obtained using a recent mathematical programming algorithm (due to R.W. Cottle) which is able to provide the whole deformation history of the structure and, at the same time, to rule out local unloadings along the given proportional loading process by means of 'a priori' checks carried out before each pivotal step of the procedure. Hence it becomes possible to use the holonomic (reversible, path-independent) constitutive laws in finite terms and to benefit from all the relevant numerical and computational advantages despite the non-holonomic nature of plastic behaviour. In the present paper the method of solution is re-examined with a view to overcoming an important drawback of the algorithm deriving from the size of the fully populated PLCP matrix when structural problems with a large number of variables are considered and, consequently, the updating, the storing or, generally, the handling of the current tableau may become prohibitive. (Auth.)

  13. Driven Factors Analysis of China’s Irrigation Water Use Efficiency by Stepwise Regression and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Renfu Jia

    2016-01-01

    Full Text Available This paper introduces an integrated approach to find the major factors influencing the efficiency of irrigation water use in China. It combines multiple stepwise regression (MSR) and principal component analysis (PCA) to obtain more realistic results. In real-world case studies, the classical linear regression model often involves too many explanatory variables, and the linear correlation among variables cannot be eliminated. Linearly correlated variables invalidate the results of the factor analysis. To overcome this issue and reduce the number of variables, the PCA technique has been used in combination with MSR. As such, the irrigation water use status in China was analyzed to find the five major factors that have significant impacts on irrigation water use efficiency. To illustrate the performance of the proposed approach, the calculation based on real data was conducted and the results are shown in this paper.
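
    A minimal, hedged sketch of the combined idea (decorrelating the explanatory variables with PCA and then selecting components with a forward stepwise regression) is given below. The data are synthetic and the selection criterion, adjusted R^2, is a simple stand-in for the paper's MSR procedure.

        # PCA followed by forward stepwise selection of principal components
        # in an OLS model. Synthetic, deliberately collinear data.
        import numpy as np
        import statsmodels.api as sm
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        X = rng.normal(size=(60, 8))
        X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=60)    # collinear pair
        y = 2 * X[:, 0] - X[:, 3] + rng.normal(size=60)

        Z = PCA().fit_transform(X)                        # uncorrelated components

        selected, best = [], -np.inf
        for _ in range(Z.shape[1]):
            gains = {}
            for j in set(range(Z.shape[1])) - set(selected):
                fit = sm.OLS(y, sm.add_constant(Z[:, selected + [j]])).fit()
                gains[j] = fit.rsquared_adj
            j_best = max(gains, key=gains.get)
            if gains[j_best] <= best:
                break
            selected.append(j_best)
            best = gains[j_best]
        print("selected components:", selected, "adjusted R^2:", round(best, 3))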

  14. Factor analysis for exercise stress radionuclide ventriculography

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Yasuda, Mitsutaka; Oku, Hisao; Ikuno, Yoshiyasu; Takeuchi, Kazuhide; Takeda, Tadanao; Ochi, Hironobu

    1987-01-01

    Using factor analysis, a new image processing method in exercise stress radionuclide ventriculography, changes in factors associated with exercise were evaluated in 14 patients with angina pectoris or old myocardial infarction. The patients were imaged in the left anterior oblique projection, and three factor images were presented on a color-coded scale. Abnormal factors (AF) were observed in 6 patients before exercise, 13 during exercise, and 4 after exercise. In 7 patients, the occurrence of AF was associated with exercise. Five of them became free from AF after exercise. Three patients showing AF before exercise had aggravation of AF during exercise. Overall, the occurrence or aggravation of AF was associated with exercise in ten (71%) of the patients. The other three patients, however, had disappearance of AF during exercise. In the last patient, no AF was observed throughout the study. In view of the high incidence of AF associated with exercise, factor analysis may have potential for evaluating cardiac reserve from the viewpoint of left ventricular wall motion abnormality. (Namekawa, K.)

  15. Linear analysis on the growth of non-spherical perturbations in supersonic accretion flows

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Kazuya; Yamada, Shoichi, E-mail: ktakahashi@heap.phys.waseda.ac.jp [Advanced Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku 169-8555 (Japan)

    2014-10-20

    We analyzed the growth of non-spherical perturbations in supersonic accretion flows. We have in mind an application to the post-bounce phase of core-collapse supernovae (CCSNe). Such non-spherical perturbations have been suggested by a series of papers by Arnett, who has numerically investigated violent convections in the outer layers of pre-collapse stars. Moreover, Couch and Ott demonstrated in their numerical simulations that such perturbations may lead to a successful supernova even for a progenitor that fails to explode without fluctuations. This study investigated the linear growth of perturbations during the infall onto a stalled shock wave. The linearized equations are solved as an initial and boundary value problem with the use of a Laplace transform. The background is a Bondi accretion flow whose parameters are chosen to mimic the 15 M☉ progenitor model by Woosley and Heger, which is supposed to be a typical progenitor of CCSNe. We found that the perturbations that are given at a large radius grow as they flow down to the shock radius; the density perturbations can be amplified by a factor of 30, for example. We analytically show that the growth rate is proportional to l, the index of the spherical harmonics. We also found that the perturbations oscillate in time with frequencies that are similar to those of the standing accretion shock instability. This may have an implication for shock revival in CCSNe, which will be investigated in our forthcoming paper in more detail.

  16. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1980-01-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of the energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (orig.)

  17. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  18. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1979-06-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (Author)

  19. Absorption correction factor in X-ray fluorescent quantitative analysis

    International Nuclear Information System (INIS)

    Pimjun, S.

    1994-01-01

    An experiment on the absorption correction factor in X-ray fluorescent quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe2O3 and tapioca flour at various concentrations of Fe2O3 ranging from 5% to 25%. Unknown samples were kaolin containing 3.5% to 50% of Fe2O3. Kaolin samples were diluted with tapioca flour in order to reduce the absorption of FeKα and make them easier to prepare. Pressed samples, 0.150 /cm2 and 2.76 cm in diameter, were used in the experiment. The absorption correction factor is related to the total mass absorption coefficient (χ), which varies with sample composition. In a known sample, χ can be conveniently calculated from the formula; in an unknown sample, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the Fe2O3 content of these samples was linear. This result indicates that the correction factor can be used to improve the accuracy of the X-ray intensity, and is therefore essential in the quantitative analysis of the elements present in any sample by the X-ray fluorescent technique.
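
    The record relates the correction factor to the total mass absorption coefficient χ. As a hedged illustration only, the sketch below evaluates a common textbook form of the XRF self-absorption correction for a sample of intermediate thickness, F = (1 - exp(-chi*m)) / (chi*m); the attenuation coefficients, geometry and mass loading are invented, and this is not claimed to be the exact formula used in the record.

        # Standard-form XRF self-absorption correction (hedged illustration):
        # chi = mu_in/sin(psi1) + mu_out/sin(psi2), F = (1 - exp(-chi*m))/(chi*m).
        # All numerical values are illustrative placeholders.
        import numpy as np

        mu_in, mu_out = 50.0, 300.0        # cm^2/g at incident / fluorescent energies
        psi1, psi2 = np.radians(45), np.radians(45)
        m = 0.150                          # g/cm^2, sample mass per unit area

        chi = mu_in / np.sin(psi1) + mu_out / np.sin(psi2)
        F = (1.0 - np.exp(-chi * m)) / (chi * m)
        print("total mass absorption coefficient chi:", round(chi, 1), "cm^2/g")
        print("absorption correction factor F:", round(F, 4))
        print("corrected relative intensity for a measured value of 1.0:", round(1.0 / F, 3))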

  20. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2003-07-25

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed description of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  1. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis

  2. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the

  3. Structural equation and log-linear modeling: a comparison of methods in the analysis of a study on caregivers' health

    Directory of Open Access Journals (Sweden)

    Rosenbaum Peter L

    2006-10-01

    Full Text Available Abstract Background: In this paper we compare the results in an analysis of determinants of caregivers' health derived from two approaches, a structural equation model and a log-linear model, using the same data set. Methods: The data were collected from a cross-sectional population-based sample of 468 families in Ontario, Canada who had a child with cerebral palsy (CP). The self-completed questionnaires and the home-based interviews used in this study included scales reflecting socio-economic status, child and caregiver characteristics, and the physical and psychological well-being of the caregivers. Both analytic models were used to evaluate the relationships between child behaviour, caregiving demands, coping factors, and the well-being of primary caregivers of children with CP. Results: The results were compared, together with an assessment of the positive and negative aspects of each approach, including their practical and conceptual implications. Conclusion: No important differences were found in the substantive conclusions of the two analyses. The broad confirmation of the Structural Equation Modeling (SEM) results by the Log-linear Modeling (LLM) provided some reassurance that the SEM had been adequately specified, and that it broadly fitted the data.

  4. Xenon spatial oscillation in nuclear power reactors:an analytical approach through non linear modal analysis

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2005-01-01

    It was recently proposed to apply an extension of Lyapunov's first method to the non-linear regime, known as non-linear modal analysis (NMA), to the study of space-time problems in nuclear reactor kinetics, nuclear power plant dynamics, and nuclear power plant instrumentation and control (1). The present communication shows how to apply NMA to the study of Xenon spatial oscillations in large nuclear reactors. The set of non-linear modal equations derived by J. Lewins (2) for neutron flux, Xenon concentration and Iodine concentration is discussed, and a modified version of these equations is taken as a starting point. Using the methods of singular perturbation theory, a slow manifold is constructed in the space of mode amplitudes. This allows the reduction of the original high-dimensional dynamics to a low-dimensional one. It is shown how the amplitudes of the first mode of the neutron flux field, temperature field and Xenon and Iodine concentration fields can have a stable steady-state value while the corresponding amplitude of the second mode oscillates in a stable limit cycle. The extrapolated dimensions of the reactor's core are used as bifurcation parameters. Approximate analytical formulae are obtained for the critical values of these parameters (below which the onset of oscillations occurs), for the period, and for the amplitudes of the above-mentioned oscillations. These results are applied to the discussion of neutron flux and temperature excursions at critical locations of the reactor's core. The results of NMA can be validated against the results obtained by applying suitable computer codes, using homogenization theory (3) to link the complex heterogeneous model of the codes with the simplified mathematical model used for NMA.

  5. Non-linear failure analysis of HCPB blanket for DEMO taking into account high dose irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Aktaa, J., E-mail: jarir.aktaa@kit.edu [Karlsruhe Institute of Technology (KIT), Institute for Applied Materials, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Kecskés, S.; Pereslavtsev, P.; Fischer, U.; Boccaccini, L.V. [Karlsruhe Institute of Technology (KIT), Institute for Neutron Physics and Reactor Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany)

    2014-10-15

    Highlights: • First non-linear structural analysis for the European Helium Cooled Pebble Bed Blanket Module taking into account high dose irradiation. • Most critical areas were identified and analyzed with regard to the effect of irradiation on predicted damage at these areas. • Despite the extensive computing time 100 cycles were simulated by using the sub-modelling technique investigating damage at most critical area. • The results show a positive effect of irradiation on calculated damage which is mainly attributed to the irradiation induced hardening. - Abstract: For the European helium cooled pebble bed (HCPB) blanket of DEMO the reduced activation ferritic martensitic steel EUROFER has been selected as structural material. During operation the HCPB blanket will be subjected to complex thermo-mechanical loadings and high irradiation doses. Taking into account the material and structural behaviour under these conditions is a precondition for a reliable blanket design. For considering high dose irradiation in structural analysis of the DEMO blanket, the coupled deformation damage model, extended recently taking into account the influence of high dose irradiation on the material behaviour of EUROFER and implemented in the finite element code ABAQUS, has been used. Non-linear finite element (FE) simulations of the DEMO HCPB blanket have been performed considering the design of the HCPB Test Blanket Module (TBM) as reference and the thermal and mechanical boundary conditions of previous analyses. The irradiation dose rate required at each position in the structure as an additional loading parameter is estimated by extrapolating the results available for the TBM in ITER scaling the value calculated in neutronics and activation analysis for ITER boundary conditions to the DEMO boundary conditions. The results of the FE simulations are evaluated considering damage at most critical highly loaded areas of the structure and discussed with regard to the impact of

  6. Non-linear failure analysis of HCPB blanket for DEMO taking into account high dose irradiation

    International Nuclear Information System (INIS)

    Aktaa, J.; Kecskés, S.; Pereslavtsev, P.; Fischer, U.; Boccaccini, L.V.

    2014-01-01

    Highlights: • First non-linear structural analysis for the European Helium Cooled Pebble Bed Blanket Module taking into account high dose irradiation. • Most critical areas were identified and analyzed with regard to the effect of irradiation on predicted damage at these areas. • Despite the extensive computing time 100 cycles were simulated by using the sub-modelling technique investigating damage at most critical area. • The results show a positive effect of irradiation on calculated damage which is mainly attributed to the irradiation induced hardening. - Abstract: For the European helium cooled pebble bed (HCPB) blanket of DEMO the reduced activation ferritic martensitic steel EUROFER has been selected as structural material. During operation the HCPB blanket will be subjected to complex thermo-mechanical loadings and high irradiation doses. Taking into account the material and structural behaviour under these conditions is a precondition for a reliable blanket design. For considering high dose irradiation in structural analysis of the DEMO blanket, the coupled deformation damage model, extended recently taking into account the influence of high dose irradiation on the material behaviour of EUROFER and implemented in the finite element code ABAQUS, has been used. Non-linear finite element (FE) simulations of the DEMO HCPB blanket have been performed considering the design of the HCPB Test Blanket Module (TBM) as reference and the thermal and mechanical boundary conditions of previous analyses. The irradiation dose rate required at each position in the structure as an additional loading parameter is estimated by extrapolating the results available for the TBM in ITER scaling the value calculated in neutronics and activation analysis for ITER boundary conditions to the DEMO boundary conditions. The results of the FE simulations are evaluated considering damage at most critical highly loaded areas of the structure and discussed with regard to the impact of

  7. Confirmatory factor analysis using Microsoft Excel.

    Science.gov (United States)

    Miles, Jeremy N V

    2005-11-01

    This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
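
    In the same spirit as the spreadsheet implementation described above, the following hedged sketch fits a one-factor confirmatory model by minimizing the unweighted least-squares discrepancy between the sample covariance matrix and the model-implied covariance Sigma = lambda*lambda' + diag(psi). The data are synthetic and the ULS criterion is a simplification of the estimators normally used; it only illustrates the mechanics.

        # One-factor CFA by minimizing ||S - Sigma(theta)||^2 with
        # Sigma = lambda*lambda' + diag(psi). Synthetic data; ULS criterion
        # used for simplicity instead of maximum likelihood.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n, p = 300, 4
        f = rng.normal(size=n)
        lam_true = np.array([0.8, 0.7, 0.6, 0.5])
        X = np.outer(f, lam_true) + rng.normal(scale=0.5, size=(n, p))
        S = np.cov(X, rowvar=False)

        def discrepancy(theta):
            lam, psi = theta[:p], theta[p:] ** 2        # psi kept non-negative
            Sigma = np.outer(lam, lam) + np.diag(psi)
            return np.sum((S - Sigma) ** 2)

        res = minimize(discrepancy, x0=np.r_[np.full(p, 0.5), np.full(p, 0.5)])
        print("estimated loadings (sign is arbitrary):", np.round(res.x[:p], 3))
        print("estimated unique variances:", np.round(res.x[p:] ** 2, 3))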

  8. Linear analysis near a steady-state of biochemical networks: control analysis, correlation metrics and circuit theory

    Directory of Open Access Journals (Sweden)

    Qian Hong

    2008-05-01

    Full Text Available Abstract Background: Several approaches, including metabolic control analysis (MCA), flux balance analysis (FBA), correlation metric construction (CMC), and biochemical circuit theory (BCT), have been developed for the quantitative analysis of complex biochemical networks. Here, we present a comprehensive theory of linear analysis for nonequilibrium steady-state (NESS) biochemical reaction networks that unites these disparate approaches in a common mathematical framework and thermodynamic basis. Results: In this theory a number of relationships between key matrices are introduced: the matrix A obtained in the standard, linear-dynamic-stability analysis of the steady state can be decomposed as A = SR^T, where R and S are directly related to the elasticity-coefficient matrix for the fluxes and chemical potentials in MCA, respectively; the control coefficients for the fluxes and chemical potentials can be written in terms of R^T BS and S^T BS, respectively, where matrix B is the inverse of A; the matrix S is precisely the stoichiometric matrix in FBA; and the matrix e^(At) plays a central role in CMC. Conclusion: One key finding that emerges from this analysis is that the well-known summation theorems in MCA take different forms depending on whether the metabolic steady state is maintained by flux injection or concentration clamping. We demonstrate that if rate-limiting steps exist in a biochemical pathway, they are the steps with the smallest biochemical conductances and the largest flux control coefficients. We hypothesize that biochemical networks for cellular signaling have a different strategy for minimizing energy waste and being efficient than do biochemical networks for biosynthesis. We also discuss the intimate relationship between MCA and biochemical systems analysis (BSA).
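
    The algebraic relations quoted above lend themselves to a direct numerical check. The hedged sketch below simply evaluates A = S R^T, B = A^-1 and the two control-coefficient expressions R^T B S and S^T B S on small arbitrary matrices; the numbers carry no biochemical meaning and only show the bookkeeping of the matrix relations.

        # Evaluate the matrix relations from the abstract on toy matrices:
        # A = S R^T, B = inv(A), and the matrices R^T B S and S^T B S.
        # S and R are arbitrary and carry no biochemical meaning.
        import numpy as np

        S = np.array([[ 1.0, -1.0,  0.0],    # "stoichiometric" matrix, 2 species x 3 reactions
                      [ 0.0,  1.0, -1.0]])
        R = np.array([[ 0.5,  0.2,  0.0],    # elasticity-related matrix, same shape as S
                      [ 0.0,  0.4,  0.3]])

        A = S @ R.T
        B = np.linalg.inv(A)
        print("eigenvalues of A:", np.linalg.eigvals(A))
        print("flux control-coefficient matrix R^T B S:\n", R.T @ B @ S)
        print("potential control-coefficient matrix S^T B S:\n", S.T @ B @ S)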

  9. Dynamic analysis and electronic circuit implementation of a novel 3D autonomous system without linear terms

    Science.gov (United States)

    Kengne, J.; Jafari, S.; Njitacke, Z. T.; Yousefi Azar Khanian, M.; Cheukem, A.

    2017-11-01

    Mathematical models (ODEs) describing the dynamics of almost all continuous time chaotic nonlinear systems (e.g. Lorenz, Rossler, Chua, or Chen system) involve at least a nonlinear term in addition to linear terms. In this contribution, a novel (and singular) 3D autonomous chaotic system without linear terms is introduced. This system has an especial feature of having two twin strange attractors: one ordinary and one symmetric strange attractor when the time is reversed. The complex behavior of the model is investigated in terms of equilibria and stability, bifurcation diagrams, Lyapunov exponent plots, time series and Poincaré sections. Some interesting phenomena are found including for instance, period-doubling bifurcation, antimonotonicity (i.e. the concurrent creation and annihilation of periodic orbits) and chaos while monitoring the system parameters. Compared to the (unique) case previously reported by Xu and Wang (2014) [31], the system considered in this work displays a more 'elegant' mathematical expression and experiences richer dynamical behaviors. A suitable electronic circuit (i.e. the analog simulator) is designed and used for the investigations. Pspice based simulation results show a very good agreement with the theoretical analysis.

  10. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Science.gov (United States)

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
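
    As a heavily simplified, hedged illustration of the mixing idea behind MIDA for a peptide with three glycine subunits labelled via 13C2-glycine, the sketch below models the measured mass-shift distribution as (1 - f) times an unlabelled distribution plus f times a binomial distribution in the precursor enrichment p, and recovers p and f by least squares. Natural isotope abundance and spectral overlap are ignored, and this is not the authors' exact regression formulation.

        # Toy MIDA-style fit: mass-shift abundances (M0, M+2, M+4, M+6) modelled
        # as (1-f)*unlabelled + f*Binomial(3, p) over the three glycine sites.
        # Simplified illustration; not the published calculation procedure.
        import numpy as np
        from scipy.stats import binom
        from scipy.optimize import least_squares

        def model(params):
            p, f = params                               # precursor enrichment, fractional synthesis
            labelled = binom.pmf(np.arange(4), 3, p)    # P(0..3 labelled glycines)
            unlabelled = np.array([1.0, 0.0, 0.0, 0.0])
            return (1 - f) * unlabelled + f * labelled

        measured = model((0.20, 0.30)) + np.random.default_rng(5).normal(scale=1e-3, size=4)

        fit = least_squares(lambda q: model(q) - measured, x0=[0.1, 0.5],
                            bounds=([0.0, 0.0], [1.0, 1.0]))
        print("estimated precursor enrichment p and fractional synthesis f:", fit.x)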

  11. Sub-wavelength plasmonic readout for direct linear analysis of optically tagged DNA

    Science.gov (United States)

    Varsanik, Jonathan; Teynor, William; LeBlanc, John; Clark, Heather; Krogmeier, Jeffrey; Yang, Tian; Crozier, Kenneth; Bernstein, Jonathan

    2010-02-01

    This work describes the development and fabrication of a novel nanofluidic flow-through sensing chip that utilizes a plasmonic resonator to excite fluorescent tags with sub-wavelength resolution. We cover the design of the microfluidic chip and simulation of the plasmonic resonator using Finite Difference Time Domain (FDTD) software. The fabrication methods are presented, with testing procedures and preliminary results. This research is aimed at improving the resolution limits of the Direct Linear Analysis (DLA) technique developed by US Genomics [1]. In DLA, intercalating dyes which tag a specific 8 base-pair sequence are inserted in a DNA sample. This sample is pumped through a nano-fluidic channel, where it is stretched into a linear geometry and interrogated with light which excites the fluorescent tags. The resulting sequence of optical pulses produces a characteristic "fingerprint" of the sample which uniquely identifies any sample of DNA. Plasmonic confinement of light to a 100 nm wide metallic nano-stripe enables resolution of a higher tag density compared to free space optics. Prototype devices have been fabricated and are being tested with fluorophore solutions and tagged DNA. Preliminary results show evanescent coupling to the plasmonic resonator is occurring with 0.1 micron resolution; however, light scattering limits the S/N of the detector. Two methods to reduce scattered light are presented: index matching and curved waveguides.

  12. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    Full Text Available This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs in logarithmic received signal strength (RSS varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
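
    A hedged, minimal fingerprinting sketch related to the setup above: reference points placed linearly along a line, RSS fingerprints generated from a log-distance path-loss model, and localization by matching to the nearest fingerprint. The path-loss constants and noise level are invented, and the sketch does not reproduce the paper's closed-form error analysis.

        # Nearest-neighbour fingerprint localization on a line with a
        # log-distance path-loss model. All constants are illustrative.
        import numpy as np

        rng = np.random.default_rng(6)
        ap_pos = np.array([0.0, 25.0, 50.0])            # three APs along a 50 m corridor
        rp_pos = np.linspace(1.0, 49.0, 25)             # linearly calibrated RPs

        def rss(x, noise=0.0):
            d = np.maximum(np.abs(x - ap_pos), 1.0)     # avoid log(0) at an AP
            return -40.0 - 10 * 3.0 * np.log10(d) + noise   # P0 = -40 dBm, exponent 3

        fingerprints = np.array([rss(x) for x in rp_pos])   # offline radio map

        errors = []
        for _ in range(1000):                           # online phase
            x_true = rng.uniform(1.0, 49.0)
            observed = rss(x_true, noise=rng.normal(0.0, 2.0, ap_pos.size))
            nearest = np.argmin(np.linalg.norm(fingerprints - observed, axis=1))
            errors.append(abs(rp_pos[nearest] - x_true))
        print("mean localization error [m]:", np.mean(errors))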

  13. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Science.gov (United States)

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  14. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    Science.gov (United States)

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  15. Robust best linear estimation for regression analysis using surrogate and instrumental variables.

    Science.gov (United States)

    Wang, C Y

    2012-04-01

    We investigate methods for regression analysis when covariates are measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies the classical measurement error model, but it may not have repeated measurements. In addition to the surrogate variables that are available among the subjects in the calibration sample, we assume that there is an instrumental variable (IV) that is available for all study subjects. An IV is correlated with the unobserved true exposure variable and hence can be useful in the estimation of the regression coefficients. We propose a robust best linear estimator that uses all the available data, which is the most efficient among a class of consistent estimators. The proposed estimator is shown to be consistent and asymptotically normal under very weak distributional assumptions. For Poisson or linear regression, the proposed estimator is consistent even if the measurement error from the surrogate or IV is heteroscedastic. Finite-sample performance of the proposed estimator is examined and compared with other estimators via intensive simulation studies. The proposed method and other methods are applied to a bladder cancer case-control study.
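
    The estimator proposed above is more elaborate than plain instrumental-variable regression, but the basic idea of correcting an error-prone covariate with an instrument can be sketched with classical two-stage least squares on synthetic data, as below. This is explicitly not the paper's robust best linear estimator; it only shows how an instrument removes the attenuation caused by measurement error.

        # Classical two-stage least squares (2SLS) with an instrument z for an
        # error-prone surrogate w of the true exposure. Synthetic data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        n = 2000
        x_true = rng.normal(size=n)                         # unobserved true exposure
        w = x_true + rng.normal(scale=0.8, size=n)          # surrogate with measurement error
        z = 0.9 * x_true + rng.normal(scale=0.5, size=n)    # instrumental variable
        y = 1.0 + 2.0 * x_true + rng.normal(size=n)         # outcome, true slope 2

        naive = sm.OLS(y, sm.add_constant(w)).fit()                   # attenuated slope
        w_hat = sm.OLS(w, sm.add_constant(z)).fit().fittedvalues      # stage 1
        iv = sm.OLS(y, sm.add_constant(w_hat)).fit()                  # stage 2
        print("naive slope:", round(naive.params[1], 3),
              "  2SLS slope:", round(iv.params[1], 3))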

  16. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi

    2016-03-05

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee feasibility. The connection between these two approaches has not been sufficiently acknowledged and examined in the literature. In this context, the contributions of this work are fourfold: (1) a comparison between flexibility analysis and robust optimization from a historical perspective is presented; (2) for linear systems, new formulations for the three classical flexibility analysis problems—flexibility test, flexibility index, and design under uncertainty—based on duality theory and the affinely adjustable robust optimization (AARO) approach are proposed; (3) the AARO approach is shown to be generally more restrictive such that it may lead to overly conservative solutions; (4) numerical examples show the improved computational performance from the proposed formulations compared to the traditional flexibility analysis models. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3109–3123, 2016
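
    Both flexibility analysis and robust optimization rest on worst-case feasibility over an uncertainty set. For linear constraints A theta <= b with theta in a box [theta_nom - delta, theta_nom + delta], the worst case of each row a is a.theta_nom + |a|.delta, which the hedged sketch below evaluates on toy numbers. This shows only the shared worst-case idea, not the paper's flexibility-index or AARO formulations.

        # Worst-case feasibility of linear constraints over a box uncertainty
        # set: worst case of row a is a.theta_nom + |a|.delta. Toy numbers.
        import numpy as np

        A = np.array([[ 1.0,  2.0],
                      [-1.0,  0.5],
                      [ 0.3, -1.0]])
        b = np.array([4.0, 1.0, 0.5])
        theta_nom = np.array([1.0, 0.5])
        delta = np.array([0.2, 0.3])

        worst = A @ theta_nom + np.abs(A) @ delta
        print("worst-case constraint values:", worst)
        print("robustly feasible:", bool(np.all(worst <= b)))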

  17. On the accuracy of mode-superposition analysis of linear systems under stochastic agencies

    International Nuclear Information System (INIS)

    Bellomo, M.; Di Paola, M.; La Mendola, L.; Muscolino, G.

    1987-01-01

    This paper deals with the response of linear structures using modal reduction. The MAM (mode acceleration method) correction is extended to stochastic analysis in the stationary case. In this framework the response of the given structure must be described in a probabilistic sense, and the spectral moments of the nodal response must be computed in order to obtain a full description of the vibratory stochastic phenomenon. In the deterministic analysis the response is substantially made up of two terms, one of which accounts for the dynamic response due to the lower modes while the second accounts for the contribution due to the higher modes. In stochastic analysis the nodal spectral moments are made up of three terms; the first accounts for the spectral moments of the dynamic response due to the lower modes, the second accounts for the spectral moments of the input, and the third accounts for the cross-spectral moments between the input and the nodal output. The analysis is applied to a 35-storey building subjected to multivariate wind environments. (orig./HP)

  18. A non-linear reduced order methodology applicable to boiling water reactor stability analysis

    International Nuclear Information System (INIS)

    Prill, Dennis Paul

    2013-01-01

    Thermal-hydraulic coupling between power, flow rate and density, intensified by neutronics feedback are the main drivers of boiling water reactor (BWR) stability behavior. High-power low-flow conditions in connection with unfavorable power distributions can lead the BWR system into unstable regions where power oscillations can be triggered. This important threat to operational safety requires careful analysis for proper understanding. Analyzing an exhaustive parameter space of the non-linear BWR system becomes feasible with methodologies based on reduced order models (ROMs), saving computational cost and improving the physical understanding. Presently within reactor dynamics, no general and automatic prediction of high-dimensional ROMs based on detailed BWR models is available. In this thesis a systematic self-contained model order reduction (MOR) technique is derived which is applicable for several classes of dynamical problems, and in particular to BWRs of any degree of detail. Expert knowledge can be given by operational, experimental or numerical transient data and is transferred into an optimal basis function representation. The methodology is mostly automated and provides the framework for the reduction of various different systems of any level of complexity. Only little effort is necessary to attain a reduced version within this self-written code, which is based on coupling of sophisticated commercial software. The methodology reduces a complex system in a grid-free manner to a small system able to capture even non-linear dynamics. It is based on an optimal choice of basis functions given by the so-called proper orthogonal decomposition (POD). Required steps to achieve reliable and numerically stable ROMs are given by a distinct calibration road-map. In validation and verification steps, a wide spectrum of representative test examples is systematically studied regarding a later BWR application. The first example is non-linear and has a dispersive character
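
    The reduction step described above rests on the proper orthogonal decomposition. A hedged, minimal sketch of POD is given below: assemble a snapshot matrix, take its singular value decomposition, keep the modes that capture most of the energy, and measure the projection error. The snapshots here are a synthetic travelling pulse, not BWR transient data.

        # Minimal POD sketch: snapshot matrix -> SVD -> truncated basis ->
        # reconstruction error. Snapshots are a toy travelling pulse.
        import numpy as np

        x = np.linspace(0.0, 1.0, 200)
        t = np.linspace(0.0, 1.0, 100)
        snapshots = np.array([np.exp(-200 * (x - 0.2 - 0.5 * ti) ** 2) for ti in t]).T

        U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(energy, 0.999)) + 1     # modes capturing 99.9% of the energy

        basis = U[:, :r]                                # POD basis
        reconstruction = basis @ (basis.T @ snapshots)
        err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
        print(f"kept {r} POD modes, relative reconstruction error {err:.2e}")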

  19. DISRUPTIVE EVENT BIOSPHERE DOSE CONVERSION FACTOR ANALYSIS

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The Biosphere Model Report (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objective of this analysis was to develop the BDCFs for the volcanic

  20. Dose conversion factors and linear energy transfer for irradiation of thin blood layers with low-energy X rays

    International Nuclear Information System (INIS)

    Verhaegen, F.; Seuntjens, J.

    1994-01-01

    For irradiation of thin samples of biological material with low-energy X rays, conversion of measured air kerma, free in air to average absorbed dose to the sample is necessary. In the present paper, conversion factors from measured air kerma to average absorbed dose in thin blood samples are given for four low-energy X-ray qualities (14-50 kVp). These factors were obtained by Monte Carlo simulation of a practical sample holder. Data for different thicknesses of the blood and backing layer are presented. The conversion factors are found to depend strongly on the thicknesses of the blood layer and backing layer. In radiobiological work, knowledge of linear energy transfer (LET) values for the radiation quality used is often required. Track-averaged LET values for low-energy X rays are presented in this work. It is concluded that the thickness of the sample does not influence the LET value appreciably, indicating that for all radiobiological purposes this value can be regarded as a constant throughout the sample. Furthermore, the large difference between the LET value for a 50 kV spectrum found in this work and the value given in ICRU Report 16 is pointed out. 16 refs., 7 figs., 1 tab

  1. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed. The contribution of these to a coal sample is determined utilizing several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. The target transformation factor analysis is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and then their elemental concentrations examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified
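
    As an illustration of the target-testing idea behind TTFA (not the authors' exact procedure), the sketch below builds a synthetic concentration matrix from two source profiles, extracts the leading factor space by an uncentered SVD, and checks how well a candidate target profile is reproduced by projection onto that space.

```python
# Simplified target-transformation test: project a candidate elemental profile
# (a "target" vector) onto the subspace spanned by the leading factors of the
# data matrix; a small residual suggests the target is a plausible source profile.
# The data and targets below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
true_profiles = np.array([[10.0, 1.0, 0.1, 5.0],    # hypothetical mineral profile A
                          [0.5, 8.0, 3.0, 0.2]])    # hypothetical mineral profile B
mix = rng.random((30, 2))                            # sample loadings
data = mix @ true_profiles + 0.05 * rng.standard_normal((30, 4))

# Factor space from the leading right singular vectors (2 factors here, uncentered)
_, _, Vt = np.linalg.svd(data, full_matrices=False)
basis = Vt[:2]                                       # shape (2, n_elements)

def target_residual(target):
    target = target / np.linalg.norm(target)
    recon = basis.T @ (basis @ target)               # projection onto factor space
    return np.linalg.norm(target - recon)

print("good target residual:", target_residual(true_profiles[0]))
print("random target residual:", target_residual(rng.random(4)))
```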

  2. SAP-4, Static and Dynamic Linear System Stress Analysis for Various Structures

    International Nuclear Information System (INIS)

    Zawadzki, S.

    1984-01-01

    1 - Description of problem or function: SAP4 is a structural analysis program for determining the static and dynamic response of linear systems. The structural systems to be analyzed may be composed of combinations of a number of different structural elements. Currently the program contains the following element types - (a) three-dimensional truss element, (b) three-dimensional beam element, (c) plane stress and plane strain element, (d) two-dimensional axisymmetric solid, (e) three-dimensional solid, (f) variable-number nodes thick shell and three-dimensional element, (g) thin-plate or thin-shell element, (h) boundary element, and (i) pipe element (tangent and bend). 2 - Method of solution: The formation of the structure matrices is carried out in the same way in a static or dynamic analysis. The static analysis is continued by solving the equations of equilibrium followed by the computation of element stresses. In a dynamic analysis the choice is between frequency calculations only, frequency calculations followed by response history analysis, frequency calculations followed by response spectrum analysis, or response history analysis by direct integration. To obtain the frequencies and vibration mode shapes, solution routines are used which calculate the required eigenvalues and eigenvectors directly without a transformation of the structure stiffness matrix and mass matrix to a reduced form. To perform the direct integration an unconditionally stable scheme is used, which also operates on the original structure stiffness matrix and mass matrix. In this manner the program operation and input data required for a dynamic analysis are simple extensions of those needed for a static analysis. 3 - Restrictions on the complexity of the problem: The capacity of the program depends mainly on the total number of nodal points in the system, the number of eigenvalues needed in the dynamic analysis, and the computer used. There is practically no restriction on the number of

  3. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.
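
    As a complement to the SPSS walk-through described in the paper, a hypothetical minimal run of exploratory factor analysis in Python with scikit-learn might look like the sketch below; the six-item, two-factor data set is synthetic.

```python
# Minimal exploratory factor analysis sketch using scikit-learn (instead of the
# SPSS workflow described in the paper); two latent factors generate six items.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500
factors = rng.standard_normal((n, 2))                  # latent factors
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                     [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
items = factors @ loadings.T + 0.4 * rng.standard_normal((n, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)
print("estimated loadings (rows = items):")
print(np.round(fa.components_.T, 2))
```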

  4. The analysis of linear partial differential operators I distribution theory and Fourier analysis

    CERN Document Server

    Hörmander, Lars

    2003-01-01

    The main change in this edition is the inclusion of exercises with answers and hints. This is meant to emphasize that this volume has been written as a general course in modern analysis on a graduate student level and not only as the beginning of a specialized course in partial differential equations. In particular, it could also serve as an introduction to harmonic analysis. Exercises are given primarily to the sections of general interest; there are none to the last two chapters. Most of the exercises are just routine problems meant to give some familiarity with standard use of the tools introduced in the text. Others are extensions of the theory presented there. As a rule rather complete though brief solutions are then given in the answers and hints. To a large extent the exercises have been taken over from courses or examinations given by Anders Melin or myself at the University of Lund. I am grateful to Anders Melin for letting me use the problems originating from him and for numerous valuable comm...

  5. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    Science.gov (United States)

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
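
    The general idea of explaining a mixture spectrum as a non-negative linear combination of library spectra can be illustrated with a toy non-negative least-squares fit, as sketched below; this is only an illustration of the concept, not Specter's actual algorithm or data model.

```python
# Toy linear deconvolution of a mixture spectrum against a spectral library using
# non-negative least squares; spectra are synthetic intensity vectors over shared
# m/z bins, not real DIA data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bins, n_peptides = 400, 5
# Sparse random "library spectra" (columns) for five hypothetical peptides
library = rng.random((n_bins, n_peptides)) * (rng.random((n_bins, n_peptides)) > 0.9)

true_amounts = np.array([3.0, 0.0, 1.5, 0.0, 0.7])
mixture = library @ true_amounts + 0.01 * rng.random(n_bins)

amounts, resid = nnls(library, mixture)   # non-negative coefficients per library entry
print("recovered amounts:", np.round(amounts, 2), "residual:", round(resid, 3))
```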

  6. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
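
    For readers unfamiliar with the estimators being analyzed, the short numerical sketch below contrasts the resubstitution (training-set) error estimate of LDA with a large independent test-set estimate in a Gaussian model with common covariance; it merely illustrates the optimistic bias studied in the paper, not its asymptotic results.

```python
# Small numerical illustration of the resubstitution (training-set) error estimate
# of linear discriminant analysis versus an independent test-set estimate, in a
# two-class Gaussian model with a common covariance matrix.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
p, n_per_class = 20, 30                      # dimension comparable to sample size
mu = np.zeros(p)
mu_shift = np.full(p, 0.3)

def sample(n):
    X0 = rng.standard_normal((n, p)) + mu
    X1 = rng.standard_normal((n, p)) + mu_shift
    return np.vstack([X0, X1]), np.repeat([0, 1], n)

X_train, y_train = sample(n_per_class)
X_test, y_test = sample(5000)                # large sample approximates the true error

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
resub_error = 1.0 - lda.score(X_train, y_train)   # optimistically biased estimate
true_error = 1.0 - lda.score(X_test, y_test)
print(f"resubstitution error = {resub_error:.3f}, approx. true error = {true_error:.3f}")
```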

  7. High-Speed Linear Raman Spectroscopy for Instability Analysis of a Bluff Body Flame

    Science.gov (United States)

    Kojima, Jun; Fischer, David

    2013-01-01

    We report a high-speed laser diagnostics technique based on point-wise linear Raman spectroscopy for measuring the frequency content of a CH4-air premixed flame stabilized behind a circular bluff body. The technique, which primarily employs a Nd:YLF pulsed laser and a fast image-intensified CCD camera, successfully measures the time evolution of scalar parameters (N2, O2, CH4, and H2O) in the vortex-induced flame instability at a data rate of 1 kHz. Oscillation of the V-shaped flame front is quantified through frequency analysis of the combustion species data and their correlations. This technique promises to be a useful diagnostics tool for combustion instability studies.

  8. Linear-fitting-based similarity coefficient map for tissue dissimilarity analysis in -w magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yu Shao-De; Wu Shi-Bin; Xie Yao-Qin; Wang Hao-Yu; Wei Xin-Hua; Chen Xin; Pan Wan-Long; Hu Jiani

    2015-01-01

    Similarity coefficient mapping (SCM) aims to improve the morphological evaluation of weighted magnetic resonance imaging. However, how to interpret the generated SCM map is still an open question. Moreover, is it possible to extract tissue dissimilarity information based on the theory behind SCM? The primary purpose of this paper is to address these two questions. First, the theory of SCM was interpreted from the perspective of linear fitting. Then, a term was embedded for tissue dissimilarity information. Finally, our method was validated with sixteen human brain image series from multi-echo imaging. Generated maps were investigated in terms of signal-to-noise ratio (SNR) and perceived visual quality, and then interpreted in terms of intra- and inter-tissue intensity. Experimental results show that both the perceptibility of anatomical structures and the tissue contrast are improved. More importantly, tissue similarity or dissimilarity can be quantified and cross-validated from pixel intensity analysis. This method benefits image enhancement, tissue classification, malformation detection and morphological evaluation. (paper)

  9. Non-linear belt transient analysis. A hybrid model for numerical belt conveyor simulation

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, A. [Scientific Solutions, Inc., Aurora, CO (United States)

    2008-07-01

    Frictional and rolling losses along a running conveyor are discussed due to their important influence on wave propagation during starting and stopping. Hybrid friction models allow belt rubber losses and material flexing to be included in the initial tension calculations prior to any dynamic analysis. Once running tensions are defined, a numerical integration method using non-linear stiffness gradients is used to generate transient forces during starting and stopping. A modified Euler integration technique is used to simulate the entire starting and stopping cycle in less than 0.1 seconds. The procedure enables a faster scrutiny of unforeseen conveyor design issues such as low belt tension zones and high forces at drives. (orig.)
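
    A toy lumped-mass analogue of such a transient simulation is sketched below: a chain of masses and springs driven at one end and integrated with a simple semi-implicit Euler scheme. All parameters are made up, and the friction treatment is far cruder than the hybrid model described in the paper.

```python
# Toy lumped-mass model of a belt as a chain of masses and springs, integrated with
# a semi-implicit Euler scheme; illustrative only, not the paper's hybrid friction
# model. A drive force ramps up at one end during "starting".
import numpy as np

n, m, k, c = 50, 20.0, 5.0e5, 200.0      # masses, mass [kg], stiffness [N/m], damping
dt, steps = 1.0e-4, 20000
x = np.zeros(n)
v = np.zeros(n)

for step in range(steps):
    t = step * dt
    drive = min(t / 1.0, 1.0) * 5.0e3            # force ramp at the drive end [N]
    stretch = np.diff(x)                         # elongation of each spring
    f_spring = k * stretch + c * np.diff(v)      # spring + damping force per segment
    f = np.zeros(n)
    f[:-1] += f_spring                           # pull from the right neighbour
    f[1:] -= f_spring                            # reaction on the right mass
    f[0] += drive
    f -= 50.0 * np.sign(v)                       # crude constant friction per mass
    v += dt * f / m                              # semi-implicit Euler update
    x += dt * v

print("belt-end displacements after start-up:", round(x[0], 4), round(x[-1], 4))
```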

  10. Thermal analysis of linear pulse motor for SMART control element drive mechanism

    International Nuclear Information System (INIS)

    Hur, H.; Kim, J. H.; Kim, J. I.; Jang, K. C.; Kang, D. H.

    1999-01-01

    It is important that the temperature of the motor windings be maintained within the allowable limit of the insulation, since the linear pulse motor of CEDM is always supplied with current during the reactor operation. In this study three motor windings were fabricated with three different diameters of coil wires, and the temperatures inside the windings were measured with different current values. As the insulation of the windings is composed of teflon, glass fiber, and air, it is not an easy task to determine experimentally the thermal properties of the complex insulation. In this study, the thermal properties of the insulation were obtained by comparing the results of finite element thermal analyses and those of experiment. The thermal properties obtained here will be used as input for the optimization analysis of the motor

  11. Financial Distress Prediction using Linear Discriminant Analysis and Support Vector Machine

    Science.gov (United States)

    Santoso, Noviyanti; Wibowo, Wahyu

    2018-03-01

    Financial difficulty is the early stage before bankruptcy. Bankruptcies caused by financial distress can be seen from the financial statements of the company. The ability to predict financial distress has become an important research topic because it can provide early warning for the company. In addition, predicting financial distress is also beneficial for investors and creditors. This research builds a prediction model of financial distress for industrial companies in Indonesia by comparing the performance of Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) combined with a variable selection technique. The result of this research is that the prediction model based on hybrid Stepwise-SVM obtains a better balance among fitting ability, generalization ability and model stability than the other models.
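
    A generic comparison of the two classifiers on synthetic data (not the paper's Indonesian company data, and without its stepwise variable selection) could be sketched as follows with scikit-learn.

```python
# Generic sketch comparing linear discriminant analysis and a support vector machine
# for a binary "distress / non-distress" classification task on synthetic financial
# ratios; this does not reproduce the paper's data or its stepwise variable selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, n_informative=4,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```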

  12. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  13. Advanced non-linear flow-induced vibration and fretting-wear analysis capabilities

    Energy Technology Data Exchange (ETDEWEB)

    Toorani, M.; Pan, L.; Li, R.; Idvorian, N. [Babcock and Wilcox Canada Ltd., Cambridge, Ontario (Canada); Vincent, B.

    2009-07-01

    Fretting wear is a potentially significant degradation mechanism in nuclear steam generators and other shell and tube heat transfer equipment as well. This paper presents an overview of the recently developed code FIVDYNA which is used for the non-linear flow-induced vibration and fretting wear analysis for operating steam generators (OTSG and RSG) and shell-and-tube heat exchangers. FIVDYNA is a non-linear time-history Flow-Induced Vibration (FIV) analysis computer program that has been developed by Babcock and Wilcox Canada to advance the understanding of tube vibration and tube to tube-support interaction. In addition to the dynamic fluid induced forces the program takes into account other tube static forces due to axial and lateral tube preload and thermal interaction loads. The program is capable of predicting the location where the fretting wear is most likely to occur and its magnitude taking into account the support geometry including gaps. FIVDYNA uses the general purpose finite element computer code ABAQUS as its solver. Using ABAQUS gives the user the flexibility to add additional forces to the tube ranging from tube preloads and the support offsets to thermal loads. The forces currently being modeled in FIVDYNA are the random turbulence, steady drag force, fluid-elastic forces, support offset and pre-strain force (axial loads). This program models the vibration of tubes and calculates the structural dynamic characteristics, and interaction forces between the tube and the tube supports. These interaction forces are then used to calculate the work rate at the support and eventually the predicted depth of wear scar on the tube. A very good agreement is found with experiments and also other computer codes. (author)

  14. Three-dimensional linear fracture mechanics analysis by a displacement-hybrid finite-element model

    International Nuclear Information System (INIS)

    Atluri, S.N.; Kathiresan, K.; Kobayashi, A.S.

    1975-01-01

    This paper deals with finite-element procedures for the calculation of mode I, II and III stress intensity factors, which vary along an arbitrarily curved three-dimensional crack front in a structural component. The finite-element model is based on a modified variational principle of potential energy with relaxed continuity requirements for displacements at the inter-element boundary. The variational principle is a three-field principle, with the arbitrary interior displacements for the element, interelement boundary displacements, and element boundary tractions as variables. The unknowns in the final algebraic system of equations, in the present displacement hybrid finite element model, are the nodal displacements and the three elastic stress intensity factors. Special elements, which contain proper square root and inverse square root crack front variations in displacements and stresses, respectively, are used in a fixed region near the crack front. Interelement displacement compatibility is satisfied by assuming an independent interelement boundary displacement field, and using a Lagrange multiplier technique to enforce such interelement compatibility. These Lagrangean multipliers, which are physically the boundary tractions, are assumed from an equilibrated stress field derived from three-dimensional Beltrami (or Maxwell-Morera) stress functions that are complete. However, considerable care should be exercised in the use of these stress functions such that the stresses produced by any of these stress function components are not linearly dependent

  15. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle

  16. Dynamic Response Analysis of Linear Pulse Motor with Closed Loop Control

    OpenAIRE

    山本, 行雄; 山田, 一

    1989-01-01

    A linear pulse motor can translate digital signals into linear positions without a gear system. It is important to predict the dynamic response in order to obtain a motor with good performance. In this report the maximum pulse rate and the maximum speed of the linear pulse motor are obtained by using the sampling theory.

  17. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still question of how these methods perform in within-subjects P-technique factor analysis. A…
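
    One of the commonly evaluated criteria, Horn's parallel analysis, can be sketched in a few lines: factors are retained while the eigenvalues of the observed correlation matrix exceed those obtained from random data of the same size. The data below are synthetic; this is a generic illustration, not the article's P-technique simulations.

```python
# Minimal sketch of Horn's parallel analysis: retain factors whose eigenvalues of
# the observed correlation matrix exceed the corresponding eigenvalues obtained
# from random data of the same size.
import numpy as np

def parallel_analysis(data, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.zeros((n_iter, p))
    for i in range(n_iter):
        rand = rng.standard_normal((n, p))
        rand_eig[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
    threshold = np.percentile(rand_eig, 95, axis=0)   # 95th percentile of random eigenvalues
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

rng = np.random.default_rng(1)
latent = rng.standard_normal((300, 2))
loadings = rng.uniform(0.5, 0.9, size=(8, 2))
data = latent @ loadings.T + 0.5 * rng.standard_normal((300, 8))
n_factors, _, _ = parallel_analysis(data)
print("suggested number of factors:", n_factors)
```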

  18. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-07-21

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis reports (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in

  19. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis reports (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in the biosphere. The biosphere process

  20. A SOCIOLOGICAL ANALYSIS OF THE CHILDBEARING COEFFICIENT IN THE ALTAI REGION BASED ON METHOD OF FUZZY LINEAR REGRESSION

    Directory of Open Access Journals (Sweden)

    Sergei Vladimirovich Varaksin

    2017-06-01

    Purpose. Construction of a mathematical model of the dynamics of childbearing change in the Altai region in 2000–2016, and analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of childbearing dynamics using a fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were estimated using a built-in MATLAB algorithm. The fuzzy linear regression method has not yet been used in sociological research. Results. Conclusions are drawn about the socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the method of fuzzy linear regression for sociological analysis.
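
    A minimal sketch of Tanaka-style fuzzy linear regression with symmetric triangular coefficients is shown below: the coefficient centers and spreads are found by a linear program that minimizes total spread while keeping every observation inside the model band. The yearly data are synthetic stand-ins, and this may differ from the exact variant used in the paper.

```python
# Minimal Tanaka-style fuzzy linear regression: symmetric triangular fuzzy
# coefficients (center a_j, spread c_j >= 0) estimated by a linear program that
# minimizes the total spread while every observation lies inside the model band.
import numpy as np
from scipy.optimize import linprog

years = np.arange(2000, 2017)
y = 9.5 + 0.12 * (years - 2000) + np.random.default_rng(4).normal(0, 0.15, years.size)

X = np.column_stack([np.ones_like(years, dtype=float), years - 2000])  # intercept, trend
n, p = X.shape
h = 0.0                                     # inclusion (h-certainty) level

# LP variables z = [a_1..a_p, c_1..c_p]; minimize sum_i sum_j c_j * |x_ij|
obj = np.concatenate([np.zeros(p), np.abs(X).sum(axis=0)])
A_ub = np.vstack([np.hstack([-X, -(1 - h) * np.abs(X)]),    # upper band must cover y
                  np.hstack([X, -(1 - h) * np.abs(X)])])    # lower band must cover y
b_ub = np.concatenate([-y, y])
bounds = [(None, None)] * p + [(0, None)] * p               # centers free, spreads >= 0

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a, c = res.x[:p], res.x[p:]
print("centers:", np.round(a, 3), "spreads:", np.round(c, 3))
```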

  1. Comparison between time-step-integration and probabilistic methods in seismic analysis of a linear structure

    International Nuclear Information System (INIS)

    Schneeberger, B.; Breuleux, R.

    1977-01-01

    Assuming that earthquake ground motion is a stationary time function, the seismic analysis of a linear structure can be done by probabilistic methods using the 'power spectral density function' (PSD), instead of applying the more traditional time-step-integration using earthquake time histories (TH). A given structure was analysed both by PSD and TH methods computing and comparing 'floor response spectra'. The analysis using TH was performed for two different TH and different frequency intervals for the 'floor-response-spectra'. The analysis using PSD first produced PSD functions of the responses of the floors and these were then converted into 'floor-response-spectra'. Plots of the resulting 'floor-response-spectra' show: (1) The agreement of TH and PSD results is quite close. (2) The curves produced by PSD are much smoother than those produced by TH and mostly form an envelope of the latter. (3) The curves produced by TH are quite jagged with the location and magnitude of the peaks depending on the choice of frequencies at which the 'floor-response-spectra' were evaluated and on the choice of TH. (Auth.)
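
    The essence of the PSD route can be shown with a single-degree-of-freedom oscillator: the response PSD is the squared transfer-function magnitude times the input PSD, and spectral moments follow by integration. The sketch below uses illustrative parameters and a flat input spectrum, not the structure or excitation of the paper.

```python
# Single-degree-of-freedom illustration of the PSD approach: the response PSD is
# |H(omega)|^2 times the input PSD, and spectral moments follow by integration.
# Oscillator parameters and the white-noise-like input are illustrative only.
import numpy as np

f = np.linspace(0.01, 50.0, 5000)             # frequency axis [Hz]
omega = 2 * np.pi * f
fn, zeta = 4.0, 0.05                          # natural frequency [Hz], damping ratio
wn = 2 * np.pi * fn

S_input = np.full_like(f, 1.0e-3)             # flat (white-noise-like) input PSD
H2 = 1.0 / ((wn**2 - omega**2) ** 2 + (2 * zeta * wn * omega) ** 2)
S_response = H2 * S_input                     # displacement response PSD

# Spectral moments lambda_k = integral of omega^k * S(omega) d(omega)
moments = [np.trapz(omega**k * S_response, omega) for k in range(3)]
print("lambda0, lambda1, lambda2 =", [f"{m:.3e}" for m in moments])
```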

  2. Neck-focused panic attacks among Cambodian refugees; a logistic and linear regression analysis.

    Science.gov (United States)

    Hinton, Devon E; Chhean, Dara; Pich, Vuth; Um, Khin; Fama, Jeanne M; Pollack, Mark H

    2006-01-01

    Consecutive Cambodian refugees attending a psychiatric clinic were assessed for the presence and severity of current--i.e., at least one episode in the last month--neck-focused panic. Among the whole sample (N=130), in a logistic regression analysis, the Anxiety Sensitivity Index (ASI; odds ratio=3.70) and the Clinician-Administered PTSD Scale (CAPS; odds ratio=2.61) significantly predicted the presence of current neck panic (NP). Among the neck panic patients (N=60), in the linear regression analysis, NP severity was significantly predicted by NP-associated flashbacks (beta=.42), NP-associated catastrophic cognitions (beta=.22), and CAPS score (beta=.28). Further analysis revealed the effect of the CAPS score to be significantly mediated (Sobel test [Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173-1182]) by both NP-associated flashbacks and catastrophic cognitions. In the care of traumatized Cambodian refugees, NP severity, as well as NP-associated flashbacks and catastrophic cognitions, should be specifically assessed and treated.

  3. A comb-sampling method for enhanced mass analysis in linear electrostatic ion traps

    Energy Technology Data Exchange (ETDEWEB)

    Greenwood, J. B.; Kelly, O.; Calvert, C. R.; Duffy, M. J.; King, R. B.; Belshaw, L.; Graham, L.; Alexander, J. D.; Williams, I. D. [Centre for Plasma Physics, School of Mathematics and Physics, Queen's University Belfast, Belfast BT7 1NN (United Kingdom); Bryan, W. A. [Department of Physics, Swansea University, Swansea SA2 8PP (United Kingdom); Turcu, I. C. E.; Cacho, C. M.; Springate, E. [Central Laser Facility, STFC Rutherford Appleton Laboratory, Didcot, Oxfordshire OX11 0QX (United Kingdom)

    2011-04-15

    In this paper an algorithm for extracting spectral information from signals containing a series of narrow periodic impulses is presented. Such signals can typically be acquired by pickup detectors from the image-charge of ion bunches oscillating in a linear electrostatic ion trap, where frequency analysis provides a scheme for high-resolution mass spectrometry. To provide an improved technique for such frequency analysis, we introduce the CHIMERA algorithm (Comb-sampling for High-resolution IMpulse-train frequency ExtRAction). This algorithm utilizes a comb function to generate frequency coefficients, rather than using sinusoids via a Fourier transform, since the comb provides a superior match to the data. This new technique is developed theoretically, applied to synthetic data, and then used to perform high resolution mass spectrometry on real data from an ion trap. If the ions are generated at a localized point in time and space, and the data is simultaneously acquired with multiple pickup rings, the method is shown to be a significant improvement on Fourier analysis. The mass spectra generated typically have an order of magnitude higher resolution compared with that obtained from fundamental Fourier frequencies, and are free of large contributions from harmonic frequency components.
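
    A heavily simplified, toy version of comb-style frequency scoring is sketched below: for each trial frequency, the signal is sampled at the comb tooth positions and the average value is used as a score. This only conveys the intuition that a comb matches an impulse train better than a sinusoid does; it is not the published CHIMERA algorithm, and all signal parameters are made up.

```python
# Toy comb-style frequency scoring for a signal made of narrow periodic impulses:
# for each trial frequency, sum the signal at the comb tooth positions.
import numpy as np

fs, T = 1.0e6, 0.05                             # sample rate [Hz], record length [s]
t = np.arange(0, T, 1 / fs)
f_true = 12_345.0                               # true oscillation frequency [Hz]
signal = np.zeros_like(t)
pulse_times = np.arange(0, T, 1 / f_true)
idx = np.round(pulse_times * fs).astype(int)
idx = idx[idx < t.size]
signal[idx] = 1.0                               # narrow unit impulses
signal += 0.05 * np.random.default_rng(5).standard_normal(t.size)

trial_freqs = np.linspace(12_000.0, 12_700.0, 1401)
scores = np.empty_like(trial_freqs)
for i, f in enumerate(trial_freqs):
    teeth = np.arange(0, T, 1 / f)              # comb tooth positions for this trial
    scores[i] = np.interp(teeth, t, signal).sum() / teeth.size

print("best trial frequency [Hz]:", trial_freqs[np.argmax(scores)])
```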

  4. Classification of Surface and Deep Soil Samples Using Linear Discriminant Analysis

    International Nuclear Information System (INIS)

    Wasim, M.; Ali, M.; Daud, M.

    2015-01-01

    A statistical analysis was made of the activity concentrations measured in surface and deep soil samples for natural and anthropogenic gamma-emitting radionuclides. Soil samples were obtained from 48 different locations in Gilgit, Pakistan covering an area of about 50 km² at an average altitude of 1550 m above sea level. From each location two samples were collected: one from the top soil (2-6 cm) and another from a depth of 6-10 cm. Four radionuclides including ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs were quantified. The data were analyzed using a t-test to find out the activity concentration difference between the surface and depth samples. At the surface, the median activity concentrations were 23.7, 29.1, 4.6 and 115 Bq kg⁻¹ for ²²⁶Ra, ²³²Th, ¹³⁷Cs and ⁴⁰K respectively. For the same radionuclides, the activity concentrations were respectively 25.5, 26.2, 2.9 and 191 Bq kg⁻¹ for the depth samples. Principal component analysis (PCA) was applied to explore patterns within the data. A positive significant correlation was observed between the radionuclides ²²⁶Ra and ²³²Th. The data from PCA were further utilized in linear discriminant analysis (LDA) for the classification of surface and depth samples. LDA classified surface and depth samples with good predictability. (author)

  5. Stability and performance analysis of a jump linear control system subject to digital upsets

    Science.gov (United States)

    Wang, Rui; Sun, Hui; Ma, Zhen-Yang

    2015-04-01

    This paper focuses on the methodology analysis for the stability and the corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subject to independent digital upsets. The upset processes are used to stimulate electromagnetic environments. Specifically, the paper presents the scenarios that the upset process is directly injected into the distributed flight control system, which is modeled by independent Markov upset processes and independent and identically distributed (IID) processes. A theoretical performance analysis and simulation modelling are both presented in detail for a more complete independent digital upset injection. The specific examples are proposed to verify the methodology of tracking performance analysis. The general analyses for different configurations are also proposed. Comparisons among different configurations are conducted to demonstrate the availability and the characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).

  6. Linear Covariance Analysis For Proximity Operations Around Asteroid 2008 EV5

    Science.gov (United States)

    Wright, Cinnamon A.; Bhatt, Sagar; Woffinden, David; Strube, Matthew; D'Souza, Christopher; DeWeese, Keith

    2015-01-01

    The NASA initiative to collect an asteroid, the Asteroid Robotic Redirect Mission (ARRM), is currently investigating the option of retrieving a boulder off an asteroid, demonstrating planetary defense with an enhanced gravity tractor technique, and returning it to a lunar orbit. Techniques for accomplishing this are being investigated by the Satellite Servicing Capabilities Office (SSCO) and NASA GSFC in collaboration with JPL, NASA JSC, LaRC, and Draper Laboratories Inc. Two critical phases of the mission are the descent to the boulder and the enhanced gravity tractor demonstration. A linear covariance analysis was done for these phases to assess the feasibility of these concepts with the proposed design of the sensor and actuator suite of the Asteroid Redirect Vehicle (ARV). The sensor suite for this analysis includes a wide field-of-view camera, a Lidar, and an MMU. The proposed asteroid of interest is currently the C-type asteroid 2008 EV5, a carbonaceous chondrite that is of high interest to the scientific community. This paper presents an overview of the analysis, discusses the sensor and actuator models, and addresses the feasibility of descending to the boulder within the requirements, as well as the feasibility of maintaining the halo orbit in order to demonstrate the enhanced gravity tractor technique.

  7. Study on Brain Dynamics by Non Linear Analysis of Music Induced EEG Signals

    Science.gov (United States)

    Banerjee, Archi; Sanyal, Shankha; Patranabis, Anirban; Banerjee, Kaushik; Guhathakurta, Tarit; Sengupta, Ranjan; Ghosh, Dipak; Ghose, Partha

    2016-02-01

    Music has been proven to be a valuable tool for the understanding of human cognition, human emotion, and their underlying brain mechanisms. The objective of this study is to analyze the effect of Hindustani music on brain activity during normal relaxing conditions using electroencephalography (EEG). Ten healthy male subjects without special musical education participated in the study. EEG signals were acquired at the frontal (F3/F4) lobes of the brain while listening to music under three experimental conditions (rest, with music and without music). Frequency analysis was done for the alpha, theta and gamma brain rhythms. The findings show that arousal-based activities were enhanced while listening to Hindustani music of contrasting emotions (romantic/sorrow) for all the subjects in the case of the alpha frequency band, while no significant changes were observed in the gamma and theta frequency ranges. It has been observed that when the music stimulus is removed, arousal activities as evident from alpha brain rhythms remain for some time, showing residual arousal. This is analogous to the conventional 'Hysteresis' loop where the system retains some 'memory' of the former state. This is corroborated in the non-linear analysis (Detrended Fluctuation Analysis) of the alpha rhythms as manifested in the values of fractal dimension. After an input of music conveying contrasting emotions, withdrawal of the music shows more retention as evidenced by the values of fractal dimension.
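
    Since the study relies on Detrended Fluctuation Analysis, a compact generic DFA implementation is sketched below for reference: the signal is integrated, detrended in windows of increasing size, and the log-log slope of fluctuation versus window size gives the scaling exponent. The white-noise input is only a sanity check, not the EEG data of the study.

```python
# Compact detrended fluctuation analysis (DFA): integrate the signal, detrend it in
# windows of increasing size, and fit the log-log slope of fluctuation vs. window
# size. The slope (scaling exponent) is the quantity related to fractal behaviour.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))                 # integrated (profile) series
    fluct = []
    for s in scales:
        n_win = len(y) // s
        segments = y[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)        # linear detrending per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluct.append(np.mean(rms))
    slope = np.polyfit(np.log(scales), np.log(fluct), 1)[0]
    return slope

rng = np.random.default_rng(6)
white = rng.standard_normal(4096)                  # expected exponent close to 0.5
scales = np.unique(np.logspace(np.log10(8), np.log10(512), 20).astype(int))
print("DFA exponent of white noise:", round(dfa(white, scales), 2))
```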

  8. Design and analysis of linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine systems

    Science.gov (United States)

    Kim, Jeong-Man; Choi, Jang-Young; Lee, Kyu-Seok; Lee, Sung-Ho

    2017-05-01

    This study focuses on the design and analysis of a linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine (FPSE) systems. In order to design a linear oscillatory generator (LOG) suitable for FPSEs, we conducted electromagnetic analysis of LOGs with varying design parameters. Then, detent force analysis was conducted using an assisted PM. Using the assisted PM gave the advantage of additional mechanical holding strength from the detent force. To improve the efficiency, we conducted characteristic analysis of the eddy-current loss with respect to the PM segment. Finally, the experimental results were analyzed to confirm the predictions of the FEA.

  9. Design and analysis of linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine systems

    Directory of Open Access Journals (Sweden)

    Jeong-Man Kim

    2017-05-01

    This study focuses on the design and analysis of a linear oscillatory single-phase permanent magnet generator for free-piston Stirling engine (FPSE) systems. In order to design a linear oscillatory generator (LOG) suitable for FPSEs, we conducted electromagnetic analysis of LOGs with varying design parameters. Then, detent force analysis was conducted using an assisted PM. Using the assisted PM gave the advantage of additional mechanical holding strength from the detent force. To improve the efficiency, we conducted characteristic analysis of the eddy-current loss with respect to the PM segment. Finally, the experimental results were analyzed to confirm the predictions of the FEA.

  10. Exploratory Bi-Factor Analysis: The Oblique Case

    Science.gov (United States)

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  11. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction, rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.

  12. Exploratory Bi-factor Analysis: The Oblique Case

    OpenAIRE

    Jennrich, Robert L.; Bentler, Peter M.

    2011-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...

  13. Quantitative analysis of results of quality control tests in linear accelerators used in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Passaro, Bruno M.; Rodrigues, Laura N. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Videira, Heber S., E-mail: bruno.passaro@gmail.com [Universidade de Sao Paulo (HCFMRP/USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Hospital das Clinicas

    2013-04-15

    The aim of this study is to assess and analyze the stability of the calibration factor of three linear accelerators, as well as the other dosimetric parameters normally included in a radiotherapy quality control program. The average calibration factors over a period of approximately four years were (0.998±0.012) and (0.996±0.014) for the Clinac 600C and Clinac 6EX, respectively, and (1.008±0.009) and (1.006±0.010) for the 6 MV and 15 MV beams of the Clinac 2100CD, also over approximately four years. The calibration factor data were divided into four subgroups for a more detailed analysis of behavior over the years. Through statistical analysis of the calibration factors, we found that for the Clinac 600C and Clinac 2100CD more than 90% of the values are expected to be within the acceptable ranges according to TG-142, while for the Clinac 6EX the expected fraction is around 85%, since this accelerator had several component replacements. The values of TPR20,10 of the three accelerators are practically constant and within the acceptable limits according to TG-142. It can be concluded that a detailed quantitative study of the accelerator calibration factors and TPR20,10 is extremely useful in a quality assurance program. (author)

  14. A non linear analysis of human gait time series based on multifractal analysis and cross correlations

    International Nuclear Information System (INIS)

    Munoz-Diosdado, A

    2005-01-01

    We analyzed databases with gait time series of adults and persons with Parkinson, Huntington and amyotrophic lateral sclerosis (ALS) diseases. We obtained the staircase graphs of accumulated events that can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponent of these series does not show tendencies; we believe that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra, obtained the spectra width and found that the spectra of the healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In opposition to the interbeat time series, where the pathology implies loss of multifractality, in the gait time series the multifractal behavior emerges with the pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path, with sensors on both feet, so we have one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then we compared both results, with direct comparison and with a cross correlation analysis. We tried to find differences in both time series that can be used as indicators of equilibrium problems

  15. A non linear analysis of human gait time series based on multifractal analysis and cross correlations

    Energy Technology Data Exchange (ETDEWEB)

    Munoz-Diosdado, A [Department of Mathematics, Unidad Profesional Interdisciplinaria de Biotecnologia, Instituto Politecnico Nacional, Av. Acueducto s/n, 07340, Mexico City (Mexico)

    2005-01-01

    We analyzed databases with gait time series of adults and persons with Parkinson, Huntington and amyotrophic lateral sclerosis (ALS) diseases. We obtained the staircase graphs of accumulated events that can be bounded by a straight line whose slope can be used to distinguish between gait time series from healthy and ill persons. The global Hurst exponent of these series does not show tendencies; we believe that this is because some gait time series have monofractal behavior and others have multifractal behavior, so they cannot be characterized with a single Hurst exponent. We calculated the multifractal spectra, obtained the spectra width and found that the spectra of the healthy young persons are almost monofractal. The spectra of ill persons are wider than the spectra of healthy persons. In opposition to the interbeat time series, where the pathology implies loss of multifractality, in the gait time series the multifractal behavior emerges with the pathology. Data were collected from healthy and ill subjects as they walked in a roughly circular path, with sensors on both feet, so we have one time series for the left foot and another for the right foot. First, we analyzed these time series separately, and then we compared both results, with direct comparison and with a cross correlation analysis. We tried to find differences in both time series that can be used as indicators of equilibrium problems.

  16. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    Science.gov (United States)

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors as the concentrations were always reported with the passing curves which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied into other quantitative analysis techniques using calibration curves with weighted least-squares regression algorithm.
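
    The effect of the weighting factor can be reproduced with a direct weighted least-squares fit, as in the hypothetical sketch below: synthetic calibration data with noise proportional to concentration are fitted with and without 1/x² weights, and the back-calculated accuracy at the lowest standard is compared.

```python
# Direct weighted least-squares fit of a linear calibration curve with 1/x^2 weights,
# i.e. minimizing sum_i w_i * (y_i - a - b*x_i)^2 with w_i = 1/x_i^2. Synthetic data
# whose noise grows proportionally to concentration mimic the typical LC-MS/MS case.
import numpy as np

rng = np.random.default_rng(7)
x = np.array([1, 2, 5, 10, 50, 100, 500, 1000], dtype=float)    # concentrations
y = 0.02 * x + 0.05 + 0.03 * x * rng.standard_normal(x.size)    # sigma proportional to x

def wls_line(x, y, w):
    # Solve the weighted least-squares problem by scaling rows with sqrt(w)
    A = np.column_stack([np.ones_like(x), x]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef                                                   # [intercept, slope]

for label, w in [("unweighted", np.ones_like(x)), ("1/x^2", 1.0 / x**2)]:
    intercept, slope = wls_line(x, y, w)
    back_calc = (y - intercept) / slope                           # back-calculated conc.
    low_end_bias = 100 * (back_calc[0] - x[0]) / x[0]
    print(f"{label:>10}: slope={slope:.4f}, bias at lowest standard = {low_end_bias:+.1f}%")
```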

  17. The Langley Stability and Transition Analysis Code (LASTRAC) : LST, Linear and Nonlinear PSE for 2-D, Axisymmetric, and Infinite Swept Wing Boundary Layers

    Science.gov (United States)

    Chang, Chau-Lyan

    2003-01-01

    During the past two decades, our understanding of laminar-turbulent transition flow physics has advanced significantly owing to, in a large part, the NASA program support such as the National Aerospace Plane (NASP), High-speed Civil Transport (HSCT), and Advanced Subsonic Technology (AST). Experimental, theoretical, as well as computational efforts on various issues such as receptivity and linear and nonlinear evolution of instability waves take part in broadening our knowledge base for this intricate flow phenomenon. Despite all these advances, transition prediction remains a nontrivial task for engineers due to the lack of a widely available, robust, and efficient prediction tool. The design and development of the LASTRAC code is aimed at providing one such engineering tool that is easy to use and yet capable of dealing with a broad range of transition related issues. LASTRAC was written from scratch based on the state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory (LST) or linear parabolized stability equations (LPSE) method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the skin friction rise. Coupled with the built-in receptivity model that is currently under development, the nonlinear PSE method offers a synergistic approach to predict transition onset for a given disturbance environment based on first principles. This paper describes the governing equations, numerical methods, code development, and case studies for the current release of LASTRAC. Practical applications of LASTRAC are demonstrated for linear stability calculations, N-factor transition correlation, non-linear breakdown simulations, and controls of stationary crossflow instability in supersonic swept wing boundary
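
    The N-factor correlation mentioned above can be illustrated in a toy form: the amplification factor is the streamwise integral of the local spatial growth rate, and transition is correlated with N reaching a critical value (often around 9 or 10). The growth-rate curve below is a made-up placeholder, not LASTRAC output.

```python
# Toy illustration of the N-factor concept used in transition correlation: the
# amplification factor is N(x) = integral of the local spatial growth rate -alpha_i
# over x, and transition is correlated with N reaching a critical value.
import numpy as np

x = np.linspace(0.0, 1.0, 500)                            # streamwise coordinate [m]
growth_rate = 30.0 * np.exp(-((x - 0.55) / 0.25) ** 2)    # modeled -alpha_i [1/m]
growth_rate[x < 0.15] = 0.0                               # stable region upstream

# Cumulative trapezoidal integration of the growth rate gives N(x)
N = np.concatenate([[0.0],
                    np.cumsum(0.5 * (growth_rate[1:] + growth_rate[:-1]) * np.diff(x))])

N_crit = 9.0
transition = x[np.argmax(N >= N_crit)] if N.max() >= N_crit else None
print("max N-factor:", round(N.max(), 2), "| predicted transition location:", transition)
```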

  18. Non-linear analysis of wave progagation using transform methods and plates and shells using integral equations

    Science.gov (United States)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible, lattice structures. The technique used combines the Laplace Transform with the Finite Element Method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact. As a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms. The non-linear terms are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams. For non-linear systems, a viscoelastic rod and Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four, self equilibrating corner forces. The results are compared to two existing numerical solutions of the problem which differ substantially. linear model are compared to those

  19. Analysis of Design Variables of Annular Linear Induction Electromagnetic Pump using an MHD Model

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Jae Sik; Kim, Hee Reyoung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)]

    2015-05-15

    The generated force is affected by many factors, including the electrical input, the hydrodynamic flow, the geometrical shape, and so on. These factors, which are the design variables of an ALIP, should be suitably analyzed to optimally design an ALIP. Analysis of the developed pressure and efficiency of the ALIP as the design variables change is required for an ALIP that satisfies the requirements. In this study, the design variables of the ALIP are analyzed using an ideal MHD analysis model. The developed pressure, electromagnetic force, and efficiency of the ALIP were derived and analyzed with respect to changes in the main design variables: pump core length, inner core diameter, flow gap, and number of coil turns.
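
    For orientation, the ideal MHD model of a linear induction pump is often summarized by slip-based expressions of the following form (an assumption made here for illustration, not necessarily the exact formulation used in this study), where tau is the pole pitch, f the supply frequency, sigma the fluid conductivity, B_m the peak flux density in the flow gap, and L_p the pump core length:

```latex
v_{s}=2\tau f,\qquad s=\frac{v_{s}-v}{v_{s}},\qquad
\Delta p \;\approx\; \tfrac{1}{2}\,\sigma\, s\, v_{s}\, B_{m}^{2}\, L_{p},\qquad
\eta_{\text{ideal}} \;\approx\; 1-s .
```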

  20. Factors affecting optimal linear endovenous energy density for endovenous laser ablation in incompetent lower limb truncal veins - A review of the clinical evidence.

    Science.gov (United States)

    Cowpland, Christine A; Cleese, Amy L; Whiteley, Mark S

    2017-06-01

    Objectives The objective is to identify the factors that affect the optimal linear endovenous energy density (LEED) to ablate incompetent truncal veins. Methods We performed a literature review of clinical studies that reported truncal vein ablation rates and LEED. A PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram documents the search strategy. We analysed 13 clinical papers which fulfilled the criteria, comparing rates of great saphenous vein occlusion, as defined by venous duplex ultrasound, with the LEED used in the treatment. Results Evidence suggests that the optimal LEED for endovenous laser ablation of the great saphenous vein is >80 J/cm, and that longer wavelength lasers targeting water might have a lower optimal LEED. A LEED of >80 J/cm and <95 J/cm appears optimal based on current evidence for shorter wavelength lasers. There is evidence that longer wavelength lasers may be effective at LEEDs of <85 J/cm.
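
    For reference, LEED is simply the delivered laser energy per centimetre of treated vein; with a continuous pullback it reduces to laser power divided by pullback speed (a standard definition given for orientation, not a result of this review):

```latex
\mathrm{LEED}\ [\mathrm{J/cm}]
=\frac{E\ [\mathrm{J}]}{L\ [\mathrm{cm}]}
=\frac{P\ [\mathrm{W}]}{v_{\text{pullback}}\ [\mathrm{cm/s}]}.
```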

  1. Analysis of Instantaneous Linear, Nonlinear and Complex Cardiovascular Dynamics from Videophotoplethysmography.

    Science.gov (United States)

    Valenza, Gaetano; Iozzia, Luca; Cerina, Luca; Mainardi, Luca; Barbieri, Riccardo

    2018-05-01

    There is a fast growing interest in the use of non-contact devices for health and performance assessment in humans. In particular, the use of non-contact videophotoplethysmography (vPPG) has recently been demonstrated as a feasible way to extract cardiovascular information. Nevertheless, proper validation of vPPG-derived heartbeat dynamics is still missing. We aim at an in-depth validation of time-varying, linear and nonlinear/complex dynamics of the pulse rate variability extracted from vPPG. We apply inhomogeneous point-process nonlinear models to assess instantaneous measures defined in the time, frequency, and bispectral domains as estimated through vPPG and standard ECG. Instantaneous complexity measures, such as the instantaneous Lyapunov exponents and the recently defined inhomogeneous point-process approximate and sample entropy, were estimated as well. Video recordings were processed using our recently proposed method based on zero-phase principal component analysis. Experimental data were gathered from 60 young healthy subjects (age: 24±3 years) undergoing postural changes (rest-to-stand maneuver). Group-averaged results show that there is an overall agreement between linear and nonlinear/complexity indices computed from ECG and vPPG during resting-state conditions. However, important differences are found, particularly in the bispectral and complexity domains, in recordings where the subjects were instructed to stand up. Although significant differences exist between cardiovascular estimates from vPPG and ECG, it is very promising that instantaneous sympathovagal changes, as well as time-varying complex dynamics, were correctly identified, especially during resting state. In addition to a further improvement of the video signal quality, more research is advocated towards a more precise estimation of cardiovascular dynamics by a comprehensive nonlinear/complex paradigm specifically tailored to non-contact quantification. Schattauer GmbH.
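
    As a simplified illustration of the kind of frequency-domain pulse rate variability indices discussed above, the sketch below estimates an LF/HF ratio from detected beat times using a conventional Welch spectrum. This is a standard, time-invariant estimate, not the instantaneous point-process estimators used in the paper, and the function and parameter names are assumptions.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(beat_times, fs_resample=4.0):
    """Estimate the LF/HF power ratio from beat (pulse) arrival times in seconds."""
    beat_times = np.asarray(beat_times, dtype=float)
    ibi = np.diff(beat_times)                          # inter-beat intervals (s)
    t = beat_times[1:]
    grid = np.arange(t[0], t[-1], 1.0 / fs_resample)   # uniform grid for resampling
    ibi_u = interp1d(t, ibi, kind="cubic")(grid)
    f, pxx = welch(ibi_u - ibi_u.mean(), fs=fs_resample,
                   nperseg=min(256, len(ibi_u)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    return np.trapz(pxx[lf_band], f[lf_band]) / np.trapz(pxx[hf_band], f[hf_band])
```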

  2. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    Directory of Open Access Journals (Sweden)

    Guan Yu

    Full Text Available Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and

  3. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    Science.gov (United States)

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the existence of a lot of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for the incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all different classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve the similar estimated mean difference between the two classes (under classification) for those shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, instead of constraining different classification tasks to choose a common feature subset for those shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with the incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images
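
    To give a flavour of the linear-programming formulation mentioned above, here is a minimal single-task L1-norm LP discriminant solved with scipy.optimize.linprog. This is a generic sketch under assumed names and a soft-margin formulation, not the multi-task MLPD method of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def lp_discriminant(X, y, C=1.0):
    """Single-task L1-norm linear-programming discriminant (generic sketch).

    Solves: min ||w||_1 + C * sum(xi)  s.t.  y_i (w.x_i + b) >= 1 - xi_i, xi >= 0,
    by splitting w = w_pos - w_neg and b = b_pos - b_neg (all variables >= 0).
    X : (n, d) feature matrix;  y : labels in {-1, +1}.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    n, d = X.shape
    # variable vector: [w_pos (d), w_neg (d), b_pos, b_neg, xi (n)]
    c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(n)])
    # constraint rows:  -(y_i x_i).w_pos + (y_i x_i).w_neg - y_i b_pos + y_i b_neg - xi_i <= -1
    A_ub = np.hstack([-(y[:, None] * X), (y[:, None] * X),
                      -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 2 + n), method="highs")
    w = res.x[:d] - res.x[d:2 * d]
    b = res.x[2 * d] - res.x[2 * d + 1]
    return w, b          # predict with sign(X_new @ w + b)
```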

  4. Quantitative analysis of eyes and other optical systems in linear optics.

    Science.gov (United States)

    Harris, William F; Evans, Tanya; van Gool, Radboud D

    2017-05-01

    To show that 14-dimensional spaces of augmented point P and angle Q characteristics, matrices obtained from the ray transference, are suitable for quantitative analysis, although only the latter define an inner-product space and only on it can one define distances and angles. The paper examines the nature of the spaces and their relationships to other spaces including symmetric dioptric power space. The paper makes use of linear optics, a three-dimensional generalization of Gaussian optics. Symmetric 2 × 2 dioptric power matrices F define a three-dimensional inner-product space which provides a sound basis for quantitative analysis (calculation of changes, arithmetic means, etc.) of refractive errors and thin systems. For general systems the optical character is defined by the dimensionally-heterogeneous 4 × 4 symplectic matrix S, the transference, or, if explicit allowance is made for heterocentricity, the 5 × 5 augmented symplectic matrix T. Ordinary quantitative analysis cannot be performed on them because matrices of neither of these types constitute vector spaces. Suitable transformations have been proposed, but because the transforms are dimensionally heterogeneous, the spaces are not naturally inner-product spaces. The paper obtains 14-dimensional spaces of augmented point P and angle Q characteristics. The 14-dimensional space defined by the augmented angle characteristics Q is dimensionally homogeneous and an inner-product space. A 10-dimensional subspace of the space of augmented point characteristics P is also an inner-product space. The spaces are suitable for quantitative analysis of the optical character of eyes and many other systems. Distances and angles can be defined in the inner-product spaces. The optical systems may have multiple separated astigmatic and decentred refracting elements. © 2017 The Authors Ophthalmic & Physiological Optics © 2017 The College of Optometrists.
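
    For context, the 4 × 4 ray transference named above is symplectic; written in block form, the symplectic condition imposes the following relations among its submatrices (standard linear-optics notation, assumed here rather than quoted from the paper):

```latex
S=\begin{pmatrix}A & B\\ C & D\end{pmatrix},\qquad
S^{\mathsf{T}}\!\begin{pmatrix}O & I\\ -I & O\end{pmatrix}\!S
=\begin{pmatrix}O & I\\ -I & O\end{pmatrix}
\;\Longleftrightarrow\;
A^{\mathsf{T}}C=C^{\mathsf{T}}A,\quad
B^{\mathsf{T}}D=D^{\mathsf{T}}B,\quad
A^{\mathsf{T}}D-C^{\mathsf{T}}B=I .
```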

  5. NBLDA: negative binomial linear discriminant analysis for RNA-Seq data.

    Science.gov (United States)

    Dong, Kai; Zhao, Hongyu; Tong, Tiejun; Wan, Xiang

    2016-09-13

    RNA-sequencing (RNA-Seq) has become a powerful technology to characterize gene expression profiles because it is more accurate and comprehensive than microarrays. Although statistical methods that have been developed for microarray data can be applied to RNA-Seq data, they are not ideal due to the discrete nature of RNA-Seq data. The Poisson distribution and negative binomial distribution are commonly used to model count data. Recently, Witten (Annals Appl Stat 5:2493-2518, 2011) proposed a Poisson linear discriminant analysis for RNA-Seq data. The Poisson assumption may not be as appropriate as the negative binomial distribution when biological replicates are available and in the presence of overdispersion (i.e., when the variance is larger than the mean). However, it is more complicated to model negative binomial variables because they involve a dispersion parameter that needs to be estimated. In this paper, we propose a negative binomial linear discriminant analysis for RNA-Seq data. By Bayes' rule, we construct the classifier by fitting a negative binomial model, and propose some plug-in rules to estimate the unknown parameters in the classifier. The relationship between the negative binomial classifier and the Poisson classifier is explored, with a numerical investigation of the impact of dispersion on the discriminant score. Simulation results show the superiority of our proposed method. We also analyze two real RNA-Seq data sets to demonstrate the advantages of our method in real-world applications. We have developed a new classifier using the negative binomial model for RNA-Seq data classification. Our simulation results show that our proposed classifier has a better performance than existing works. The proposed classifier can serve as an effective tool for classifying RNA-Seq data. Based on the comparison results, we have provided some guidelines for scientists to decide which method should be used in the discriminant analysis of RNA-Seq data.
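
    To illustrate the Bayes-rule construction described above, the sketch below scores one count vector against per-class negative binomial models with a shared, known dispersion. It is a toy illustration under assumed names, not the paper's plug-in estimators for the dispersion and class parameters.

```python
import numpy as np
from scipy.stats import nbinom

def nb_discriminant_scores(x, class_means, r=10.0, priors=None):
    """Toy negative-binomial Bayes discriminant for one count vector x.

    class_means : array (K, G) of per-class, per-gene mean counts
    r           : shared NB dispersion (number-of-successes parameterisation)
    Returns log-scores; the predicted class is argmax(scores).
    """
    x = np.asarray(x, dtype=float)
    class_means = np.asarray(class_means, dtype=float)
    K = class_means.shape[0]
    priors = np.full(K, 1.0 / K) if priors is None else np.asarray(priors, dtype=float)
    scores = np.empty(K)
    for k in range(K):
        mu = class_means[k]
        p = r / (r + mu)              # scipy parameterisation: mean = r * (1 - p) / p = mu
        scores[k] = nbinom.logpmf(x, r, p).sum() + np.log(priors[k])
    return scores
```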

  6. Design and Experiment Analysis of a Direct-Drive Wave Energy Converter with a Linear Generator

    Directory of Open Access Journals (Sweden)

    Jing Zhang

    2018-03-01

    Full Text Available Coastal waves are an abundant, nonpolluting, and renewable energy source. A wave energy converter (WEC) must be designed for efficient and steady operation in highly energetic ocean environments. A direct-drive wave energy conversion (D-DWEC) system with a tubular permanent magnet linear generator (TPMLG) on a wind and solar photovoltaic complementary energy generation platform is proposed to improve the conversion efficiency and reduce the complexity and device volume of WECs. The operating principle of D-DWECs is introduced, and detailed analyses of the proposed D-DWEC’s floater system, wave force characteristics, and conversion efficiency conducted using computational fluid dynamics are presented. A TPMLG with an asymmetric slot structure is designed to increase the output electric power, and detailed analyses of the magnetic field distribution, detent force characteristics, and no-load and load performances conducted using finite element analysis are discussed. The TPMLG with an asymmetric slot, which produces the same power as the TPMLG with a symmetric slot, has one fifth the detent force of the latter. An experiment system with a prototype of the TPMLG with a symmetric slot is used to test the simulation results. The experiment and analysis results agree well. Therefore, the proposed D-DWEC fulfills the requirements of WEC systems.

  7. Linear analysis of the Richtmyer-Meshkov instability in shock-flame interactions

    Science.gov (United States)

    Massa, L.; Jha, P.

    2012-05-01

    Shock-flame interactions enhance supersonic mixing and detonation formation. Therefore, their analysis is important to explosion safety, internal combustion engine performance, and supersonic combustor design. The fundamental process at the basis of the interaction is the Richtmyer-Meshkov instability supported by the density difference between burnt and fresh mixtures. In the present study we analyze the effect of reactivity on the Richtmyer-Meshkov instability with particular emphasis on combustion lengths that typify the scaling between perturbation growth and induction. The results of the present linear analysis study show that reactivity changes the perturbation growth rate by developing a pressure gradient at the flame surface. The baroclinic torque based on the density gradient across the flame acts to slow down the instability growth of high wave-number perturbations. A gasdynamic flame representation leads to the definition of a Peclet number representing the scaling between perturbation and thermal diffusion lengths within the flame. Peclet number effects on perturbation growth are observed to be marginal. The gasdynamic model also considers a finite flame Mach number that supports a separation between flame and contact discontinuity. Such a separation destabilizes the interface growth by augmenting the tangential shear.
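
    As background for the growth-rate discussion above, the classical impulsive model for the linear Richtmyer-Meshkov growth of a shocked, sinusoidally perturbed interface is (a standard non-reactive result quoted for orientation, not the reactive result derived in the paper):

```latex
\dot{\eta} \;=\; k\,\Delta u\,A^{+}\,\eta_{0}^{+},\qquad
A^{+}=\frac{\rho_{2}^{+}-\rho_{1}^{+}}{\rho_{2}^{+}+\rho_{1}^{+}},
```

    where k is the perturbation wavenumber, Delta u the velocity jump imparted by the shock, and A+ and eta_0+ the post-shock Atwood number and perturbation amplitude.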

  8. Linear triangle finite element formulation for multigroup neutron transport analysis with anisotropic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lillie, R.A.; Robinson, J.C.

    1976-05-01

    The discrete ordinates method is the most powerful and generally used deterministic method to obtain approximate solutions of the Boltzmann transport equation. A finite element formulation, utilizing a canonical form of the transport equation, is here developed to obtain both integral and pointwise solutions to neutron transport problems. The formulation is based on the use of linear triangles. A general treatment of anisotropic scattering is included by employing discrete ordinates-like approximations. In addition, multigroup source outer iteration techniques are employed to perform group-dependent calculations. The ability of the formulation to reduce substantially ray effects and its ability to perform streaming calculations are demonstrated by analyzing a series of test problems. The anisotropic scattering and multigroup treatments used in the development of the formulation are verified by a number of one-dimensional comparisons. These comparisons also demonstrate the relative accuracy of the formulation in predicting integral parameters. The applicability of the formulation to nonorthogonal planar geometries is demonstrated by analyzing a hexagonal-type lattice. A small, high-leakage reactor model is analyzed to investigate the effects of varying both the spatial mesh and order of angular quadrature. This analysis reveals that these effects are more pronounced in the present formulation than in other conventional formulations. However, the insignificance of these effects is demonstrated by analyzing a realistic reactor configuration. In addition, this final analysis illustrates the importance of incorporating anisotropic scattering into the finite element formulation. 8 tables, 29 figures.
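
    For reference, the multigroup transport equation with a Legendre/spherical-harmonic expansion of the anisotropic scattering source, which such formulations discretize, can be written as (standard notation, not quoted from the report):

```latex
\mathbf{\Omega}\cdot\nabla\psi_{g}(\mathbf{r},\mathbf{\Omega})
+\sigma_{t,g}(\mathbf{r})\,\psi_{g}(\mathbf{r},\mathbf{\Omega})
=\sum_{g'}\sum_{l=0}^{L}\frac{2l+1}{4\pi}\,
\sigma_{s,l}^{\,g'\rightarrow g}(\mathbf{r})
\sum_{m=-l}^{l}Y_{lm}(\mathbf{\Omega})\,\phi_{g',lm}(\mathbf{r})
+q_{g}(\mathbf{r},\mathbf{\Omega}),
\qquad
\phi_{g',lm}(\mathbf{r})=\int_{4\pi}Y_{lm}^{*}(\mathbf{\Omega}')\,
\psi_{g'}(\mathbf{r},\mathbf{\Omega}')\,d\Omega' .
```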

  9. Linear triangle finite element formulation for multigroup neutron transport analysis with anisotropic scattering

    International Nuclear Information System (INIS)

    Lillie, R.A.; Robinson, J.C.

    1976-05-01

    The discrete ordinates method is the most powerful and generally used deterministic method to obtain approximate solutions of the Boltzmann transport equation. A finite element formulation, utilizing a canonical form of the transport equation, is here developed to obtain both integral and pointwise solutions to neutron transport problems. The formulation is based on the use of linear triangles. A general treatment of anisotropic scattering is included by employing discrete ordinates-like approximations. In addition, multigroup source outer iteration techniques are employed to perform group-dependent calculations. The ability of the formulation to reduce substantially ray effects and its ability to perform streaming calculations are demonstrated by analyzing a series of test problems. The anisotropic scattering and multigroup treatments used in the development of the formulation are verified by a number of one-dimensional comparisons. These comparisons also demonstrate the relative accuracy of the formulation in predicting integral parameters. The applicability of the formulation to nonorthogonal planar geometries is demonstrated by analyzing a hexagonal-type lattice. A small, high-leakage reactor model is analyzed to investigate the effects of varying both the spatial mesh and order of angular quadrature. This analysis reveals that these effects are more pronounced in the present formulation than in other conventional formulations. However, the insignificance of these effects is demonstrated by analyzing a realistic reactor configuration. In addition, this final analysis illustrates the importance of incorporating anisotropic scattering into the finite element formulation. 8 tables, 29 figures

  10. Optimization and Analysis of a U-Shaped Linear Piezoelectric Ultrasonic Motor Using Longitudinal Transducers.

    Science.gov (United States)

    Yu, Hongpeng; Quan, Qiquan; Tian, Xinqi; Li, He

    2018-03-07

    A novel U-shaped piezoelectric ultrasonic motor, focused mainly on miniaturization and high power density, was proposed, fabricated, and tested in this work. The longitudinal vibrations of the transducers were excited to form elliptical movements at the driving feet. The finite element method (FEM) was used for design and analysis. The resonance frequencies of the selected vibration modes were tuned to be very close to each other with modal analysis, and the movement trajectories of the driving feet were obtained with transient simulation. A prototype was used to measure the vibration modes and the mechanical output abilities and to evaluate the proposed motor further. The maximum output speed was measured to be 416 mm/s, the maximum thrust force was 21 N, and the maximum output power was 5.453 W at a frequency of 29.52 kHz and a voltage of 100 Vrms. The maximum output power density of the prototype reached 7.59 W/kg, which was even greater than that of a previous similar motor driven at 200 Vrms. The proposed motor shows great potential for linear driving with large thrust force and high power density.

  11. Two-dimensional statistical linear discriminant analysis for real-time robust vehicle-type recognition

    Science.gov (United States)

    Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.

    2007-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are based solely on Automatic License Plate Recognition (ALPR) systems. Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We have shown that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm outperforms the PCA based approach.
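
    To make the 2D-LDA idea concrete, the sketch below computes a right-sided 2D-LDA projection directly from image matrices (between- and within-class image scatter, then leading eigenvectors). It is a generic sketch of the technique under assumed names, not necessarily the exact variant or preprocessing used by the authors.

```python
import numpy as np

def two_d_lda(images, labels, n_components=5):
    """Right-sided 2D-LDA: images is (N, h, w), labels is (N,) class ids.

    Returns a (w, n_components) projection matrix W; features of image A are A @ W.
    """
    images = np.asarray(images, dtype=float)
    labels = np.asarray(labels)
    overall_mean = images.mean(axis=0)
    w = images.shape[2]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in np.unique(labels):
        Ac = images[labels == c]
        Mc = Ac.mean(axis=0)
        d = Mc - overall_mean
        Sb += len(Ac) * d.T @ d            # between-class image scatter
        for A in Ac:
            e = A - Mc
            Sw += e.T @ e                  # within-class image scatter
    # leading eigenvectors of pinv(Sw) @ Sb give the projection directions
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real
```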

  12. Kovasznay modes in the linear stability analysis of self-similar ablation flows

    International Nuclear Information System (INIS)

    Lombard, V.

    2008-12-01

    Exact self-similar solutions of gas dynamics equations with nonlinear heat conduction for semi-infinite slabs of perfect gases are used for studying the stability of ablative flows in inertial confinement fusion, when a shock wave propagates in front of a thermal front. Both the similarity solutions and their linear perturbations are numerically computed with a dynamical multi-domain Chebyshev pseudo-spectral method. Laser-imprint results, showing that maximum amplification occurs for a laser-intensity modulation of zero transverse wavenumber, have thus been obtained (Abeguile et al. (2006); Clarisse et al. (2008)). Here we pursue this approach by proceeding for the first time to an analysis of perturbations in terms of Kovasznay modes. Based on the analysis of two compressible and incompressible flows, evolution equations of vorticity, acoustic and entropy modes are proposed for each flow region, and mode couplings are assessed. For short times, perturbations are transferred from the external surface to the ablation front by diffusion and propagate as acoustic waves up to the shock wave. For long times, the shock region is governed by the free propagation of acoustic waves. A study of perturbations and associated sources allows us to identify strong mode couplings in the conduction and ablation regions. Moreover, the maximum instability depends on compressibility. Finally, a comparison with experiments of flows subjected to initial surface defects is initiated. (author)
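
    For reference, Kovasznay's classical decomposition of small disturbances of a uniform compressible flow, which underlies the mode analysis above, separates a perturbation into three modes that are independent at first order (standard form, not the author's exact equations):

```latex
\text{vorticity mode: } \nabla\times\mathbf{u}'\neq 0,\;\; p'=\rho'=s'=0;\qquad
\text{entropy mode: } s'\neq 0,\;\; p'=0,\;\; \rho'=-\frac{\rho\,s'}{c_{p}};\qquad
\text{acoustic mode: } \nabla\times\mathbf{u}'=0,\;\; p'=c^{2}\rho'.
```

    In a nonuniform, reacting region such as the conduction and ablation zones, these modes no longer evolve independently; the mode couplings identified through the source analysis above quantify that interaction.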

  13. Diffuse Optical Tomography for Brain Imaging: Continuous Wave Instrumentation and Linear Analysis Methods

    Science.gov (United States)

    Giacometti, Paolo; Diamond, Solomon G.

    Diffuse optical tomography (DOT) is a functional brain imaging technique that measures cerebral blood oxygenation and blood volume changes. This technique is particularly useful in human neuroimaging measurements because of the coupling between neural and hemodynamic activity in the brain. DOT is a multichannel imaging extension of near-infrared spectroscopy (NIRS). NIRS uses laser sources and light detectors on the scalp to obtain noninvasive hemodynamic measurements from spectroscopic analysis of the remitted light. This review explains how NIRS data analysis is performed using a combination of the modified Beer-Lambert law (MBLL) and the diffusion approximation to the radiative transport equation (RTE). Laser diodes, photodiode detectors, and optical terminals that contact the scalp are the main components in most NIRS systems. Placing multiple sources and detectors over the surface of the scalp allows for tomographic reconstructions that extend the individual measurements of NIRS into DOT. Mathematically arranging the DOT measurements into a linear system of equations that can be inverted provides a way to obtain tomographic reconstructions of hemodynamics in the brain.
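
    As a concrete form of the MBLL step mentioned above (standard two-chromophore expression with assumed notation, not quoted from the review), the change in optical density at wavelength lambda over a source-detector distance d with differential pathlength factor DPF is:

```latex
\Delta OD(\lambda)=\bigl(\varepsilon_{\mathrm{HbO}}(\lambda)\,\Delta[\mathrm{HbO}]
+\varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}]\bigr)\,d\,\mathrm{DPF}(\lambda),
```

    so measurements at two or more wavelengths form a small linear system for Delta[HbO] and Delta[HbR], and stacking all source-detector channels with a light-propagation model from the diffusion approximation yields the larger linear system that DOT inverts to form images.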

  14. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR_C) and (4) GREIT with individual thorax geometry (GR_T). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)

  15. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images with one reconstruction algorithm are also valid for other reconstruction algorithms.
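
    As an example of one of the indices examined above, the sketch below computes the global inhomogeneity (GI) index from a tidal EIT image and a lung-region mask, following the commonly used definition (sum of absolute deviations from the median lung pixel value, normalised by the total tidal impedance change); variable names are assumptions.

```python
import numpy as np

def global_inhomogeneity_index(tidal_image, lung_mask):
    """GI index of a tidal EIT image restricted to the lung region."""
    dz = np.asarray(tidal_image, dtype=float)[np.asarray(lung_mask, dtype=bool)]
    return np.abs(dz - np.median(dz)).sum() / dz.sum()
```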

  16. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the "Biosphere Model Report" (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the "Biosphere Dose Conversion Factor Importance and Sensitivity Analysis". The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors for calculating inhalation dose during volcanic eruption.

  17. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group-level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component pruning using automatic relevance determination (ARD), and subject-specific heteroscedastic spatial noise modeling. For task-based and resting-state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components, and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...
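
    For orientation, a generic probabilistic sparse factor analysis generative model with ARD-style pruning can be written as follows (schematic notation assumed for illustration, not the paper's exact group-level formulation):

```latex
\mathbf{x}_{n}= \mathbf{A}\,\mathbf{s}_{n}+\boldsymbol{\varepsilon}_{n},\qquad
\mathbf{s}_{n}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\qquad
\boldsymbol{\varepsilon}_{n}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Psi}),\qquad
A_{vd}\sim\mathcal{N}\!\bigl(0,\alpha_{d}^{-1}\bigr),\qquad
\alpha_{d}\sim\mathrm{Gamma}(a_{0},b_{0}),
```

    where the ARD precisions alpha_d drive unused components toward zero (pruning) and a heteroscedastic noise covariance Psi captures voxel-specific (and, in the group model, subject-specific) noise; inference is carried out variationally.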

  18. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the "Biosphere Model Report" (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the "Biosphere Dose Conversion Factor Importance and Sensitivity Analysis". The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors for calculating inhalation dose during volcanic eruption.

  19. Mehar Methods for Fuzzy Optimal Solution and Sensitivity Analysis of Fuzzy Linear Programming with Symmetric Trapezoidal Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Sukhpreet Kaur Sidhu

    2014-01-01

    Full Text Available The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems, in which the coefficients of the constraints are represented by real numbers and all the other parameters as well as the variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out, and to resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing methods, is proposed to deal with the sensitivity analysis of the same type of linear programming problems.

  20. Idiopathic linear IgA bullous dermatosis: prognostic factors based on a case series of 72 adults.

    Science.gov (United States)

    Gottlieb, J; Ingen-Housz-Oro, S; Alexandre, M; Grootenboer-Mignot, S; Aucouturier, F; Sbidian, E; Tancrede, E; Schneider, P; Regnier, E; Picard-Dahan, C; Begon, E; Pauwels, C; Cury, K; Hüe, S; Bernardeschi, C; Ortonne, N; Caux, F; Wolkenstein, P; Chosidow, O; Prost-Squarcioni, C

    2017-07-01

    Linear IgA bullous dermatosis (LABD) is a clinically and immunologically heterogeneous, subepidermal, autoimmune bullous disease (AIBD), for which the long-term evolution is poorly described. To investigate the clinical and immunological characteristics, follow-up and prognostic factors of adult idiopathic LABD. This retrospective study, conducted in our AIBD referral centre, included adults, diagnosed between 1995 and 2012, with idiopathic LABD, defined as pure or predominant IgA deposits by direct immunofluorescence. Clinical, histological and immunological findings were collected from charts. Standard histology was systematically reviewed, and indirect immunofluorescence (IIF) on salt-split skin (SSS) and immunoblots (IBs) on amniotic membrane extracts using anti-IgA secondary antibodies were performed, when biopsies and sera obtained at diagnosis were available. Prognostic factors for complete remission (CR) were identified using univariate and multivariate analyses. Of the 72 patients included (median age 54 years), 60% had mucous membrane (MM) involvement. IgA IIF on SSS was positive for 21 of 35 patients tested; 15 had epidermal and dermal labellings. Immunoelectron microscopy performed on the biopsies of 31 patients labelled lamina lucida (LL) (26%), lamina densa (23%), anchoring-fibril zone (AFz) (19%) and LL+AFz (23%). Of the 34 IgA IBs, 22 were positive, mostly for LAD-1/LABD97 (44%) and full-length BP180 (33%). The median follow-up was 39 months. Overall, 24 patients (36%) achieved sustained CR, 19 (29%) relapsed and 35% had chronic disease. CR was significantly associated with age > 70 years or no MM involvement. No prognostic immunological factor was identified. Patients with LABD who are < 70 years old and have MM involvement are at risk for chronic evolution. © 2017 British Association of Dermatologists.