Model correction factor method for system analysis
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Johannesen, Johannes M.
2000-01-01
The Model Correction Factor Method (MCFM) is an intelligent response surface method based on simplified modeling. MCFM is aimed at reliability analysis in the case of a limit state defined by an elaborate model. Herein it is demonstrated that the method is applicable for elaborate limit state surfaces on which several locally most central points exist without there being a simple geometric definition of the corresponding failure modes, such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model … surface than existing in the idealized model …
Unascertained Factor Method of Dynamic Characteristic Analysis for Antenna Structures
Institute of Scientific and Technical Information of China (English)
ZHU Zeng-qing; LIANG Zhen-tao; CHEN Jian-jun
2008-01-01
The dynamic characteristic analysis model of antenna structures is built, in which the structural physical parameters and geometrical dimensions are all considered as unascertained variables, and a structural dynamic characteristic analysis method based on the unascertained factor method is given. The computational expression of the structural characteristic is developed from the mathematical expression of the unascertained factor and the principles of unascertained rational number arithmetic. An example is given in which the possible values and confidence degrees of the unascertained structural characteristics are obtained. The calculated results show that the method is feasible and effective.
A Comparison of Imputation Methods for Bayesian Factor Analysis Models
Merkle, Edgar C.
2011-01-01
Imputation methods are popular for the handling of missing data in psychology. The methods generally consist of predicting missing data based on observed data, yielding a complete data set that is amenable to standard statistical analyses. In the context of Bayesian factor analysis, this article compares imputation under an unrestricted…
Institute of Scientific and Technical Information of China (English)
MA Juan; CHEN Jian-jun; XU Ya-lan; JIANG Tao
2006-01-01
A new fuzzy stochastic finite element method based on the fuzzy factor method and the random factor method is given, and an analysis of the structural dynamic characteristics of fuzzy stochastic truss structures is presented. Considering the fuzzy randomness of the structural physical parameters and geometric dimensions simultaneously, the structural stiffness and mass matrices are constructed based on the fuzzy factor method and the random factor method; from the Rayleigh quotient of structural vibration, the structural fuzzy random dynamic characteristics are obtained by means of interval arithmetic; the fuzzy numerical characteristics of the dynamic characteristics are then derived using the random variable's moment function method and the algebra synthesis method. Two examples are used to illustrate the validity and rationality of the given method. The advantage of this method is that the effect of the fuzzy randomness of one structural parameter on the fuzzy randomness of the dynamic characteristics can be reflected expediently and objectively.
Analysis of Social Cohesion in Health Data by Factor Analysis Method: The Ghanaian Perspective
Saeed, Bashiru I. I.; Xicang, Zhao; Musah, A. A. I.; Abdul-Aziz, A. R.; Yawson, Alfred; Karim, Azumah
2013-01-01
We investigated the overall social cohesion of Ghanaians. In this study, we considered the paramount interest of the involvement of Ghanaians in their communities, their views of other people and institutions, and their level of interest in both local and national politics. The factor analysis method was employed for analysis using R…
Directory of Open Access Journals (Sweden)
H Kamani
2016-03-01
Full Text Available Background and Objectives: The quantity of trace metals in wet precipitation can illustrate the environmental pollution of different urban areas. Up to now, there has been no study on the chemistry of wet precipitation in Tehran. The objectives of this study are measurement of heavy metal concentrations and identification of the main factors affecting heavy metal concentrations in wet precipitation using the factor analysis method. Materials and Methods: This was a cross-sectional study in which measurements of heavy metals were performed on 53 wet precipitation samples collected from a central site of Tehran City, capital of Iran. The samples were collected from November to May in 2010, 2011 and 2012 on the roof of the students' dormitory building of Tehran University of Medical Sciences, and the concentration of heavy metals in each sample was measured with ICP-MS. Results: pH ranged from 4.2 to 7.1 with a mean value of 5.1, indicating an acidic range. Enrichment factor (EF) calculations revealed that samples were not enriched with Fe and Cr but were enriched with Zn, Cd, Ni, Pb and Cu. Factor analysis with varimax normalized rotation showed that Al, Fe and Cr originate from a crustal source and that Zn, Cd, Ni, Pb and Cu originate from anthropogenic sources. Conclusion: The EF values and acidic pH indicate that Tehran is strongly influenced by anthropogenic activities. The large number of vehicles and the industrial activity in the city are undoubtedly responsible for the emission of a wide range of pollutants.
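The enrichment factor screening described in this abstract is straightforward to reproduce. A minimal sketch, using Fe as the crustal reference element; the sample concentrations and crustal abundances below are illustrative placeholders, not the study's values:

```python
# EF = (metal/ref)_sample / (metal/ref)_crust, with Fe as the crustal
# reference. EF >> 1 (often a threshold near 10) is commonly read as
# anthropogenic enrichment; values near 1 suggest a crustal origin.
def enrichment_factor(c_metal, c_ref, crust_metal, crust_ref):
    return (c_metal / c_ref) / (crust_metal / crust_ref)

# Hypothetical sample concentrations (ug/L) and crustal abundances (mg/kg)
sample = {"Fe": 120.0, "Zn": 45.0, "Cr": 0.8}
crust = {"Fe": 56300.0, "Zn": 70.0, "Cr": 102.0}

for metal in ("Zn", "Cr"):
    ef = enrichment_factor(sample[metal], sample["Fe"], crust[metal], crust["Fe"])
    tag = "anthropogenic" if ef > 10 else "crustal"
    print(f"{metal}: EF = {ef:.1f} ({tag})")
```

With these made-up numbers, Zn comes out strongly enriched while Cr stays near crustal levels, mirroring the pattern the abstract reports.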
Directory of Open Access Journals (Sweden)
Bruce Weaver
2014-09-01
Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
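The EM-covariances idea can be illustrated outside SPSS. Below is a minimal numpy sketch, assuming multivariate normality, that estimates EM means and covariances from an incomplete data matrix and converts them to the EM correlation matrix that would feed a factor or reliability analysis; it is not the SPSS macros described in the article:

```python
import numpy as np

def em_mean_cov(X, n_iter=100):
    """EM estimates of the mean vector and covariance matrix of an incomplete
    data matrix X (np.nan marks missing), assuming multivariate normality."""
    X = np.asarray(X, float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        sum_x, sum_xx = np.zeros(p), np.zeros((p, p))
        for row in X:
            m = np.isnan(row)
            x, cov_add = row.copy(), np.zeros((p, p))
            if m.any():
                o = ~m
                S_oo = sigma[np.ix_(o, o)]
                S_mo = sigma[np.ix_(m, o)]
                # E-step: conditional mean and covariance of the missing block
                x[m] = mu[m] + S_mo @ np.linalg.solve(S_oo, row[o] - mu[o])
                cov_add[np.ix_(m, m)] = (sigma[np.ix_(m, m)]
                                         - S_mo @ np.linalg.solve(S_oo, S_mo.T))
            sum_x += x
            sum_xx += np.outer(x, x) + cov_add
        # M-step: update mean and covariance from completed sufficient stats
        mu = sum_x / n
        sigma = sum_xx / n - np.outer(mu, mu)
    return mu, sigma

# Demo: bivariate normal data with values knocked out completely at random
rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=500)
X[rng.random(X.shape) < 0.2] = np.nan
X = X[~np.isnan(X).all(axis=1)]          # drop fully missing rows
mu, sigma = em_mean_cov(X)
d = np.sqrt(np.diag(sigma))
R = sigma / np.outer(d, d)               # EM correlation matrix
print(np.round(R, 2))
```

The recovered off-diagonal correlation lands near the true 0.6 despite roughly 20% missingness, which is the point of feeding EM correlations (rather than pairwise-deleted ones) into a factor analysis.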
A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods
Ritter, Nicola L.
2012-01-01
Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…
Analysis of cultural development of Isfahan city Using Factor analysis method
Directory of Open Access Journals (Sweden)
J.Mohammadi
2013-01-01
Full Text Available Extended abstract
1 – Introduction
Cultural spaces are considered one of the main factors for development. Cultural development is a qualitative and valuable process; to assess it, quantitative indicators in cultural planning are used to pursue development objectives in the pattern of goods and services. The aim of the study is to determine and analyze the cultural development level and regional inequality of the different districts of Isfahan using the factor analysis technique. The statistical population of the study is the 14 districts of the Isfahan municipality. The dominant approach of this study is quantitative, descriptive and analytical. In this study, 35 indices have been summarized by the factor analysis method, reduced to 5 significant combined factors, and delivered.
2 – Theoretical bases
The most important objectives of spatial planning, considering the limitation of resources, are the optimum distribution of facilities and services among the different locations in which people live. To do this, there is a need to identify different locations in terms of their facilities and services, so that developed locations are specified and planners can work toward spatial equilibrium and reducing the privilege gap between districts. The present study has been conducted to reach an equal development in Isfahan's urban districts by identifying the situation and manner of distribution of the selected cultural development indices in the different districts.
3 – Discussion
The cultural development of societies is evaluated by considering the changes and improvement of its indices and measured in quantitative frames. Cultural development indices are the most important tools for cultural planning in a particular district of a society. In this study, cultural development indices have been used to determine the levels of the districts. By using the factor analysis model, the share of influential factors in the cultural …
An Improvement of the Harman-Fukuda-Method for the Minres Solution in Factor Analysis.
Hafner, Robert
1981-01-01
The method proposed by Harman and Fukuda to treat the so-called Heywood case in the minres method in factor analysis (i.e., the case where the resulting communalities are greater than one) involves the frequent solution of eigenvalue problems. A simple method to treat this problem is presented. (Author/JKS)
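For readers unfamiliar with the terminology, a Heywood case can be detected directly from a loading matrix. The sketch below is a generic illustration of the diagnosis plus a crude rescaling fix-up; it is not the Harman–Fukuda procedure, nor the simple method the paper proposes:

```python
import numpy as np

def communalities(loadings):
    """Communality of each variable: row sum of squared factor loadings."""
    return (np.asarray(loadings, float) ** 2).sum(axis=1)

def clamp_heywood(loadings, bound=1.0):
    """Rescale any variable whose communality exceeds `bound` back onto the
    admissible boundary -- a crude illustrative fix-up, nothing more."""
    L = np.asarray(loadings, float).copy()
    h2 = communalities(L)
    bad = h2 > bound
    L[bad] *= np.sqrt(bound / h2[bad])[:, None]
    return L

L = np.array([[0.9, 0.6],    # communality 1.17 -> Heywood case
              [0.7, 0.3]])   # communality 0.58 -> admissible
print(communalities(L), communalities(clamp_heywood(L)))
```

A proper treatment re-optimizes the minres criterion under the boundary constraint rather than simply rescaling, which is where the eigenvalue problems mentioned in the abstract enter.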
Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues
Energy Technology Data Exchange (ETDEWEB)
Ronald Laurids Boring
2010-11-01
This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.
SEISMIC RANDOM VIBRATION ANALYSIS OF STOCHASTIC STRUCTURES USING RANDOM FACTOR METHOD
Institute of Scientific and Technical Information of China (English)
[No author listed]
2006-01-01
Seismic random vibration analysis of stochastic truss structures is presented. A new method, called the random factor method, is used for dynamic analysis of structures with uncertain parameters, due to variability in their material properties and geometry. Using the random factor method, the natural frequencies and mode shapes of a stochastic structure can each be described by the product of two parts: the random factors of the structural parameters with uncertainty, and the deterministic values of the natural frequencies and mode shapes obtained by conventional finite element analysis. The stochastic truss structure is subjected to stationary or non-stationary random earthquake excitation. Computational expressions for the mean and standard deviation of the mean square displacement and mean square stress are developed by means of the random variable's functional moment method and the algebra synthesis method. An antenna and a truss bridge are used as practical engineering examples to illustrate the application of the random factor method in the seismic response analysis of random structures under stationary or non-stationary random earthquake excitation.
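The separation idea behind the random factor method — a natural frequency expressed as a deterministic FE value times a ratio of random factors — can be sketched by Monte Carlo. The deterministic frequency and the lognormal factor distributions below are illustrative assumptions; the paper itself derives closed-form moments rather than sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

# Deterministic natural frequency from a conventional FE analysis (assumed)
omega_det = 12.0  # rad/s

# Random factors for stiffness and mass (mean ~1, small coefficient of
# variation); lognormal is an illustrative choice, not the paper's.
n = 100_000
rk = rng.lognormal(mean=0.0, sigma=0.05, size=n)  # stiffness factor
rm = rng.lognormal(mean=0.0, sigma=0.03, size=n)  # mass factor

# Random factor separation: omega = sqrt(rk / rm) * omega_det, since the
# eigenvalue of (K, M) scales as k/m for uniformly scaled matrices.
omega = np.sqrt(rk / rm) * omega_det

print(f"mean(omega) ~ {omega.mean():.2f} rad/s, std(omega) ~ {omega.std():.2f}")
```

The mean stays essentially at the deterministic value while the scatter of the factors propagates directly into the frequency's standard deviation, which is what makes the deterministic FE run reusable across the whole uncertainty analysis.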
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…
An Introduction To Multi-Battery Factor Analysis: Overcoming Method Artefacts.
Directory of Open Access Journals (Sweden)
Gavin T L Brown
2007-05-01
Full Text Available Examination of participants' responses to factor or scale scores provides useful insights, but analysis of such scores from multiple measures or batteries is sometimes confounded by methodological artefacts. This paper provides a short primer on the use of multi-trait, multi-method (MTMM) correlational analysis and multi-battery factor analysis (MBFA). The principles of both procedures are outlined and a case study is provided from the author's research into 233 teachers' responses to 22 scale scores drawn from five batteries. The batteries were independently developed measures of teachers' thinking about the nature and purpose of assessment, teaching, learning, curriculum, and teacher efficacy. Detailed procedures for using Cudeck's (1982) MBFACT software are provided. Both MTMM and MBFA analyses identified an appropriate common trait across the five batteries, whereas joint factor analysis of the 22 scale scores confounded the common trait with a battery or method artefact. When researchers make use of multiple measures, they ought to take into account the impact of method artefacts when analyzing scale scores from multiple batteries. The multi-battery factor analysis procedure and MBFACT software provide a robust procedure for exploring how scales inter-relate.
Energy Technology Data Exchange (ETDEWEB)
Kim, Ar Ryum; Jang, Inseok; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)]; Park, Jinkyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]; Kim, Jong Hyun [KEPCO, Ulsan (Korea, Republic of)]
2015-05-15
The purpose of HRA implementation is 1) to achieve the human factors engineering (HFE) design goal of providing operator interfaces that will minimize personnel errors, and 2) to conduct an integrated activity to support probabilistic risk assessment (PRA). For these purposes, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM), and so on. In performing HRA, the conditions that influence human performance are represented via several context factors called performance shaping factors (PSFs). PSFs are aspects of the human's individual characteristics, environment, organization, or task that specifically decrement or improve human performance, thus respectively increasing or decreasing the likelihood of human errors. Most HRA methods evaluate the weightings of PSFs by expert judgment, and explicit guidance for evaluating the weightings is not provided. It has been widely recognized that the performance of the human operator is one of the critical factors determining the safe operation of NPPs. HRA methods have been developed to identify the possibility and mechanism of human errors. In performing HRA, the effect of PSFs, which may increase or decrease human error, should be investigated. However, the effect of PSFs has so far been estimated by expert judgment. Accordingly, in order to estimate the effect of PSFs objectively, a quantitative framework to estimate PSFs by using PSF profiles is introduced in this paper.
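As an example of how one of the cited methods turns PSF weightings into a human error probability (HEP), the sketch below follows the SPAR-H scheme of multiplying a nominal HEP by PSF multipliers, with an adjustment applied when three or more PSFs are negative so the result stays below 1. The multiplier values are illustrative, not taken from the paper:

```python
def spar_h_hep(nhep, psf_multipliers):
    """SPAR-H style HEP: nominal HEP times the product of PSF multipliers,
    with the adjustment factor applied when three or more PSFs are negative
    (multiplier > 1) to keep the probability bounded by 1."""
    composite = 1.0
    negative = 0
    for m in psf_multipliers:
        composite *= m
        if m > 1:
            negative += 1
    if negative >= 3:
        return nhep * composite / (nhep * (composite - 1) + 1)
    return min(nhep * composite, 1.0)

# Illustrative diagnosis task: nominal HEP 0.01 and three degraded PSFs
# (e.g. high stress, poor HSI, barely adequate time -- hypothetical choices)
hep = spar_h_hep(0.01, [10, 2, 5])
print(f"HEP = {hep:.3f}")
```

Without the adjustment, the raw product 0.01 × 100 would already sit at the probability ceiling; the adjustment formula is what keeps stacked negative PSFs from producing HEPs above 1.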
Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain.
Frolov, Alexander A; Húsek, Dušan; Polyakov, Pavel Yu
2016-03-01
A common task in large data set analysis is searching for an appropriate data representation in a space of fewer dimensions. One of the most efficient methods to solve this task is factor analysis. In this paper, we compare seven methods for Boolean factor analysis (BFA) in solving the so-called bars problem (BP), which is a BFA benchmark. The performance of the methods is evaluated by means of information gain. Study of the results obtained in solving BP at different levels of complexity has allowed us to reveal strengths and weaknesses of these methods. It is shown that the Likelihood maximization Attractor Neural Network with Increasing Activity (LANNIA) is the most efficient BFA method in solving BP in many cases. Efficacy of the LANNIA method is also shown when applied to real data from the Kyoto Encyclopedia of Genes and Genomes database, which contains full genome sequencing for 1368 organisms, and to the text data set R52 (from Reuters 21578), typically used for label categorization.
Directory of Open Access Journals (Sweden)
Jayakishan Meher
2012-08-01
Full Text Available Correlating gene expression profiles to disease or to different developmental stages of a cell through microarray data and its analysis has been of great interest in molecular biology. As microarray data have thousands of genes and very few samples, efficient feature extraction and computational method development are necessary for the analysis. In this paper we propose an effective feature extraction method based on factor analysis (FA) with the discrete wavelet transform (DWT) to detect informative genes. A radial basis function neural network (RBFNN) classifier is used to efficiently predict the sample class, with lower complexity than other classifiers. The potential of the proposed approach is evaluated through an exhaustive study on many benchmark datasets. The experimental results show that the proposed method can be a useful approach for cancer classification.
Classification of ECG signals using LDA with factor analysis method as feature reduction technique.
Kaur, Manpreet; Arora, A S
2012-11-01
The analysis of the ECG signal, especially the QRS complex as the most characteristic wave in the ECG, is a widely accepted approach to study and to classify cardiac dysfunctions. In this paper, wavelet coefficients calculated for the QRS complex are first taken as features. Next, factor analysis procedures without rotation and with orthogonal rotation (varimax, equimax and quartimax) are used for feature reduction. The procedure uses the principal component method to estimate component loadings. Classification is then done with an LDA classifier. The MIT-BIH arrhythmia database is used and five types of beats (normal, PVC, paced, LBBB and RBBB) are considered for analysis. Accuracy, sensitivity and positive predictivity are the performance parameters used for comparing the feature reduction techniques. Results demonstrate that the equimax rotation method yields the maximum average accuracy of 99.056% on unknown data sets among the methods used.
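Of the rotations compared, varimax is easy to sketch directly. The function below implements Kaiser's varimax criterion via the standard SVD-based iteration (setting gamma = 0 instead gives quartimax); the loading matrix is a made-up example, not ECG-derived loadings:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonal rotation maximizing Kaiser's varimax criterion via the
    standard SVD-based iteration. gamma=1 gives varimax, gamma=0 quartimax."""
    L = np.asarray(loadings, float)
    p, k = L.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # Gradient of the orthomax criterion with respect to the rotation
        G = L.T @ (LR ** 3 - (gamma / p) * LR @ np.diag((LR ** 2).sum(axis=0)))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt                      # projection onto orthogonal matrices
        if s.sum() - crit < tol:
            break
        crit = s.sum()
    return L @ R

L = np.array([[0.7, 0.4], [0.8, 0.5], [0.4, -0.6], [0.5, -0.7]])
rot = varimax(L)
print(np.round(rot, 2))
```

Because the rotation is orthogonal, each variable's communality (row sum of squared loadings) is unchanged; only the distribution of loading across the factors simplifies.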
Institute of Scientific and Technical Information of China (English)
Liang GUO; Ying ZHAO; Peng WANG
2012-01-01
In this paper, an artificial neural network model was built to predict the permanganate-index Chemical Oxygen Demand (CODMn) in the Songhua River. To enhance the prediction accuracy, principal factors were determined through analysis of the weight relation between the influencing factors and the forecasting object using the cluster analysis method, which optimized the topological structure of the prediction model's input items for the artificial neural network. Comparison between the artificial neural network predictions and the CODMn measurements showed that using the principal factors in the water quality prediction model improves its forecasting skill significantly. This methodology is also applicable to water quality prediction targets of other water bodies, and it is valuable for theoretical study and practical application.
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
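The underlying constrained alternating least squares scheme can be sketched in a few lines: alternate unconstrained least-squares solves for the two factors and impose nonnegativity by clipping. This is the generic constrained-ALS baseline only; the bias-offsetting scheme of the invention is not reproduced here:

```python
import numpy as np

def constrained_als_nmf(D, k, n_iter=200, seed=0):
    """Alternating least squares factorization D ~ W @ H, with nonnegativity
    imposed by clipping after each unconstrained solve. Generic constrained
    ALS for illustration; not the patented biased variant."""
    rng = np.random.default_rng(seed)
    W = rng.random((D.shape[0], k))
    H = None
    for _ in range(n_iter):
        # Solve for H with W fixed, then for W with H fixed; clip to >= 0
        H = np.clip(np.linalg.lstsq(W, D, rcond=None)[0], 0.0, None)
        W = np.clip(np.linalg.lstsq(H.T, D.T, rcond=None)[0].T, 0.0, None)
    return W, H

rng = np.random.default_rng(5)
D = rng.random((30, 4)) @ rng.random((4, 20))   # exactly rank-4, nonnegative
W, H = constrained_als_nmf(D, k=4)
rel_err = np.linalg.norm(D - W @ H) / np.linalg.norm(D)
print(f"relative reconstruction error: {rel_err:.2e}")
```

The clipping step is exactly the kind of constraint the abstract describes as introducing mathematical bias: when components are collinear, many clipped solutions fit the data nearly equally well, which is the ambiguity the biased-ALS method is designed to resolve.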
Gorsuch, Richard L
2013-01-01
Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.
Markle, Gail
2017-01-01
Undergraduate social science research methods courses tend to have higher than average rates of failure and withdrawal. Lack of success in these courses impedes students' progression through their degree programs and negatively impacts institutional retention and graduation rates. Grounded in adult learning theory, this mixed methods study…
Sousa, L B; Hamawaki, O T; Nogueira, A P O; Batista, R O; Oliveira, V M; Hamawaki, R L
2015-10-19
In the final phases of new soybean cultivar development, lines are cultivated in several locations across multiple seasons with the intention of identifying and selecting superior genotypes for quantitative traits. In this context, this study aimed to study the genotype-by-environment interaction for the trait grain yield (kg/ha), and to evaluate the adaptability and stability of early-cycle soybean genotypes using the additive main effects and multiplicative interaction (AMMI) analysis, genotype main effects and genotype x environment interaction (GGE) biplot, and factor analysis methods. Additionally, the efficiency of these methods was compared. The experiments were carried out in five cities in the State of Mato Grosso: Alto Taquari, Lucas do Rio Verde, Sinop, Querência, and Rondonópolis, in the 2011/2012 and 2012/2013 seasons. Twenty-seven early-cycle soybean genotypes were evaluated, consisting of 22 lines developed by Universidade Federal de Uberlândia (UFU) soybean breeding program, and five controls: UFUS Carajás, MSOY 6101, MSOY 7211, UFUS Guarani, and Riqueza. Significant and complex genotype-by-environment interactions were observed. The AMMI model presented greater efficiency by retaining most of the variation in the first two main components (61.46%), followed by the GGE biplot model (57.90%), and factor analysis (54.12%). Environmental clustering among the methodologies was similar, and was composed of one environmental group from one location but from different seasons. Genotype G5 presented an elevated grain yield, and high adaptability and stability as determined by the AMMI, factor analysis, and GGE biplot methodologies.
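The AMMI computation itself — additive main effects removed by ANOVA-style centering, followed by SVD of the interaction residuals — can be sketched as follows. The genotype × environment table here is synthetic, not the soybean trial data:

```python
import numpy as np

def ammi_ipc_shares(Y):
    """AMMI sketch: strip additive genotype and environment main effects from
    a genotype x environment mean table, then SVD the interaction residuals.
    Returns the share of G x E interaction captured by each IPC axis."""
    Y = np.asarray(Y, float)
    # Double centering leaves only the multiplicative interaction part
    interaction = (Y - Y.mean(axis=1, keepdims=True)
                     - Y.mean(axis=0, keepdims=True) + Y.mean())
    s = np.linalg.svd(interaction, compute_uv=False)
    return s ** 2 / (s ** 2).sum()

rng = np.random.default_rng(1)
Y = rng.normal(3000.0, 300.0, size=(10, 6))   # hypothetical yields (kg/ha)
share = ammi_ipc_shares(Y)
print(f"IPC1+IPC2 capture {100 * share[:2].sum():.1f}% of the interaction")
```

The "61.46% retained in the first two main components" reported in the abstract is exactly this kind of quantity: the share of interaction sum of squares carried by the first two singular axes.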
Tucker3 method in biometrical research: Analysis of experiments with three factors, using R
de Araújo, Lúcio B.; Oliveira, Manuela M.; Dias, Carlos Tadeu dos S.; Oliveira, Amílcar; Oliveira, Teresa A.
2012-09-01
The present work aims to propose a systematic study and interpretation of a response variable in relation to three factors, using a Joint Table Analysis model, the Tucker3 model, as well as the joint biplot graph. The proposed method seems efficient and suitable for separating the systematic response pattern from the noise pattern contained in a three-way input table, and allows its interpretation. The joint biplot graph facilitates the study and interpretation of the data structure and provides additional information on it. In our application the aim is to identify the combinations of genotypes, locations and years that do or do not contribute to a high yield of bean cultivars.
Bauer, Matthias; Bertagnolli, Helmut
2007-12-13
The transition-metal alkoxide yttrium 2-methoxyethoxide Y(OEtOMe)₃ in solution is studied as a model system of the large class of alkoxide precursors used in the sol-gel process by means of EXAFS spectroscopy. The discussion is focused on the amplitude reduction factor S₀² and the cumulant expansion method. If asymmetry is present in the radial distribution function, the determination of the correct structural model can only be achieved by balancing multiple Gaussian shell fits against a single shell fit with a third cumulant C₃. A method to identify the best model, based on statistical parameters of the EXAFS fit, is proposed and checked with two well-known reference compounds, Y₅O(OiPr)₁₃ and Y(acac)₃·3H₂O, and applied to the structurally unknown solution of Y(OEtOMe)₃ in 2-methoxyethanol. The two references are also used to discuss the transferability of S₀² values determined from reference compounds to unknown samples. A model-free procedure to identify the correct amplitude reduction factor S₀² by making use of fits with different k-weighting schemes is critically investigated. This procedure, which does not require any crystallographic data, is used for the case of Y(OEtOMe)₃ in solution, where significant differences in the amplitude reduction factor of both the oxygen and yttrium shells in comparison to the reference Y₅O(OiPr)₁₃ were found. With such a detailed analysis of EXAFS data, a reliable characterization of Y(OEtOMe)₃ in 2-methoxyethanol by means of EXAFS spectroscopy is possible. The decameric structural unit found in solid Y(OEtOMe)₃ is not preserved; rather, a pentameric framework similar to that in Y₅O(OiPr)₁₃ is formed.
Contributing death factors in very low-birth-weight infants by path method analysis
Directory of Open Access Journals (Sweden)
Morteza Ghojazadeh
2014-01-01
Full Text Available Background: Neonatal deaths account for 40% of deaths under the age of 5 years worldwide. Therefore, efforts to achieve the UN Millennium Development Goal 4 of reducing childhood mortality by two-thirds by 2015 are focused on reducing neonatal deaths in high-mortality countries. The aim of the present study was to determine death factors among very low-birth-weight infants by path analysis. Materials and Methods: In this study, the medical records of 2,135 infants admitted between 2008 and 2010 to the neonatal intensive care unit of the Alzahra Educational-Medical centre (Tabriz, Iran) were analysed by the path method using the statistical software SPSS 18. Results: Variables such as duration of hospitalisation, birth weight and gestational age have a negative direct effect on infant mortality, while gestational blood pressure has a positive direct effect; together these represented 66.5% of the infant mortality variance (F = 1018, P < 0.001). Gestational age also had an indirect positive effect on infant mortality through birth weight, and gestational blood pressure an indirect negative effect through the hospitalisation period. Conclusion: The results indicated that the duration of low-birth-weight infants' hospitalisation is also associated with infant mortality (coefficient -0.7; P < 0.001). This study revealed that among the maternal factors only gestational blood pressure was related to infant mortality.
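Path analysis of this kind amounts to a set of standardized regressions, with indirect effects obtained as products of coefficients along a path. Below is a synthetic sketch of a gestational age → birth weight → mortality chain; the coefficients and data are illustrative inventions, not the study's records:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Synthetic path model (illustrative coefficients, not the study's data):
# gestational age -> birth weight -> mortality score, plus a direct path.
gest_age = rng.normal(0.0, 1.0, n)
birth_wt = 0.8 * gest_age + rng.normal(0.0, 0.6, n)
mortality = -0.7 * birth_wt - 0.2 * gest_age + rng.normal(0.0, 0.5, n)

def standardize(x):
    return (x - x.mean()) / x.std()

def path_coefs(y, *xs):
    """Standardized partial regression (path) coefficients of y on xs."""
    X = np.column_stack([standardize(x) for x in xs])
    return np.linalg.lstsq(X, standardize(y), rcond=None)[0]

a = path_coefs(birth_wt, gest_age)[0]                     # age -> weight
b_wt, b_age = path_coefs(mortality, birth_wt, gest_age)   # direct paths
indirect = a * b_wt                # age -> weight -> mortality, as a product
print(f"direct age effect {b_age:.2f}, indirect via weight {indirect:.2f}")
```

The decomposition into direct and indirect effects is what lets a path study say, as this abstract does, that a variable influences mortality "through" birth weight or the hospitalisation period.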
Visible/near-infrared (Vis/NIR) spectroscopy with a wavelength range between 400 and 2500 nm, combined with the factor analysis method, was tested to predict quality attributes of chicken breast fillets. Quality attributes, including color (L*, a*, b*), pH, and drip loss, were analyzed using factor analysis ...
Factors and methods of analysis and estimation of furniture making enterprises competitiveness
Directory of Open Access Journals (Sweden)
Vitaliy Aleksandrovich Zhigarev
2015-06-01
Full Text Available Objective: to describe the author's methodology for estimating the competitiveness of furniture-making enterprises, with a view to carrying out the economic evaluation of the efficiency of furniture production; the evaluation of the internal component of furniture production efficiency; the identification of factors influencing the efficiency of furniture-making companies; and areas for improving it through improvements in the product range and the production and sales policy of the enterprise. The research subject is modern methods and principles of competitiveness management applicable in a rapidly changing market environment. Methods: in general, the research methodology consists of six stages differentiated by methods, objectives and required outcomes. The first stage of the research was to study the nature of demand within the target market of a furniture-making enterprise. The second stage was to study the expenditures of a furniture-making enterprise for implementing individual production and sales strategies. The third stage was to study competition in the market. The fourth stage was the analysis of the possibilities of a furniture-making enterprise in producing and selling furniture in terms of combinations of factor values. The fifth stage was the re-examination of the demand with a view to its distribution according to the factor space. The final, sixth stage was the processing of the data obtained at the previous stages and carrying out the necessary calculations. Results: in general, the above methodology for the economic evaluation of the efficiency of furniture production, based on the previously developed model, gives the managers of enterprises an algorithm for assessing both the market and firm-level components of furniture production efficiency, allowing the subsequent identification and evaluation of the efficiency factors and the development of measures to improve furniture production and sale efficiency, as well as assortment rationalization and production and sales policy
Energy Technology Data Exchange (ETDEWEB)
Casey, S.M.
1980-06-01
The purpose of this document is to provide an overview of the recommended activities and methods to be employed by a team of human factors engineers during the development of a nuclear waste retrieval system. This system, as it is presently conceptualized, is intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository. This document, and the others in this series, have been developed for the purpose of implementing human factors engineering principles during the design and construction of the retrieval system facilities and equipment. The methodology presented has been structured around a basic systems development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Within each of these phases, the recommended activities of the human engineering team have been stated, along with descriptions of the human factors engineering design techniques applicable to the specific design issues. Explicit examples of how the techniques might be used in the analysis of human tasks and equipment required in the removal of spent fuel canisters have been provided. Only those techniques having possible relevance to the design of the waste retrieval system have been reviewed. This document is intended to provide the framework for integrating human engineering with the rest of the system development effort. The activities and methodologies reviewed in this document have been discussed in the general order in which they will occur, although the time frame (the total duration of the development program in years and months) in which they should be performed has not been discussed.
Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin
2017-04-01
In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.
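The low-rank factorization at the core of this approach can be sketched with a generic NMF implementation. The example below runs on synthetic nonnegative "spectra" and uses scikit-learn's plain NMF; it is not the authors' smoothness-constrained CNMF algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic stand-in for THz-TDS spectra: 3 smooth nonnegative component
# spectra mixed with nonnegative weights, plus a little noise.
freq = np.linspace(0.1, 3.0, 200)
components = np.vstack([np.exp(-(freq - c) ** 2 / 0.1) for c in (0.8, 1.5, 2.3)])
weights = rng.random((40, 3))
X = weights @ components + 0.01 * rng.random((40, 200))

model = NMF(n_components=3, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)   # per-sample abundances
H = model.components_        # recovered component spectra
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative reconstruction error
```

The smoothness constraint discussed in the abstract would add a penalty on the roughness of H to the cost function, which plain NMF does not include.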
Reliability Analysis of a Composite Blade Structure Using the Model Correction Factor Method
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian
2010-01-01
This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternative to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which, in a probabilistic sense, is model corrected so that, close to the design point, it represents the same structural behaviour as a realistic FE model. This approach leads to a considerable improvement in computational efficiency over classical response surface methods, because the numerically "cheap" idealised model is used as the response surface, while the time-consuming detailed model is called only a few times, until the simplified model is calibrated to the detailed model.
Functional Factor Analysis In Sesame Under Water - Limiting Stress: New Concept On An Old Method
Directory of Open Access Journals (Sweden)
Mansouri Sadollah
2014-12-01
Multivariate statistical analyses, through their ability to extract hidden relationships between various traits, have wide application in breeding programs. To give the multivariate analysis a physiological interpretation, factor analysis was used to extract the differential relationships between the components involved in assimilate partitioning in sesame under a regular irrigation regime and under limited irrigation. The analysis revealed that under the regular irrigation regime, the stored and/or currently produced assimilates are allocated to the filling seeds. However, water shortage at the beginning of flowering shifts assimilate partitioning from the formation of new seeds or capsules to the pre-formed, not yet mature seeds, which results in seeds with greater nutrient storage. This indicates the need to change breeding strategies under sub-optimal conditions. A possible common language between the factor concept in multivariate analysis, QTLs in genetics, and transcription factors in molecular biology is indicated.
Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method
Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.
2013-01-01
Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…
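A minimal numerical sketch of the idea: on a uniform grid, the kinetic-energy operator has a closed form (the Colbert-Miller expression for the Fourier grid), and diagonalizing T + V yields vibrational levels and wavefunctions, from which FCFs follow as squared overlaps between the states of two electronic surfaces. Assumptions: atomic units, harmonic model potentials, and a hypothetical displacement of 1 bohr between the surfaces.

```python
import numpy as np

# Fourier Grid Hamiltonian on a uniform grid (Colbert-Miller kinetic matrix),
# atomic units (hbar = m = 1), harmonic potentials with omega = 1.
N, L = 256, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

idx = np.arange(N)
d = idx[:, None] - idx[None, :]
T = np.where(d == 0, np.pi**2 / 3.0,
             2.0 * (-1.0) ** d / np.where(d == 0, 1, d) ** 2) / (2 * dx**2)

V1 = np.diag(0.5 * x**2)                 # lower-state surface
E1, psi1 = np.linalg.eigh(T + V1)        # E1[:4] ~ 0.5, 1.5, 2.5, 3.5

V2 = np.diag(0.5 * (x - 1.0) ** 2)       # displaced upper-state surface
E2, psi2 = np.linalg.eigh(T + V2)

# Franck-Condon factors |<v'|v''=0>|^2 between the two surfaces:
fcf = (psi2.T @ psi1[:, 0]) ** 2
print(E1[:4].round(4), fcf[:4].round(4))
```

For displaced harmonic oscillators the FCFs follow a Poisson distribution, so the numbers can be checked against the analytic result.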
Information quality for PPC: analysis of influence factors and diagnosis method proposal
Directory of Open Access Journals (Sweden)
Eduardo Belmonte Möller
2013-03-01
One of the most important problems that companies face in Production Planning and Control (PPC) is the lack of Information Quality (IQ) to support the decision-making process. In this context, determining which factors affect the IQ necessary for PPC is a key issue. This paper presents a case study that analyzes the main factors influencing IQ. Furthermore, it proposes a method to diagnose these influence factors in companies. To analyze the IQ influence factors, a classification based on the sociotechnical approach was used, and a set of matrices relating the IQ influence factors to the different functional areas that support PPC was proposed. The results make it possible to identify the main factors, and the functional areas and processes that can be improved to increase IQ for PPC. The paper also presents practical results that show how to conduct an IQ diagnosis.
Bayesian Exploratory Factor Analysis
DEFF Research Database (Denmark)
Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.
2014-01-01
This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates...
Reedy, Jill; Wirfält, Elisabet; Flood, Andrew; Mitrou, Panagiota N; Krebs-Smith, Susan M; Kipnis, Victor; Midthune, Douglas; Leitzmann, Michael; Hollenbeck, Albert; Schatzkin, Arthur; Subar, Amy F
2010-02-15
The authors compared dietary pattern methods (cluster analysis, factor analysis, and index analysis) in relation to colorectal cancer risk in the National Institutes of Health (NIH)-AARP Diet and Health Study (n = 492,306). Data from a 124-item food frequency questionnaire (1995-1996) were used to identify 4 clusters for men (3 clusters for women), 3 factors, and 4 indexes. Comparisons were made with adjusted relative risks and 95% confidence intervals, distributions of individuals in clusters by quintile of factor and index scores, and health behavior characteristics. During 5 years of follow-up through 2000, 3,110 colorectal cancer cases were ascertained. In men, the vegetables and fruits cluster, the fruits and vegetables factor, the fat-reduced/diet foods factor, and all indexes were associated with reduced risk; the meat and potatoes factor was associated with increased risk. In women, reduced risk was found with the Healthy Eating Index-2005 and increased risk with the meat and potatoes factor. For men, beneficial health characteristics were seen with all fruit/vegetable patterns, diet foods patterns, and indexes, while poorer health characteristics were found with meat patterns. For women, findings were similar except that poorer health characteristics were seen with diet foods patterns. Similarities were found across methods, suggesting basic qualities of healthy diets. Nonetheless, findings vary because each method answers a different question.
Model Correction Factor Method
DEFF Research Database (Denmark)
Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes
1997-01-01
The model correction factor method is proposed as an alternative to traditional polynomial-based response surface techniques in structural reliability, treating a computationally time-consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit state based on an idealized mechanical model, which is adapted to the original limit state by the model correction factor. Reliable approximations are obtained by iterative use of gradient information on the original limit state function, analogously to previous response surface approaches. However, the strength of the model correction factor method is that, in its simpler form, not using gradient information on the original limit state function or using that information only once, a drastic reduction in the number of limit state evaluations is obtained together with good approximations of the reliability. Methods...
Casey, S. M.
1980-06-01
The nuclear waste retrieval system intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository is discussed. The implementation of human factors engineering principles during the design and construction of the retrieval system facilities and equipment is reported. The methodology is structured around a basic system development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Examples of the application of the techniques to the analysis of the human tasks and equipment required in the removal of spent fuel canisters are provided. The framework for integrating human engineering with the rest of the system development effort is documented.
Toman, Blaza; Nelson, Michael A.; Bedner, Mary
2017-06-01
Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model which is intrinsically rigorous, thus making it a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random effects meta-analysis yields similar results to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography mass spectrometric detection using isotope dilution (LC-IDMS).
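The "bottom-up" GUM Supplement 1 element can be sketched as a Monte Carlo propagation of input distributions through a measurement equation. The equation and all numbers below are hypothetical placeholders for illustration, not values from the LC-UV/LC-IDMS work described in the abstract.

```python
import numpy as np

# GUM Supplement 1 style Monte Carlo propagation for a simple
# (hypothetical) measurement equation: c_sample = (A_s / A_c) * c_cal.
rng = np.random.default_rng(1)
M = 200_000
A_s = rng.normal(1.52, 0.01, M)      # sample peak area (arbitrary units)
A_c = rng.normal(1.48, 0.01, M)      # calibrant peak area
c_cal = rng.normal(25.0, 0.15, M)    # calibrant concentration (ug/mL)

c_sample = A_s / A_c * c_cal
lo, hi = np.percentile(c_sample, [2.5, 97.5])   # 95% coverage interval
print(round(c_sample.mean(), 2), round(lo, 2), round(hi, 2))
```

A full Bayesian hierarchical treatment would additionally place distributions over between-run effects; the MC sketch above captures only the measurement-equation element.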
Study on the Characteristic Organic Compounds in Red Tide by Factor Analysis Method
Institute of Scientific and Technical Information of China (English)
赵明桥; 李攻科; 张展霞
2004-01-01
Factor analysis is used to study the organic compounds that have a high degree of correlation with biomass during algal blooming. Based on this correlation, they are named characteristic organic compounds. The compounds found are squalene (SQU), cedrol (CED), 2,6-bis(1,1-dimethylethyl)-2,5-cyclohexadiene-1,4-dione (PBQ), 2,6-bis(1,1-dimethylethyl)-4-methylphenol (BHT), 3-t-butyl-4-hydroxyanisole (BHA), 1,2-benzenedicarboxylic acid bis(2-methylpropyl) ester (DIBP), and dibutyl phthalate (DNBP). Monitoring the variations in concentration of these characteristic organic compounds in seawater may provide a scientific basis for studying and forecasting red tides.
Wetzel, Angela Payne
Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Reviews of factor analysis in psychology and general education also indicate gaps between current and best practices; yet a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not previously been conducted. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010), to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicate significant errors in the translation of exploratory factor analysis best practices into current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, mainly reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and on response process, and evidence based on the consequences of testing was not evident. Findings suggest a need for further professional development within the medical education research community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and should carefully review the available evidence. Finally, editors and reviewers are encouraged to recognize
Akaike, Hirotugu
1987-01-01
The Akaike Information Criterion (AIC) was introduced to extend the method of maximum likelihood to the multimodel situation. Use of the AIC in factor analysis is interesting when it is viewed as the choice of a Bayesian model; thus, wider applications of AIC are possible. (Author/GDC)
Cragun, Deborah; Pal, Tuya; Vadaparampil, Susan T; Baldwin, Julie; Hampel, Heather; DeBate, Rita D
2016-07-01
Qualitative comparative analysis (QCA) was developed over 25 years ago to bridge the qualitative and quantitative research gap. Upon searching PubMed and the Journal of Mixed Methods Research, this review identified 30 original research studies that utilized QCA. Perceptions that QCA is complex and provides few relative advantages over other methods may be limiting QCA adoption. Thus, to overcome these perceptions, this article demonstrates how to perform QCA using data from fifteen institutions that implemented universal tumor screening (UTS) programs to identify patients at high risk for hereditary colorectal cancer. In this example, QCA revealed a combination of conditions unique to effective UTS programs. Results informed additional research and provided a model for improving patient follow-through after a positive screen.
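To make the method concrete, here is a minimal fuzzy-set QCA consistency calculation (Ragin's measure of how sufficient a combination of conditions is for an outcome). The institution memberships and condition names below are invented for illustration, not the UTS program data from the study.

```python
import numpy as np

def consistency(X, Y):
    """Ragin's consistency of X as sufficient for Y: sum(min(X, Y)) / sum(X)."""
    return np.minimum(X, Y).sum() / X.sum()

# Hypothetical calibrated set memberships for 15 institutions:
rng = np.random.default_rng(4)
cond_a = rng.random(15)                  # e.g. "dedicated coordinator" (invented)
cond_b = rng.random(15)                  # e.g. "automated result routing" (invented)
combo = np.minimum(cond_a, cond_b)       # fuzzy logical AND of the two conditions
outcome = 0.9 * combo + 0.1 * rng.random(15)   # outcome tracks the combination

print(round(consistency(combo, outcome), 2))   # near 1.0 = consistently sufficient
```

A real QCA additionally builds the truth table over all condition combinations and minimizes it; the consistency score above is the gatekeeping statistic for that step.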
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian
2013-01-01
Reliability analysis of fiber-reinforced composite structures is a relatively unexplored field, and it is therefore expected that engineers and researchers trying to apply such an approach will meet certain challenges until more knowledge is accumulated. While carrying out the analyses included in the present paper, the authors have experienced some of the possible pitfalls on the way to completing a precise and robust reliability analysis for layered composites. Results showed that in order to obtain accurate reliability estimates it is necessary to account for the various failure modes described by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...
Foundations of factor analysis
Mulaik, Stanley A
2009-01-01
Introduction; Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis; Mathematical Foundations for Factor Analysis: Introduction, Scalar Algebra, Vectors, Matrix Algebra, Determinants, Treatment of Variables as Vectors, Maxima and Minima of Functions; Composite Variables and Linear Transformations: Introduction, Composite Variables, Unweighted Composite Variables, Differentially Weighted Composites, Matrix Equations; Multi...
Riley, Richard D; Elia, Eleni G; Malin, Gemma; Hemming, Karla; Price, Malcolm P
2015-07-30
A prognostic factor is any measure that is associated with the risk of future health outcomes in those with existing disease. Often, the prognostic ability of a factor is evaluated in multiple studies. However, meta-analysis is difficult because primary studies often use different methods of measurement and/or different cut-points to dichotomise continuous factors into 'high' and 'low' groups; selective reporting is also common. We illustrate how multivariate random effects meta-analysis models can accommodate multiple prognostic effect estimates from the same study, relating to multiple cut-points and/or methods of measurement. The models account for within-study and between-study correlations, which utilises more information and reduces the impact of unreported cut-points and/or measurement methods in some studies. The applicability of the approach is improved with individual participant data and by assuming a functional relationship between prognostic effect and cut-point to reduce the number of unknown parameters. The models provide important inferential results for each cut-point and method of measurement, including the summary prognostic effect, the between-study variance and a 95% prediction interval for the prognostic effect in new populations. Two applications are presented. The first reveals that, in a multivariate meta-analysis using published results, the Apgar score is prognostic of neonatal mortality but effect sizes are smaller at most cut-points than previously thought. In the second, a multivariate meta-analysis of two methods of measurement provides weak evidence that microvessel density is prognostic of mortality in lung cancer, even when individual participant data are available so that a continuous prognostic trend is examined (rather than cut-points). © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
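The univariate building block that the multivariate models above generalize is a standard random-effects meta-analysis. Below is a DerSimonian-Laird sketch with invented study estimates (not the Apgar or microvessel density data); the multivariate extension would jointly model several such effects per study with a between-study covariance matrix.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis via the DerSimonian-Laird moment estimator."""
    w = 1.0 / variances
    mu_fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - mu_fixed) ** 2).sum()         # Cochran's Q
    df = len(effects) - 1
    tau2 = max(0.0, (q - df) / (w.sum() - (w**2).sum() / w.sum()))
    w_star = 1.0 / (variances + tau2)                 # random-effects weights
    mu = (w_star * effects).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return mu, se, tau2

# Hypothetical log hazard ratios from 5 studies at one shared cut-point:
effects = np.array([0.40, 0.55, 0.20, 0.65, 0.35])
variances = np.array([0.01, 0.02, 0.01, 0.03, 0.01])
mu, se, tau2 = dersimonian_laird(effects, variances)
print(round(mu, 3), round(se, 3), round(tau2, 4))
```

The between-study variance tau2 is what drives the width of the 95% prediction interval for the effect in a new population.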
Mixed-Methods Analysis of Factors Impacting Use of a Postoperative mHealth App
Scott, Aaron R; Alore, Elizabeth A; Naik, Aanand D; Berger, David H
2017-01-01
Background: Limited communication and care coordination following discharge from hospitals may contribute to surgical complications. Smartphone apps offer a novel mechanism for communication and care coordination. However, factors which may affect patient app use in a postoperative, at-home setting are poorly understood. Objective: The objectives of this study were to (1) gauge interest in smartphone app use among patients after colorectal surgery and (2) better understand factors affecting patient app use in a postoperative, at-home setting. Methods: A prospective feasibility study was performed at a hospital that principally serves low socioeconomic status patients. After colorectal surgery, patients were enrolled and given a smartphone app, which uses previously validated content to provide symptom-based recommendations. Patients were instructed to use the app daily for 14 days after discharge. Demographics and usability data were collected at enrollment. Usability was measured with the System Usability Scale (SUS). At follow-up, the SUS was repeated and patients underwent a structured interview covering ease of use, willingness to use, and utility of use. Two members of the research team independently reviewed the field notes from follow-up interviews and extracted the most consistent themes. Chart and app log reviews identified clinical endpoints. Results: We screened 115 patients, enrolled 20 patients (17.4%), and completed follow-up interviews with 17 patients (85%). Reasons for nonenrollment included: failure to meet inclusion criteria (47/115, 40.9%), declined to participate (26/115, 22.6%), and other reasons (22/115, 19.1%). There was no difference in patient ratings between usability at first use and after extended use, with SUS scores greater than the 95th percentile at both time points. Despite high usability ratings, 6/20 (30%) of patients never used the app at home after hospital discharge and 2/20 (10%) only used the app once. Interviews revealed three
Lin, Chun-Li; Yu, Jian-Hong; Liu, Heng-Liang; Lin, Chih-Hao; Lin, Yang-Sung
2010-08-10
This study determines the relative effects of changes in bone/mini-screw osseointegration and mini-screw design factors (length, diameter, thread shape, thread depth, material, head diameter and head exposure length) on the biomechanical response of a single mini-screw insertion. Eighteen CAD and finite element (FE) models corresponding to a Taguchi L(18) array were constructed to simulate the mechanical responses of a mini-screw placed in a cylindrical bone. The Taguchi method was employed to determine the significance of each design factor in controlling strain. Simulation results indicated that mini-screw material, screw exposure length and screw diameter were the major factors affecting bone strain, with percentage contributions of 63%, 24% and 7%, respectively. Bone strain decreased markedly when the screw material had the high elastic modulus of stainless steel/titanium alloys, a small exposure length and a large diameter. The other factors had no significant effect on bone strain. The FE analysis combined with the Taguchi method efficiently identified the relative contributions of the mini-screw design factors, indicating that using a strong stainless steel/titanium alloy as the screw material is advantageous, and that an increase in mechanical stability can be achieved by reducing the screw exposure length. Simulation results also revealed that mini-screw and bone surface contact can provide sufficient mechanical retention for immediate loading in clinical treatment.
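The percentage contributions quoted above come from an ANOVA-style decomposition over the orthogonal array: each factor's between-level sum of squares as a share of the total. A sketch of that bookkeeping follows, with a synthetic two-level full factorial standing in for the paper's L18 array and a simulated response standing in for the FE-computed bone strain.

```python
import itertools
import numpy as np

# Taguchi-style percentage contributions on a 2^3 full factorial (8 runs,
# 3 factors). 'y' plays the role of the FE-computed strain at each run.
rng = np.random.default_rng(2)
levels = np.array(list(itertools.product([0, 1], repeat=3)))
true_effects = np.array([5.0, 1.0, 0.2])               # hidden effect sizes
y = levels @ true_effects + rng.normal(0, 0.05, len(levels))

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
contrib = []
for f in range(levels.shape[1]):
    # between-level sum of squares for factor f
    ss = sum(((y[levels[:, f] == l].mean() - grand) ** 2) * np.sum(levels[:, f] == l)
             for l in np.unique(levels[:, f]))
    contrib.append(100 * ss / ss_total)
print([round(c, 1) for c in contrib])   # dominant factor takes most of the total
```

Because the design is orthogonal, the factor contributions plus interaction/residual terms partition the total sum of squares exactly.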
Argentero, P; Candura, S M
2009-01-01
The OSFA (Objective Stress Factors Analysis) method is an approach to stress evaluation based on the recording of objective risk factors, in accordance with Italian law (legislative decree 81/08) as well as national and international guidelines. The method evaluates the work conditions recognized as hazardous for workers' psychophysical health. It comprises two main phases: phase A (company data analysis) and phase B (analysis of work-related stress conditions). In particular, phase B is centred on the work conditions peculiar to the different organizational units, and it is conducted by means of structured interviews with experienced employees who know the specific company context. The interviews, based on a 72-item questionnaire, consider four main aspects of work: organization, social environment, safety, and management. The final version of the instrument was tested on 13 small-to-medium companies in Lombardy (Italy), operating in various fields, with between 5 and 107 employees (median = 37). These first trials of the OSFA method made it possible to verify its adequacy in terms of the exhaustiveness of the areas examined, the intelligibility of the items, and their capacity to discriminate the stress risk factors peculiar to the various productive activities. The preliminary results indicate that the described approach is easy to apply and is favourably received by employers and workers for its objectivity. Additionally, the OSFA method allows preventive and ameliorative interventions to be planned, in accordance with both legislative decree 81/08 and the European agreement of October 8, 2004. Finally, the information obtained can form the basis for a further stress risk evaluation using subjective evaluation methods.
Directory of Open Access Journals (Sweden)
Valavanis Ioannis K
2010-09-01
Background: Obesity is a multifactorial trait and an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology underlying obesity and to identify genetic variations and/or nutrition-related factors that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject, a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total, e.g. average daily intake of calories and cholesterol) were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used for the analysis of the available data: (i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and (ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. Results: PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify, among the initial 63 variables describing genetic variations, nutrition and gender, the most important factors for classifying a subject into one of the BMI-related classes, normal and overweight. The methods were designed and evaluated using training and testing sets provided by 3-fold cross-validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristic curve were used to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model had a mean accuracy of 61.46% on the 3-CV testing sets.
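The evaluation protocol (a feed-forward network scored by 3-fold cross-validated accuracy) can be reproduced generically. The data below are a synthetic stand-in for the 63 genetic/nutrition variables, and scikit-learn's MLP replaces the custom PDM/GA training schemes described in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic binary task (normal vs overweight) with 63 mixed predictors:
X, y = make_classification(n_samples=600, n_features=63, n_informative=25,
                           random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, cv=3)   # 3-fold cross-validation accuracy
print(scores.round(3), scores.mean().round(3))
```

A GA-based wrapper would additionally search over feature subsets, retraining the network on each candidate subset and keeping the most parsimonious one with acceptable CV accuracy.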
Kim, Kyungsik; Lee, Dong-In
2013-04-01
There is considerable interest in cross-correlations in the collective modes of real data from atmospheric geophysics, seismology, finance, physiology, genomics, and nanodevices. If two systems interact mutually, that interaction gives rise to collective modes. This phenomenon can be analyzed using traditional cross-correlation methods, random matrix theory, and the detrended cross-correlation analysis (DCCA) method. The DCCA method has previously been applied to several systems, such as autoregressive fractionally integrated moving average processes, stock prices and their trading volumes, and taxi accidents. Particulate matter is composed of organic and inorganic mixtures such as natural sea salt, soil particles, vehicle exhaust, construction dust, and soot. PM10 is the fraction of particles with an aerodynamic diameter of less than 10 microns, which can enter the human respiratory system. The PM10 concentration affects climate change by unbalancing the global radiative equilibrium: through a direct effect it blocks the stomata of plants and cuts off solar radiation, while through an indirect effect it changes the optical properties, cloudiness, and lifetime of clouds. Various factors contribute to the PM10 concentration, notably land-use types and surface vegetation coverage, as well as meteorological factors. In this study, we analyze and simulate cross-correlations across time scales between the PM10 concentration and meteorological factors (temperature, wind speed and humidity) using the detrended cross-correlation analysis method, through the removal of specific trends, at eight cities on the Korean peninsula. We divide the time series data into Asian dust events and non-Asian dust events to analyze how meteorological factors affect fluctuations of the PM10 concentration during Asian dust events. In particular, our result is
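A minimal DCCA sketch, following the Podobnik-Stanley construction: integrate both series, linearly detrend within sliding windows, average the cross-covariance of the residuals, and normalize to a DCCA cross-correlation coefficient. The two synthetic series below stand in for PM10 and a meteorological factor; they share a common driver by construction.

```python
import numpy as np

def dcca(x, y, n):
    """Detrended cross-covariance F^2(n) of two series at window size n."""
    X = np.cumsum(x - x.mean())          # integrated profiles
    Y = np.cumsum(y - y.mean())
    t = np.arange(n + 1)
    f2 = []
    for i in range(len(X) - n):          # overlapping windows of n + 1 points
        px = np.polyfit(t, X[i:i + n + 1], 1)   # local linear trends
        py = np.polyfit(t, Y[i:i + n + 1], 1)
        rx = X[i:i + n + 1] - np.polyval(px, t)
        ry = Y[i:i + n + 1] - np.polyval(py, t)
        f2.append(np.mean(rx * ry))
    return np.mean(f2)

rng = np.random.default_rng(3)
common = rng.normal(size=2000)                   # shared driver
pm10 = common + 0.5 * rng.normal(size=2000)      # stand-in for PM10
temp = common + 0.5 * rng.normal(size=2000)      # stand-in for temperature
rho = dcca(pm10, temp, 16) / np.sqrt(dcca(pm10, pm10, 16) * dcca(temp, temp, 16))
print(round(rho, 2))   # DCCA coefficient in [-1, 1]
```

In practice the coefficient is computed over a range of window sizes n to see how the cross-correlation depends on time scale.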
Kim, Nam Ah; Lim, Dae Gon; Lim, Jun Yeul; Kim, Ki Hyun; Jeong, Seong Hoon
2015-02-01
Correlation of the thermodynamic and secondary structural stability of proteins at various buffer pHs was investigated using differential scanning calorimetry (DSC), dynamic light scattering (DLS) and attenuated total reflection Fourier-transform infrared spectroscopy (ATR FT-IR). Recombinant human epithelial growth factor (rhEGF) was selected as a model protein at various pHs and in different buffers, including phosphate, histidine, citrate, HEPES and Tris. The particle size and zeta potential of rhEGF at each selected buffer pH were observed by DLS. Four factors were used to characterize the biophysical stability of rhEGF in solution: temperature at maximum heat flux (Tm), intermolecular β-sheet content, zeta size and zeta potential. It was possible to predict the apparent isoelectric point (pI) of rhEGF as 4.43 by plotting pH against zeta potential. As the pH of the rhEGF solution increased or decreased away from the pI, the absolute zeta potential increased, indicating a reduced likelihood of protein aggregation, since Tm increased and β-sheet content decreased. The content of induced intermolecular β-sheet was lowest in Tris and HEPES buffers. The thermodynamic stability of rhEGF increased markedly when the pH was higher than 6.2 in histidine buffer, where the Tm of the first transition was always above 70 °C. Moreover, rhEGF in Tris buffer was more thermodynamically stable than in HEPES, with a higher zeta potential. Tris buffer at pH 7.2 was concluded to be the most favorable.
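Predicting the apparent pI from a pH-versus-zeta-potential plot amounts to locating the zero crossing of that curve. A minimal sketch of that step, using hypothetical measurements rather than the paper's data:

```python
def isoelectric_point(ph_values, zeta_values):
    """Estimate the apparent pI as the pH where the zeta potential crosses zero,
    by linear interpolation between the two bracketing measurements."""
    pairs = sorted(zip(ph_values, zeta_values))
    for (ph1, z1), (ph2, z2) in zip(pairs, pairs[1:]):
        if z1 == 0:
            return ph1
        if z1 * z2 < 0:  # sign change brackets the zero crossing
            return ph1 + (ph2 - ph1) * (-z1) / (z2 - z1)
    raise ValueError("zeta potential does not change sign over the measured pH range")

# Hypothetical (pH, zeta potential in mV) measurements, not the paper's data:
ph = [3.0, 4.0, 5.0, 6.0, 7.0]
zeta = [8.0, 3.0, -4.0, -9.0, -12.0]
pI = isoelectric_point(ph, zeta)  # crossing between pH 4 and 5
```

With the hypothetical points above the crossing lies between pH 4 and 5; the paper's estimate of pI = 4.43 for rhEGF comes from the same kind of plot.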
Directory of Open Access Journals (Sweden)
Mehdi Fallah Jelodar
2016-01-01
Bank branches have a vital role in the economy of all countries: they collect assets from various sources and put them in the hands of the sectors that need liquidity. Given limited financial and human resources and capital, the unlimited and evolving needs of customers, and strong competition among banks and financial and credit institutions, the purpose of this study is to answer the question of which factors affecting performance, value creation, and shareholder dividends are superior to others, and consequently which factors managers should pay more attention to. Therefore, in this study, the factors affecting performance (efficiency) in the areas of management, personnel, finance, and customers were segmented, and the obtained results were ranked using both Data Envelopment Analysis and hierarchical analysis. In both methods, leadership style in the area of management; recruitment and resource allocation in the area of financing; employees' satisfaction, dignity, and self-actualization in the area of employees; and meeting the new needs of customers received the greatest weights.
Isaacson, Eugene
1994-01-01
This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.
Mixed-Methods Analysis of Factors Impacting Use of a Postoperative mHealth App.
Scott, Aaron R; Alore, Elizabeth A; Naik, Aanand D; Berger, David H; Suliburk, James W
2017-02-08
Limited communication and care coordination following discharge from hospitals may contribute to surgical complications. Smartphone apps offer a novel mechanism for communication and care coordination. However, factors which may affect patient app use in a postoperative, at-home setting are poorly understood. The objectives of this study were to (1) gauge interest in smartphone app use among patients after colorectal surgery and (2) better understand factors affecting patient app use in a postoperative, at-home setting. A prospective feasibility study was performed at a hospital that principally serves low socioeconomic status patients. After colorectal surgery, patients were enrolled and given a smartphone app, which uses previously validated content to provide symptom-based recommendations. Patients were instructed to use the app daily for 14 days after discharge. Demographics and usability data were collected at enrollment. Usability was measured with the System Usability Scale (SUS). At follow-up, the SUS was repeated and patients underwent a structured interview covering ease of use, willingness to use, and utility of use. Two members of the research team independently reviewed the field notes from follow-up interviews and extracted the most consistent themes. Chart and app log reviews identified clinical endpoints. We screened 115 patients, enrolled 20 patients (17.4%), and completed follow-up interviews with 17 patients (85%). Reasons for nonenrollment included: failure to meet inclusion criteria (47/115, 40.9%), declined to participate (26/115, 22.6%), and other reasons (22/115, 19.1%). There was no difference in patient ratings between usability at first-use and after extended use, with SUS scores greater than the 95th percentile at both time points. Despite high usability ratings, 6/20 (30%) of patients never used the app at home after hospital discharge and 2/20 (10%) only used the app once. Interviews revealed three themes related to app use: (1
Analysis of factors influencing fire damage to concrete using nonlinear resonance vibration method
Energy Technology Data Exchange (ETDEWEB)
Park, Gang Kyu; Park, Sun Jong; Kwak, Hyo Gyoung [Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, KAIST, Daejeon (Korea, Republic of); Yim, Hong Jae [Dept. of Construction and Disaster Prevention Engineering, Kyungpook National University, Sangju (Korea, Republic of)
2015-04-15
In this study, the effects of different mix proportions and fire scenarios (exposure temperatures and post-fire-curing periods) on fire-damaged concrete were analyzed using a nonlinear resonance vibration method based on nonlinear acoustics. The hysteretic nonlinearity parameter was obtained, which can sensitively reflect the damage level of fire-damaged concrete. In addition, a splitting tensile strength test was performed on each fire-damaged specimen to evaluate the residual property. Using the results, a prediction model for estimating the residual strength of fire-damaged concrete was proposed on the basis of the correlation between the hysteretic nonlinearity parameter and the ratio of splitting tensile strength.
Directory of Open Access Journals (Sweden)
Sri Wahyuni
2016-11-01
This study aims to measure profit efficiency and examine the factors that affect the profit efficiency of shariah banks in Indonesia, such as bank size, financing risk, and capital adequacy. The study covers shariah banks in Indonesia over the period 2010-2014; the sample comprises commercial shariah banks and shariah business units. The study proceeds in three stages. First, it measures profit efficiency using a parametric approach, the Stochastic Frontier Approach (SFA). Second, it regresses the profit efficiency scores on various determinants of profit efficiency. Third, it tests the efficiency scores during the global crisis (2008-2009) and after the global crisis period (2010-2014). The results show that, overall, profit efficiency occurred in the shariah banks in Indonesia, as indicated by profit efficiency scores of less than one; inefficiency occurred in both commercial shariah banks and shariah business units. Bank size has a positive impact on profit efficiency: the bigger the bank, the better the profit efficiency. This research thus provides managers with clues that shariah banks should improve their profit efficiency management, and Bank Indonesia can use this evidence to design policies that encourage profit efficiency in shariah banks.
Factorization method of quadratic template
Kotyrba, Martin
2017-07-01
Multiplication of two numbers is a one-way function in mathematics. Any attempt to recover the original factors from the product is called factorization. There are many methods, such as Fermat's factorization, Dixon's method, the quadratic sieve and the GNFS, which use sophisticated techniques for fast factorization. All the above methods use the same basic formula, differing only in how it is used. This article discusses a newly designed factorization method. Effective implementation of this method in programs is not the focus; the article only presents the method and clearly defines its properties.
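Of the classical methods listed above, Fermat's factorization is the simplest to show. This is a standard textbook sketch, not the quadratic-template method the article proposes: write an odd n as a difference of squares a² − b² = (a − b)(a + b).

```python
import math

def fermat_factor(n):
    """Fermat's factorization: express odd n as a^2 - b^2 = (a - b)(a + b)."""
    assert n % 2 == 1 and n > 1
    a = math.isqrt(n)
    if a * a < n:
        a += 1                   # start at ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:          # a^2 - n is a perfect square
            return a - b, a + b  # the two (possibly trivial) factors
        a += 1
```

For example, fermat_factor(5959) returns (59, 101); the method is fast only when the two factors are close to the square root of n.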
Wetzel, Angela Payne
2011-01-01
Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across…
Factorization Method in Quantum Mechanics
Dong, Shi-Hai
2007-01-01
This Work introduces the factorization method in quantum mechanics at an advanced level with an aim to put mathematical and physical concepts and techniques like the factorization method, Lie algebras, matrix elements and quantum control at the Reader’s disposal. For this purpose a comprehensive description is provided of the factorization method and its wide applications in quantum mechanics which complements the traditional coverage found in the existing quantum mechanics textbooks. Related to this classic method are the supersymmetric quantum mechanics, shape invariant potentials and group theoretical approaches. It is no exaggeration to say that this method has become the milestone of these approaches. In fact the Author’s driving force has been his desire to provide a comprehensive review volume that includes some new and significant results about the factorization method in quantum mechanics since the literature is inundated with scattered articles in this field, and to pave the Reader’s way into ...
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimiroy; Friis-Hansen, Peter; Berggreen, Christian
2009-01-01
This paper presents a reliability analysis of a composite blade profile. The so-called Model Correction Factor technique is applied as an effective alternative to the response surface technique. The structural reliability is determined by use of a simplified idealised analytical model which
An easy guide to factor analysis
Kline, Paul
2014-01-01
Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a
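Introductions like this one typically walk through extracting the first principal component of a correlation matrix, which can be done by power iteration. A minimal sketch with a hypothetical 3-variable correlation matrix (not an example from the book):

```python
import math

def leading_component(R, iters=200):
    """Power iteration: leading eigenvalue and first-component loadings
    of a symmetric correlation matrix R, given as a list of rows."""
    k = len(R)
    v = [1.0 / math.sqrt(k)] * k   # arbitrary unit start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(k)) for i in range(k)]
        lam = math.sqrt(sum(wi * wi for wi in w))   # converges to the eigenvalue
        v = [wi / lam for wi in w]
    # Loadings of the first principal component: sqrt(eigenvalue) * eigenvector
    loadings = [math.sqrt(lam) * vi for vi in v]
    return lam, loadings

# Hypothetical correlation matrix for three observed variables:
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
lam, loadings = leading_component(R)
```

The loadings are the correlations of each variable with the first component; further components would be extracted from the deflated matrix before any rotation, which is the step-by-step procedure such texts demonstrate.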
First course in factor analysis
Comrey, Andrew L
2013-01-01
The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of
Directory of Open Access Journals (Sweden)
Semra Boran
2007-09-01
The Taguchi method and regression analysis are widely used in statistical research. The Taguchi method is one of the most frequently used, especially in optimization problems, but applications of this method are not common in the food industry. In this study, optimal operating parameters were determined for an industrial-size fluidized bed dryer using the Taguchi method. The effects of the operating parameters on the activity value (the quality characteristic of this problem) were then calculated by regression analysis, and the results of the two methods were compared. In summary, using the factors and levels taken from the Taguchi method application, the average activity value was 660 for a 400 kg loading with an average drying time of 26 minutes, whereas under normal conditions (with a 600 kg loading) the average activity value was 630 and the drying time 28 minutes. Application of the Taguchi method yielded a 15% rise in activity value.
ANALYSIS OF MULTISCALE METHODS
Institute of Scientific and Technical Information of China (English)
Wei-nan E; Ping-bing Ming
2004-01-01
The heterogeneous multiscale method gives a general framework for the analysis of multiscale methods. In this paper, we demonstrate this by applying this framework to two canonical problems: The elliptic problem with multiscale coefficients and the quasicontinuum method.
Directory of Open Access Journals (Sweden)
Y. Oraman G. Unakıtan E. Yılmaz B. Başaran
2011-01-01
The aim of this study is to evaluate consumer behaviour towards the factors affecting the purchase decision for some traditional food products, and to group those consumer attitudes. The original data were obtained from a survey conducted in 14 different districts in Tekirdag, consisting of face-to-face interviews with a total random sample of 166 households. In the study, multidimensional scaling analysis is used to evaluate the factors affecting consumer preferences for traditional products and to group those preferences. Taste, food safety and freshness were found to have similar effects on consumer preferences for yogurt, molasses and noodles. Price has an important, positively loaded effect relative to the other variables for all three products.
Haghighi, Mona; Johnson, Suzanne Bennett; Qian, Xiaoning; Lynch, Kristian F; Vehik, Kendra; Huang, Shuai
2016-01-01
Regression models are extensively used in many epidemiological studies to understand the linkage between specific outcomes of interest and their risk factors. However, regression models in general examine the average effects of the risk factors and ignore subgroups with different risk profiles. As a result, interventions are often geared towards the average member of the population, without consideration of the special health needs of different subgroups within the population. This paper demonstrates the value of using rule-based analysis methods that can identify subgroups with heterogeneous risk profiles in a population without imposing assumptions on the subgroups or method. The rules define the risk pattern of subsets of individuals by not only considering the interactions between the risk factors but also their ranges. We compared the rule-based analysis results with the results from a logistic regression model in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Both methods detected a similar suite of risk factors, but the rule-based analysis was superior at detecting multiple interactions between the risk factors that characterize the subgroups. A further investigation of the particular characteristics of each subgroup may detect the special health needs of the subgroup and lead to tailored interventions.
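The rule-based idea described here, subgroups defined by ranges of interacting risk factors, can be sketched minimally as follows. The records and the rule are hypothetical toy data, not TEDDY data:

```python
def subgroup_risk(records, rule):
    """Proportion of positive outcomes among records matching a rule.
    A rule is a dict: factor -> (low, high) inclusive range."""
    matches = [r for r in records
               if all(lo <= r[f] <= hi for f, (lo, hi) in rule.items())]
    if not matches:
        return None, 0
    risk = sum(r["outcome"] for r in matches) / len(matches)
    return risk, len(matches)

# Hypothetical records: two interacting risk factors and a binary outcome.
records = [
    {"age": 55, "bmi": 31, "outcome": 1},
    {"age": 60, "bmi": 33, "outcome": 1},
    {"age": 58, "bmi": 29, "outcome": 0},
    {"age": 35, "bmi": 22, "outcome": 0},
    {"age": 42, "bmi": 24, "outcome": 0},
    {"age": 30, "bmi": 35, "outcome": 0},
]
# One rule: older AND high BMI -- an interaction of factor ranges.
rule = {"age": (50, 100), "bmi": (30, 60)}
risk, size = subgroup_risk(records, rule)
```

In this toy data the rule's subgroup risk (1.0) far exceeds the overall risk (2/6), which is exactly the kind of heterogeneous subgroup a rule-based analysis surfaces and an average-effect regression can miss.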
DEFF Research Database (Denmark)
Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun
2015-01-01
in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance...
Comparison Research of Statistical Method in Exploratory Factor Analysis%探索性因子分析中统计方法的比对研究
Institute of Scientific and Technical Information of China (English)
王娜
2014-01-01
Taking the selection of statistical methods in exploratory factor analysis as its target, and in view of the distortion of calculation results caused by improper selection of statistical methods in many published applications of exploratory factor analysis, this paper presents a comparative study of the 15 statistical methods used in exploratory factor analysis and offers recommendations.
Geng, L.; Liu, J.
The accuracy of damage alarming and damage identification for bridge structures, and the reliability of reliability evaluation and fatigue life evaluation, depend on the sensor data. At present, sample calibration is done only at the initial installation of the sensors in a long-term bridge monitoring system and generally is not repeated in service, so whether the system remains stable and reliable cannot be known. In this paper, using the model of a connected liquid-level sensor system, a value-traceability approach for online measurement of the sensor system is studied through field experiments and theoretical research, based on the technical index of measurement (output displacement) given by the output parameters of the integrated deflection sensing system. An applicable online measuring method for connected liquid-level sensor systems is established, the influential factors and the uncertainty are analyzed, and finally the applicable scope of the method is obtained. This paper is a beneficial exploration of online measurement of instruments after installation of a bridge health monitoring system.
Energy Technology Data Exchange (ETDEWEB)
Li, Xiongwei; Wang, Zhe, E-mail: zhewang@tsinghua.edu.cn; Fu, Yangting; Li, Zheng; Ni, Weidou
2014-09-01
Quantitative measurement of the carbon content in coal is essential for coal property analysis. However, quantitative measurement of carbon content in coal using laser-induced breakdown spectroscopy (LIBS) suffers from low accuracy due to measurement uncertainty as well as matrix effects. In this study, our previously proposed spectrum standardization method and a dominant-factor-based partial least squares (PLS) method were combined to improve the measurement accuracy of carbon content in coal using LIBS. The combined model uses the spectrum standardization method to accurately calculate the dominant carbon concentration as the dominant factor, and then applies PLS with full-spectrum information to correct residual errors. The combined model was applied to measure the carbon content in 24 bituminous coal samples. Results demonstrated that the combined model further improves measurement accuracy compared with the spectrum standardization model and the dominant-factor-based PLS model in which the dominant factor is calculated using the traditional univariate method. The coefficient of determination, root-mean-square error of prediction, and average relative error for the combined model were 0.99, 1.63%, and 1.82%, respectively. The corresponding values for the spectrum standardization model were 0.90, 2.24%, and 2.75%, whereas those for the dominant-factor-based PLS model were 0.99, 2.66%, and 3.64%. The results indicate that LIBS has great potential for coal analysis. - Highlights: • The spectrum standardization method is utilized to establish a more accurate dominant factor model. • The PLS algorithm is applied to further compensate for residual errors using the entire spectrum information. • Measurement accuracy is improved.
Martin, Fernando; Kim, Min-Su; Hovemann, Bernard; Alcorta, Esther
2002-01-01
Olfactory information is transmitted to the brain using combinatorial receptor codes; consequently, a single reception element can be activated by different odorants. Several methods have been applied to describe from a functional point of view those odorants sharing olfactory reception components. A genetic approach in Drosophila melanogaster used correlation between behavioral responses to different odorants for deducing common olfactory pathway-genes. A factor analysis applied to behavioral responses to five odorants of 27 antennal enhancer-trap lines revealed three components, explaining 82.1% of the total observed variance. A first factor affects simultaneously the response to ethyl acetate, propionaldehyde, and acetone. A second factor was related to responses to ethyl acetate, ethyl alcohol, and acetone, and, finally, the third factor associates responses to acetic acid and ethyl acetate. They contribute by 35.1%, 36.9%, and 28%, respectively, to the explained variance.
The Hull Method for Selecting the Number of Common Factors
Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.
2011-01-01
A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…
On the factorization method in quantum mechanics
Rosas-Ortiz, J O
1998-01-01
New exactly solvable problems have already been studied by using a modification of the factorization method introduced by Mielnik. We review this method and its connection with the traditional factorization method. The survey includes a discussion of a generalization of the factorization energies used in the traditional Infeld and Hull method.
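The traditional factorization this survey builds on is easiest to recall for the harmonic oscillator (standard textbook material in units ℏ = m = ω = 1, not taken from the paper):

```latex
H = -\frac{1}{2}\frac{d^{2}}{dx^{2}} + \frac{x^{2}}{2}
  = a^{\dagger}a + \frac{1}{2},
\qquad
a = \frac{1}{\sqrt{2}}\left(x + \frac{d}{dx}\right),
\quad
a^{\dagger} = \frac{1}{\sqrt{2}}\left(x - \frac{d}{dx}\right),
\qquad
[a, a^{\dagger}] = 1,
```

which yields the ladder of energies $E_n = n + \tfrac{1}{2}$. Mielnik's modification keeps the product fixed but replaces $a$ by a more general first-order operator $b = \frac{1}{\sqrt{2}}\left(\frac{d}{dx} + \beta(x)\right)$ with $b b^{\dagger} = a a^{\dagger}$; solving the resulting Riccati equation for $\beta(x)$ yields new exactly solvable potentials.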
Methods of Multivariate Analysis
Rencher, Alvin C
2012-01-01
Praise for the Second Edition "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere."-IIE Transactions Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life sit
Directory of Open Access Journals (Sweden)
A. P. Shavrin
2011-01-01
The aim: to study latent relationships between intima-media thickness (IMT) and infectious, immune, inflammatory and metabolic factors in patients with varying degrees of severity of vascular changes, using multivariate methods of statistical analysis. Materials and methods: the study included 220 patients (mean age 43.9 ± 0.5 years) who were divided into 3 groups. Group 1 consisted of patients with no risk factors for cardiovascular disease (CVD); group 2, patients with such factors; group 3, patients with atherosclerotic plaques in the carotid artery. Every patient underwent a comprehensive examination, which included vascular ultrasound on an Aloka 5000 apparatus with measurement of IMT, a lipid panel, and determination of C-reactive protein and cytokines (tumor necrosis factor-α, interferon-γ, interleukin-1, -8, -4), as well as immunoglobulin antibodies to cytomegalovirus (CMV), herpes simplex virus type 1, C. pneumoniae, H. pylori and β-hemolytic streptococcus group A. Immune status was assessed by indicators of innate and acquired immunity. Results: according to cluster analysis, all groups of patients showed close linear relationships between IMT and infectious, immune and metabolic markers, and in patients with atherosclerotic plaques in the vessels, additional links with indicators of inflammation were found. Factor analysis revealed latent variables consisting of IMT together with, in group 1, blood lipids and, in group 2, infectious factors (CMV, C. pneumoniae) and immune parameters. In group 3 the vascular wall was linked with infectious, immune and inflammatory indices, blood lipids, and systolic and diastolic blood pressure. Conclusion: the closest relationship between the vascular wall and the studied parameters was observed in patients with risk factors for cardiovascular disease, and in the
LENUS (Irish Health Repository)
Hehir, Mark P
2013-10-01
Obstetric anal sphincter injury (OASIS) represents a major cause of maternal morbidity and is a risk factor for the development of fecal incontinence. We set out to analyze the incidence of OASIS and its association with mode of delivery in two large obstetric hospitals across an 8-year study period.
A New Factorization Method to Factorize RSA Public Key Encryption
Directory of Open Access Journals (Sweden)
Bhagvant Ram Ambedkar
2011-11-01
The security of public key encryption schemes such as RSA relies on the integer factorization problem. The security of the RSA algorithm is based on the positive integer N, because each transmitting node generates a pair of keys, public and private, and encryption and decryption of any message depend on N, where N is the product of two prime numbers and key-pair generation depends on these primes. The factorization of N is very hard. In this paper a new factorization method is proposed to obtain the factors of a positive integer N. The proposed work focuses on the factorization of all trivial and nontrivial integers and requires fewer steps for the factorization of the RSA modulus N. The new factorization method is based on the Pollard rho factorization method. Experimental results show that its factorization speed is faster than that of existing methods.
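Since the proposed method builds on Pollard's rho, the classical algorithm is sketched below for reference (standard Floyd cycle-detection form, not the paper's new variant):

```python
import math

def pollard_rho(n, c=1):
    """Pollard's rho with Floyd cycle detection: returns a nontrivial
    factor of composite n (retries with another constant c on failure)."""
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step of x -> x^2 + c
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = math.gcd(abs(x - y), n)
    if d != n:
        return d
    return pollard_rho(n, c + 1)     # cycle collapsed; retry with new c
```

For example, pollard_rho(8051) returns a nontrivial factor of 8051 = 83 × 97; the expected running time is roughly O(n^(1/4)) gcd steps, which is why rho-based approaches are attractive starting points for factoring an RSA modulus of modest size.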
Transcription factors - Methods and protocols
Directory of Open Access Journals (Sweden)
CarloAlberto Redi
2011-03-01
A hearty welcome to Prof. Higgins' editorial toil: a necessary tool for those colleagues (young and old) fighting each day with the transcription factor they are involved with. In fact, the book is a full-coverage compendium of state-of-the-art papers dealing with practical techniques and theoretical concepts concerning transcription factors. Each of the twenty-four chapters is written by colleagues already working with one of the many transcription factors we have become acquainted with. For the sake of the reader the volume is divided into four parts: Part I is a brief (when compared to the other three!) introductory presentation of shuttling (i.e., transcription factor nuclear-cytoplasmic trafficking), achieved through three reviews of this biologically critical phenomenon. Part II (nine chapters) is devoted to the techniques needed to study nuclear translocation ...
DEFF Research Database (Denmark)
Chen, Zhe; Wang, L.; Yeh, T-H.
2009-01-01
Due to the recent spike in international oil prices and concern over global warming, the development and deployment of renewable energy have become among the most important energy policies around the globe. Currently, commercial wind turbine generators (WTGs) come in different capacities and hub heights. To fully capture wind energy, different wind farms (WFs) should select an adequate capacity of WTGs to effectively harvest wind energy and maximize their economic benefit. To establish a selection criterion, this paper first derives the equations for the capacity factor (CF) and pairing performance......
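The capacity factor referred to above is simply the actual energy produced divided by the energy that would be produced at rated power over the same period. A one-line sketch with hypothetical numbers (not the paper's derivation):

```python
def capacity_factor(energy_mwh, rated_mw, hours):
    """CF = actual energy / (rated power x elapsed time)."""
    return energy_mwh / (rated_mw * hours)

# Hypothetical example: a 2 MW WTG producing 5256 MWh over one year (8760 h).
cf = capacity_factor(5256.0, 2.0, 8760.0)  # -> 0.30
```

A WTG whose rated wind speed matches the local wind regime well has a higher CF, which is the basis of the selection criterion the paper develops.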
Energy Technology Data Exchange (ETDEWEB)
Lee, Suk Jun [Dept. of Biomedical Laboratory Science, College of Health Science, Cheongju University, Cheongju (Korea, Republic of); Yu, Seung Man [Dept. of Radiological Science, College of Health Science, Gimcheon University, Gimcheon (Korea, Republic of)
2016-06-15
The purpose of this study was to examine the measurement error factors in the AMARES algorithm of the jMRUI method for quantitative magnetic resonance spectroscopy (MRS) analysis, comparing skilled and unskilled observers, and to identify the reasons for inter-observer differences. A point-resolved spectroscopy sequence was used to acquire magnetic resonance spectroscopy data from the liver of a 10-week-old male Sprague-Dawley rat. The ratio of the methylene protons ((-CH2-)n) at 1.3 ppm to the water proton (H2O) at 4.7 ppm was calculated with the LCModel software for use as reference data. Seven unskilled observers calculated total lipid (methylene/water) using the jMRUI AMARES technique twice, once every week, and interclass correlation coefficient (ICC) statistical analysis was conducted with SPSS software. The Cronbach's alpha value for inter-observer reliability (ICC) was less than 0.1. The average total lipid value of the seven observers (0.096 ± 0.038) was 50% higher than the LCModel reference value. The jMRUI AMARES analysis method needs to minimize the presence of residual metabolites, by identifying the metabolite MRS profile, in order to obtain the same results as the LCModel.
Directory of Open Access Journals (Sweden)
Terezinha Ferreira de Oliveira
2006-12-01
The present work uses multivariate statistical analysis to establish the main sources of error in Quantitative Phase Analysis (QPA) using the Rietveld method. The quantitative determination of crystalline phases using X-ray powder diffraction is a complex measurement process whose results are influenced by several factors. Ternary mixtures of Al2O3, MgO and NiO were prepared under controlled conditions and the diffraction patterns were obtained using the Bragg-Brentano geometric arrangement. It was possible to establish four sources of critical variation: the experimental absorption and the scale factor of NiO, which is the phase with the greatest linear absorption coefficient in the ternary mixture; the instrumental characteristics, represented by mechanical errors of the goniometer and sample displacement; the other two phases (Al2O3 and MgO); and the temperature and relative humidity of the air in the laboratory. These error sources severely impair QPA with the Rietveld method, so it is necessary to control them during the measurement procedure.
Institute of Scientific and Technical Information of China (English)
QIU Li-min; LIU Miao; WANG Ju; ZHANG Sheng-nan; FANG Chun-sheng
2012-01-01
In order to identify the day and night pollution sources of PM10 in the ambient air of Longyan City, the authors analyzed the elemental composition of respirable particulate matter in day and night ambient air samples, and of various pollution sources, collected in January 2010 in Longyan, using inductively coupled plasma-mass spectrometry (ICP-MS). A chemical mass balance (CMB) model and a factor analysis (FA) method were then applied to comparatively study the inorganic components in the source and receptor samples. The factor analysis shows that the major daytime sources in Longyan City, China were road dust, waste incineration, and mixed sources containing automobile exhaust, soil dust/secondary dust and coal dust; at night there were two major pollution sources, soil dust and mixed sources of automobile exhaust and secondary dust. The CMB results show that the major daytime sources were secondary dust, automobile exhaust and road dust, and the major nighttime sources were secondary dust, soil dust and automobile exhaust. The results of the two methods are similar to each other and will guide planning to control the PM10 pollution sources in Longyan.
PROGNOSTIC FACTOR ANALYSIS FOR STAGE I RECTAL CANCER
Institute of Scientific and Technical Information of China (English)
武爱文; 顾晋; 薛钟麒; 王怡; 徐光炜
2001-01-01
Objective: to explore the death-related factors of stage I rectal cancer patients. Methods: 89 stage I rectal cancer patients treated between 1985 and 2000 were retrospectively studied for prognostic factors. Factors including age, gender, tumor size, circumferential occupation, gross type, pathological type, depth of tumor invasion, surgical procedure, adjuvant chemotherapy and postoperative complications were entered into a Cox multivariate analysis (forward procedure) using SPSS software (version 10.0). Results: multivariate analysis demonstrated that muscular invasion was an independent negative prognostic factor for stage I rectal cancer patients (P=0.003). Conclusion: muscular invasion is a negative prognostic factor for stage I rectal cancer patients.
Factor analysis of multivariate data
Digital Repository Service at National Institute of Oceanography (India)
Fernandes, A.A.; Mahadevan, R.
A brief introduction to factor analysis is presented. A FORTRAN program, which can perform Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix, is presented in Appendix B. This computer program uses...
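The R-mode and Q-mode analyses mentioned above both fall out of one singular value decomposition of the centered data matrix. The sketch below is in Python rather than the program's FORTRAN, using random placeholder data:

```python
import numpy as np

# Random placeholder data: 20 samples (rows) by 5 variables (columns)
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
n = len(X)
Xc = X - X.mean(axis=0)                      # column-centered data matrix

# One SVD yields both analyses: R-mode loadings live in the right singular
# vectors (variables), Q-mode in the left singular vectors (samples)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T * s / np.sqrt(n - 1)         # R-mode factor loadings
scores = U * np.sqrt(n - 1)                  # factor scores for the samples

# Keeping all factors reproduces the sample covariance matrix exactly;
# truncating to the leading factors gives the usual low-rank approximation
cov_rebuilt = loadings @ loadings.T
```

Retaining only the first few columns of `loadings` and `scores` gives the reduced factor model.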
Factor Analysis of Intern Effectiveness
Womack, Sid T.; Hannah, Shellie Louise; Bell, Columbus David
2012-01-01
Four factors in teaching intern effectiveness, as measured by a Praxis III-similar instrument, were found among observational data of teaching interns during the 2010 spring semester. Those factors were lesson planning, teacher/student reflection, fairness & safe environment, and professionalism/efficacy. This factor analysis was as much of a…
Factor analysis and missing data
Kamakura, WA; Wedel, M
2000-01-01
The authors study the estimation of factor models and the imputation of missing data and propose an approach that provides direct estimates of factor weights without the replacement of missing data with imputed values. First, the approach is useful in applications of factor analysis in the presence
Jin, H.; Shinotsuka, H.; Yoshikawa, H.; Iwai, H.; Tanuma, S.; Tougaard, S.
2010-04-01
The energy loss functions (ELFs) and optical constants of Si and SiO2 were obtained from quantitative analysis of reflection electron energy loss spectroscopy (REELS) by a new approach. In order to obtain the ELF, which is directly related to the optical constants, we measured a series of angular- and energy-dependent REELS spectra for Si and SiO2. The λ(E)K(ΔE) spectra, which are the product of the inelastic mean free path (IMFP) and the differential inverse IMFP, were obtained from the measured REELS spectra. We used the factor analysis (FA) method to analyze the series of λ(E)K(ΔE) spectra for various emission angles at fixed primary beam energy, separating the surface-loss and bulk-loss components. The extracted bulk-loss components make it possible to obtain the ELFs of Si and SiO2, which were checked against the oscillator-strength sum rule and the perfect-screening sum rule. The real part of the reciprocal of the complex dielectric function was determined by Kramers-Kronig analysis of the ELFs. Subsequently, the optical constants of Si and SiO2 were calculated. The resulting optical constants, in terms of the refractive index and the extinction coefficient, for Si and SiO2 are in good agreement with Palik's reference data. The results demonstrate the general applicability of FA as an efficient method to obtain the bulk ELF and to determine optical properties from REELS measurements.
DEFF Research Database (Denmark)
Olivarius, Signe
While increasing evidence appoints diverse types of RNA as key players in the regulatory networks underlying cellular differentiation and metabolism, the potential functions of thousands of conserved RNA structures encoded in mammalian genomes remain to be determined. Since the functions of most...... RNAs rely on interactions with proteins, the establishment of protein-binding profiles is essential for the characterization of RNAs. Aiming to facilitate RNA analysis, this thesis introduces proteomics- as well as transcriptomics-based methods for the functional characterization of RNA. First, RNA...... of the transcriptome, 5’ end capture of RNA is combined with next-generation sequencing for high-throughput quantitative assessment of transcription start sites by two different methods. The methods presented here allow for functional investigation of coding as well as noncoding RNA and contribute to future......
Multiple factor analysis by example using R
Pagès, Jérôme
2014-01-01
Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR). The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The
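The core of MFA can be sketched in a few lines: each group of variables is scaled by its first singular value so that no group dominates, and a global PCA is then run on the concatenated table. This Python sketch stands in for the book's FactoMineR workflow and uses random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
G1 = rng.normal(size=(n, 3))        # first group of variables (placeholder)
G2 = rng.normal(size=(n, 4))        # second group of variables (placeholder)

def first_singular(G):
    """First singular value of the centered group table."""
    Gc = G - G.mean(axis=0)
    return np.linalg.svd(Gc, compute_uv=False)[0]

# MFA: divide each centered group by its first singular value so that no
# single group dominates, then run one global PCA on the concatenated table
X = np.hstack([(G - G.mean(axis=0)) / first_singular(G) for G in (G1, G2)])
U, s, Vt = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * s[:2]           # global factor coordinates of individuals
print(coords.shape)
```

After scaling, every group's first singular value equals one, which is the balancing property MFA relies on.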
Synthesizing Regression Results: A Factored Likelihood Method
Wu, Meng-Jia; Becker, Betsy Jane
2013-01-01
Regression methods are widely used by researchers in many fields, yet methods for synthesizing regression results are scarce. This study proposes using a factored likelihood method, originally developed to handle missing data, to appropriately synthesize regression models involving different predictors. This method uses the correlations reported…
SWOT ANALYSIS ON SAMPLING METHOD
Directory of Open Access Journals (Sweden)
CHIS ANCA OANA
2014-07-01
Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As an audit technique in wide use, in both its statistical and nonstatistical forms, the method is very important for auditors. It should be applied correctly for a fair view of financial statements and to satisfy the needs of all financial users, and in order to be applied correctly it must be understood by all its users, mainly auditors. Otherwise, incorrect application risks loss of reputation and discredit, litigation and even imprisonment. Since there is no unified practice and methodology for applying the technique, the risk of applying it incorrectly is quite high. SWOT analysis is a technique that reveals strengths, weaknesses, opportunities and threats. We applied SWOT analysis to the sampling method from the perspective of three players: the audit company, the audited entity and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying and understanding it. Being widely used as an audit method and being a factor in a correct audit opinion, the sampling method’s advantages, disadvantages, threats and opportunities must be understood by auditors.
Statistical methods for bioimpedance analysis
Directory of Open Access Journals (Sweden)
Christian Tronstad
2014-04-01
This paper gives a basic overview of relevant statistical methods for the analysis of bioimpedance measurements, aiming to answer questions such as: How do I begin planning an experiment? How many measurements do I need to take? How do I deal with large amounts of frequency-sweep data? Which statistical test should I use, and how do I validate my results? Beginning with the hypothesis and the research design, the methodological framework for making inferences based on measurements and statistical analysis is explained. This is followed by a brief discussion of correlated measurements and data reduction, before an overview is given of statistical methods for comparison of groups, factor analysis, association, regression and prediction, explained in the context of bioimpedance research. The last chapter is dedicated to the validation of a new method by different measures of performance. A flowchart is presented for the selection of a statistical method, and a table gives an overview of the most important measures of performance when evaluating new measurement technology.
Factors Influencing Acceptance Of Contraceptive Methods
Directory of Open Access Journals (Sweden)
Anita Gupta
1997-04-01
Research Problem: What are the factors influencing acceptance of contraceptive methods? Objective: To study the determinants influencing contraceptive acceptance. Study design: Population-based cross-sectional study. Setting: Rural area of East Delhi. Participants: Married women in the reproductive age group. Sample: A stratified sampling technique was used to draw the sample. Sample size: 328 married women of reproductive age. Study variables: Socio-economic status, type of contraceptive, family size, male child. Outcome variables: Acceptance of contraceptives. Statistical analysis: By proportions. Results: Prevalence of contraceptive use at the time of data collection was 40.5%. Tubectomy and vasectomy were the most commonly used methods (59.4%, n = 133). The educational status of the women positively influenced contraceptive acceptance, but income did not. Desire for more children was the single most important deterrent to accepting contraception. Recommendations: (i) Traditional methods of contraception should be given more attention. (ii) Couples should be brought into the contraceptive-use net at an early stage of marriage.
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the kth eigenvalue for sample data to the kth eigenvalue for generated data sets, conditioned on k-…
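Traditional parallel analysis as described can be sketched directly: retain the factors whose sample eigenvalues exceed the average eigenvalues of same-sized random data. The following is a minimal numpy sketch of T-PA (not of the revised R-PA variant), with synthetic data built from two latent factors:

```python
import numpy as np

def parallel_analysis(X, n_sim=200, seed=0):
    """Retain factors whose sample eigenvalues exceed the average eigenvalues
    of correlation matrices computed from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    sample = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    rand_mean = np.zeros(p)
    for _ in range(n_sim):
        R = rng.normal(size=(n, p))
        rand_mean += np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    rand_mean /= n_sim
    return int(np.sum(sample > rand_mean))

# Six variables built from two latent factors -> two factors should be kept
rng = np.random.default_rng(1)
f = rng.normal(size=(300, 2))
X = np.repeat(f, 3, axis=1) + 0.3 * rng.normal(size=(300, 6))
print(parallel_analysis(X))
```

A percentile (e.g. 95th) of the random eigenvalues is often used instead of the mean; that variation only changes the `rand_mean` aggregation.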
New implementations of the 2-factor method
Izmailov, A. F.
2015-06-01
The so-called 2-factor method was designed for finding singular solutions to nonlinear equations. New ways of implementing this method are proposed. So far, the known variants of the method have used a very laborious iteration whose implementation requires that a singular value decomposition be calculated for the derivative of the equation at hand. The new, economical implementation is based on Gaussian elimination with pivoting. In addition, the potential for globalizing the convergence of the method is examined. Together, the proposed tools convert the conceptual sketch of the 2-factor method into a truly practical algorithm.
Collaborative Recommendation Method Based on Tags and Factor Analysis
Institute of Scientific and Technical Information of China (English)
蔡国永; 吕瑞; 樊永显
2015-01-01
Recommending items (or information) based on the historical behavior of groups in online communities is a current research hotspot, and all traditional recommendation algorithms face the challenge of data sparseness. Addressing the limitations of knowledge representation in traditional recommendation algorithms, a tag-system-based representation of user-behavior knowledge is proposed: statistics of a user's historical behavior on items are converted into statistics of the user's behavior on item tags, which alleviates the data sparseness. To reduce the computational complexity caused by the high dimensionality of tags, a factor analysis method is adopted to extract latent, important and stable factor vectors that ultimately represent each user's historical behavior, and the similarity of user behavior is measured on these factor vectors. Finally, following the idea of collaborative filtering, a new collaborative recommendation method is given. Extensive comparative experiments on real-world data sets show that the method consistently maintains higher and more stable recommendation accuracy when handling sparse data sets.
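The pipeline above (user-tag counts, factor reduction, similarity on latent factors) can be sketched as follows. Truncated SVD stands in here for the paper's factor analysis step, and the counts are invented:

```python
import numpy as np

# Hypothetical user-by-tag counts (rows: users, columns: tags), obtained by
# summing each user's item interactions over the items' tags; invented values
T = np.array([[5.0, 4.0, 0.0, 0.0],
              [4.0, 5.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 4.0],
              [0.0, 0.0, 4.0, 5.0]])

# Truncated SVD stands in for the factor analysis step: keep a few latent
# factors to represent each user's behavior compactly
U, s, Vt = np.linalg.svd(T - T.mean(axis=0), full_matrices=False)
k = 2
Z = U[:, :k] * s[:k]                        # users in latent-factor space

# Cosine similarity between users on the latent factors, for use in
# neighborhood-based collaborative filtering
Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
S = Zn @ Zn.T
print(np.round(S, 2))
```

Users 0 and 1 share tag interests, so their latent-factor similarity exceeds their similarity to users 2 and 3.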
Institute of Scientific and Technical Information of China (English)
郑雪峰; 邹长武; 印红玲
2011-01-01
The cluster analysis and factor analysis methods were successfully used to solve the problem of multicollinearity in the CMB (chemical mass balance) model when resolving mixed dust sources. First, cluster analysis was used to classify the emission sources by the strength of their collinearity; then the main factors of the strongly collinear class of sources (the dust-type sources) were extracted by principal component analysis. These main factors, together with the other independent sources, were brought into the CMB model for calculation. Finally, the main-factor contributions were apportioned back to the individual sources of the mixed dust class, giving the contribution of each emission source. Comparison with other methods showed that the analytical results are realistic and the method is feasible.
Transforming Rubrics Using Factor Analysis
Baryla, Ed; Shelley, Gary; Trainor, William
2012-01-01
Student learning and program effectiveness is often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…
Exploratory factor analysis in Rehabilitation Psychology: a content analysis.
Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N
2014-11-01
Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
METHODS OF MAGNETOTELLURIC ANALYSIS
Magnetotelluric prospecting is a method of geophysical exploration that makes use of the fluctuations in the natural electric and magnetic fields...function of the conductivity structure of the earth’s substrata. This report describes some new methods for analyzing and interpreting magnetotelluric
Practical Considerations for Using Exploratory Factor Analysis in Educational Research
Beavers, Amy S.; Lounsbury, John W.; Richards, Jennifer K.; Huck, Schuyler W.; Skolits, Gary J.; Esquivel, Shelley L.
2013-01-01
The uses and methodology of factor analysis are widely debated and discussed, especially the issues of rotational use, methods of confirmatory factor analysis, and adequate sample size. The variety of perspectives and often conflicting opinions can lead to confusion among researchers about best practices for using factor analysis. The focus of the…
The Analysis of Xiamen Harbor Competitiveness Based on the Factor Analysis Method
Institute of Scientific and Technical Information of China (English)
林秋云
2015-01-01
The article takes Xiamen Harbor as the research object and applies principal component analysis, within the factor analysis method, to construct an index system for evaluating port competitiveness. The SPSS software is then used to analyze the competitiveness of Xiamen Harbor and its major competitors, Shanghai Harbor, Kaohsiung Harbor and Guangzhou Harbor, yielding a ranking of the four harbors from strongest to weakest. The result is helpful for choosing Xiamen Harbor's future development direction and formulating development measures.
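A composite competitiveness ranking of this kind can be sketched with numpy: standardize the indicators, run PCA on the correlation matrix, and weight the component scores by explained variance. The port names match the study, but the indicator values below are invented placeholders, not the paper's data:

```python
import numpy as np

# Hypothetical indicator matrix (rows: ports; columns: indicators such as
# cargo throughput, berth capacity, hinterland GDP) -- illustrative values only
ports = ["Xiamen", "Shanghai", "Kaohsiung", "Guangzhou"]
X = np.array([[ 8.0, 6.0, 7.0],
              [10.0, 9.0, 9.0],
              [ 6.0, 7.0, 6.0],
              [ 9.0, 7.0, 8.0]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize the indicators
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]              # sort components by variance
eigval, eigvec = eigval[order], eigvec[:, order]
eigvec *= np.where(eigvec.sum(axis=0) >= 0, 1.0, -1.0)  # fix sign convention

scores = Z @ eigvec                           # component (factor) scores
weights = np.maximum(eigval, 0) / np.maximum(eigval, 0).sum()
composite = scores @ weights                  # variance-weighted composite
ranking = [ports[i] for i in np.argsort(composite)[::-1]]
print(ranking)
```

The sign-fixing step matters: eigenvector signs are arbitrary, and an unfixed sign can silently invert a component's contribution to the composite score.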
Li, Xiongwei; Fu, Yangting; Li, Zheng; Ni, Weidou
2014-01-01
Successful quantitative measurement of the carbon content in coal using laser-induced breakdown spectroscopy (LIBS) suffers from relatively low precision and accuracy. In the present work, the spectrum standardization method was combined with the dominant-factor-based partial least squares (PLS) method to improve the measurement accuracy of carbon content in coal by LIBS. The combination model employed the spectrum standardization method to convert the carbon line intensity into a standard state in order to calculate the dominant carbon concentration more accurately, and then applied PLS with full-spectrum information to correct the residual errors. The combination model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrated that the combination model could further improve the measurement accuracy compared with both our previously established spectrum standardization model and the dominant-factor-based PLS model using spectral-area-normalized intensity for the dominant fa...
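The dominant-factor idea (fit a primary model on the main spectral line, then correct its residual with the remaining spectral information) can be sketched on simulated data. Plain least squares stands in below for the paper's PLS residual correction, and all variables and numbers are invented:

```python
import numpy as np

# Simulated data: a latent signal drives both the carbon line intensity and
# the reference carbon content; one extra feature contaminates the line
# (a stand-in for matrix effects). All names and numbers are invented.
rng = np.random.default_rng(2)
n = 60
t = rng.uniform(0.0, 1.0, n)
F = rng.normal(size=(n, 3))                        # other spectral features
line = t + 0.2 * F[:, 0]                           # carbon line intensity
carbon = 50.0 + 10.0 * t + 0.1 * rng.normal(size=n)

# Step 1: dominant-factor model -- carbon predicted from the carbon line alone
A = np.column_stack([line, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, carbon, rcond=None)
pred_dom = A @ coef

# Step 2: regress the residual on the remaining features (residual correction)
B = np.column_stack([F, np.ones(n)])
rcoef, *_ = np.linalg.lstsq(B, carbon - pred_dom, rcond=None)
pred = pred_dom + B @ rcoef

rmse = lambda e: float(np.sqrt(np.mean(e**2)))
print(rmse(carbon - pred_dom), rmse(carbon - pred))
```

Because the residual of the dominant model is largely explained by the contaminating feature, the corrected prediction has a clearly lower in-sample RMSE.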
Methods in algorithmic analysis
Dobrushkin, Vladimir A
2009-01-01
…helpful to any mathematics student who wishes to acquire a background in classical probability and analysis … This is a remarkably beautiful book that would be a pleasure for a student to read, or for a teacher to make into a year's course.-Harvey Cohn, Computing Reviews, May 2010
Chen, Jie; Hu, Jiangnan
2017-06-01
Industry 4.0 and lean production have become the focus of manufacturing. A current issue is to analyse the performance of assembly line balancing. This study focuses on distinguishing the factors that influence assembly line balancing. The one-way ANOVA method is applied to explore the significance of the distinguished factors, and a regression model is built to find the key points. The maximal task time (tmax), the quantity of tasks (n), and the degree of convergence of the precedence graph (conv) are critical for the performance of assembly line balancing. The conclusions will benefit lean production in manufacturing.
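The one-way ANOVA step can be sketched from first principles: compare between-group to within-group variance via the F statistic. The balancing-efficiency samples below are simulated placeholders, not the study's data:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over a list of 1-D sample arrays."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, N = len(groups), len(all_x)
    ss_between = sum(len(g) * (g.mean() - grand)**2 for g in groups)
    ss_within = sum(((g - g.mean())**2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Hypothetical line-balancing efficiencies for three levels of maximal task
# time (tmax); the data are simulated, not taken from the study
rng = np.random.default_rng(3)
low = 0.90 + 0.02 * rng.normal(size=12)
mid = 0.85 + 0.02 * rng.normal(size=12)
high = 0.78 + 0.02 * rng.normal(size=12)
F_stat = one_way_anova_F([low, mid, high])
print(round(F_stat, 1))
```

A large F relative to the F(k-1, N-k) reference distribution indicates the factor level significantly affects balancing performance.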
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
Schölkopf et al. introduce kernel PCA. The book by Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into a high- (even infinite-) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis......
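The kernel PCA step described above (implicit feature-space mapping, then linear analysis) reduces to an eigendecomposition of the doubly centered kernel matrix. A minimal numpy sketch with an RBF kernel and random placeholder data:

```python
import numpy as np

def kernel_pca(X, gamma=0.5, k=2):
    """Kernel PCA sketch with an RBF kernel and feature-space centering."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(axis=-1)
    K = np.exp(-gamma * sq)                      # RBF kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double centering
    eigval, eigvec = np.linalg.eigh(Kc)
    idx = np.argsort(eigval)[::-1][:k]           # leading components
    # projections of training points: eigenvectors scaled by sqrt(eigenvalue)
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 3))                     # placeholder data
Z = kernel_pca(X)
print(Z.shape)
```

With a linear kernel this reduces to ordinary PCA; the kernelized MAF analysis mentioned in the record follows the same pattern with a different eigenproblem.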
COMPETITIVE INTELLIGENCE ANALYSIS - SCENARIOS METHOD
Directory of Open Access Journals (Sweden)
Ivan Valeriu
2014-07-01
Keeping a company among the top performers in the relevant market depends not only on its ability to develop continually, sustainably and in a balanced manner, to the standards set by customers and the competition, but also on its ability to protect its strategic information and to know the competition's strategic information in advance. In addition, given that economic markets, regardless of their profile, enable interconnection not only among domestic companies but also between domestic and foreign companies, the issue of economic competition moves from national economies to the field of interest of regional and international economic organizations. The stake for each economic player is to keep ahead of the competition and to be always prepared to face market challenges. Each player therefore needs to know, as early as possible, how to react to others' strategies in terms of research, production and sales. If a competitor is planning to produce more and more cheaply, then the company must be prepared to counteract this move quickly. Competitive intelligence helps to evaluate the capabilities of competitors in the market, legally and ethically, and to develop response strategies. One of the main goals of competitive intelligence is early warning and the prevention of surprises that could have a major impact on a company's market share, reputation, turnover and profitability in the medium and long term. This paper presents some aspects of competitive intelligence, mainly in terms of information analysis and intelligence generation. The presentation is theoretical and addresses a structured method of information analysis - the scenarios method - in a version that combines several types of analysis in order to reveal the interconnections among the factors governing the activity of a company.
Socioeconomic Methods in Educational Analysis.
Weber, William H., III
This book explores the possibilities in a new approach to educational analysis--a fusion of methods drawn from economics, sociology, and social psychology. The author combines his explanation of socioeconomic analysis with the presentation of several examples that illustrate the application of his method to different analytical problems. The book…
Online Business Strategy Research Based on the Factor Analysis Method
Institute of Scientific and Technical Information of China (English)
刘李; 张龙; 冯琪威
2014-01-01
In today's society, online shopping has become an indispensable part of people's daily life, but many factors affect people's shopping behavior, so for online store operators there are many variables that influence store performance. In this paper a factor analysis model is established that groups the many variables influencing online store performance into three classes. Through the factor rotation method, the polynomial relation between each variable and the factor it belongs to is obtained, and factor scores are calculated. Based on how high or low each factor scores, store operators are guided to adjust their sales strategies accordingly so as to obtain higher operating returns.
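The factor rotation step used in studies like this one is typically varimax. The sketch below implements the standard SVD-based varimax iteration and applies it to a small invented loading matrix whose clean two-factor structure has been artificially mixed:

```python
import numpy as np

def varimax(L, n_iter=100, tol=1e-8):
    """Varimax rotation of a loading matrix (standard SVD-based iteration)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(n_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR**3 - LR @ np.diag((LR**2).sum(axis=0)) / p))
        R = u @ vt
        if s.sum() < var_old * (1 + tol):
            break
        var_old = s.sum()
    return L @ R

# Loadings with clean two-factor structure, artificially mixed by a 35-degree
# rotation; varimax should recover a simple structure (illustrative numbers)
L0 = np.array([[0.90, 0.10], [0.80, 0.15], [0.15, 0.85], [0.10, 0.90]])
theta = np.deg2rad(35.0)
mix = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
Lmix = L0 @ mix
Lrot = varimax(Lmix)
print(np.round(Lrot, 2))
```

Because the rotation is orthogonal, each variable's communality (row sum of squared loadings) is preserved; only the attribution of variance across factors changes.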
Energy Technology Data Exchange (ETDEWEB)
Dalsgaard Soerensen, J. [Aalborg Univ., Aalborg (Denmark); Friis-Hansen, P. [Technical Univ. Denmark, Lyngby (Denmark); Bloch, A.; Svejgaard Nielsen, J. [Ramboell, Esbjerg (Denmark)
2004-08-01
Different simple stochastic models for failure related to pushover collapse are investigated. Next, a method is proposed to estimate the reliability of real offshore jacket structures. The method is based on the Model Correction Factor Method and can be used very efficiently to estimate the reliability against total failure/collapse of jacket-type platforms with wave-in-deck loads. A realistic example is evaluated, and it is seen that it is possible to perform a probabilistic reliability analysis for the collapse of a jacket-type platform using the model correction factor method. The total number of deterministic, complicated, non-linear (RONJA) analyses is typically as low as 10. Such reliability analyses are recommended for practical applications, especially for cases with wave-in-deck loads, where the traditional RSR analyses give poor measures of the structural reliability. (au)
Institute of Scientific and Technical Information of China (English)
夏萌; 赵邦宏; 王俊芹
2015-01-01
The high default risk of farmer loans makes the "credit crunch" phenomenon serious among rural financial institutions. Based on a survey of rural credit cooperatives in Hebei Province, the farmers who borrowed from the credit cooperatives were divided into two categories according to whether they defaulted, and correlation analysis was used to identify the main factors affecting farmers' loan credit. The results show that a farmer's education level, the number of family laborers, the household's operating conditions and the household's credit reputation have a considerable influence on whether a loan defaults and are important factors affecting farmers' loan credit.
ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE
Directory of Open Access Journals (Sweden)
Carmen BOGHEAN
2013-12-01
Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and the interdependence between factors), which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of average labour productivity in agriculture, forestry and fishing. The analysis takes into account data concerning the economically active population and the gross value added in agriculture, forestry and fishing in Romania during 2008-2011. The decomposition of average labour productivity by the factors affecting it is conducted by means of the u-substitution method.
What Is Rotating in Exploratory Factor Analysis?
Directory of Open Access Journals (Sweden)
Jason W. Osborne
2015-01-01
Exploratory factor analysis (EFA) is one of the most commonly reported quantitative methodologies in the social sciences, yet much of the detail regarding what happens during an EFA remains unclear. The goal of this brief technical note is to explore what "rotation" is, what exactly is rotating, and why we use rotation when performing EFAs. Some commentary about the relative utility and desirability of different rotation methods concludes the narrative.
Probabilistic methods for rotordynamics analysis
Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.
1991-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
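The eigenvalue-based instability criterion described above can be illustrated with a brute-force Monte Carlo sketch for a single second-order oscillator with uncertain damping. This is a plain sampling toy (the paper's fast probability integration and adaptive importance sampling are not reproduced), and all parameter values are invented:

```python
import numpy as np

def unstable(m, c, k):
    """Companion-form eigenvalue test for m*x'' + c*x' + k*x = 0."""
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    return np.linalg.eigvals(A).real.max() > 0.0

# Monte Carlo probability of instability with uncertain damping (toy numbers:
# the mean damping is close to zero, so instability has sizeable probability)
rng = np.random.default_rng(5)
trials = 10_000
count = sum(unstable(1.0, rng.normal(0.05, 0.1), 4.0) for _ in range(trials))
p_inst = count / trials
print(round(p_inst, 3))
```

For this system the Routh-Hurwitz conditions reduce to c > 0 and k > 0, so the estimate approximates the probability that the sampled damping is negative.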
An efficient block variant of robust structured multifrontal factorization method
Institute of Scientific and Technical Information of China (English)
Zuo Xian-Yu; Mo Ze-Yao; Gu Tong-Xiang
2013-01-01
Based on the two-dimensional three-temperature (2D3T) radiation diffusion equations and their discrete system, and using the block diagonal structure of the three-temperature matrix, the reordering and symbolic decomposition parts of the RSMF method are replaced with corresponding block operations in order to improve solution efficiency. We call this block-form method the block RSMF (in brief, BRSMF) method. The new BRSMF method not only makes the reordering and symbolic decomposition more effective, but also keeps the cost of numerical factorization from increasing and preserves the precision of the solution very well. The theoretical analysis of the computational complexity of the new BRSMF method shows that its solution efficiency is higher than that of the original RSMF method. The numerical experiments also show that the new BRSMF method is more effective than the original RSMF method.
Nonlinear programming analysis and methods
Avriel, Mordecai
2012-01-01
This text provides an excellent bridge between principal theories and concepts and their practical implementation. Topics include convex programming, duality, generalized convexity, analysis of selected nonlinear programs, techniques for numerical solutions, and unconstrained optimization methods.
Cost Analysis: Methods and Realities.
Cummings, Martin M.
1989-01-01
Argues that librarians need to be concerned with cost analysis of library functions and services because, in the allocation of resources, decision makers will favor library managers who demonstrate understanding of the relationships between costs and productive outputs. Factors that should be included in a reliable scheme for cost accounting are…
Riou, Marine; Ball, Stephen; Williams, Teresa A; Whiteside, Austin; O'Halloran, Kay L; Bray, Janet; Perkins, Gavin D; Cameron, Peter; Fatovich, Daniel M; Inoue, Madoka; Bailey, Paul; Brink, Deon; Smith, Karen; Della, Phillip; Finn, Judith
2017-07-09
Emergency telephone calls placed by bystanders are crucial to the recognition of out-of-hospital cardiac arrest (OHCA), fast ambulance dispatch and initiation of early basic life support. Clear and efficient communication between caller and call-taker is essential to this time-critical emergency, yet few studies have investigated the impact that linguistic factors may have on the nature of the interaction and the resulting trajectory of the call. This research aims to provide a better understanding of communication factors impacting on the accuracy and timeliness of ambulance dispatch. A dataset of OHCA calls and their corresponding metadata will be analysed from an interdisciplinary perspective, combining linguistic analysis and health services research. The calls will be transcribed and coded for linguistic and interactional variables and then used to answer a series of research questions about the recognition of OHCA and the delivery of basic life-support instructions to bystanders. Linguistic analysis of calls will provide a deeper understanding of the interactional dynamics between caller and call-taker which may affect recognition and dispatch for OHCA. Findings from this research will translate into recommendations for modifications of the protocols for ambulance dispatch and provide directions for further research. The study has been approved by the Curtin University Human Research Ethics Committee (HR128/2013) and the St John Ambulance Western Australia Research Advisory Group. Findings will be published in peer-reviewed journals and communicated to key audiences, including ambulance dispatch professionals.
Nasution, Inggrita Gusti Sari; Muchtar, Yasmin Chairunnisa
2013-01-01
This research studies the factors which influence the business success of the small 'processed rattan' business. The data employed in the study are primary data from the period of July to August 2013, with 30 research observations obtained through the census method. The method of analysis used in the study is multiple linear regression. The results of the analysis showed that the factors of labor, innovation and promotion have a positive and significant influence on the business success of small busine...
Bayesian Methods for Statistical Analysis
Puza, Borek
2015-01-01
Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...
Directory of Open Access Journals (Sweden)
Liu Haijun
2015-12-01
Accurate flatness measurement of silicon wafers is greatly affected by the gravity-induced deflection (GID) of the wafers, especially for large and thin wafers. The three-point-support method is a preferred method for the measurement, in which the GID, uniquely determined by the positions of the supports, can be calculated and subtracted. The accurate calculation of GID is affected by the initial stress of the wafer and the positioning errors of the supports. In this paper, a finite element model (FEM) including the effect of initial stress was developed to calculate GID. The influence of the initial stress of the wafer on GID calculation was investigated and verified by experiment. A systematic study of the effects of positioning errors of the support balls and the wafer on GID calculation was conducted. The results showed that the effect of the initial stress cannot be neglected for ground wafers. The wafer positioning error and the circumferential error of the supports were the most influential factors, while the effect of the vertical positioning error on GID calculation was negligible.
Analysis of Precision of Activation Analysis Method
DEFF Research Database (Denmark)
Heydorn, Kaj; Nørgaard, K.
1973-01-01
The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials, is tested by the statistic T......, which is shown to be approximated by a χ2 distribution. Application of this test to the results of determinations of manganese in human serum by a method of established precision, led to the detection of airborne pollution of the serum during the sampling process. The subsequent improvement in sampling...... conditions was shown to give not only increased precision, but also improved accuracy of the results....
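The consistency check described above can be sketched numerically. This is a minimal illustration, not Heydorn and Nørgaard's exact formulation: the pooled form of T and the function names are assumptions, but the idea of comparing duplicate scatter against a stated a priori precision through a chi-squared distribution is the same.

```python
import numpy as np
from scipy.stats import chi2

def duplicate_precision_T(dup_a, dup_b, sigma):
    """T statistic comparing observed duplicate scatter with stated precision.

    dup_a, dup_b: paired duplicate results; sigma: the a priori standard
    deviation claimed by the analysis method for each determination.
    """
    dup_a, dup_b, sigma = map(np.asarray, (dup_a, dup_b, sigma))
    # Each duplicate pair contributes (x1 - x2)^2 / (2 sigma^2), which is
    # ~ chi-squared with 1 degree of freedom if the stated precision
    # accounts for all the observed variation.
    t = np.sum((dup_a - dup_b) ** 2 / (2.0 * sigma ** 2))
    dof = len(dup_a)
    p_value = chi2.sf(t, dof)  # small p: extra, unexplained variation
    return t, p_value
```

A small p-value here plays the role described in the abstract: it flags variation (such as sampling contamination) that the method's stated precision cannot explain.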
Human Factors Analysis in Software Engineering
Institute of Scientific and Technical Information of China (English)
Xu Ren-zuo; Ma Ruo-feng; Liu Li-na; Xiong Zhong-wei
2004-01-01
General human factors analysis examines human functions, effects and influence within a system. In the narrower sense used here, it analyzes human influence on the reliability of a system and includes traditional human reliability analysis, human error analysis, man-machine interface analysis, human character analysis, and others. Whether a software development project in software engineering succeeds or fails is ultimately determined by human factors. In this paper, we discuss the scope of human factors, demonstrate the importance of human factors analysis for software engineering by listing some instances, and finally offer a preliminary exploration of the mentality that a practitioner in software engineering should possess.
A factor analysis to detect factors influencing building national brand
Directory of Open Access Journals (Sweden)
Naser Azad
Full Text Available Developing a national brand is one of the most important issues in brand development. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors, and the sample has been drawn from two major automakers in Iran, Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is calculated as 0.84, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis yields six factors, including “cultural image of customers”, “exciting characteristics”, “competitive pricing strategies”, “perception image” and “previous perceptions”.
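Reliability figures like the Cronbach's alpha quoted above follow from a standard formula. The helper below is a sketch; the function name and the (respondents × items) data layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    # alpha = k/(k-1) * (1 - sum of item variances / total-scale variance)
    return n_items / (n_items - 1) * (1.0 - item_vars.sum() / total_var)
```

Perfectly parallel items give alpha = 1; values above roughly 0.7, as in the study, are conventionally taken as acceptable internal consistency.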
Measurement Bias Detection through Factor Analysis
Barendse, M. T.; Oort, F. J.; Werner, C. S.; Ligtvoet, R.; Schermelleh-Engel, K.
2012-01-01
Measurement bias is defined as a violation of measurement invariance, which can be investigated through multigroup factor analysis (MGFA), by testing across-group differences in intercepts (uniform bias) and factor loadings (nonuniform bias). Restricted factor analysis (RFA) can also be used to detect measurement bias. To also enable nonuniform…
Exploratory Factor Analysis of African Self-Consciousness Scale Scores
Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.
2012-01-01
This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…
Multigroup Confirmatory Factor Analysis: Locating the Invariant Referent Sets
French, Brian F.; Finch, W. Holmes
2008-01-01
Multigroup confirmatory factor analysis (MCFA) is a popular method for the examination of measurement invariance and specifically, factor invariance. Recent research has begun to focus on using MCFA to detect invariance for test items. MCFA requires certain parameters (e.g., factor loadings) to be constrained for model identification, which are…
EXPLORATORY FACTOR ANALYSIS (EFA) IN CONSUMER BEHAVIOR AND MARKETING RESEARCH
Directory of Open Access Journals (Sweden)
Marcos Pascual Soler
2012-06-01
Full Text Available Exploratory Factor Analysis (EFA) is one of the most widely used statistical procedures in social research. The main objective of this work is to describe the most common practices used by researchers in the consumer behavior and marketing area. Through a literature review methodology, the practices of EFA in five consumer behavior and marketing journals (2000-2010) were analyzed. Then, the choices made by the researchers concerning factor model, retention criteria, rotation, factor interpretation and other issues relevant to factor analysis were analyzed. The results suggest that researchers routinely conduct analyses using questionable methods. Suggestions for improving the use of factor analysis and the reporting of results are presented, and a checklist (Exploratory Factor Analysis Checklist, EFAC) is provided to help editors, reviewers, and authors improve the reporting of exploratory factor analysis.
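One of the retention criteria such surveys examine, the eigenvalue-greater-than-one (Kaiser) rule, can be sketched as follows. The function name and data layout are assumptions for illustration; applying the rule to the correlation matrix is the conventional practice the survey refers to.

```python
import numpy as np

def kaiser_retained(data):
    """Number of factors retained under the eigenvalue-greater-than-one rule.

    data: (n_observations, n_variables) matrix. The rule is applied to the
    eigenvalues of the correlation matrix, as is conventional in EFA.
    """
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigenvalues = np.linalg.eigvalsh(corr)
    return int(np.sum(eigenvalues > 1.0))
```

On data with two strong latent dimensions, the rule recovers two factors; part of the survey's point is that relying on this rule alone is one of the questionable routine practices.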
Factor analysis identifies subgroups of constipation
Institute of Scientific and Technical Information of China (English)
Philip G Dinning; Mike Jones; Linda Hunt; Sergio E Fuentealba; Jamshid Kalanter; Denis W King; David Z Lubowski; Nicholas J Talley; Ian J Cook
2011-01-01
AIM: To determine whether distinct symptom groupings exist in a constipated population and whether such groupings correlate with quantifiable pathophysiological measures of colonic dysfunction. METHODS: One hundred and ninety-one patients presenting to a gastroenterology clinic with constipation and 32 constipated patients responding to a newspaper advertisement completed a 53-item, wide-ranging self-report questionnaire. One hundred of these patients had colonic transit measured scintigraphically. Factor analysis determined whether constipation-related symptoms grouped into distinct aspects of symptomatology. Cluster analysis was used to determine whether individual patients naturally group into distinct subtypes. RESULTS: Cluster analysis yielded a four-cluster solution, with the presence or absence of pain and laxative unresponsiveness providing the main descriptors. Among all clusters there was a considerable proportion of patients with demonstrable delayed colonic transit, positive irritable bowel syndrome criteria and regular stool frequency. The majority of patients with these characteristics also reported regular laxative use. CONCLUSION: Factor analysis identified four constipation subgroups, based on severity and laxative unresponsiveness, in a constipated population. However, clear stratification into clinically identifiable groups remains imprecise.
Full Information Item Factor Analysis of the FCI
Hagedorn, Eric
2010-02-01
Traditional factor analytical methods, principal factors or principal components analysis, are inappropriate techniques for analyzing dichotomously scored responses to standardized tests or concept inventories because they lead to artifactual factors often referred to as "difficulty factors." Full information item factor analysis (Bock, Gibbons and Muraki, 1988), based on Thurstone's multiple factor model and calculated using marginal maximum likelihood estimation, is an appropriate technique for such analyses. Force Concept Inventory (Hestenes, Wells and Swackhamer, 1992) data from 1582 university students completing an introductory physics course were analyzed using the full information item factor analysis software TESTFACT v. 4. Analyzing the statistical significance of successive factors added to the model, using chi-squared statistics, led to a six-factor model interpretable in terms of the conceptual dimensions of the FCI.
Note on Integer Factoring Methods IV
Carella, N. A.
2008-01-01
This note continues the theoretical development of deterministic integer factorization algorithms based on systems of polynomial equations. The main result establishes a new deterministic time complexity benchmark in integer factorization.
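For context, the simplest deterministic factorization method, trial division, runs in O(√n) divisions; it is the classical baseline that faster deterministic algorithms aim to beat. A minimal sketch (not from the note itself):

```python
def trial_division(n):
    """Deterministic integer factorization by trial division.

    Returns the prime factors of n in nondecreasing order, using at most
    O(sqrt(n)) trial divisors.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # strip out every power of d
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is prime
        factors.append(n)
    return factors
```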
A comparison of computational methods for identifying virulence factors.
Directory of Open Access Journals (Sweden)
Lu-Lu Zheng
Full Text Available Bacterial pathogens continue to threaten public health worldwide today. Identification of bacterial virulence factors can help to find novel drug/vaccine targets against pathogenicity. It can also help to reveal the mechanisms of the related diseases at the molecular level. With the explosive growth in protein sequences generated in the postgenomic age, it is highly desired to develop computational methods for rapidly and effectively identifying virulence factors according to their sequence information alone. In this study, based on the protein-protein interaction networks from the STRING database, a novel network-based method was proposed for identifying the virulence factors in the proteomes of UPEC 536, UPEC CFT073, P. aeruginosa PAO1, L. pneumophila Philadelphia 1, C. jejuni NCTC 11168 and M. tuberculosis H37Rv. Evaluated on the same benchmark datasets derived from the aforementioned species, the identification accuracies achieved by the network-based method were around 0.9, significantly higher than those by the sequence-based methods such as BLAST, feature selection and VirulentPred. Further analysis showed that the functional associations such as the gene neighborhood and co-occurrence were the primary associations between these virulence factors in the STRING database. The high success rates indicate that the network-based method is quite promising. The novel approach holds high potential for identifying virulence factors in many other various organisms as well because it can be easily extended to identify the virulence factors in many other bacterial species, as long as the relevant significant statistical data are available for them.
Analysis methods for airborne radioactivity
Ala-Heikkilä, Jarmo J
2008-01-01
High-resolution gamma-ray spectrometry is an analysis method well suitable for monitoring airborne radioactivity. Many of the natural radionuclides and a majority of anthropogenic nuclides are prominent gamma-ray emitters. With gamma-ray spectrometry different radionuclides are readily observed at minute concentrations that are far from health hazards. The gamma-ray spectrometric analyses applied in air monitoring programmes can be divided into particulate measurements and gas measurements. I...
Nonlinear programming analysis and methods
Avriel, Mordecai
2003-01-01
Comprehensive and complete, this overview provides a single-volume treatment of key algorithms and theories. The author provides clear explanations of all theoretical aspects, with rigorous proof of most results. The two-part treatment begins with the derivation of optimality conditions and discussions of convex programming, duality, generalized convexity, and analysis of selected nonlinear programs. The second part concerns techniques for numerical solutions and unconstrained optimization methods, and it presents commonly used algorithms for constrained nonlinear optimization problems. This g
Statistical inference of Minimum Rank Factor Analysis
Shapiro, A; Ten Berge, JMF
2002-01-01
For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the
Gait analysis methods in rehabilitation
Directory of Open Access Journals (Sweden)
Baker Richard
2006-03-01
Full Text Available Abstract Introduction Brand's four reasons for clinical tests and his analysis of the characteristics of valid biomechanical tests for use in orthopaedics are taken as a basis for determining what methodologies are required for gait analysis in a clinical rehabilitation context. Measurement methods in clinical gait analysis The state of the art of optical systems capable of measuring the positions of retro-reflective markers placed on the skin is sufficiently advanced that they are probably no longer a significant source of error in clinical gait analysis. Determining the anthropometry of the subject and compensating for soft tissue movement in relation to the underlying bones are now the principal problems. Techniques for using functional tests to determine joint centres and axes of rotation are starting to be used successfully. Probably the last great challenge for optical systems is in using computational techniques to compensate for soft tissue movement. In the long term it is possible that direct imaging of bones and joints in three dimensions (using MRI or fluoroscopy) may replace marker-based systems. Methods for interpreting gait analysis data There is still no accepted general theory of why we walk the way we do. In the absence of this, many explanations of walking address the mechanisms by which specific movements are achieved by particular muscles. A whole new methodology is developing to determine the functions of individual muscles. This needs further development and validation. A particular requirement is for subject-specific models incorporating 3-dimensional imaging data of the musculo-skeletal anatomy with kinematic and kinetic data. Methods for understanding the effects of intervention Clinical gait analysis is extremely limited if it does not allow clinicians to choose between alternative possible interventions or to predict outcomes. This can be achieved either by rigorously planned clinical trials or using
Correspondence factor analysis of steroid libraries.
Ojasoo, T; Raynaud, J P; Doré, J C
1995-06-01
The receptor binding of a library of 187 steroids to five steroid hormone receptors (estrogen, progestin, androgen, mineralocorticoid, and glucocorticoid) has been analyzed by correspondence factor analysis (CFA) in order to illustrate how the method could be used to derive structure-activity relationships from much larger libraries. CFA is a cartographic multivariate technique that provides objective distribution maps of the data after reduction and filtering of redundant information and noise. The key to the analysis of very complex data tables is the formation of barycenters (steroids with one or more common structural fragments) that can be introduced into CFA analyses as mathematical models. This is possible in CFA because the method uses χ²-metrics and is based on the distributional equivalence of the rows and columns of the transformed data matrix. We have thus demonstrated, in purely objective statistical terms, the general conclusions on the specificity of various functional and other groups derived from prior analyses by expert intuition and reasoning. A finer analysis was made of a series of A-ring phenols showing the high degree of glucocorticoid receptor and progesterone receptor binding that can be generated by certain C-11 substitutions despite the presence of the phenolic A-ring characteristic of estrogen receptor-specific binding.
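The χ²-metric decomposition at the heart of correspondence analysis can be sketched as an SVD of standardized residuals. This is generic textbook CA, not the paper's implementation, and the barycenter extension described above is omitted.

```python
import numpy as np

def correspondence_analysis(table):
    """Row and column factor coordinates from a contingency-style table.

    Standard chi-squared-metric decomposition: SVD of the matrix of
    standardized residuals under the independence model.
    """
    P = np.asarray(table, dtype=float)
    P = P / P.sum()                       # correspondence matrix
    r = P.sum(axis=1)                     # row masses
    c = P.sum(axis=0)                     # column masses
    # standardized residuals: (observed - expected) / sqrt(expected)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sing, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sing) / np.sqrt(r)[:, None]    # row principal coordinates
    cols = (Vt.T * sing) / np.sqrt(c)[:, None] # column principal coordinates
    inertia = sing ** 2                        # chi-squared / n, by axis
    return rows, cols, inertia
```

The total inertia returned equals the Pearson chi-squared statistic divided by the grand total, which is the distributional-equivalence property the abstract alludes to.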
Factorization Method in Oscillator with the Aharonov-Casher System
Directory of Open Access Journals (Sweden)
J. Sadeghi
2014-01-01
mathematical foundation about factorization method. The factorization method helps us to obtain the energy spectrum and general wave function for the corresponding system in some spin condition. The factorization method leads us to obtain the raising and lowering operators for the Aharonov-Casher system. The corresponding operators give us the generators of the algebra.
Forns, Joan; Mandal, Siddhartha; Iszatt, Nina; Polder, Anuschka; Thomsen, Cathrine; Lyche, Jan Ludvig; Stigum, Hein; Vermeulen, Roel; Eggesbø, Merete
2016-01-01
BACKGROUND: The aim of this study was to assess the association between postnatal exposure to multiple persistent organic pollutants (POPs) measured in breast milk samples and early behavioral problems using statistical methods to deal with correlated exposure data. METHODS: We used data from the No
Housing price forecastability: A factor analysis
DEFF Research Database (Denmark)
Bork, Lasse; Møller, Stig Vinther
of the model stays high at longer horizons. The estimated factors are strongly statistically significant according to a bootstrap resampling method which takes into account that the factors are estimated regressors. The simple three-factor model also contains substantial out-of-sample predictive power...
A Second Generation Nonlinear Factor Analysis.
Etezadi-Amoli, Jamshid; McDonald, Roderick P.
1983-01-01
Nonlinear common factor models with polynomial regression functions, including interaction terms, are fitted by simultaneously estimating the factor loadings and common factor scores, using maximum likelihood and least squares methods. A Monte Carlo study gives support to a conjecture about the form of the distribution of the likelihood ratio…
Kernel Factor Analysis Algorithm with Varimax
Institute of Scientific and Technical Information of China (English)
Xia Guoen; Jin Weidong; Zhang Gexiang
2006-01-01
Kernel factor analysis (KFA) with varimax rotation was proposed using a Mercer kernel function, which maps data from the original space to a high-dimensional feature space, and was compared with kernel principal component analysis (KPCA). The results show that the best error rate in handwritten digit recognition by kernel factor analysis with varimax (4.2%) was superior to that of KPCA (4.4%), so KFA with varimax recognized handwritten digits more accurately.
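The kernel mapping shared by KFA and KPCA can be sketched with plain numpy. This shows only the kernel-space projection step; the varimax rotation that distinguishes KFA is omitted, and the RBF kernel choice, parameter values and function name are illustrative assumptions.

```python
import numpy as np

def rbf_kernel_features(X, n_components=2, gamma=0.5):
    """Project data onto leading components in an RBF kernel feature space.

    A numpy-only sketch of the kernel trick underlying both KPCA and
    kernel factor analysis.
    """
    X = np.asarray(X, dtype=float)
    sq = np.sum(X ** 2, axis=1)
    # RBF (Gaussian) kernel matrix from pairwise squared distances
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # double-center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)       # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.clip(vals, 0.0, None))  # projected scores
```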
Data Analysis Methods for Paleogenomics
DEFF Research Database (Denmark)
Avila Arcos, Maria del Carmen
, thanks to the introduction of NGS and the implementation of data analysis methods specific for each project. Chapters 1 to 3 have been published in peer-reviewed journals and Chapter 4 is currently in review. Chapter 5 consists of a manuscript describing initial results of an ongoing research project...... (Danmarks Grundforskningfond) 'Centre of Excellence in GeoGenetics' grant, with additional funding provided by the Danish Council for Independent Research 'Sapere Aude' programme. The thesis comprises five chapters, all of which represent different projects that involved the analysis of massive amounts...... of sequence data, generated using next-generation sequencing (NGS) technologies, from either forensic (Chapter 1) or ancient (Chapters 2-5) materials. These chapters present projects very different in nature, reflecting the diversity of questions that have become possible to address in the ancient DNA field...
Creative Uses of Factor Analysis in Psychotherapy Research: Past Examples and Future Possibilities.
Adams, James M.
Factor analysis is a statistical method of reducing a set of variables to a smaller number by finding similarities among them. This paper reviews the potential of factor analysis, focusing on exploratory factor analysis, in research on psychotherapy. Within the field of psychotherapy, the use of factor analysis can be classified into three groups. The first…
A factorization method for the classification of infrared spectra
Directory of Open Access Journals (Sweden)
Kammerer Bernd
2010-11-01
Full Text Available Abstract Background Bioinformatics data analysis often deals with additive mixtures of signals for which only class labels are known. Then, the overall goal is to estimate class related signals for data mining purposes. A convenient application is metabolic monitoring of patients using infrared spectroscopy. Within an infrared spectrum each single compound contributes quantitatively to the measurement. Results In this work, we propose a novel factorization technique for additive signal factorization that allows learning from classified samples. We define a composed loss function for this task and analytically derive a closed form equation such that training a model reduces to searching for an optimal threshold vector. Our experiments, carried out on synthetic and clinical data, show a sensitivity of up to 0.958 and specificity of up to 0.841 for a 15-class problem of disease classification. Using class and regression information in parallel, our algorithm outperforms linear SVM for training cases having many classes and few data. Conclusions The presented factorization method provides a simple and generative model and, therefore, represents a first step towards predictive factorization methods.
PERSONNEL DEMOTIVATING: THE REASONS, FACTORS, ELIMINATION METHODS
Kuznetsova Ekaterina Andreevna
2012-01-01
The motivation of personnel remains a leading link in an enterprise management system under any economic conditions. When creating a motivation system, it is important to track the extent of its impact on personnel productivity. A boomerang effect, which manifests itself as the demotivation of particular groups of personnel, is often observed. The article analyzes how demotivating factors manifest themselves at various stages of personnel's work, and the circle of the reasons bring...
Institute of Scientific and Technical Information of China (English)
郑爽英; 叶晖; 周刚; 刘学通
2015-01-01
Water network simulation is an important part of urban water system management, and correctly determining pipe friction factors directly affects simulation accuracy. Combining the hydraulic model of a water network with inverse-analysis methods, an inverse-analysis model for pipe friction factors was established, and a variable metric solution method with its calculation steps was given. Using measured data from an indoor water network experiment platform under a variety of working conditions, the pipe friction factors were calculated according to the inverse-analysis model and its solution method. Based on the calculated friction factors, the node heads and pipe flows of the network were computed with the EPANET software and compared with the values measured by pressure gauges and flow meters. An actual urban water distribution network was then used to verify the accuracy of the inverse-analysis model.
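The inverse estimation described above can be sketched with a toy single-pipe Darcy-Weisbach model fitted by BFGS, a variable metric method of the family named in the abstract. The pipe dimensions, data and function names are hypothetical, not the paper's experimental setup.

```python
import numpy as np
from scipy.optimize import minimize

G = 9.81  # gravitational acceleration, m/s^2

def head_loss(f, L, D, v):
    """Darcy-Weisbach head loss for friction factor f (illustrative model)."""
    return f * (L / D) * v ** 2 / (2 * G)

def fit_friction_factor(v_obs, h_obs, L=100.0, D=0.1):
    """Recover the friction factor from measured velocity/head-loss pairs.

    Minimises squared residuals with BFGS, i.e. solves the inverse
    (anti-analysis) problem for a single hypothetical pipe.
    """
    v_obs, h_obs = np.asarray(v_obs), np.asarray(h_obs)
    loss = lambda p: np.sum((head_loss(p[0], L, D, v_obs) - h_obs) ** 2)
    res = minimize(loss, x0=[0.05], method="BFGS")
    return res.x[0]
```

A real network calibration fits many friction factors at once against node heads computed by a hydraulic solver such as EPANET; the structure of the least-squares inverse problem is the same.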
A survey on critical factors influencing new advertisement methods
Directory of Open Access Journals (Sweden)
Naser Azad
2013-02-01
Full Text Available Soft drink beverages are an important part of many people's diets, and many prefer a soft drink to water with dinner. This business model can therefore be considered one of the longest-lasting sectors, and there has been little change in these products for many years. However, new methods of advertisement play an important role in increasing market share. In this paper, we study the impact of new methods of advertisement on product development. The proposed study designs a questionnaire for one of the Iranian soft drink producers, consisting of 274 questions on a Likert scale, and uses factor analysis (FA) to analyze the results. The study selects 250 people who live in the city of Tehran, Iran, and Cronbach's alpha has been calculated as 0.88, which is well above the minimum desirable limit. According to our results, there were six important factors impacting product development: modern advertisement techniques, emotional impact, strategy of market leadership, pricing strategy, product life cycle and supply entity. The most important factor loadings in these six components include the impact of social values, persuading unaware and uninformed customers, the ability to monopolize production, improved pricing techniques, product life cycle and the negative impact of excessive advertisement.
Garcia, Jeanette M; Sirard, John R; Deutsch, Nancy L; Weltman, Arthur
2016-08-01
(1) Determine the association between adolescent moderate-to-vigorous physical activity (MVPA) and screen time with their nominated friends' behaviors and (2) explore potential mechanisms of friends' social influences on MVPA and screen time. Participants consisted of 152 adolescents (mean age: 14.5 years, 53 % female, 50 % high school, 80 % Caucasian). MVPA was measured with an Actigraph GT3X+ accelerometer. Demographic and psychosocial variables were assessed via questionnaires. Participants nominated up to 5 friends who completed MVPA and screen time questionnaires. A subset of adolescents (n = 108) participated in focus groups that examined potential mechanism of friends' influence on MVPA and screen time. Multiple regression analysis examined the association of demographic, psychological, and nominated friend variables with participants' MVPA and screen time. NVivo 10.0 was used to analyze qualitative data. Greater levels of friends' MVPA was associated with greater levels of MVPA in both males (p associated with greater levels of screen time in males (p = .04) while psychosocial variables, such as increased screen time enjoyment, were associated with increased screen time in females (p = .01). School level was not associated with either MVPA or screen time. Focus group data indicated that friends positively influenced participants' MVPA through engaging in activity with participants, verbal encouragement, and modeling of MVPA. All participants preferred to be active with friends rather than alone, however, females preferred activity with a close friend while males preferred to be active with a group. Enjoyment of MVPA was the most cited reason for engaging in MVPA with friends. The majority of participants reported friends not having an influence on screen time. Adolescents with active friends are more likely to be physically active and spend less time engaging in screen-based behaviors. Interventions to increase MVPA in youth could be designed to
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...
Multistructure Statistical Model Applied To Factor Analysis
Bentler, Peter M.
1976-01-01
A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)
Institute of Scientific and Technical Information of China (English)
赵守全; 赵常要; 朱兆荣
2016-01-01
Based on the sound wave transmission method, this article analyzes the factors that influence pile foundation testing, pointing out that the testing instrument, the material of the acoustic access tubes, the ambient temperature during testing, blocked tubes and non-parallel tubes all affect the test results. The degree and mechanism of each factor's influence on the results are then analyzed in depth. For example, for the acoustic-velocity anomalies caused by non-parallel access tubes, many correction methods exist, but each has limitations and drawbacks; this paper adopts a correction method based on the spatial positions of the access tubes and achieves a good correction effect. Corresponding treatments or measures are given to reduce the influence of these factors on the test results, thereby improving the reliability of sound wave transmission test results.
Institute of Scientific and Technical Information of China (English)
吴红; 常飞; 李玉平
2011-01-01
Aiming at the problem of graduate students' low intellectual property literacy, the paper combines qualitative and quantitative methods of analysis. It first uses a fishbone diagram to sort the social, college and individual factors that affect the cultivation of intellectual property literacy. It then converts the fishbone diagram into a hierarchical structure model, constructs judgment matrices through subjective evaluation, expert evaluation and comprehensive analysis, and carries out consistency tests. Finally, it calculates the weight of each factor's importance relative to the target problem and, through the overall ranking, identifies the key influencing factors, on which basis it offers constructive suggestions for effectively promoting the intellectual property literacy of graduate students.
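The judgment-matrix, weighting and consistency-test steps above are standard analytic hierarchy process (AHP) machinery; a minimal sketch under that assumption follows. The random index table and the CR < 0.1 acceptance threshold are Saaty's conventional values, not details given in the abstract.

```python
import numpy as np

# Saaty's Random Index values for the consistency ratio, n = 1..5
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(judgment):
    """Priority weights and consistency ratio from a pairwise judgment matrix.

    Weights are the principal eigenvector normalised to sum to one;
    CR < 0.1 is the usual acceptance threshold for the consistency test.
    """
    A = np.asarray(judgment, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)              # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    lam_max = vals[k].real
    ci = (lam_max - n) / (n - 1) if n > 1 else 0.0   # consistency index
    cr = ci / RI[n] if RI.get(n, 0.0) > 0 else 0.0   # consistency ratio
    return w, cr
```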
Signs and symptoms of acute mania: a factor analysis
Directory of Open Access Journals (Sweden)
de Silva Varuni A
2011-08-01
Full Text Available Abstract Background The major diagnostic classifications consider mania as a uni-dimensional illness. Factor analytic studies of acute mania are fewer than for schizophrenia and depression. Evidence from factor analysis suggests more categories or subtypes than are included in the classification systems. Studies have found that these factors can predict differences in treatment response and prognosis. Methods The sample included 131 patients consecutively admitted to an acute psychiatry unit over a period of one year, of whom 76 (58%) were male. The mean age was 44.05 years (SD = 15.6). Patients met International Classification of Diseases-10 (ICD-10) clinical diagnostic criteria for a manic episode. Patients with a diagnosis of mixed bipolar affective disorder were excluded. Participants were evaluated using the Young Mania Rating Scale (YMRS). Exploratory factor analysis (principal component analysis) was carried out, and factors with an eigenvalue > 1 were retained. The significance level for interpretation of factor loadings was 0.40. The unrotated component matrix identified five factors. Oblique rotation was then carried out to identify three factors which were clinically meaningful. Results Unrotated principal component analysis extracted five factors, which explained 65.36% of the total variance. Oblique rotation extracted three factors. Factor 1, corresponding to 'irritable mania', had significant loadings of irritability, increased motor activity/energy and disruptive aggressive behaviour. Factor 2, corresponding to 'elated mania', had significant loadings of elevated mood, language abnormalities/thought disorder, increased sexual interest and poor insight. Factor 3, corresponding to 'psychotic mania', had significant loadings of abnormalities in thought content, appearance, poor sleep and speech abnormalities. Conclusions Our findings identified three clinically meaningful factors corresponding to 'elated mania', 'irritable mania
Meta analysis of risk factors for colorectal cancer
Institute of Scientific and Technical Information of China (English)
Kun Chen; Jiong-Liang Qiu; Yang Zhang; Yu-Wan Zhao
2003-01-01
AIM: To study the risk factors for colorectal cancer in China. METHODS: A meta-analysis of the risk factors of colorectal cancer was conducted for 14 case-control studies, reviewing 14 reports within 13 years which included 5034 cases and 5205 controls. DerSimonian and Laird random effects models were used to process the results. RESULTS: Meta-analysis of the 14 studies demonstrated that proper physical activities and dietary fiber were protective factors (pooled OR<0.8), while fecal mucohemorrhage, chronic diarrhea and polyposis were highly associated with colorectal cancer (all pooled OR>4). The stratified results showed that the different OR values of some factors were due to geographic factors or different sources. CONCLUSION: Risks of colorectal cancer are significantly associated with histories of intestinal diseases or related symptoms, a high-lipid diet, emotional trauma and a family history of cancer. Suitable physical activities and dietary fiber are protective factors.
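The DerSimonian and Laird random effects pooling named in the methods can be sketched as follows. Recovering each study's variance from its 95% confidence limits on the log scale is a common convention and an assumption here, not a detail given in the abstract.

```python
import numpy as np

def dersimonian_laird_or(or_values, ci_low, ci_high):
    """Pool odds ratios with the DerSimonian-Laird random effects model.

    Study variances are recovered from 95% confidence limits on the log
    scale; returns the pooled OR and the between-study variance tau^2.
    """
    y = np.log(np.asarray(or_values, dtype=float))     # log odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    v = se ** 2
    w = 1.0 / v                                        # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2) # Cochran's Q
    k = len(y)
    # method-of-moments between-study variance, truncated at zero
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
    w_star = 1.0 / (v + tau2)                          # random-effects weights
    pooled = np.sum(w_star * y) / w_star.sum()
    return np.exp(pooled), tau2
```

With homogeneous studies tau² is zero and the result reduces to the fixed-effect pooled OR; heterogeneity inflates tau² and shifts weight toward smaller studies.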
Model correction factor method for system analysis
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Johannesen, Johannes M.
2000-01-01
several locally most central points exist without there being a simple geometric definition of the corresponding failure modes such as is the case for collapse mechanisms in rigid plastic hinge models for frame structures. Taking as simplified idealized model a model of similarity with the elaborate model...... but with clearly defined failure modes, the MCFM can be started from each idealized single-mode limit state in turn to identify a locally most central point on the elaborate limit state surface. Typically this procedure leads to a smaller number of locally most central failure points on the elaborate limit state...... surface than exist in the idealized model....
Analysis of Economic Factors Affecting Stock Market
Xie, Linyin
2010-01-01
This dissertation concentrates on the analysis of economic factors affecting the Chinese stock market by examining the relationship between the stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield and real estate total value. The stock market comprises fixed-interest stocks and equity shares; in this dissertation, the stock market is restricted to the equity market. The stock price in thi...
Institute of Scientific and Technical Information of China (English)
李文生
2011-01-01
The existing water quality evaluation methods have several disadvantages: too many indices are required, the relationships among the evaluation indices are neglected, and uncertainty-based evaluation models are abstruse in theory and complex in calculation. To address these, a comprehensive index evaluation method based on factor analysis is proposed. The method retains the features of factor analysis while, through computation of a comprehensive index, it can rank and grade the evaluation samples and analyze the pollution status and the main pollutants of water quality samples. Testing shows that the method evaluates water quality correctly. A case study of the Taizi River in the Liaoyang reach demonstrates that the method is intuitive, computationally simple, and an effective approach to water quality evaluation.
Buckling analysis of composite cylindrical shell using numerical analysis method
Energy Technology Data Exchange (ETDEWEB)
Jung, Hae Young; Bae, Won Byung [Pusan Nat' l Univ., Busan (Korea, Republic of); Cho, Jong Rae [Korea Maritime Univ., Busan (Korea, Republic of); Lee, Woo Hyung [Underwater Vehicle Research Center, Busan (Korea, Republic of)
2012-01-15
The objective of this paper is to predict the buckling pressure of a composite cylindrical shell using buckling formulas (ASME 2007, NASA SP 8007) and finite element analysis. The model in this study uses a stacking angle of [0/90]12t and USN 125 composite material. All specimens were made using a prepreg method. First, finite element analysis was conducted, and the results were verified through comparison with the hydrostatic pressure buckling experiment results. Second, the values obtained from the buckling formulas and the buckling pressure values obtained from the finite element analysis were compared as the stacking angle was changed in 5° increments from 20° to 90°. The linear and nonlinear results of the finite element analysis were consistent with the results of the experiment, with a safety factor of 0.85-1. Based on the above results, the ASME 2007 formula, a simplified version of the NASA SP 8007 formula, is regarded as a buckling formula that provides a reliable safety factor.
Institute of Scientific and Technical Information of China (English)
薛玉花
2015-01-01
Objective: To analyze the influencing factors and prevention methods of simple obesity in children. Methods: A cross-sectional study was conducted; a questionnaire survey was administered to the parents of 200 selected children with simple obesity. The survey covered the parents' general situation, the child's condition at birth, and current diet and exercise habits. Results: The influencing factors of simple obesity in children were genetic factors, birth weight, environmental factors, eating habits, and exercise habits. Conclusion: Many factors influence simple obesity in children. To prevent and control childhood obesity, education of parents should be strengthened to enhance their health awareness and ensure they fully understand the dangers of obesity, and parents should be urged to help children develop good eating and exercise habits.
Text mining factor analysis (TFA) in green tea patent data
Rahmawati, Sela; Suprijadi, Jadi; Zulhanif
2017-03-01
Factor analysis has become one of the most widely used multivariate statistical procedures in applied research endeavors across a multitude of domains. There are two main types of analyses based on factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Both EFA and CFA aim to model the relationships among a group of indicators with a latent variable, but they differ fundamentally in the a priori restrictions placed on the factor model. This method is applied to patent data in the green tea technology sector to determine the development of green tea technology in the world. Patent analysis is useful in identifying future technological trends in a specific field of technology. The patent database was obtained from the European Patent Organization (EPO). In this paper, the CFA model is applied to nominal data obtained from a presence-absence matrix; the CFA for nominal data is based on the tetrachoric correlation matrix. Meanwhile, the EFA model is applied to the titles from the dominant technology sector, which are first pre-processed using text mining.
Directory of Open Access Journals (Sweden)
Marcelo P.A. Fleck
1998-06-01
Full Text Available OBJECTIVE: There are several criteria for choosing the number of components to retain in principal components analysis. This choice can be made on the basis of arbitrary (e.g., the Kaiser criterion) or subjective (interpretable factors) criteria. This work presents the simulation criterion of Lébart and Dreyfus. METHOD: A matrix of random numbers is generated and a principal components analysis is performed on this matrix. The components extracted from such a data set represent a lower bound that must be exceeded for a component to be retained. As an example, a principal components analysis of the Hamilton depression rating scale (17 items) is performed on a sample of 130 patients. RESULTS AND CONCLUSIONS: The simulation method is compared with the Kaiser method. It is shown that the simulation method retains only the clinically significant components, unlike the Kaiser method.
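The simulation criterion described above can be sketched as follows. The data matrix here is a synthetic stand-in (not the Hamilton scale data), and using the mean random eigenvalue per rank as the cutoff is one common variant of the method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulation_cutoff(n, p, n_sim=200):
    """Eigenvalue cutoffs from PCA of random-number matrices (Lebart-Dreyfus style):
    the mean eigenvalue at each rank under pure noise."""
    eigs = np.empty((n_sim, p))
    for i in range(n_sim):
        x = rng.standard_normal((n, p))
        eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    return eigs.mean(axis=0)

# Hypothetical data: 130 subjects, 17 items forming two correlated item clusters
n, p = 130, 17
f = rng.standard_normal((n, 2))
load = np.zeros((p, 2)); load[:8, 0] = 0.7; load[8:, 1] = 0.7
x = f @ load.T + 0.6 * rng.standard_normal((n, p))

# Retain only components whose observed eigenvalue exceeds the noise cutoff
obs = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
retained = int(np.sum(obs > simulation_cutoff(n, p)))
print("components retained:", retained)
```

With two built-in clusters, the criterion typically retains two components, whereas the Kaiser rule (eigenvalue > 1) would often retain more.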
A Bayesian semiparametric factor analysis model for subtype identification.
Sun, Jiehuan; Warren, Joshua L; Zhao, Hongyu
2017-04-25
Disease subtype identification (clustering) is an important problem in biomedical research. Gene expression profiles are commonly utilized to infer disease subtypes, which often lead to biologically meaningful insights into disease. Despite many successes, existing clustering methods may not perform well when genes are highly correlated and many uninformative genes are included for clustering due to the high dimensionality. In this article, we introduce a novel subtype identification method in the Bayesian setting based on gene expression profiles. This method, called BCSub, adopts an innovative semiparametric Bayesian factor analysis model to reduce the dimension of the data to a few factor scores for clustering. Specifically, the factor scores are assumed to follow the Dirichlet process mixture model in order to induce clustering. Through extensive simulation studies, we show that BCSub has improved performance over commonly used clustering methods. When applied to two gene expression datasets, our model is able to identify subtypes that are clinically more relevant than those identified from the existing methods.
Determining Dimensionality of Exercise Readiness Using Exploratory Factor Analysis
Directory of Open Access Journals (Sweden)
Kelley Strohacker, Rebecca A. Zakrajsek
2016-06-01
Full Text Available Assessment of "exercise readiness" is a central component of the flexible non-linear periodization (FNLP) method of organizing training workloads, but the underlying factor structure of this construct has not been empirically determined. The purpose of this study was to assess the construct dimensionality of exercise readiness using exploratory factor analysis, the results of which serve as initial steps in developing a brief measure of exercise readiness. Participants consisted of students recruited from undergraduate Kinesiology courses at a racially diverse, southern University. Independent, anonymous online survey data were collected across three stages: (1) generation of the item pool (n = 290), (2) assessment of face validity and refinement of the item pool (n = 168), and (3) exploratory factor analysis (n = 684). A principal axis factor analysis was conducted with 41 items using oblique rotation (promax). Four statistically significant factors, as determined through parallel analysis, explained 61.5% of the variance in exercise readiness. Factor 1 contained items that represented vitality (e.g., lively, revived). Factor 2 items related to physical fatigue (e.g., tired, drained). Factors 3 and 4 were descriptive of discomfort (e.g., pain, sick) and health (e.g., healthy, fit), respectively. This inductive approach indicates that exercise readiness comprises four dimensions: vitality, physical fatigue, discomfort, and health. This finding supports the readiness assessment techniques currently recommended for practitioners according to the FNLP model. These results serve as a theoretical foundation upon which to further develop and refine a brief survey instrument to measure exercise readiness.
Rapid coal proximate analysis by thermogravimetric method
Energy Technology Data Exchange (ETDEWEB)
Mao Jianxiong; Yang Dezhong; Zhao Baozhong
1987-09-01
A rapid coal proximate analysis by thermogravimetric analysis (TGA) can be used as an alternative method for the standard proximate analysis. This paper presents a program set up to rapidly perform coal proximate analysis by using a thermal analyzer and TGA module. A comparison between coal proximate analyses by standard method (GB) and TGA is also given. It shows that most data from TGA fall within the tolerance limit of standard method.
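A minimal sketch of how proximate fractions are read off a TGA curve. The four mass plateaus and their values are hypothetical, and real programs also apply drying-basis corrections per the relevant standards (e.g., GB or ASTM).

```python
def proximate_from_tga(m0, m_dry, m_devol, m_ash):
    """Proximate analysis (wt%, as-received basis) from four TGA mass plateaus:
    initial mass, mass after drying, mass after devolatilization in N2,
    and residual mass after combustion in air."""
    moisture = 100.0 * (m0 - m_dry) / m0
    volatile = 100.0 * (m_dry - m_devol) / m0
    ash      = 100.0 * m_ash / m0
    fixed_c  = 100.0 - moisture - volatile - ash   # fixed carbon by difference
    return moisture, volatile, fixed_c, ash

# Hypothetical mass plateaus (mg) read from a TGA curve
print(proximate_from_tga(20.0, 19.2, 13.6, 2.4))
```

The four fractions sum to 100% by construction, since fixed carbon is computed by difference.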
An effective method to accurately calculate the phase space factors for $\beta^-\beta^-$ decay
Neacsu, Andrei
2015-01-01
Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates, and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.
Biological Stability of Drinking Water: Controlling Factors, Methods, and Challenges
Prest, Emmanuelle I.
2016-02-01
Biological stability of drinking water refers to the concept of providing consumers with drinking water of same microbial quality at the tap as produced at the water treatment facility. However, uncontrolled growth of bacteria can occur during distribution in water mains and premise plumbing, and can lead to hygienic (e.g., development of opportunistic pathogens), aesthetic (e.g., deterioration of taste, odor, color) or operational (e.g., fouling or biocorrosion of pipes) problems. Drinking water contains diverse microorganisms competing for limited available nutrients for growth. Bacterial growth and interactions are regulated by factors, such as (i) type and concentration of available organic and inorganic nutrients, (ii) type and concentration of residual disinfectant, (iii) presence of predators, such as protozoa and invertebrates, (iv) environmental conditions, such as water temperature, and (v) spatial location of microorganisms (bulk water, sediment, or biofilm). Water treatment and distribution conditions in water mains and premise plumbing affect each of these factors and shape bacterial community characteristics (abundance, composition, viability) in distribution systems. Improved understanding of bacterial interactions in distribution systems and of environmental conditions impact is needed for better control of bacterial communities during drinking water production and distribution. This article reviews (i) existing knowledge on biological stability controlling factors and (ii) how these factors are affected by drinking water production and distribution conditions. In addition, (iii) the concept of biological stability is discussed in light of experience with well-established and new analytical methods, enabling high throughput analysis and in-depth characterization of bacterial communities in drinking water. We discussed, how knowledge gained from novel techniques will improve design and monitoring of water treatment and distribution systems in order
Factor analysis improves the selection of prescribing indicators
DEFF Research Database (Denmark)
Rasmussen, Hanne Marie Skyggedal; Søndergaard, Jens; Sokolowski, Ineta
2006-01-01
OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was used...... indicators directly quantifying choice of coxibs, indicators measuring expenditure per Defined Daily Dose, and indicators taking risk aspects into account, (2) "Frequent NSAID prescribing", comprising indicators quantifying prevalence or amount of NSAID prescribing, and (3) "Diverse NSAID choice", comprising...... appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns....
Environmental Performance in Countries Worldwide: Determinant Factors and Multivariate Analysis
Directory of Open Access Journals (Sweden)
Isabel Gallego-Alvarez
2014-11-01
Full Text Available The aim of this study is to analyze the environmental performance of countries and the variables that can influence it. At the same time, we performed a multivariate analysis using the HJ-biplot, an exploratory method that looks for hidden patterns in the data, obtained from the usual singular value decomposition (SVD) of the data matrix, to contextualize the countries grouped by geographical areas and the variables relating to environmental indicators included in the environmental performance index. The sample used comprises 149 countries of different geographic areas. The findings obtained from the empirical analysis emphasize that socioeconomic factors, such as economic wealth and education, as well as institutional factors represented by the style of public administration, in particular control of corruption, are determinant factors of environmental performance in the countries analyzed. In contrast, no effect on environmental performance was found for factors relating to the internal characteristics of a country or political factors.
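The HJ-biplot computation from the SVD can be sketched as follows. This is a minimal version assuming the Galindo-style scaling in which both row and column markers carry the singular values; the data are random placeholders, not the 149-country indicator matrix.

```python
import numpy as np

def hj_biplot(x):
    """HJ-biplot markers: both rows and columns are represented at highest
    quality by scaling the left and right singular vectors with the
    singular values (first two dimensions)."""
    z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)  # column-standardized data
    u, d, vt = np.linalg.svd(z, full_matrices=False)
    a = u[:, :2] * d[:2]      # row markers (e.g., countries)
    b = vt[:2].T * d[:2]      # column markers (e.g., environmental indicators)
    return a, b

rng = np.random.default_rng(1)
x = rng.standard_normal((149, 6))   # hypothetical 149 countries x 6 indicators
a, b = hj_biplot(x)
print(a.shape, b.shape)
```

Because both marker sets carry the full singular values, their Gram matrices coincide, which is the defining symmetry of the HJ-biplot relative to the classical GH- and JK-biplots.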
What Is Rotating in Exploratory Factor Analysis?
Osborne, Jason W.
2015-01-01
Exploratory factor analysis (EFA) is one of the most commonly reported quantitative methodologies in the social sciences, yet much of the detail regarding what happens during an EFA remains unclear. The goal of this brief technical note is to explore what "rotation" is, what exactly is rotating, and why we use rotation when performing…
Stepwise Variable Selection in Factor Analysis.
Kano, Yutaka; Harada, Akira
2000-01-01
Takes several goodness-of-fit statistics as measures of variable selection and develops backward elimination and forward selection procedures in exploratory factor analysis. A newly developed variable selection program, SEFA, can print several fit measures for a current model and models obtained by removing an internal variable or adding an…
Multilevel exploratory factor analysis of discrete data
Barendse, M.T.; Oort, F.J.; Jak, S.; Timmerman, M.E.
2013-01-01
Exploratory factor analysis (EFA) can be used to determine the dimensionality of a set of items. When data come from clustered subjects, such as pupils within schools or children within families, the hierarchical structure of the data should be taken into account. Standard multilevel EFA is only sui
An SPSS R-Menu for Ordinal Factor Analysis
Directory of Open Access Journals (Sweden)
Mario Basto
2012-01-01
Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.
Probabilistic methods in combinatorial analysis
Sachkov, Vladimir N
2014-01-01
This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist
An integrating factor matrix method to find first integrals
Energy Technology Data Exchange (ETDEWEB)
Saputra, K V I [Faculty of Science and Mathematics, University Pelita Harapan, Jl. MH Thamrin Boulevard, Tangerang Banten, 15811 (Indonesia); Quispel, G R W [Department of Mathematics and Statistical Science, La Trobe University, Bundoora 3086 (Australia); Van Veen, L, E-mail: kie.saputra@staff.uph.ed [Faculty of Science, University of Ontario Institute of Technology, 2000 Simcoe St N., Oshawa, Ontario, L1H 7K4 (Canada)
2010-06-04
In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.
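As an illustration of the kind of first integral such a method targets, the classical two-dimensional Lotka-Volterra predator-prey system (without constant terms, a simpler case than the paper's) admits `V(x, y) = d*x - c*ln(x) + b*y - a*ln(y)`, obtainable with the integrating factor `1/(x*y)`. A numerical check that V stays constant along an orbit:

```python
import math

# Two-dimensional Lotka-Volterra system x' = x(a - b*y), y' = y(-c + d*x).
a, b, c, d = 1.0, 0.5, 0.8, 0.4

def f(x, y):
    return x * (a - b * y), y * (-c + d * x)

def V(x, y):
    # First integral: dV/dt = (d*x - c)(a - b*y) + (d*x - c)(b*y - a) = 0
    return d * x - c * math.log(x) + b * y - a * math.log(y)

def rk4_step(x, y, h):
    """One classical Runge-Kutta step for the planar system."""
    k1 = f(x, y)
    k2 = f(x + h/2 * k1[0], y + h/2 * k1[1])
    k3 = f(x + h/2 * k2[0], y + h/2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    return (x + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

x, y, h = 2.0, 1.0, 0.01
v0 = V(x, y)
for _ in range(5000):
    x, y = rk4_step(x, y, h)
drift = abs(V(x, y) - v0)
print(drift)   # small numerical drift: V is conserved along the orbit
```

The tiny drift comes only from the integrator's truncation error; the exact flow conserves V identically.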
An integrating factor matrix method to find first integrals
Saputra, K V I; van Veen, L
2010-01-01
In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.
Convergence analysis of combinations of different methods
Energy Technology Data Exchange (ETDEWEB)
Kang, Y. [Clarkson Univ., Potsdam, NY (United States)
1994-12-31
This paper provides a convergence analysis for combinations of different numerical methods for solving systems of differential equations. The author proves that combinations of two convergent linear multistep methods or Runge-Kutta methods produce a new convergent method of which the order is equal to the smaller order of the two original methods.
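The claim that a combination inherits the smaller of the two orders can be checked empirically. This sketch alternates forward Euler (order 1) with classical RK4 (order 4) on a test problem; it is an illustrative construction, not the paper's analysis.

```python
import math

def f(y):
    return y    # test problem y' = y, y(0) = 1, exact solution e**t

def euler(y, h):
    return y + h * f(y)

def rk4(y, h):
    k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def combined_error(n):
    """Integrate to t = 1, alternating Euler and RK4 steps of size 1/n."""
    h, y = 1.0 / n, 1.0
    for i in range(n):
        y = euler(y, h) if i % 2 == 0 else rk4(y, h)
    return abs(y - math.e)

e1, e2 = combined_error(200), combined_error(400)
order = math.log2(e1 / e2)   # halving h divides the error by ~2**order
print(round(order, 2))
```

The estimated order comes out close to 1, the smaller of the two component orders, since the Euler half-steps dominate the global error.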
SWOT ANALYSIS ON SAMPLING METHOD
National Research Council Canada - National Science Library
CHIS ANCA OANA; BELENESI MARIOARA;
2014-01-01
.... Our article aims to study audit sampling in audit of financial statements. As an audit technique largely used, in both its statistical and nonstatistical form, the method is very important for auditors...
Prognostic Analysis System and Methods of Operation
MacKey, Ryan M. E. (Inventor); Sneddon, Robert (Inventor)
2014-01-01
A prognostic analysis system and methods of operating the system are provided. In particular, a prognostic analysis system for the analysis of physical system health applicable to mechanical, electrical, chemical and optical systems and methods of operating the system are described herein.
Institute of Scientific and Technical Information of China (English)
王书军
2014-01-01
Objective: To discuss the risk factors related to chronic obstructive pulmonary disease (COPD) and put forward corresponding comprehensive community prevention and control measures. Methods: Residents older than 40 years at high risk of COPD were randomly selected from community health records within the jurisdiction; a questionnaire survey and pulmonary function monitoring were carried out to determine the incidence of COPD, and multivariate analysis was used to study its risk factors. Results: Multivariate analysis showed that age, smoking index, comorbid respiratory diseases, and kitchen ventilation were independent risk factors for COPD. Conclusion: Pulmonary function monitoring of high-risk community residents can improve the COPD diagnosis rate, and comprehensive prevention measures targeting the relevant risk factors can reduce the incidence of the disease.
The Effects of Overextraction on Factor and Component Analysis.
Fava, J L; Velicer, W F
1992-07-01
The effects of overextracting factors and components within and between the methods of maximum likelihood factor analysis (MLFA) and principal component analysis (PCA) were examined. Computer-simulated data sets were generated to represent a range of factor and component patterns. Saturation (aij = .8, .6, & .4), sample size (N = 75, 150, 225, 450), and variable-to-component (factor) ratio (p:m = 12:1, 6:1, & 4:1) were the conditions manipulated. In Study 1, scores based on the incorrect patterns were correlated with correct scores within each method after each overextraction. In Study 2, scores were correlated between the methods of PCA and MLFA after each overextraction. Overextraction had a negative effect, but scores based on strong component and factor patterns displayed robustness to the effects of overextraction. Low item saturation and low sample size resulted in degraded score reproduction. Degradation was strongest for patterns that combined low saturation and low sample size. Component and factor scores were highly correlated even at maximal levels of overextraction. Dissimilarity between score methods was greatest in conditions that combined low saturation and low sample size. Some guidelines for researchers concerning the effects of overextraction are noted, as well as some cautions in the interpretation of results.
Applying critical analysis - main methods
Directory of Open Access Journals (Sweden)
Miguel Araujo Alonso
2012-02-01
What is the usefulness of critical appraisal of the literature? Critical analysis is a fundamental condition for the correct interpretation of any study that is subject to review. In epidemiology, in order to learn how to read a publication, we must be able to analyze it critically. Critical analysis allows us to check whether a study fulfills certain previously established methodological inclusion and exclusion criteria. This is frequently used in conducting systematic reviews, although eligibility criteria are generally limited to the study design. Critical analysis of the literature can be done implicitly while reading an article, as in reading for personal interest, or can be conducted in a structured manner, using explicit and previously established criteria. The latter is done when formally reviewing a topic.
Trial Sequential Methods for Meta-Analysis
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
Stochastic Analysis Method of Sea Environment Simulated by Numerical Models
Institute of Scientific and Technical Information of China (English)
刘德辅; 焦桂英; 张明霞; 温书勤
2003-01-01
This paper proposes the stochastic analysis method of sea environment simulated by numerical models, such as wave height, current field, design sea levels and longshore sediment transport. Uncertainty and sensitivity analysis of input and output factors of numerical models, their long-term distribution and confidence intervals are described in this paper.
Method of morphological analysis of enterprise management organizational structure
Heorhiadi, N.; Iwaszczuk, N.; Vilhutska, R.
2013-01-01
The essence of the method of morphological analysis of enterprise management organizational structure is described in the article. Setting the levels of morphological decomposition and specifying the sets of elements are necessary steps for morphological analysis. Based on empirical research, the authors identify factors that influence the formation and use of enterprise management organizational structures.
Nonlinear structural analysis using integrated force method
Indian Academy of Sciences (India)
N R B Krishnam Raju; J Nagabhushanam
2000-08-01
Though the use of the integrated force method for linear investigations is well recognised, no efforts had been made to extend this method to nonlinear structural analysis. This paper presents attempts to use the method for analysing nonlinear structures. A general formulation of nonlinear structural analysis is given. Typical highly nonlinear benchmark problems are considered; the characteristic matrices of the elements used in these problems are developed, and the structures are then analysed. The results of the analysis are compared with those of the displacement method. It is demonstrated that the integrated force method is as viable and efficient as the displacement method.
Risk Factors Analysis on Traumatic Brain Injury Prognosis
Institute of Scientific and Technical Information of China (English)
Xiao-dong Qu; Resha Shrestha; Mao-de Wang
2011-01-01
Objective: To investigate the independent risk factors for traumatic brain injury (TBI) prognosis. Methods: A retrospective analysis was performed on 885 hospitalized TBI patients from January 1, 2003 to January 1, 2010 in the First Affiliated Hospital of Medical College of Xi'an Jiaotong University. Single-factor and logistic regression analyses were conducted to evaluate the association of different variables with TBI outcome. Results: The single-factor analysis revealed significant associations between several variables and TBI outcome, including age (P=0.044 for the age group 40-60, P<0.001 for the age group ≥60), complications (P<0.001), cerebrospinal fluid leakage (P<0.001), Glasgow Coma Scale (GCS) (P<0.001), pupillary light reflex (P<0.001), shock (P<0.001), associated extra-cranial lesions (P=0.01), subdural hematoma (P<0.001), cerebral contusion (P<0.001), diffuse axonal injury (P<0.001), and subarachnoid hemorrhage (P<0.001), suggesting the influence of these factors on the prognosis of TBI. Furthermore, logistic regression analysis identified age, GCS score, pupillary light reflex, subdural hematoma, and subarachnoid hemorrhage as independent risk factors for TBI prognosis. Conclusion: Age, GCS score, pupillary light reflex, subdural hematoma, and subarachnoid hemorrhage may be risk factors influencing the prognosis of TBI. Paying attention to these factors might improve the outcome of TBI in clinical treatment.
Hoseinzade, Zohre; Mokhtari, Ahmad Reza
2017-10-01
Large numbers of variables are measured to explain different phenomena. Factor analysis has been widely used to reduce the dimension of datasets, and the technique has also been employed to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of this method is widespread in geochemistry. However, the conventional protocols for implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset of 804 soil samples, collected from a mining area in central Iran in a search for MVT-type Pb-Zn deposits, was used to compare several variants of factor analysis. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset after opening the data with the additive log-ratio (alr) transformation, in order to extract the mineralization factor. A comparison between these methods indicated that sequential factor analysis revealed the MVT paragenesis elements in surface samples most clearly, with nearly 50% of the variation in F1. In addition, staged factor analysis gave acceptable results while being easy to apply: it could detect mineralization-related elements, and the larger factor loadings given to these elements yield a clearer expression of mineralization.
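The preprocessing step can be sketched as follows. This is a minimal illustration with synthetic compositional data, not the study's dataset or code: it applies the alr-transformation (log of each part over a chosen divisor part) and reads off the variance share of the first factor from the correlation-matrix eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical compositional data: 3 "mineralization" elements co-vary via a
# shared factor, 2 background elements; closed to constant sum as in assays
n = 300
base = rng.lognormal(0.0, 0.1, (n, 5))
signal = rng.lognormal(0.0, 0.5, n)
base[:, :3] *= signal[:, None]                   # shared mineralization factor
comp = base / base.sum(axis=1, keepdims=True)    # closure to a composition

# Additive log-ratio (alr) transform: log(x_i / x_D), last part as divisor
alr = np.log(comp[:, :-1] / comp[:, -1:])

# Variance share of the first factor, from correlation-matrix eigenvalues
Rm = np.corrcoef(alr, rowvar=False)
w, V = np.linalg.eigh(Rm)                        # ascending eigenvalues
share = w[-1] / w.sum()
print(round(share, 2))
```

Opening the closed data before factoring is the point: correlations computed on raw compositions are distorted by the constant-sum constraint, while the alr coordinates are free of it.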
Analysis of related risk factors for pancreatic fistula after pancreaticoduodenectomy
Institute of Scientific and Technical Information of China (English)
Qi-Song Yu; He-Chao Huang; Feng Ding; Xin-Bo Wang
2016-01-01
Objective: To explore the risk factors for pancreatic fistula after pancreaticoduodenectomy, in order to provide theoretical evidence for effectively preventing its occurrence. Methods: A total of 100 patients who were admitted to our hospital from January 2012 to January 2015 and underwent pancreaticoduodenectomy were included in the study. The candidate risk factors for developing pancreatic fistula were collected for single-factor and logistic multi-factor analysis. Results: Among the included patients, 16 developed pancreatic fistula, for a total occurrence rate of 16% (16/100). The single-factor analysis showed that upper abdominal operation history, preoperative bilirubin, pancreatic texture, pancreatic duct diameter, intraoperative amount of bleeding, postoperative hemoglobin, and application of somatostatin after operation were risk factors for developing pancreatic fistula (P<0.05). The multi-factor analysis showed that upper abdominal operation history, soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin were independent risk factors for developing pancreatic fistula (OR=4.162, 6.104, 5.613, 4.034; P<0.05). Conclusions: The occurrence of pancreatic fistula after pancreaticoduodenectomy is closely associated with upper abdominal operation history, soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin; effective measures should therefore be taken to reduce its occurrence according to each patient's own condition.
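The multi-factor step in studies like this rests on logistic regression. The sketch below is hypothetical throughout: it simulates two binary risk factors (stand-ins for soft pancreatic texture and small duct diameter, with assumed effect sizes), fits a logistic model by Newton-Raphson, and reports odds ratios. It is not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cohort: two binary risk factors with assumed true effects
n = 500
soft = rng.integers(0, 2, n)                     # soft pancreatic texture
small_duct = rng.integers(0, 2, n)               # small pancreatic duct
logit = -2.5 + 1.4 * soft + 1.2 * small_duct     # assumed true model
y = rng.random(n) < 1 / (1 + np.exp(-logit))     # fistula yes/no

# Multi-factor analysis: logistic regression fitted by Newton-Raphson
X = np.column_stack([np.ones(n), soft, small_duct])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                              # IRLS weights
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

odds_ratios = np.exp(beta[1:])                   # OR > 1 marks a risk factor
print(np.round(odds_ratios, 2))
```

An odds ratio above 1, with an acceptable P value, is what the abstract reports as an "independent risk factor"; the OR values quoted there (4.162, 6.104, ...) are of exactly this kind.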
Mathematical economics methods in assessing the effects of institutional factors on foreign trade
Kazantseva, M. A.; Nepp, A. N.
2016-12-01
Foreign trade activity (FT) is an essential driver of economic development; therefore, factors affecting its efficiency should be analysed. Along with the conventional economic factors affecting FT development, a focus should be given to institutional factors, whose role also cannot be neglected. Recent studies show institutional factors to produce both qualitative and quantitative effects on a country's economic development, with various criteria and assessment approaches having been developed for their estimation. This paper classifies mathematical methods used to assess the effect of institutional factors on FT efficiency. An analysis of conventional mathematical models describing the relationship between institutional factors and FT indicators is provided. Mathematical methods are currently the major instrument for the analysis of FT parameters and their dependence on various external factors.
Hybrid methods for cybersecurity analysis :
Energy Technology Data Exchange (ETDEWEB)
Davis, Warren Leon,; Dunlavy, Daniel M.
2014-01-01
Early 2010 saw a significant change in adversarial techniques aimed at network intrusion: a shift from malware delivered via email attachments toward the use of hidden, embedded hyperlinks to initiate sequences of downloads and interactions with web sites and network servers containing malicious software. Enterprise security groups were well poised and experienced in defending against the former attacks, but the new types of attacks were larger in number, more challenging to detect, dynamic in nature, and required the development of new technologies and analytic capabilities. The Hybrid LDRD project was aimed at delivering new capabilities in large-scale data modeling and analysis to enterprise security operators and analysts and at understanding the challenges of detection and prevention of emerging cybersecurity threats. Leveraging previous LDRD research efforts and capabilities in large-scale relational data analysis, large-scale discrete data analysis and visualization, and streaming data analysis, new modeling and analysis capabilities were quickly brought to bear on the problems of email phishing and spear-phishing attacks in the Sandia enterprise security operational groups at the onset of the Hybrid project. As part of this project, a software development and deployment framework was created within the security analyst workflow tool sets to facilitate the delivery and testing of new capabilities as they became available, and machine learning algorithms were developed to address the challenge of dynamic threats. Furthermore, researchers from the Hybrid project were embedded in the security analyst groups for almost a full year, engaged in daily operational activities and routines, creating an atmosphere of trust and collaboration between the researchers and security personnel. The Hybrid project has altered the way that research ideas can be incorporated into the production environments of Sandia's enterprise security groups, reducing time to deployment from months and…
Matrix factorization method for the Hamiltonian structure of integrable systems
Indian Academy of Sciences (India)
S Ghosh; B Talukdar; S Chakraborti
2003-07-01
We demonstrate that the process of matrix factorization provides a systematic mathematical method to investigate the Hamiltonian structure of non-linear evolution equations characterized by hereditary operators with Nijenhuis property.
Stegeman, Alwin
2016-01-01
In the common factor model the observed data is conceptually split into a common covariance producing part and an uncorrelated unique part. The common factor model is fitted to the data itself and a new method is introduced for the simultaneous estimation of loadings, unique variances, factor scores
Institute of Scientific and Technical Information of China (English)
黄真萍; 胡艳; 朱鹏超; 李文灵
2014-01-01
In recent years the high-density electrical (resistivity) method has been applied more and more widely, and analysis of the factors that influence its resolution has drawn increasing attention. Taking a locally upright anomalous body in a uniform half-space as the detection target, this paper builds several geoelectric models for the Wenner high-density electrical array and uses 2DRES software to carry out forward and inverse numerical simulation, obtaining the effects of terrain, resistivity contrast, depth-to-diameter ratio, and depth extension on the resolution of high-density electrical probing. The results show that undulating terrain displaces the apparent-resistivity anomaly, distorts its shape, and reduces resolution; the resolution of the high-density resistivity method increases as the resistivity contrast increases; resolution decreases as the depth-to-diameter ratio increases; and, when the lateral width of an anomaly is fixed, resolution increases as its depth extent increases beyond a critical level. Finally, recommendations for improving resolution are given by combining theoretical analysis with engineering application.
Analysis Method for Quantifying Vehicle Design Goals
Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael
2007-01-01
A document discusses a method for using Design Structure Matrices (DSM), coupled with high-level tools representing important life-cycle parameters, to comprehensively conceptualize a flight/ground space transportation system design by dealing with such variables as performance, up-front costs, downstream operations costs, and reliability. This approach also weighs operational approaches based on their effect on upstream design variables so that it is possible to readily, yet defensively, establish linkages between operations and these upstream variables. To avoid the large range of problems that have defeated previous methods of dealing with the complex problems of transportation design, and to cut down the inefficient use of resources, the method described in the document identifies those areas that are of sufficient promise and that provide a higher grade of analysis for those issues, as well as the linkages at issue between operations and other factors. Ultimately, the system is designed to save resources and time, and allows for the evolution of operable space transportation system technology, and design and conceptual system approach targets.
Option Pricing Method in a Market Involving Interval Number Factors
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2005-01-01
A method for pricing options in a market with interval-number factors is proposed. The no-arbitrage principle in the interval-number-valued market and a rule to judge the reasonability of a price interval are given. Using the method, the price interval is derived when the riskless interest rate and the volatility in the Black-Scholes setting are interval numbers. The price interval from the binomial tree model when the key factors u, d, and R are all interval numbers is also discussed.
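One simple way to realize the binomial-tree case is to evaluate the tree at all endpoint combinations of the interval factors and take the envelope as the price interval. The sketch below assumes the price is monotone in u, d, and R, which the abstract does not assert; the interval endpoints and strike are illustrative numbers, not taken from the paper.

```python
from itertools import product
from math import comb

def binomial_call(S0, K, u, d, R, n):
    """Risk-neutral price of a European call on an n-step CRR binomial tree."""
    q = (R - d) / (u - d)                        # risk-neutral probability
    price = sum(comb(n, k) * q**k * (1 - q)**(n - k)
                * max(S0 * u**k * d**(n - k) - K, 0.0)
                for k in range(n + 1))
    return price / R**n

# Interval-number factors (illustrative endpoints); d < R < u holds on all
# combinations, so q stays in (0, 1) and no-arbitrage is preserved
u_iv, d_iv, R_iv = (1.25, 1.35), (0.80, 0.90), (1.04, 1.06)
prices = [binomial_call(100.0, 100.0, u, d, R, 3)
          for u, d, R in product(u_iv, d_iv, R_iv)]
lo, hi = min(prices), max(prices)                # envelope = price interval
print(round(lo, 2), round(hi, 2))
```

A quoted price falling inside [lo, hi] would be judged "reasonable" in the sense of the abstract's rule; prices outside the interval admit arbitrage for some realization of the factors.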
Nominal Performance Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M.A. Wasiolek
2005-04-28
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the "Biosphere Model Report" in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the "Biosphere Model Report" (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the "Biosphere Dose Conversion Factor Importance and Sensitivity Analysis"…
Scoring methods used in cluster analysis
Sirota, Sergej
2014-01-01
The aim of the thesis is to compare how correctly different methods of cluster analysis classify objects in a dataset into groups that are known in advance. The theoretical section first describes the steps needed to prepare a data file for cluster analysis. The next theoretical section is dedicated to cluster analysis itself: it describes ways of measuring the similarity of objects and clusters, and describes the methods of cluster analysis used in the practical part of this thesis. In the practical part a…
Institute of Scientific and Technical Information of China (English)
王江营; 文世新; 曹文贵; 尚守平; 曹喜仁
2009-01-01
Based on the upper-bound method of plastic limit analysis, a kinematically admissible velocity field is constructed in the slope, with the sliding surface assumed to be a logarithmic spiral. Expressions are derived for the external work rate done by the weight of the sliding body and the external loads, and for the rate of internal energy dissipation along the sliding surface. The safety factor of the slope is then defined as a strength-reserve factor: the cohesion c and friction angle φ of the soil and rock are reduced by the same proportion until the slope reaches the limit-equilibrium state, at which point the external work rate and the internal dissipation rate can be evaluated. Combining these with the virtual work-rate equation yields an expression for the safety factor, whose minimum upper-bound solution locates the most critical sliding surface and the minimum safety factor of the slope. Finally, a worked slope example compares the results of this method with those of several traditional methods, showing that the proposed method is reasonable and feasible.
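The strength-reserve definition of the safety factor can be illustrated with a much simpler mechanism than the paper's log-spiral upper-bound analysis. The sketch below bisects on the reduction factor F applied to c and tan φ in an infinite-slope limit-equilibrium check; all parameter values are hypothetical.

```python
from math import tan, sin, cos, radians

def stability_margin(F, c, phi_deg, gamma, h, beta_deg):
    """Residual margin (resisting minus driving force per unit area) of an
    infinite-slope limit-equilibrium check after reducing c and tan(phi) by
    the trial factor F. A sketch model, not the paper's log-spiral mechanism."""
    beta, phi = radians(beta_deg), radians(phi_deg)
    resisting = c / F + gamma * h * cos(beta) ** 2 * tan(phi) / F
    driving = gamma * h * sin(beta) * cos(beta)
    return resisting - driving

def safety_factor(c, phi_deg, gamma, h, beta_deg):
    """Largest F for which the reduced-strength slope is still stable."""
    lo, hi = 0.1, 10.0
    for _ in range(60):                          # bisection on the margin
        mid = 0.5 * (lo + hi)
        if stability_margin(mid, c, phi_deg, gamma, h, beta_deg) > 0:
            lo = mid                             # still stable: reduce more
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical slope: c = 15 kPa, phi = 25 deg, gamma = 18 kN/m3,
# depth 5 m, slope angle 30 deg
Fs = safety_factor(c=15.0, phi_deg=25.0, gamma=18.0, h=5.0, beta_deg=30.0)
print(round(Fs, 2))
```

The F at which the margin crosses zero is the limit-equilibrium state described in the abstract; in the paper the same reduction idea is combined with the virtual work-rate equation of the log-spiral mechanism instead of this one-line force balance.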
Analysis of Factors Influencing Farmers’ Identification of Entrepreneurial Opportunity
Institute of Scientific and Technical Information of China (English)
Jing GAO; Fang YANG
2013-01-01
Based on survey data concerning farmers' entrepreneurship in China, this article uses a multivariate moderated regression analysis to examine the factors influencing farmers' identification of entrepreneurial opportunities and the mechanism involved. The results show that demographic characteristics are still an important factor influencing farmers' identification of entrepreneurial opportunities, but their influence is weaker than that of entrepreneurs' traits; the new trait theory is thus verified for farmers' entrepreneurial opportunity behavior. The entrepreneurship environment is becoming an important factor influencing entrepreneurial opportunity identification, and its moderating effect on entrepreneurs' social networks and previous experience is stronger than its moderating effect on entrepreneurs' psychological traits.
Dynamic Factor Method of Computing Dynamic Mathematical Model for System Simulation
Institute of Scientific and Technical Information of China (English)
老大中; 吴娟; 杨策; 蒋滋康
2003-01-01
This paper investigates computational methods for a typical dynamic mathematical model that describes the differential element and the inertial element in system simulation, and studies the stability of the model's numerical solutions. Through theoretical analysis, the error formulas, error-sign criteria, and error-relationship criterion of the implicit Euler method and the trapezoidal method are given; the dynamic factor affecting computational accuracy is identified, and the formula and methods for computing the dynamic factor are given. The computational accuracy of such dynamic mathematical models can be improved by using the dynamic factor.
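The error behaviour of the two one-step methods can be checked on an inertial (first-order) test element dy/dt = -a·y with exact solution e^(-at). The sketch below is illustrative only; the product a·h stands in for the step-to-time-constant ratio and is not the paper's precisely defined dynamic factor.

```python
import math

# First-order test element dy/dt = -a*y, integrated with both methods
a, h, steps = 2.0, 0.1, 10
y_ie = y_tr = 1.0
for _ in range(steps):
    y_ie = y_ie / (1 + a * h)                        # implicit Euler step
    y_tr = y_tr * (1 - a * h / 2) / (1 + a * h / 2)  # trapezoidal step
exact = math.exp(-a * h * steps)

err_ie, err_tr = abs(y_ie - exact), abs(y_tr - exact)
print(err_ie > err_tr)                           # trapezoidal is 2nd order
```

Both update formulas are unconditionally stable for a > 0, but their local errors have opposite signs and different orders in a·h, which is the kind of error-sign and error-relationship comparison the abstract refers to.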
Optimisation of Sintering Factors of Titanium Foams Using Taguchi Method
Directory of Open Access Journals (Sweden)
S. Ahmad
2010-06-01
Metal foams have the potential to be used in the production of bipolar plates in Polymer Electrolyte Membrane Fuel Cells (PEMFC). In this paper, pure titanium was used to prepare titanium foam by the slurry method. Electrical conductivity is the most important parameter to be considered in the production of good bipolar plates. To achieve a high conductivity in the titanium foam, the effects of various parameters, including temperature, time profile, and composition, have to be characterised and optimised. This paper reports the use of the Taguchi method in optimising the processing parameters of pure titanium foams. The effects of four sintering factors, namely composition, sintering temperature, heating rate, and soaking time, on the electrical conductivity were studied. The titanium slurry was prepared by mixing titanium alloy powder, polyethylene glycol (PEG), methylcellulose, and water. Polyurethane (PU) foams were then impregnated with the slurry, dried at room temperature, and next sintered in a high-temperature vacuum furnace. The various factors were assigned to an L9 orthogonal array. From the analysis of variance (ANOVA), the composition of titanium powder has the highest percentage of contribution (24.51%) to the electrical conductivity, followed by the heating rate (10.29%). The optimum electrical conductivity of this titanium foam was found to be 1336.227 ± 240.61 S/cm. It was achieved with a 70% titanium composition, a sintering temperature of 1200 °C, a heating rate of 0.5 °C/min, and a 2-hour soaking time. Confirmatory experiments produced results that lay within the 90% confidence interval.
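The percentage-of-contribution figures come from an ANOVA decomposition over the L9 array. The sketch below shows the mechanics with invented response values, not the paper's conductivity data: factor-wise sums of squares over the standard L9(3^4) array, expressed as shares of the total sum of squares.

```python
import numpy as np

# Standard L9(3^4) orthogonal array, levels coded 0..2
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
# Hypothetical conductivity readings for the 9 runs (illustrative only)
y = np.array([820., 900., 1010., 760., 1130., 980., 690., 1200., 1040.])

grand = y.mean()
ss_total = ((y - grand) ** 2).sum()
contrib = []
for f in range(4):                               # factor-wise sum of squares
    ss = sum(3 * (y[L9[:, f] == lvl].mean() - grand) ** 2 for lvl in range(3))
    contrib.append(100 * ss / ss_total)          # % contribution per factor

print([round(c, 1) for c in contrib])
```

Because the L9 design is saturated by four 3-level factors (4 × 2 = 8 degrees of freedom), the four contributions sum to 100%; in the paper the largest share falls to powder composition, with heating rate second.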
ANALYSIS OF RISK FACTORS IN 3901 PATIENTS WITH STROKE
Institute of Scientific and Technical Information of China (English)
Xin-Feng Liu; Guy van Melle; Julien Bogousslavsky
2005-01-01
Objective: To estimate the frequency of various risk factors for overall stroke and to identify risk factors for cerebral infarction (CI) versus intracerebral hemorrhage (ICH) in a large hospital-based stroke registry. Methods: Data from a total of 3901 patients, consisting of 3525 patients with CI and 376 patients with ICH, were prospectively coded and entered into a computerized data bank. Results: Hypertension and smoking were the most prominent factors affecting overall stroke, followed by mild internal carotid artery stenosis (<50%), hypercholesterolemia, transient ischemic attacks (TIAs), diabetes mellitus, and cardiac ischemia. Univariate analysis showed that the factors significantly associated with CI versus ICH in males were old age, a family history of stroke, and intermittent claudication, whereas in females the factors were oral contraception and migraine. By multivariate analysis, in all patients, the factors significantly associated with CI as opposed to ICH were smoking, hypercholesterolemia, migraine, TIAs, atrial fibrillation, structural heart disease, and arterial disease. Hypertension was the only significant factor related to ICH versus CI. Conclusions: The factors for ischemic and hemorrhagic stroke are not exactly the same. Cardiac and arterial disease are the most powerful factors associated with CI rather than ICH.
Analysis of effect factors-based stochastic network planning model
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2008-01-01
Looking at all indeterminate factors as a whole and regarding activity durations as independent random variables, traditional stochastic network planning models ignore the inevitable dependence among activity durations when more than one activity may be affected by the same indeterminate factors. On the basis of an analysis of the indeterminate factors affecting durations, the effect factors-based stochastic network planning (EFBSNP) model is proposed, which emphasizes the effects on the project period not only of logical and organizational relationships but also of the dependence among activity durations caused by shared indeterminate factors. By means of indeterminate-factor analysis the model extracts and quantitatively describes the effect factors, and then accounts for their effect on the schedule using Monte Carlo simulation. The method is flexible enough to deal with effect factors and is consistent with practice. Software has been developed in Visual Studio .NET to simplify the model-based calculation. Finally, a case study demonstrates the applicability of the proposed model, and a comparison shows some advantages over existing models.
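The core point, that a shared indeterminate factor makes durations dependent and widens the project-period distribution, can be demonstrated with a toy Monte Carlo run. The magnitudes below are hypothetical, and the model is far simpler than the EFBSNP formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sim = 20_000

# Two serial activities; "weather" is a shared indeterminate effect factor,
# so the two durations are positively dependent (hypothetical magnitudes)
weather = rng.normal(0.0, 2.0, n_sim)            # common factor draw
d1 = 10 + weather + rng.normal(0.0, 1.0, n_sim)  # activity 1 duration
d2 = 12 + weather + rng.normal(0.0, 1.0, n_sim)  # activity 2 duration
project = d1 + d2                                # project period (serial path)

# Traditional model: same marginal variances but independent draws
d1_i = 10 + rng.normal(0.0, np.sqrt(5.0), n_sim)
d2_i = 12 + rng.normal(0.0, np.sqrt(5.0), n_sim)
indep = d1_i + d2_i

print(round(project.std(), 2), round(indep.std(), 2))
```

The means agree, but the independence assumption understates the spread of the project period (here sqrt(18) versus sqrt(10) in the limit), which is exactly the bias the EFBSNP model is meant to remove.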
The Functional Methods of Discourse Analysis
Institute of Scientific and Technical Information of China (English)
覃卓敏
2008-01-01
From the macroscopic angle of function, methods of discourse analysis are classified in order to identify two important methods in pragmatics, through which discourse can be better understood.
Institute of Scientific and Technical Information of China (English)
苏咸玉
2014-01-01
Objective: To analyze common factors influencing the results of the five-item hepatitis B panel detected by enzyme-linked immunosorbent assay (ELISA). Methods: 132 five-item hepatitis B test specimens from outpatients and inpatients of our hospital between April 2013 and April 2014 were randomly selected and tested by ELISA. Results: The positive rate was 18.94% after a 20-minute standing water bath, 9.08% after 40 minutes, and 0.00% after 60 minutes; negative and positive results of the five-item panel also differed under different serum conditions. Conclusion: As the main index for judging the effect of clinical treatment of hepatitis B, the five-item panel requires strict adherence to standardized operating procedures.
Chapter 11. Community analysis-based methods
Energy Technology Data Exchange (ETDEWEB)
Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.
2010-05-01
Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific markers. The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, the results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.
Determining Dimensionality of Exercise Readiness Using Exploratory Factor Analysis.
Strohacker, Kelley; Zakrajsek, Rebecca A
2016-06-01
Assessment of "exercise readiness" is a central component of the flexible non-linear periodization (FNLP) method of organizing training workloads, but the underlying factor structure of this construct has not been empirically determined. The purpose of this study was to assess the construct dimensionality of exercise readiness using exploratory factor analysis; the results serve as initial steps in developing a brief measure of exercise readiness. Participants consisted of students recruited from undergraduate Kinesiology courses at a racially diverse southern university. Independent, anonymous online survey data were collected across three stages: 1) generation of the item pool (n = 290), 2) assessment of face validity and refinement of the item pool (n = 168), and 3) exploratory factor analysis (n = 684). A principal axis factor analysis was conducted with 41 items using oblique rotation (promax). Four statistically significant factors, as determined through parallel analysis, explained 61.5% of the variance in exercise readiness. Factor 1 contained items representing vitality (e.g., lively, revived). Factor 2 items related to physical fatigue (e.g., tired, drained). Factors 3 and 4 were descriptive of discomfort (e.g., pain, sick) and health (e.g., healthy, fit), respectively. This inductive approach indicates that exercise readiness comprises four dimensions: vitality, physical fatigue, discomfort, and health. This finding supports the readiness assessment techniques currently recommended for practitioners according to the FNLP model. These results serve as a theoretical foundation upon which to further develop and refine a brief survey instrument to measure exercise readiness. Key points: Assessment of exercise readiness is a key component in implementing an exercise program based on flexible nonlinear periodization, but the dimensionality of this concept has not been empirically determined. Based on a series of surveys and a robust exploratory factor analysis…
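Parallel analysis, used in this study to decide that four factors are significant, compares observed eigenvalues with those of random data of the same shape. The sketch below implements Horn's procedure on synthetic two-factor survey data with hypothetical loadings; it is not the study's readiness items.

```python
import numpy as np

def parallel_analysis(X, n_iter=50, seed=0):
    """Horn's parallel analysis: retain factors whose sample eigenvalues
    exceed the mean eigenvalues of random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        R = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        rand += np.sort(np.linalg.eigvalsh(R))[::-1]
    return int((obs > rand / n_iter).sum())

# Hypothetical survey: two correlated item clusters -> two factors expected
rng = np.random.default_rng(4)
f = rng.standard_normal((600, 2))                # latent factor scores
load = np.zeros((8, 2))
load[:4, 0] = 0.8                                # items 1-4 on factor 1
load[4:, 1] = 0.8                                # items 5-8 on factor 2
X = f @ load.T + 0.6 * rng.standard_normal((600, 8))
print(parallel_analysis(X))
```

Retaining only eigenvalues that beat the random baseline guards against the overextraction problem discussed elsewhere in this collection; the Kaiser "eigenvalue > 1" rule typically keeps more factors than parallel analysis does.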
Infinitesimal methods of mathematical analysis
Pinto, J S
2004-01-01
This modern introduction to infinitesimal methods is a translation of the book Métodos Infinitesimais de Análise Matemática by José Sousa Pinto of the University of Aveiro, Portugal and is aimed at final year or graduate level students with a background in calculus. Surveying modern reformulations of the infinitesimal concept with a thoroughly comprehensive exposition of important and influential hyperreal numbers, the book includes previously unpublished material on the development of hyperfinite theory of Schwartz distributions and its application to generalised Fourier transforms and harmon
Identification of noise in linear data sets by factor analysis
Energy Technology Data Exchange (ETDEWEB)
Roscoe, B.A.; Hopke, P.K.
1982-01-01
With the use of atomic and nuclear methods to analyze samples for a multitude of elements, very large data sets have been generated. Because these results are so easily obtained with computerized systems, the elemental data acquired are not always as thoroughly checked as they should be, leading to some, if not many, bad data points. It is advantageous to have some feeling for the trouble spots in a data set before it is used for further studies. A technique which has the ability to identify bad data points, after the data have been generated, is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it to confidently isolate errors.
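A minimal sketch of this screening idea, using a truncated principal-component fit as a stand-in for the authors' classical factor analysis (an assumption, not their exact procedure): entries with unusually large reconstruction residuals are flagged as suspect.

```python
import numpy as np

def flag_outliers(data, n_factors, z=4.0):
    """Fit a truncated principal-component model and flag entries whose
    reconstruction residual is unusually large relative to its column."""
    x = data - data.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    recon = (u[:, :n_factors] * s[:n_factors]) @ vt[:n_factors]
    resid = x - recon
    zscores = (resid - resid.mean(axis=0)) / resid.std(axis=0)
    return np.argwhere(np.abs(zscores) > z)

rng = np.random.default_rng(0)
scores = rng.standard_normal((200, 2))
load = rng.standard_normal((2, 8))
x = scores @ load + 0.05 * rng.standard_normal((200, 8))
x[17, 3] += 10.0          # plant one bad data point
flags = flag_outliers(x, n_factors=2)
print(flags)
```

The planted error at sample 17, element 3 stands far outside the low-rank structure, so its residual z-score dwarfs the threshold.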
Using mixed methods to identify factors influencing patient flow.
Van Vaerenbergh, Cindy
2009-11-01
An effective method of identifying operational factors that influence patient flow can potentially lead to improvements, with substantial benefits for the efficiency of hospital departments. This paper presents a new inductive mixed-method approach to identify operational factors that influence patient flow through an accident and emergency (A&E) department. Preliminary explorative observations were conducted, followed by semi-structured interviews with key stakeholders. A questionnaire survey of all medical, nursing, porter and clerical staff was then conducted. The observations provided factors for further exploration: skill-mix, long working hours, equipment availability, lack of orientation programmes, inefficient IT use and issues regarding communication structures. Interviewees highlighted several factors, including availability of medical supervision and senior nursing staff, nursing documentation issues, lack of morale due to overcrowding, personality differences and factors relating to the department layout. The questionnaire respondents strongly supported the importance of the previously identified factors. This paper demonstrates an effective mixed-method approach that can be replicated by other health-care managers to identify factors influencing patient flow. Further benefits include increased volume and quality of data, increased staff awareness of the influence of internal factors on patient flow and an enhanced evidence base for future decision making when prioritizing A&E projects.
A Beginners Guide to Factor Analysis: Focusing on Exploratory Factor Analysis
Directory of Open Access Journals (Sweden)
An Gie Yong
2013-10-01
The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate count methods.
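The ANOVA partitioning behind this analysis can be illustrated by hand; the plate counts below are hypothetical reading results from three analysts, not the paper's data:

```python
import numpy as np

def one_way_anova(groups):
    """One-way ANOVA by hand: partition total variability into
    between-group and within-group sums of squares and return the F ratio."""
    all_obs = np.concatenate(groups)
    grand_mean = all_obs.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical plate counts (CFU) read by three analysts from the same product
analyst_a = np.array([52.0, 55, 49, 50, 53])
analyst_b = np.array([61.0, 64, 59, 63, 60])
analyst_c = np.array([51.0, 48, 52, 50, 49])
f, df1, df2 = one_way_anova([analyst_a, analyst_b, analyst_c])
print(round(f, 2), df1, df2)  # -> 45.07 2 12
```

A large F relative to the F(2, 12) reference distribution indicates that reader-to-reader differences dominate the within-reader scatter, matching the paper's finding that individual reading errors are a significant component.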
Institute of Scientific and Technical Information of China (English)
吴窈画; 谈书华; 范超超; 李正华
2011-01-01
The MTT method is a sensitive, rapid, and simple assay for counting living cells. This paper reports an analysis of the main factors affecting the MTT method when applied to quantitative analysis of bacterial cell numbers: incubation time before the assay, MTT reaction time, MTT dosage, and detection wavelength. It also compares detection after dissolving the MTT reaction product with direct detection, and compares quantitative results of the MTT method with those of the plate colony counting method. The results were as follows: (1) formazan production correlated positively with the number of living bacterial cells, with the change most pronounced during the logarithmic growth phase; (2) with 0.25 mg/mL MTT and a reaction time of 2 h, absorbance showed a good linear relationship with colony count (r = 0.9998); (3) the maximum absorption wavelength of formazan was 570-580 nm; (4) statistical analysis showed no significant difference between detection after dissolving the reaction product and direct detection (P > 0.05); (5) for bacterial cell numbers in the range of 10^7-10^9 CFU/mL, the MTT method showed a good linear relationship with the plate colony counting method (r >= 0.9917).
Institute of Scientific and Technical Information of China (English)
范文波; 吴普特; 韩志全; 姚斌
2012-01-01
Accurate estimation of reference crop evapotranspiration (ET0) is essential to water resources planning and farm irrigation scheduling. Based on meteorological data from 1953 to 2008, ET0 in the Manas river basin (a typical inland river basin in Xinjiang province) was calculated by the Penman-Monteith (PM) method, and the meteorological factors affecting ET0 were analyzed by path analysis. The results showed that temperature was the main factor affecting ET0. Taking the ET0 calculated by PM as the standard value, monthly ET0 values calculated by the Hargreaves method were higher, especially from April to October. Based on 35 years of meteorological data, the Hargreaves method was modified by the Bayesian method and validated with the meteorological data of the remaining 21 years. Results indicated that the accuracy of the modified Hargreaves method in calculating ET0 was greatly improved. The modified Hargreaves method, which is simpler than the PM method, can be used to calculate ET0 in other inland river areas.
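A sketch of the idea, assuming the standard Hargreaves (1985) form of the equation and substituting a simple least-squares recalibration of its multiplicative coefficient for the paper's Bayesian revision; the weather series below is synthetic:

```python
import numpy as np

# Standard Hargreaves (1985) form; Ra is extraterrestrial radiation expressed
# in mm/day of evaporation equivalent.
def hargreaves_et0(tmean, tmax, tmin, ra):
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

# Stand-in for the paper's Bayesian recalibration: fit a multiplicative
# coefficient (replacing 0.0023) by least squares against reference
# Penman-Monteith values.
def recalibrate(tmean, tmax, tmin, ra, et0_pm):
    basis = ra * (tmean + 17.8) * np.sqrt(tmax - tmin)   # ET0 = c * basis
    return (basis @ et0_pm) / (basis @ basis)

rng = np.random.default_rng(0)
tmin = rng.uniform(5, 15, 60)
tmax = tmin + rng.uniform(8, 15, 60)
tmean = (tmin + tmax) / 2
ra = rng.uniform(8, 16, 60)
et0_pm = 0.0019 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)  # pretend PM truth
c = recalibrate(tmean, tmax, tmin, ra, et0_pm)
print(round(c, 4))  # recovers 0.0019 for this synthetic series
```

The fitted coefficient would then replace 0.0023 when applying Hargreaves in the basin, shrinking the April-October overestimation noted above.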
Dimensionality of an Early Childhood Scale Using Rasch Analysis and Confirmatory Factor Analysis.
Banerji, Madhabi; Smith, Richard M.; Dedrick, Robert F.
1997-01-01
This paper explores the use of Rasch analysis and linear confirmatory factor analysis to investigate the dimensionality of an early childhood test, the Gesell School Readiness Screening Test (F. Ilg and others, 1978). Discusses empirical analyses of results from 523 kindergarten students using both methods. (SLD)
Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.
Tomas, Jose M.; Oliver, Amparo
1999-01-01
Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)
Institute of Scientific and Technical Information of China (English)
孙世光; 李自发; 孙鹏; 魏盛; 乔明琦; 张惠云
2011-01-01
Objective: To explore the dimensional structure of the open field test (OFT) as a behavioral assessment method for Kunming mice. Methods: Adult male Kunming mice were placed in the central square of an open field box and a video system recorded their behavior for 5 min; the test was performed twice with a one-week interval. The following parameters were evaluated by factor analysis: percentage of time spent in the central area (Ctime%), percentage of square crossings in the central area (Ccross%), total number of square crossings in the whole apparatus (Cross), total number of rears in the whole apparatus (Rear), and number of fecal boli (FB). Results: Good test and retest intercorrelations were found between Ctime% and Ccross% (0.756, P < 0.001; Pearson = 0.869, P < 0.001), Ctime% and Rear (0.694, P < 0.01; Pearson = 0.465, P < 0.05), and Cross and Rear (0.599, P < 0.01; Pearson = 0.739, P < 0.001); Ctime% (0.586, P < 0.01), Ccross% (0.559, P < 0.05), Cross (0.633, P < 0.01), and Rear (0.612, P < 0.01) each showed good test-retest correlations. Three common factors were extracted from all OFT parameters in both test and retest: Ctime% and Ccross% contributed most to an anxiety factor F1 (loading: 43.34%, 48.56%), Cross and Rear to a locomotion-exploration factor F2 (loading: 27.94%, 29.30%), and FB to an emotionality factor F3 (loading: 21.95%, 15.93%); the test-retest correlation of F1 and F2 (0.567, P < 0.01; Pearson = 0.538, P < 0.05) was also good. Conclusion: As a behavioral assessment method for Kunming mice, the OFT has a three-dimensional structure: an anxiety factor dimension, an OFT activity factor dimension (locomotion-exploration), and an emotionality factor dimension.
West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID
2012-05-29
Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.
Analysis methods for facial motion
Directory of Open Access Journals (Sweden)
Katsuaki Mishima
2009-05-01
Objective techniques to evaluate facial movement are indispensable for the contemporary treatment of patients with motor disorders such as facial paralysis, cleft lip, and postoperative head and neck cancer. Recently, computer-assisted, video-based techniques have been devised and reported as measuring systems in which facial movements can be evaluated quantitatively. Commercially available motion analysis systems are also utilized; these use a stereo-measuring technique with multiple cameras and markers to facilitate matching among the images from all cameras. The key issues are how the features of facial movement can be extracted precisely, and how useful information for the diagnosis and decision-making process can be derived from analyses of facial movement. It is therefore important to discuss which facial animations should be examined, and whether fixation of the head and markers attached to the face hamper natural facial movement.
Methods for estimating uncertainty in factor analytic solutions
Directory of Open Access Journals (Sweden)
P. Paatero
2013-08-01
EPA PMF version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
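The classical bootstrap (BS) idea can be sketched outside PMF: resample observations with replacement, re-extract the factor each time, and summarize the spread of the loadings. The sketch below uses a single principal-component factor as a stand-in for a PMF solution, with sign alignment to handle the usual indeterminacy:

```python
import numpy as np

def bootstrap_loading_se(data, n_boot=200, seed=0):
    """Classical bootstrap: resample rows with replacement, re-extract the
    leading factor, and report the spread of its loadings across replicates."""
    rng = np.random.default_rng(seed)

    def leading_loading(x):
        xc = x - x.mean(axis=0)
        _, s, vt = np.linalg.svd(xc, full_matrices=False)
        return vt[0] * s[0] / np.sqrt(len(x) - 1)

    ref = leading_loading(data)
    reps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(data), len(data))
        rep = leading_loading(data[idx])
        if rep @ ref < 0:       # resolve sign indeterminacy
            rep = -rep
        reps.append(rep)
    return ref, np.std(reps, axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal((300, 1)) @ np.ones((1, 5)) + 0.3 * rng.standard_normal((300, 5))
loading, se = bootstrap_loading_se(x)
print(np.round(se, 3))
```

BS captures uncertainty from random errors only; DISP-style displacement of factor elements, which probes rotational ambiguity, is a separate step not shown here.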
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
Based on work by Pearson in 1901, Hotelling in 1933 introduced principal component analysis (PCA). PCA is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data; see Jolliffe for a comprehensive description of PCA and related techniques. An interesting dilemma in reduction of dimensionality of data is the desire to obtain simplicity for better understanding, visualization and interpretation of the data on the one hand, and the desire to retain sufficient detail for adequate representation on the other hand. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
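A minimal kernel PCA sketch with an RBF kernel (the kernel choice and the two-ring data are illustrative, not from the paper): the kernel matrix is centered in feature space and eigendecomposed in place of the covariance matrix.

```python
import numpy as np

def kernel_pca(x, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: double-center the kernel matrix in
    feature space, then eigendecompose it instead of the covariance matrix."""
    sq = np.sum(x**2, axis=1)
    k = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * x @ x.T))
    n = len(x)
    one = np.ones((n, n)) / n
    kc = k - one @ k - k @ one + one @ k @ one   # double centering
    eigval, eigvec = np.linalg.eigh(kc)
    order = np.argsort(eigval)[::-1][:n_components]
    # projections of the training points onto the kernel principal axes
    return eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0))

# Two concentric rings: linearly inseparable, a classic kernel PCA demo.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.where(np.arange(200) < 100, 1.0, 3.0)
x = np.c_[r * np.cos(theta), r * np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))
proj = kernel_pca(x, n_components=2, gamma=0.5)
print(proj.shape)  # (200, 2)
```

Linear PCA on these rings would return rotated Cartesian axes; the kernel components instead reflect structure induced by the RBF similarity.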
Nominal Performance Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-09-08
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The "Biosphere Model Report" (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the "Biosphere Model Report" in Figure 1-1, contain detailed descriptions of the model input parameters, their development, and the relationship between the parameters and specific features, events, and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA, as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle...
The concept of key success factors: Theory and method
DEFF Research Database (Denmark)
Grunert, Klaus G.; Ellegaard, Charlotte
1992-01-01
...and resources required to be successful in a given market. We adopt the last view. 1. A key success factor is a skill or resource that a business can invest in, which, on the market the business is operating on, explains a major part of the observable differences in perceived value and/or relative costs. 2. The actual key success factors on a market, and those key success factors perceived by decision-makers in companies operating in the market, will be different. A number of psychological mechanisms result in misperceptions of the causes of success on a market. Both the actual key success factors on a market, and the way they are perceived by decision-makers, are amenable to scientific analysis. Such an analysis can improve performance of decision-makers on that market. 3. The major immediate causes of success on any market... 4. Key success factors differ from core skills and resources, which are prerequisites for being on a market, but do not explain...
An alternative method for centrifugal compressor loading factor modelling
Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.
2017-08-01
The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed defining the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. This alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
Probabilistic structural analysis by extremum methods
Nafday, Avinash M.
1990-01-01
The objective is to demonstrate discrete extremum methods of structural analysis as a tool for structural system reliability evaluation. Specifically, linear and multiobjective linear programming models for analysis of rigid plastic frames under proportional and multiparametric loadings, respectively, are considered. Kinematic and static approaches for analysis form a primal-dual pair in each of these models and have a polyhedral format. Duality relations link extreme points and hyperplanes of these polyhedra and lead naturally to dual methods for system reliability evaluation.
Sensitivity Analysis Using Simple Additive Weighting Method
Directory of Open Access Journals (Sweden)
Wayne S. Goodridge
2016-05-01
The output of a multiple criteria decision method often has to be analyzed using some sensitivity analysis technique. The SAW MCDM method is commonly used in the management sciences, and there is a critical need for a robust approach to sensitivity analysis in a context where uncertain data is often present in decision models. Most sensitivity analysis techniques for the SAW method involve Monte Carlo simulation on the initial data. These methods are computationally intensive and often require complex software. In this paper, the SAW method is extended to include an objective function which makes it easy to analyze the influence of specific changes in certain criteria values, thus making it easy to perform sensitivity analysis.
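The core SAW computation, before any sensitivity extension, can be sketched as follows; the alternatives, weights, and benefit/cost labels below are hypothetical:

```python
import numpy as np

def saw_rank(decision_matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (benefit: x / max, cost: min / x), then score by the weighted sum."""
    m = decision_matrix.astype(float)
    norm = np.where(benefit, m / m.max(axis=0), m.min(axis=0) / m)
    scores = norm @ weights
    return scores, np.argsort(scores)[::-1]

# Hypothetical: 3 alternatives x 3 criteria (quality, speed, cost)
m = np.array([[7.0, 9.0, 250.0],
              [9.0, 6.0, 300.0],
              [8.0, 8.0, 200.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, False])   # the cost criterion is minimized
scores, ranking = saw_rank(m, weights, benefit)
print(np.round(scores, 3), ranking)
```

A crude sensitivity check in this framing is simply re-running `saw_rank` with perturbed weights or criterion values and watching whether the ranking flips, which is the kind of analysis the paper's objective-function extension streamlines.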
Quantitative Risk Analysis: Method And Process
Directory of Open Access Journals (Sweden)
Anass BAYAGA
2010-03-01
Recent and past studies (King III report, 2009: 73-75; Stoney, 2007; Committee of Sponsoring Organisations (COSO), 2004; Bartell, 2003; Liebenberg and Hoyt, 2003; Reason, 2000; Markowitz, 1957) lament that although the introduction of quantifying risk to enhance the degree of objectivity in finance, for instance, developed in parallel with its development in the manufacturing industry, the same is not true in Higher Education Institutions (HEIs). In this regard, the objective of the paper was to demonstrate the methods and process of Quantitative Risk Analysis (QRA) through likelihood of occurrence of risk (phase I). This paper serves as the first of a two-phased study, which sampled one hundred (100) risk analysts in a University in the greater Eastern Cape Province of South Africa. The analysis of likelihood of occurrence of risk by logistic regression and percentages was conducted to investigate whether or not there was a significant difference between groups (analysts) in respect of QRA. The Hosmer and Lemeshow test was non-significant with a chi-square (X2 = 8.181; p = 0.300), which indicated a good model fit, since the data did not significantly deviate from the model. The study concluded that to derive an overall likelihood rating indicating the probability that a potential risk may be exercised within the construct of an associated threat environment, the following governing factors must be considered: (1) threat source motivation and capability, (2) nature of the vulnerability, and (3) existence and effectiveness of current controls (methods and process).
Constructing an Intelligent Patent Network Analysis Method
Directory of Open Access Journals (Sweden)
Chao-Chan Wu
2012-11-01
Patent network analysis, an advanced method of patent analysis, is a useful tool for technology management. This method visually displays all the relationships among the patents and enables the analysts to intuitively comprehend the overview of a set of patents in the field of the technology being studied. Although patent network analysis possesses relative advantages different from traditional methods of patent analysis, it is subject to several crucial limitations. To overcome the drawbacks of the current method, this study proposes a novel patent analysis method, called the intelligent patent network analysis method, to make a visual network with great precision. Based on artificial intelligence techniques, the proposed method provides an automated procedure for searching patent documents, extracting patent keywords, and determining the weight of each patent keyword in order to generate a sophisticated visualization of the patent network. This study proposes a detailed procedure for generating an intelligent patent network that is helpful for improving the efficiency and quality of patent analysis. Furthermore, patents in the field of Carbon Nanotube Backlight Unit (CNT-BLU) were analyzed to verify the utility of the proposed method.
An efficient method for identification of risk factors
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
This paper presents a method to identify risk factors during bridge construction. The method integrates the concepts of the analytical hierarchy process and the fuzzy consistent matrix method. The advantage of the method is that instead of using the 9-point scale of relative importance of the conventional analytical hierarchy process, it uses a 3-point scale to describe the scale of importance, thus greatly simplifying the identification problem of risk factors. Moreover, the difficulties of making judgments and comparisons caused by uncertainty, which jeopardize the accuracy of the results in the conventional analytical hierarchy process, can also be overcome. Another advantage of the method is that it does not involve consistency checking, thus saving a large amount of CPU time. It has been demonstrated with a numerical example that the proposed fuzzy analytical hierarchy process based on a 3-point scale can offer significant computational savings over the conventional analytical hierarchy process.
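One way a 3-point comparison scale can be turned into factor weights is through a fuzzy complementary judgment matrix. The row-sum weight formula below is a common choice in the fuzzy-AHP literature, not necessarily the exact formula of this paper, and the judgments are hypothetical:

```python
import numpy as np

# A fuzzy complementary judgment matrix on a 3-point scale:
# a[i, j] = 1 if factor i is more important than j, 0.5 if equally important,
# 0 if less important, with a[i, j] + a[j, i] = 1.
def fuzzy_weights(a):
    """Row-sum weighting rule for a fuzzy complementary matrix:
    w_i = (sum_j a[i, j] + n/2 - 1) / (n * (n - 1)). One common convention;
    treat it as an assumption, not the paper's formula."""
    n = len(a)
    return (a.sum(axis=1) + n / 2 - 1) / (n * (n - 1))

a = np.array([[0.5, 1.0, 1.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.5]])
w = fuzzy_weights(a)
print(np.round(w, 3))  # weights sum to 1, ordered by importance
```

Because the matrix is complementary by construction, no separate consistency check is needed, which is the computational saving the abstract highlights.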
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
Institute of Scientific and Technical Information of China (English)
徐洪国
2004-01-01
We present a numerical method for solving the indefinite least squares problem. We first normalize the coefficient matrix. Then we compute the hyperbolic QR factorization of the normalized matrix. Finally, we compute the solution by solving several triangular systems. We give a first-order error analysis to show that the method is backward stable. The method is more efficient than the backward stable method proposed by Chandrasekaran, Gu and Sayed.
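For reference, the indefinite least squares problem min (b - Ax)^T J (b - Ax), with signature matrix J = diag(I_p, -I_q), can be solved naively through its normal equations A^T J A x = A^T J b. This baseline is not the paper's backward stable hyperbolic QR method, only a way to sanity-check results on well-conditioned problems:

```python
import numpy as np

def indefinite_lstsq_normal(a, b, p):
    """Naive normal-equations solution of the indefinite least squares
    problem with J = diag(I_p, -I_q). Requires A^T J A to be nonsingular
    (positive definite for the problem to have a minimum)."""
    j = np.ones(len(b))
    j[p:] = -1.0
    aj = a * j[:, None]          # J A without forming J explicitly
    return np.linalg.solve(aj.T @ a, aj.T @ b)

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = a @ x_true                   # consistent system: residual is zero
x = indefinite_lstsq_normal(a, b, p=6)
print(np.round(x, 6))
```

Squaring the condition number through A^T J A is exactly what the hyperbolic QR approach avoids, which is why the paper's method is preferred on ill-conditioned data.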
Analysis of Ultra Linguistic Factors in Interpretation
Institute of Scientific and Technical Information of China (English)
姚嘉
2015-01-01
The quality of interpretation is a dynamic conception involving a good deal of variables, such as the participants, the situation, working conditions, and cultures. Therefore, in interpretation, static elements such as traditional grammar and fixed linguistic rules cannot be counted as the only criteria of quality. That is, there are many other non-language elements, ultra-linguistic factors, that play an important role in interpretation. Ultra-linguistic factors go beyond the bounds of traditional grammar and parole, and reveal facts in an indirect way. This paper gives a brief analysis of ultra-linguistic elements in interpretation in order to achieve better results in interpretation practice.
Matrix methods for bare resonator eigenvalue analysis.
Latham, W P; Dente, G C
1980-05-15
Bare resonator eigenvalues have traditionally been calculated using Fox and Li iterative techniques or the Prony method presented by Siegman and Miller. A theoretical framework for bare resonator eigenvalue analysis is presented. Several new methods are given and compared with the Prony method.
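The Fox and Li iterative technique mentioned above is essentially a power iteration on the resonator's round-trip operator: a trial field is propagated repeatedly until it converges to the dominant mode, and the norm ratio converges to the eigenvalue magnitude. The sketch below uses a random positive semidefinite matrix as a stand-in for a discretized propagation kernel:

```python
import numpy as np

def fox_li_dominant(mat, n_iter=1000, seed=0):
    """Fox-Li style power iteration: repeatedly apply the round-trip
    operator to a trial field; the field converges to the dominant mode
    and the norm ratio to the dominant eigenvalue magnitude."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(mat.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        w = mat @ v
        lam = np.linalg.norm(w) / np.linalg.norm(v)
        v = w / np.linalg.norm(w)
    return lam, v

# Stand-in "round-trip operator": a random PSD matrix rather than a
# discretized resonator propagation kernel.
rng = np.random.default_rng(1)
g = rng.standard_normal((40, 40))
m = g @ g.T / 40
lam, mode = fox_li_dominant(m)
print(round(lam, 4))
```

Prony-type acceleration, as in Siegman and Miller, extracts several eigenvalues from the iteration history instead of only the dominant one; that refinement is not shown here.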
Text analysis methods, text analysis apparatuses, and articles of manufacture
Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M
2014-10-28
Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.
Factor Rotation and Standard Errors in Exploratory Factor Analysis
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
Determining the penetrability factor using a small perturbations method
Energy Technology Data Exchange (ETDEWEB)
Khayrullin, M.Kh.
1983-01-01
An iterative process is constructed for finding the penetrability factor, based on small-perturbation formulas under the assumption that the penetrability factor belongs to a class of piecewise constant functions. The iterative process is built in the following manner: finite-difference analogs of the direct and adjoint problems are solved at each step, and these solutions are then used to obtain a system of linear algebraic equations for the perturbations in the penetrability factor. The refined values of the penetrability factor serve as the initial data for the next step of the iteration. When the penetrability factor belongs to another class of functions, its estimate in the class of piecewise constant functions can still be built from the known values of the bottom-hole pressures and flow rates using this iterative process. Examples of such estimates are given and compared with estimates obtained through a least squares method in the class of piecewise constant functions.
Analysis of Interaction Factors Between Two Piles
Institute of Scientific and Technical Information of China (English)
CAO Ming; CHEN Long-zhu
2008-01-01
A rigorous analytical method is presented for calculating the interaction factor between two identical piles subjected to vertical loads. Following the technique proposed by Muki and Sternberg, the problem is decomposed into an extended soil mass and two fictitious piles characterized respectively by Young's modulus of the soil and that of the difference between the pile and soil. The unknown axial forces along the fictitious piles are determined by solving a Fredholm integral equation of the second kind, which imposes the compatibility condition that the axial strains of the fictitious piles are equal to those corresponding to the centroidal axes of the extended soil. The real pile forces and displacements can subsequently be calculated based on the determined fictitious pile forces, and finally, the desired pile interaction factors may be obtained. Results confirm the validity of the proposed approach and portray the influence of the governing parameters on the pile interaction.
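A Fredholm integral equation of the second kind, such as the one imposed by the compatibility condition, can be solved numerically by the Nystrom method: discretize the integral with a quadrature rule and solve the resulting linear system. The kernel below is a separable toy example with a known exact solution, not the pile problem itself:

```python
import numpy as np

def solve_fredholm2(kernel, f, lam, a, b, n=200):
    """Nystrom method for u(x) = f(x) + lam * integral_a^b K(x, t) u(t) dt,
    using the trapezoidal rule: solve (I - lam * K W) u = f at the nodes."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] = w[-1] = (b - a) / (2 * (n - 1))   # trapezoid endpoint weights
    k = kernel(x[:, None], x[None, :])
    u = np.linalg.solve(np.eye(n) - lam * k * w[None, :], f(x))
    return x, u

# Separable test kernel K(x, t) = x * t on [0, 1] with f(x) = x and lam = 1:
# the exact solution is u(x) = 1.5 * x.
x, u = solve_fredholm2(lambda x, t: x * t, lambda x: x, 1.0, 0.0, 1.0)
print(np.max(np.abs(u - 1.5 * x)) < 1e-3)  # True
```

In the pile problem the kernel encodes soil-pile influence along the shafts rather than this toy product form, but the discretize-and-solve structure is the same.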
Impact factors of fractal analysis of porous structure
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
Characterization of pore structure is one of the key problems in fabrication and application research on porous materials. However, the complexity of pore structure makes it difficult to characterize by Euclidean geometry and traditional experimental methods. Fractal theory has been proved effective for characterizing complex pore structures. In this paper, the box dimension method based on fractal theory was applied to characterizing the pore structure of fibrous porous materials by analyzing scanning electron microscope (SEM) images of the materials. The influences of image resolution, threshold value, and image magnification on the fractal analysis were investigated. The results indicate that such factors greatly affect the fractal analysis process and its results; appropriate choices of magnification and threshold are therefore necessary for reliable fractal analysis.
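The box dimension method can be sketched directly on a binary image: count occupied boxes at a series of box sizes and fit the log-log slope. A filled square serves as a sanity check, since its box-counting dimension should be 2:

```python
import numpy as np

def box_counting_dimension(img):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied boxes N(s) at dyadic box sizes s, then fit
    log N(s) = -D * log s + c and return D."""
    sizes, counts = [], []
    s = img.shape[0] // 2
    while s >= 1:
        # partition into s x s boxes and count boxes containing any pixel
        reshaped = img[:img.shape[0] // s * s, :img.shape[1] // s * s]
        view = reshaped.reshape(reshaped.shape[0] // s, s,
                                reshaped.shape[1] // s, s)
        sizes.append(s)
        counts.append(view.any(axis=(1, 3)).sum())
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A filled square should come out close to dimension 2.
img = np.ones((256, 256), dtype=bool)
print(round(box_counting_dimension(img), 2))  # -> 2.0
```

In practice the input would be a thresholded SEM image, which is exactly where the abstract's warnings about resolution, threshold, and magnification enter: each changes which boxes count as occupied.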
Scalable group level probabilistic sparse factor analysis
DEFF Research Database (Denmark)
Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard
2017-01-01
Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...
Disruptive Event Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2004-09-08
This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this
Personality and coping traits: A joint factor analysis.
Ferguson, Eamonn
2001-11-01
OBJECTIVES: The main objective of this paper is to explore the structural similarities between Eysenck's model of personality and the dimensions of the dispositional COPE. Costa et al. {Costa P., Somerfield, M., & McCrae, R. (1996). Personality and coping: A reconceptualisation. In (pp. 44-61) Handbook of coping: Theory, research and applications. New York: Wiley} suggest that personality and coping behaviour are part of a continuum based on adaptation. If this is the case, there should be structural similarities between measures of personality and coping behaviour. This is tested using a joint factor analysis of personality and coping measures. DESIGN: Cross-sectional survey. METHODS: The EPQ-R and the dispositional COPE were administered to 154 participants, and the data were analysed using joint factor analysis and bivariate associations. RESULTS: The joint factor analysis indicated that these data were best explained by a four-factor model. One factor was primarily unrelated to personality. There was a COPE-neurotic-introvert factor (NI-COPE) containing coping behaviours such as denial, a COPE-extroversion (E-COPE) factor containing behaviours such as seeking social support and a COPE-psychoticism factor (P-COPE) containing behaviours such as alcohol use. This factor pattern, especially for NI- and E-COPE, was interpreted in terms of Gray's model of personality {Gray, J. A. (1987) The psychology of fear and stress. Cambridge: Cambridge University Press}. NI-, E-, and P-COPE were shown to be related, in a theoretically consistent manner, to perceived coping success and perceived coping functions. CONCLUSIONS: The results indicate that there are indeed conceptual links between models of personality and coping. It is argued that future research should focus on identifying coping 'trait complexes'. Implications for practice are discussed.
Institute of Scientific and Technical Information of China (English)
苏建云; 黄耀裔; 李子蓉
2016-01-01
Comprehensive evaluation of groundwater quality is both absolute and relative and involves multiple indexes, and correlation and multi-collinearity problems arise in the evaluation process. The evaluation result can therefore be made more objective by overcoming correlation and multi-collinearity through factor analysis, and by meeting the requirement of combining absoluteness and relativity through an improved and extended TOPSIS method (TOPSIS based on squared weighted Euclidean distance). This paper expounds the implementation process and the effect of the evaluation method, taking the shallow groundwater of Jinjiang City, Fujian Province as the validation study area, and visualizes the evaluation results with Kriging interpolation. The results show that the shallow groundwater in Jinjiang City is overall of good quality. The study results can serve as a decision basis for the protection and utilization of shallow groundwater.
Factoring-based method for the design of a nuclear fuel
Energy Technology Data Exchange (ETDEWEB)
Guzman-Arriaga, Rafael; Espinosa-Paredes, Gilberto [Division de Ciencias Basicas e Ingenieria, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco 186 Col. Vicentina, Mexico 09340, D. F. (Mexico)
2010-05-15
In this work a simple method for fuel lattice design is presented. The method is focused on finding the radial distribution of fuel rods having different fissile contents to obtain a prescribed neutron multiplication factor k{sub {infinity}} at a certain discharge burnup and to minimize the rod power peaking. The method is based on factorization of the fissile content of each fuel rod, and its performance was demonstrated with a fuel design composed of enriched uranium for a typical boiling water reactor (BWR). The results show that the factoring-based method for the design of a nuclear fuel converges to a minimum rod power peaking and a prescribed k{sub {infinity}} in a few iterations. A comparative analysis shows that the proposed method is more efficient than existing methods. (author)
Parametric Methods for Order Tracking Analysis
DEFF Research Database (Denmark)
Jensen, Tobias Lindstrøm; Nielsen, Jesper Kjær
2017-01-01
Order tracking analysis is often used to find the critical speeds at which structural resonances are excited by a rotating machine. Typically, order tracking analysis is performed via non-parametric methods. In this report, however, we demonstrate some of the advantages of using a parametric method...... for order tracking analysis. Specifically, we show that we get a much better time and frequency resolution, obtain a much more robust and accurate estimate of the RPM profile, and are able to perform accurate order tracking analysis even without the tachometer signal....
Beam-propagation method - Analysis and assessment
van Roey, J.; van der Donk, J.; Lagasse, P. E.
1981-07-01
A method for the calculation of the propagation of a light beam through an inhomogeneous medium is presented. A theoretical analysis of this beam-propagation method is given, and a set of conditions necessary for the accurate application of the method is derived. The method is illustrated by the study of a number of integrated-optic structures, such as thin-film waveguides and gratings.
Fractal methods in image analysis and coding
Neary, David
2001-01-01
In this thesis we present an overview of image processing techniques which use fractal methods in some way. We show how these fields relate to each other, and examine various aspects of fractal methods in each area. The three principal fields of image processing and analysis that we examine are texture classification, image segmentation and image coding. In the area of texture classification, we examine fractal dimension estimators, comparing these methods to other methods in use, a...
Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan; Spell, Gregory; Carin, Lawrence
2017-04-20
We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks achieving state of the art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.
The crowding factor method applied to parafoveal vision
Ghahghaei, Saeideh; Walker, Laura
2016-01-01
Crowding increases with eccentricity and is most readily observed in the periphery. During natural, active vision, however, central vision plays an important role. Measures of critical distance to estimate crowding are difficult in central vision, as these distances are small. Any overlap of flankers with the target may create an overlay masking confound. The crowding factor method avoids this issue by simultaneously modulating target size and flanker distance and using a ratio to compare crowded to uncrowded conditions. This method was developed and applied in the periphery (Petrov & Meleshkevich, 2011b). In this work, we apply the method to characterize crowding in parafoveal vision and find weaker crowding than in the periphery, yet radial/tangential asymmetries are clearly preserved. There are considerable idiosyncratic differences observed between participants. The crowding factor method provides a powerful tool for examining crowding in central and peripheral vision, which will be useful in future studies that seek to understand visual processing under natural, active viewing conditions. PMID:27690170
Analysis of Recurrence Factor of Postoperative Papillary Thyroid Cancer
Directory of Open Access Journals (Sweden)
XING Lan-lan;CHEN Song;LI Ya-ming
2014-02-01
Full Text Available To investigate the factors that influence the recurrence of papillary thyroid cancer, 69 patients with papillary thyroid cancer treated from January 1, 2011 to March 30, 2013, who met the inclusion criteria and had complete clinical data, were analyzed retrospectively (18 males and 51 females; average age 40.17±12.97 years). Thyroid ultrasonography, thyroid function tests, and thyroglobulin and antibody measurements were performed on all patients, and thyroid function was checked three or more times under continuous levothyroxine treatment. Univariate analyses were performed using SPSS 17.0 on patients' gender, age, tumor size, type of operation, the degree of TSH inhibition under postoperative levothyroxine, and whether 131I thyroid remnant ablation was performed. Binary logistic regression analysis was used to study recurrence factors in the multivariate analysis. ROC curves were drawn, and the TSH threshold for evaluating tumor recurrence was determined using the Youden index method. Univariate analysis showed no statistically significant association between papillary thyroid cancer recurrence and patients' age or surgical approach (P = 0.373, P = 0.226), but recurrence was related to gender, tumor size, the degree of postoperative TSH suppression, and removal of residual thyroid tissue (P = 0.031, P = 0.004, P = 0.00001, P = 0.00005). Males, large tumors, high postoperative TSH values, and patients whose residual thyroid tissue was not removed after surgery had higher recurrence rates. Logistic regression analysis showed that tumor size, the degree of postoperative TSH suppression, and removal of residual thyroid tissue were the influencing factors of tumor recurrence. The critical postoperative TSH suppression value for tumor recurrence was determined as 0.2235 mU/L by the Youden index method. Large tumors, high postoperative TSH values, and no removal of the residual thyroid tissue had more influence
Analysis on some factors affecting MIMO in tunnel
Zheng, Hong-dang; Nie, Xiao-Yan; Xu, Zhao
2009-07-01
Based on the 3D-GBSB (three-dimensional geometrically based single-bounce) model and the MIMO channel capacity function, the impacts of the transceiver antenna arrays, antenna spacing, antenna array angle, SNR, and Rician K-factor on the capacity of a frequency-nonselective fading MIMO channel are analyzed geometrically. The Monte Carlo method is applied to simulate the wireless fading channel and to obtain the cumulative distribution function of the capacity.
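The Monte Carlo capacity estimate referred to above can be sketched for the simplest case. The block below uses a generic i.i.d. Rayleigh flat-fading channel with equal power allocation, not the 3D-GBSB geometry of the paper, so it illustrates only the capacity formula and the simulation loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(nt, nr, snr_db, trials=2000):
    """Monte Carlo estimate of ergodic MIMO capacity (bits/s/Hz) for an
    i.i.d. Rayleigh flat-fading channel with equal power allocation:
    C = log2 det(I + (SNR/nt) * H @ H^H)."""
    snr = 10.0 ** (snr_db / 10.0)
    caps = np.empty(trials)
    for i in range(trials):
        # Complex Gaussian channel matrix with unit-variance entries.
        H = (rng.standard_normal((nr, nt))
             + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        caps[i] = np.log2(np.linalg.det(M).real)
    return caps.mean()

c11 = ergodic_capacity(1, 1, 10.0)  # SISO baseline
c22 = ergodic_capacity(2, 2, 10.0)  # 2x2 MIMO
```

Collecting the per-trial capacities instead of their mean yields the cumulative distribution function mentioned in the abstract.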
A Primer on Bootstrap Factor Analysis as Applied to Health Studies Research
Lu, Wenhua; Miao, Jingang; McKyer, E. Lisako J.
2014-01-01
Objectives: To demonstrate how the bootstrap method could be conducted in exploratory factor analysis (EFA) with a syntax written in SPSS. Methods: The data obtained from the Texas Childhood Obesity Prevention Policy Evaluation project (T-COPPE project) were used for illustration. A 5-step procedure to conduct bootstrap factor analysis (BFA) was…
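The bootstrap idea behind the SPSS syntax described above can be sketched in a few lines. The block below is a minimal illustration assuming principal-component loadings as the extraction method; the dataset, loadings, and sign-alignment step are illustrative choices, not the T-COPPE procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_loadings(X, k):
    """First k principal-component loadings of the correlation matrix."""
    vals, vecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(vals[idx])

def bootstrap_loadings(X, k=1, n_boot=500):
    """Percentile bootstrap for loadings: resample rows, re-extract, align
    signs against the full-sample solution, then take 2.5/97.5 percentiles."""
    ref = pca_loadings(X, k)
    ref *= np.sign(ref.sum(axis=0, keepdims=True))  # orient reference
    boots = np.empty((n_boot,) + ref.shape)
    for i in range(n_boot):
        Xb = X[rng.integers(0, len(X), len(X))]
        L = pca_loadings(Xb, k)
        L *= np.sign((L * ref).sum(axis=0, keepdims=True))  # undo sign flips
        boots[i] = L
    return np.percentile(boots, [2.5, 97.5], axis=0)

# One common factor drives four indicators, so all loadings should be high.
f = rng.standard_normal(300)
X = f[:, None] + 0.5 * rng.standard_normal((300, 4))
lo, hi = bootstrap_loadings(X, k=1)
```

The sign-alignment step matters: eigenvector signs are arbitrary, so without it the bootstrap distribution of a loading can straddle zero spuriously.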
Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR
Directory of Open Access Journals (Sweden)
James Baglin
2014-06-01
Full Text Available Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many guidelines have been proposed with the aim to improve application. Unfortunately, implementing recommended EFA practices has been restricted by the range of options available in commercial statistical packages and, perhaps, due to an absence of clear, practical 'how-to' demonstrations. Consequently, this article describes the application of methods recommended to get the most out of your EFA. The article focuses on dealing with the common situation of analysing ordinal data as derived from Likert-type scales. These methods are demonstrated using the free, stand-alone, easy-to-use and powerful EFA package FACTOR (http://psico.fcep.urv.es/utilitats/factor/; Lorenzo-Seva & Ferrando, 2006). The demonstration applies the recommended techniques using an accompanying dataset, based on the Big 5 personality test. The outcomes obtained by the EFA using the recommended procedures through FACTOR are compared to the default techniques currently available in SPSS.
Redfield, Joel
1978-01-01
TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)
PERFORMANCE ANALYSIS OF METHODS FOR ESTIMATING ...
African Journals Online (AJOL)
2014-12-31
Dec 31, 2014 ... analysis revealed that the MLM was the most accurate model ..... obtained using the empirical method as the same formula is used. ..... and applied meteorology, American meteorological society, October 1986, vol.25, pp.
Statistical methods for categorical data analysis
Powers, Daniel
2008-01-01
This book provides a comprehensive introduction to methods and models for categorical data analysis and their applications in social science research. Companion website also available, at https://webspace.utexas.edu/dpowers/www/
LANDSCAPE ANALYSIS METHOD OF RIVERINE TERRITORIES
Fedoseeva O. S.
2013-01-01
The article proposes a method for landscape analysis of riverine territories, which consists of four stages. The technique is proposed as a tool for the practical application of pre-design research materials in design solutions for the planning and organization of landscape areas.
An introduction to numerical methods and analysis
Epperson, James F
2013-01-01
Praise for the First Edition: ". . . outstandingly appealing with regard to its style, contents, considerations of requirements of practice, choice of examples, and exercises." (Zentralblatt MATH) ". . . carefully structured with many detailed worked examples." (The Mathematical Gazette) The Second Edition of the highly regarded An Introduction to Numerical Methods and Analysis provides a fully revised guide to numerical approximation. The book continues to be accessible and expertly guides readers through the many available techniques of numerical methods and analysis. An Introduction to
[Framework analysis method in qualitative research].
Liao, Xing; Liu, Jian-ping; Robison, Nicola; Xie, Ya-ming
2014-05-01
In recent years a number of qualitative research methods have gained popularity within the health care arena. Despite this popularity, different qualitative analysis methods pose many challenges to most researchers. The present paper responds to needs expressed in recent Chinese medicine research. It focuses on the concepts, nature, and application of framework analysis, and especially on how to use it, in order to help newcomers among Chinese medicine researchers engage with the methodology.
Attitude Exploration Using Factor Analysis Technique
Directory of Open Access Journals (Sweden)
Monika Raghuvanshi
2016-12-01
Full Text Available Attitude is a psychological variable that contains positive or negative evaluations of people or an environment. The growing generation possesses learning skills, so if a positive attitude is inculcated at the right age, it might become habitual. Students in the age group 14-20 years from the city of Bikaner, India, are the target population for this study. An inventory of 30 Likert-type scale statements was prepared in order to measure attitude towards the environment and matters related to conservation. The primary data were collected through a structured questionnaire, using a cluster sampling technique, and analyzed using the IBM SPSS 23 statistical tool. Factor analysis is used to reduce 30 variables to a smaller number of more identifiable groups of variables. Results show that students "need more regulation and voluntary participation to protect the environment", "need conservation of water and electricity", "are concerned about undue wastage of water", "need visible actions to protect the environment", "need strengthening of the public transport system", "are a little ignorant about the consequences of global warming", "want prevention of water pollution by industries", "need to change personal habits to protect the environment", and "don't have firsthand experience of global warming". Analysis revealed that the nine factors obtained could explain about 58.5% of the variance in the attitude of secondary school students towards the environment in the city of Bikaner, India. The remaining 39.6% of the variance is attributed to other elements not explained by this analysis. A global campaign for improvement in attitudes about environmental issues and their utility in daily life may boost positive youth attitudes, with potential worldwide impact. A cross-disciplinary approach may be developed by teaching the topic alongside other related disciplines such as science, economics, and social studies.
Exploratory factor analysis of the Brazilian OHIP for edentulous subjects.
Souza, R F; Leles, C R; Guyatt, G H; Pontes, C B; Della Vecchia, M P; Neves, F D
2010-03-01
The use of seven domains for the Oral Health Impact Profile (OHIP)-EDENT was not supported for its Brazilian version, making data interpretation in clinical settings difficult. Thus, the aim of this study was to assess patients' responses to the translated OHIP-EDENT in a group of edentulous subjects and to develop factor scales for application in future studies. Data from 103 conventional and implant-retained complete denture wearers (36 men; mean age 69.1 +/- 10.3 years) were assessed using the Brazilian version of the OHIP-EDENT. Oral health-related quality of life domains were identified by factor analysis using principal component analysis as the extraction method, followed by varimax rotation. Factor analysis identified four factors that accounted for 63% of the total variance of the 19 items, named masticatory discomfort and disability (four items), psychological discomfort and disability (five items), social disability (five items) and oral pain and discomfort (five items). The four factors/domains of the Brazilian OHIP-EDENT version represent patient-important aspects of oral health-related quality of life.
Statistical methods of SNP data analysis with applications
Bulinski, Alexander; Shashkin, Alexey; Yaskov, Pavel
2011-01-01
Various statistical methods important for genetic analysis are considered and developed. Namely, we concentrate on the multifactor dimensionality reduction, logic regression, random forests and stochastic gradient boosting. These methods and their new modifications, e.g., the MDR method with "independent rule", are used to study the risk of complex diseases such as cardiovascular ones. The roles of certain combinations of single nucleotide polymorphisms and external risk factors are examined. To perform the data analysis concerning the ischemic heart disease and myocardial infarction the supercomputer SKIF "Chebyshev" of the Lomonosov Moscow State University was employed.
Factorized molecular wave functions: Analysis of the nuclear factor
Energy Technology Data Exchange (ETDEWEB)
Lefebvre, R., E-mail: roland.lefebvre@u-psud.fr [Institut des Sciences Moléculaires d’ Orsay, Bâtiment 350, UMR8214, CNRS- Université. Paris-Sud, 91405 Orsay, France and Sorbonne Universités, UPMC Univ Paris 06, UFR925, F-75005 Paris (France)
2015-06-07
The exact factorization of molecular wave functions leads to nuclear factors which should be nodeless functions. We reconsider the case of vibrational perturbations in a diatomic species, a situation usually treated by combining Born-Oppenheimer products. It was shown [R. Lefebvre, J. Chem. Phys. 142, 074106 (2015)] that it is possible to derive, from the solutions of coupled equations, the form of the factorized function. By increasing artificially the interstate coupling in the usual approach, the adiabatic regime can be reached, whereby the wave function can be reduced to a single product. The nuclear factor of this product is determined by the lowest of the two potentials obtained by diagonalization of the potential matrix. By comparison with the nuclear wave function of the factorized scheme, it is shown that by a simple rectification, an agreement is obtained between the modified nodeless function and that of the adiabatic scheme.
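The factorization discussed above can be stated compactly. The display below is the standard exact-factorization form, with symbols as commonly used in that literature rather than necessarily the paper's notation: the molecular wave function is a single product of a nuclear factor and an R-parametrized electronic factor obeying a partial normalization condition,

```latex
\Psi(r,R) = \chi(R)\,\Phi_{R}(r),
\qquad
\int \lvert \Phi_{R}(r) \rvert^{2}\, dr = 1 \quad \text{for every } R ,
```

so that $\lvert\chi(R)\rvert^{2}$ equals the exact nuclear probability density, which is why the nuclear factor should be a nodeless function.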
Two MIS Analysis Methods: An Experimental Comparison.
Wang, Shouhong
1996-01-01
In China, 24 undergraduate business students applied data flow diagrams (DFD) to a mini-case, and 20 used object-oriented analysis (OOA). DFD seemed easier to learn, but after training, those using the OOA method for systems analysis made fewer errors. (SK)
Crime vs. demographic factors revisited: Application of data mining methods
Directory of Open Access Journals (Sweden)
Xingan Li
2015-06-01
Full Text Available The aim of this article is to inquire about correlations between criminal phenomena and demographic factors. This international-level comparative study used a dataset covering 56 countries and 28 attributes. The data were processed with the Self-Organizing Map (SOM), assisted by other clustering methods, and several statistical methods for obtaining comparable results. The article is an exploratory application of the SOM in mapping criminal phenomena through processing of multivariate data. We found that the SOM was able to group the present data efficiently and to characterize the resulting groups. Other machine learning methods were applied to confirm the groups computed with the SOM. The correlations obtained between attributes were chiefly weak.
An Analysis Method of Business Application Framework
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
We discuss the evolution of the object-oriented software development process based on software patterns. For developing mature software frameworks and components, we advocate eliciting and incorporating software patterns to ensure the quality and reusability of software frameworks. On the basis of the analysis of requirement specifications for the business application domain, we present an analysis method and a basic role model for software frameworks. We also elicit the analysis pattern of the framework architecture, and design basic role classes and their structure.
Tazhibi, Mahdi; Sarrafzade, Sheida; Amini, Masoud
2014-01-01
Introduction: Diabetes is one of the most common chronic diseases in the world. The incidence and prevalence of diabetes are increasing in developing countries, including Iran. Retinopathy is the most common chronic disorder in diabetic patients. Materials and Methods: In this study, we used the records of diabetic patients referred to the endocrine and metabolism research center of Isfahan University of Medical Sciences to determine diabetic retinopathy risk factors. We used factor...
Relating Actor Analysis Methods to Policy Problems
Van der Lei, T.E.
2009-01-01
For a policy analyst the policy problem is the starting point of the policy analysis process. During this process the policy analyst structures the policy problem and chooses an appropriate set of methods or techniques to analyze it (Goeller 1984). The methods of the policy anal
Causal Moderation Analysis Using Propensity Score Methods
Dong, Nianbo
2012-01-01
This paper is based on previous studies in applying propensity score methods to study multiple treatment variables to examine the causal moderator effect. The propensity score methods will be demonstrated in a case study to examine the causal moderator effect, where the moderators are categorical and continuous variables. Moderation analysis is an…
Error Analysis of Band Matrix Method
Taniguchi, Takeo; Soga, Akira
1984-01-01
Numerical error in the solution of the band matrix method based on elimination in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the effective application of the band solver are proposed using the results of the above error analysis.
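The banded elimination whose roundoff behaviour is analyzed above can be illustrated with the simplest band solver, the tridiagonal (Thomas) algorithm; running it in single and double precision exposes the precision-dependent error the abstract discusses. The test matrix and right-hand side are assumptions for the sketch.

```python
import numpy as np

def thomas(a, b, c, d):
    """Tridiagonal band elimination (Thomas algorithm): solves A x = d where
    A has sub-diagonal a, diagonal b and super-diagonal c. Arithmetic stays
    in the dtype of the inputs, so float32 input exercises single precision."""
    n = len(b)
    cp = np.empty(n, dtype=d.dtype)
    dp = np.empty(n, dtype=d.dtype)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n, dtype=d.dtype)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Second-difference test matrix with a known solution.
n = 50
a = np.full(n, -1.0); a[0] = 0.0
b = np.full(n, 2.0)
c = np.full(n, -1.0); c[-1] = 0.0
x_true = np.linspace(0.0, 1.0, n)
d = b * x_true
d[1:] += a[1:] * x_true[:-1]
d[:-1] += c[:-1] * x_true[1:]

err64 = np.max(np.abs(thomas(a, b, c, d) - x_true))
a32, b32, c32, d32 = (v.astype(np.float32) for v in (a, b, c, d))
err32 = np.max(np.abs(thomas(a32, b32, c32, d32) - x_true))
```

Comparing `err32` with `err64` makes the single-precision roundoff growth directly visible.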
Testing the number of required dimensions in exploratory factor analysis
Directory of Open Access Journals (Sweden)
Achim, Andr\\'e
2017-01-01
Full Text Available While maximum likelihood exploratory factor analysis (EFA) provides a statistical test that $k$ dimensions are sufficient to account for the observed correlations among a set of variables, determining the required number of factors in least-squares based EFA has essentially relied on heuristic procedures. Two methods, Revised Parallel Analysis (R-PA) and Comparison Data (CD), were recently proposed that generate surrogate data based on an increasing number of principal axis factors in order to compare their sequence of eigenvalues with that from the data. The latter should be unremarkable among the former if enough dimensions are included. While CD looks for a balance between efficiency and parsimony, R-PA strictly tests that $k$ dimensions are sufficient by ranking the next eigenvalue, i.e. at rank $k+1$, of the actual data among those from the surrogate data. Importing two features of CD into R-PA defines four variants that are here collectively termed Next Eigenvalue Sufficiency Tests (NESTs). Simulations implementing 144 sets of parameters, including correlated factors and the presence of a doublet factor, show that all four NESTs largely outperform CD, the standard Parallel Analysis, the Mean Average Partial method and even the maximum likelihood approach in identifying the correct number of common factors. The recommended, most successful NEST variant is also the only one that never overestimates the correct number of dimensions beyond its nominal $\alpha$ level. This variant is made available as R and MATLAB code as well as a complement incorporated in a Microsoft Excel file.
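For context, standard parallel analysis, the baseline that the NEST variants refine, can be sketched as follows. The retention rule, simulation count, and synthetic example below are generic choices, not the article's simulation design.

```python
import numpy as np

rng = np.random.default_rng(2)

def parallel_analysis(X, n_sim=200, q=95):
    """Horn's parallel analysis: retain the k-th factor while the k-th sample
    eigenvalue exceeds the q-th percentile of eigenvalues obtained from
    random normal data of the same shape."""
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sim, p))
    for i in range(n_sim):
        Z = rng.standard_normal((n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    thresh = np.percentile(sims, q, axis=0)
    keep = obs > thresh
    return int(np.argmin(keep)) if not keep.all() else p

# Synthetic data with two independent common factors and six indicators.
F = rng.standard_normal((500, 2))
load = np.array([[0.8, 0.0], [0.8, 0.0], [0.8, 0.0],
                 [0.0, 0.8], [0.0, 0.8], [0.0, 0.8]])
X = F @ load.T + 0.4 * rng.standard_normal((500, 6))
k = parallel_analysis(X)
```

R-PA and the NESTs replace the random-data surrogates with surrogates generated from a $k$-factor model, which is what turns the heuristic into a test of sufficiency at rank $k+1$.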
Empirical likelihood method in survival analysis
Zhou, Mai
2015-01-01
Add the Empirical Likelihood to Your Nonparametric Toolbox. Empirical Likelihood Method in Survival Analysis explains how to use the empirical likelihood method for right censored survival data. The author uses R for calculating empirical likelihood and includes many worked out examples with the associated R code. The datasets and code are available for download on his website and CRAN. The book focuses on all the standard survival analysis topics treated with empirical likelihood, including hazard functions, cumulative distribution functions, analysis of the Cox model, and computation of empiric
A Method to Estimate Shear Quality Factor of Hard Rocks
Wang, Xin; Cai, Ming
2017-07-01
Attenuation has a large influence on ground motion intensity. Quality factors are used to measure wave attenuation in a medium, and they are often difficult to estimate due to factors such as complex geology and the underground mining environment. This study investigates the effect of attenuation on seismic wave propagation and ground motion using an advanced numerical tool, SPECFEM2D. A method, which uses numerical modeling and site-specific scaling laws, is proposed to estimate the shear quality factor of hard rocks in underground mines. In the numerical modeling, the seismic source is represented by a moment tensor model and the considered medium is isotropic and homogeneous. Peak particle velocities along the strongest wave motion direction are compared with those from a design scaling law. Based on the field data that were used to derive a semi-empirical design scaling law, it is demonstrated that a shear quality factor of 60 seems to be representative of the hard rocks in deep mines for capturing the attenuation of seismic wave propagation. Using the proposed method, reasonable shear quality factors of hard rocks can be obtained, and this, in turn, will assist accurate ground motion determination for mine design.
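The role of the shear quality factor can be illustrated with the standard anelastic amplitude-decay relation A(r) = A0 exp(-pi f r / (Q v_s)). The function below and its parameter values are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def attenuated_amplitude(a0, r, f, q, vs):
    """Anelastic amplitude decay of an S wave over travel distance r (m) at
    frequency f (Hz), shear quality factor q and shear velocity vs (m/s):
    A(r) = a0 * exp(-pi * f * r / (q * vs)).
    Geometric spreading is handled separately."""
    return a0 * np.exp(-np.pi * f * r / (q * vs))

# With Q = 60 (the representative hard-rock value reported above), a 100 Hz
# wave decays noticeably faster over 500 m than with a weakly attenuating
# Q = 200 (illustrative comparison value).
strong = attenuated_amplitude(1.0, 500.0, 100.0, 60.0, 3500.0)
weak = attenuated_amplitude(1.0, 500.0, 100.0, 200.0, 3500.0)
```

Because Q appears in the exponent, an underestimated Q translates directly into overestimated attenuation of peak particle velocity, which is why calibrating it against a site-specific scaling law matters.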
Institute of Scientific and Technical Information of China (English)
伊燕平; 卢文喜; 辛欣; 陈社明
2012-01-01
Based on groundwater quality monitoring data from 8 monitoring sections at the Jinquan Industrial Park water source area, R-mode factor analysis was used for a comprehensive evaluation of groundwater quality. Building on a study of the correlations among 13 chemical constituents of the groundwater, four principal factors were extracted and analyzed. The results show that Cl-, SO42-, total hardness, and dissolved solids contribute strongly to the first principal factor; the representative indicator of the second factor is Zn; that of the third factor is As; and NO3-, ammonia nitrogen, and NO2- contribute significantly to the fourth factor. The comprehensive evaluation shows that the water quality at the Renmin Canal, Hailiutu River, and Wulahao monitoring sections falls in the worst class (Class V), while the other sections are Class IV. Factor analysis evaluates the overall groundwater quality of the park's water source area, gives clear results, and provides a scientific basis for the future management and protection of groundwater resources.
Factorial invariance in multilevel confirmatory factor analysis.
Ryu, Ehri
2014-02-01
This paper presents a procedure to test factorial invariance in multilevel confirmatory factor analysis. When the group membership is at level 2, multilevel factorial invariance can be tested by a simple extension of the standard procedure. However, level-1 group membership raises problems that cannot be appropriately handled by the standard procedure, because the dependency between members of different level-1 groups is not appropriately taken into account. The procedure presented in this article provides a solution to this problem. This paper also shows that Muthén's maximum likelihood (MUML) estimation is a viable alternative to maximum likelihood estimation for testing multilevel factorial invariance across level-1 groups. Testing multilevel factorial invariance across level-2 groups and across level-1 groups is illustrated using empirical examples. SAS macro and Mplus syntax are provided.
Physiological Factors Analysis in Unpressurized Aircraft Cabins
Patrao, Luis; Zorro, Sara; Silva, Jorge
2016-11-01
Amateur and sports flying is an activity with growing numbers worldwide. However, the main cause of flight incidents and accidents is increasingly pilot error, for a number of reasons. Fatigue, sleep issues, and hypoxia, among many others, are some that can be avoided or at least mitigated. This article describes the analysis of psychological and physiological parameters during flight in unpressurized aircraft cabins. It relates cerebral oximetry and heart rate to altitude, as well as to flight phase. The study of these parameters might give clues as to which variations represent a warning sign to the pilot, thus preventing incidents and accidents due to human factors. Results show that both cerebral oximetry and heart rate change with flight phase and altitude in the alert pilot. An impaired pilot might not show these variations and, if this is detected, can be warned in time.
An exploratory analysis of personality factors contributed to suicide attempts
Directory of Open Access Journals (Sweden)
P. N. Suresh Kumar
2013-01-01
Full Text Available Background: People who attempt suicide have certain individual predispositions, part of which is contributed by personality traits. Aims: The present study was conducted to identify the psycho-sociodemographic and personality-related factors contributing to suicide attempts. Materials and Methods: 104 suicide attempters admitted in various departments and referred to the department of psychiatry of IQRAA Hospital formed the study sample. They were evaluated with a self-designed socio-demographic proforma, Eysenck's Personality Questionnaire Revised, the Albert Einstein College of Medicine Impulsivity Coping Scale, and the Past Feelings and Acts of Violence Scale. Statistical Analysis: The data were initially analyzed by frequency percentages. Association between socio-demographic and selected psychological factors was analyzed using the t-test and Chi-square test. Intercorrelation among psychological factors was calculated by Pearson's correlation coefficient "r". Results and Conclusion: Factors such as young age, being married, nuclear family, feeling lonely and a burden to the family, inability to solve the problems of day-to-day life, presence of a psychiatric diagnosis, and personality traits such as neuroticism, impulsivity, and violence contributed to suicide attempts. A significant positive relationship between these factors was also identified. Findings of the present study call the attention of mental health professionals to identify these high-risk factors in susceptible individuals and to modify them to prevent suicide attempts.
Scope-Based Method Cache Analysis
DEFF Research Database (Denmark)
Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin
2014-01-01
The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution, as it requests memory transfers at well-defined instructions only. In this article, we present a new cache analysis framework that generalizes and improves work on cache persistence analysis. The analysis demonstrates that a global view on the cache behavior permits the precise analysis of caches which are hard...
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hae Sun; Jeong, Hyo Joon; Kim, Eun Han; Han, Moon Hee; Hwang, Won Tae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-09-15
This study analyzes the differences in the annual averaged atmospheric dispersion factor and ground deposition factor produced using two classification methods of atmospheric stability, which are based on the vertical temperature difference and on the standard deviation of horizontal wind direction fluctuation. The Daedeok and Wolsong nuclear sites were chosen for the assessment, and the meteorological data at 10 m were applied to the evaluation of atmospheric stability. The XOQDOQ software program was used to calculate atmospheric dispersion factors and ground deposition factors. The calculated distances were chosen at 400 m, 800 m, 1,200 m, 1,600 m, 2,400 m, and 3,200 m away from the radioactive material release points. All of the atmospheric dispersion factors generated using the atmospheric stability based on the vertical temperature difference were shown to be higher than those from the standard deviation of horizontal wind direction fluctuation. On the other hand, the ground deposition factors were shown to be the same regardless of the classification method, as they were based on the graph obtained from empirical data presented in the Nuclear Regulatory Commission's Regulatory Guide 1.111, which is unrelated to the atmospheric stability for the ground-level release. These results are based on the meteorological data collected over the course of one year at the specified sites; however, the classification method of atmospheric stability using the vertical temperature difference is expected to be more conservative.
Institute of Scientific and Technical Information of China (English)
鲍玉学
2016-01-01
The resistivity method is one of the electrical prospecting methods with the most evident practical results. In practical applications, however, its effectiveness is affected to some extent by interference from field site conditions and various other influencing factors. Drawing on practical work, this paper analyzes the factors affecting the resistivity method in the field and proposes corresponding solutions and countermeasures, providing a useful reference for improving the practical effectiveness of the resistivity method in the field.
Contraceptive Methods and Factors Associated with Modern Contraceptive In Use
Directory of Open Access Journals (Sweden)
Hammad Ali Qazi
2010-03-01
Full Text Available Objective: The world population will likely increase by 2.5 billion over the next 43 years, passing from the current 6.7 billion to 9.2 billion in 2050. Only limited information about contraceptive practices, especially modern contraceptive use, is available. The aim of this study is to determine the prevalence of contraceptive methods and the factors associated with modern contraceptive use. Materials and Methods: A cross-sectional study of 288 females selected through consecutive sampling was conducted in the Jinnah Postgraduate Medical Center family reproductive health care center, Karachi, Pakistan, from November 2008 to January 2009. Females of reproductive age (16-50 years) using any contraceptive measures and giving informed consent were included. Those with severe debilitating disease or any physical or mental disability were excluded. Two trained co-researchers interviewed the participants for socio-demographic information. The main outcome variables of the study were the comparison of modern and traditional contraceptive methods and the factors associated with modern contraceptive use. Results: The mean age of contraceptive users was 29.49 (±6.42) years. Modern contraceptive methods were used by 216 (75%) and traditional methods by 72 (25%). Final multiple logistic regression showed that a few factors influenced the usage rate, including age >30 years [AOR 0.426, 95% CI 0.209-0.865], addiction [AOR 0.381, 95% CI 0.173-0.839], and means of information such as family planning worker (FPW) [AOR 6.315, 95% CI 3.057-13.046], television (TV) [AOR 0.402, 95% CI 0.165-0.979], and billboard (BB) [AOR 0.207, 95% CI 0.066-0.447]. Conclusion: Modern contraceptive method use is very common in our region (75%). The important means of information for modern contraceptive use were GPs and family planning workers.
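The adjusted odds ratios above come from a multiple logistic regression model. As a minimal illustration of the underlying quantity, a crude odds ratio with a Wald 95% confidence interval can be computed from a 2x2 table; the counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: modern-method use among women aged >30 vs <=30.
or_, lo, hi = odds_ratio_ci(60, 40, 156, 32)
```

An OR below 1 with a CI excluding 1 would, as in the study's age >30 finding, indicate lower odds of modern-method use in the exposed group; the adjusted estimates additionally control for the other covariates.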
[ELISA method for the determination of factor VII antigen].
Jorquera, J I; Aznar, J A; Monteagudo, J; Montoro, J M; Casaña, P; Pascual, I; Bañuls, E; Curats, R; Llopis, F
1989-12-01
The low plasma concentration of clotting factor VII makes it difficult to assay its antigenic fraction by the conventional methods of precipitation with specific antisera. Simple and peroxidase-conjugated antisera are currently available from commercial sources, thus allowing one to determine F VII:Ag by enzyme immunoassay. An ELISA method has been developed in this laboratory which provides a sensitivity limit of about 0.1% of the plasma concentration of F VII and correlates significantly with its functional activity (r = 0.603, n = 44, p less than 0.001). This technique can be highly helpful in characterising molecular variants of F VII, as well as in detecting acquired deficiencies of this factor.
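The abstract does not specify the calibration model used to read F VII:Ag concentrations off the ELISA standard curve; a four-parameter logistic (4PL) fit is a common choice for such immunoassay data. The sketch below evaluates and inverts a 4PL curve with hypothetical parameters, purely to illustrate back-calculation of concentration from optical density.

```python
def four_pl(x, a, d, c, b):
    """Four-parameter logistic response at concentration x.
    a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, d, c, b):
    """Back-calculate concentration from a measured response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Hypothetical curve parameters for an F VII:Ag standard series.
params = dict(a=0.05, d=2.0, c=25.0, b=1.3)
od = four_pl(25.0, **params)            # at the EC50 the response is midway
conc = inverse_four_pl(od, **params)    # recovers the input concentration
```

In practice the four parameters would be fitted to the standard series by nonlinear least squares before unknowns are interpolated.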
Dirac equation in low dimensions: The factorization method
Sánchez-Monroy, J. A.; Quimbay, C. J.
2014-11-01
We present a general approach to solve the (1 + 1) and (2 + 1) -dimensional Dirac equations in the presence of static scalar, pseudoscalar and gauge potentials, for the case in which the potentials have the same functional form and thus the factorization method can be applied. We show that the presence of electric potentials in the Dirac equation leads to two Klein-Gordon equations including an energy-dependent potential. We then generalize the factorization method for the case of energy-dependent Hamiltonians. Additionally, the shape invariance is generalized for a specific class of energy-dependent Hamiltonians. We also present a condition for the absence of the Klein paradox (stability of the Dirac sea), showing how Dirac particles in low dimensions can be confined for a wide family of potentials.
An improvement of window factor analysis for resolution of noisy HPLC-DAD data
Institute of Scientific and Technical Information of China (English)
邵学广; 林祥钦; 邵利民; 李梅青
2002-01-01
Window factor analysis (WFA) is a powerful tool for analyzing evolutionary processes. However, it was found that window factor analysis is highly sensitive to noise in the original data matrix. An error analysis showed that the concentration profiles resolved by conventional window factor analysis are easily distorted by the noise retained by abstract factor analysis (AFA), and a modified algorithm for window factor analysis is proposed. Both simulated and experimental HPLC-DAD data were investigated by the conventional and improved methods. Results show that the improved method yields less noise-distorted concentration profiles than the conventional method, and that the ability to resolve noisy data sets is greatly enhanced.
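The core idea behind WFA, estimating the number of chemical components from the singular values of a sub-matrix taken over a retention-time window, can be sketched on simulated HPLC-DAD data. This is a plain illustration of the window-rank step, not the authors' noise-corrected algorithm, and all peak and noise parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated two-component HPLC-DAD matrix: Gaussian elution profiles
# times Gaussian spectra, plus measurement noise (all values hypothetical).
t = np.linspace(0, 1, 60)                      # retention-time axis
wl = np.linspace(0, 1, 40)                     # wavelength axis
conc = np.stack([np.exp(-((t - 0.35) / 0.06) ** 2),
                 np.exp(-((t - 0.65) / 0.06) ** 2)])
spec = np.stack([np.exp(-((wl - 0.3) / 0.2) ** 2),
                 np.exp(-((wl - 0.7) / 0.2) ** 2)])
data = conc.T @ spec + 0.001 * rng.standard_normal((60, 40))

def window_rank(matrix, start, width, tol=0.05):
    """Estimate the chemical rank inside a retention-time window from
    the singular values of the sub-matrix (threshold relative to s1)."""
    sv = np.linalg.svd(matrix[start:start + width], compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))

rank_one_peak = window_rank(data, 18, 8)    # window over the first peak only
rank_two_peaks = window_rank(data, 18, 24)  # window spanning both peaks
```

Moving such a window along the time axis and tracking where the rank rises and falls is what lets WFA isolate each component's concentration profile; the paper's improvement concerns how the noise retained by AFA distorts that step.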
Method to determine factors contributing to thermoplastic sheet shrinkage
Rensch, Greg J.; Frye, Brad A.
A test method is presented for the determination of shrinkage behavior in vacuum-formed thermoplastic resin sheeting, as presently simulated for various resin lots, sheet-gage thicknesses, sheet orientations, and mold profiles. The thermoforming machine and vacuum-forming mold characteristics are discussed. It is established that the four variable factors exert statistically significant effects on the shrinkage response of three Declar resin lots, but that these are of no real practical significance for either engineering or manufacturing operations.
Institute of Scientific and Technical Information of China (English)
穆鹏志
2014-01-01
Taking the construction of a certain project as an example, and based on an analysis of the environmental pollution problems during its construction phase, this paper discusses the root causes of the environmental pollution problems and proposes control measures addressing influencing factors such as air, noise, wastewater, and solid waste.
Radiation analysis devices, radiation analysis methods, and articles of manufacture
Roybal, Lyle Gene
2010-06-08
Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and combine the radiation count data of individual sections to determine whether a selected radioactive material is present in the area of interest. An amount of the radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in the individual section. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising determining one or more correction factors based on a calibration of a radiation analysis device, measuring radiation received by the radiation analysis device using the one or more correction factors, and presenting information relating to an amount of radiation measured by the radiation analysis device having one of a plurality of specified radiation energy levels of a range of interest.
Dirac equation in low dimensions: The factorization method
Energy Technology Data Exchange (ETDEWEB)
Sánchez-Monroy, J.A., E-mail: antosan@if.usp.br [Instituto de Física, Universidade de São Paulo, 05508-090, São Paulo, SP (Brazil); Quimbay, C.J., E-mail: cjquimbayh@unal.edu.co [Departamento de Física, Universidad Nacional de Colombia, Bogotá, D. C. (Colombia); CIF, Bogotá (Colombia)
2014-11-15
We present a general approach to solve the (1+1) and (2+1)-dimensional Dirac equations in the presence of static scalar, pseudoscalar and gauge potentials, for the case in which the potentials have the same functional form and thus the factorization method can be applied. We show that the presence of electric potentials in the Dirac equation leads to two Klein–Gordon equations including an energy-dependent potential. We then generalize the factorization method for the case of energy-dependent Hamiltonians. Additionally, the shape invariance is generalized for a specific class of energy-dependent Hamiltonians. We also present a condition for the absence of the Klein paradox (stability of the Dirac sea), showing how Dirac particles in low dimensions can be confined for a wide family of potentials. - Highlights: • The low-dimensional Dirac equation in the presence of static potentials is solved. • The factorization method is generalized for energy-dependent Hamiltonians. • The shape invariance is generalized for energy-dependent Hamiltonians. • The stability of the Dirac sea is related to the existence of supersymmetric partner Hamiltonians.
van der Gaag, Mark; Cuijpers, Anke; Hoffman, Tonko; Remijsen, Mila; Hijman, Ron; de Haan, Lieuwe; van Meijel, Berno; van Harten, Peter N.; Valmaggia, Lucia; de Hert, Marc; Wiersma, Durk
2006-01-01
Objective: The aim of this study was to test the goodness-of-fit of all previously published five-factor models of the Positive and Negative Syndrome Scale (PANSS). Methods: We used confirmatory factor analysis (CFA) with a large data set (N = 5769). Results: The different subsamples were tested for
Spatial analysis statistics, visualization, and computational methods
Oyana, Tonny J
2015-01-01
An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...
Advanced Analysis Methods in Particle Physics
Energy Technology Data Exchange (ETDEWEB)
Bhat, Pushpalatha C. [Fermilab
2010-10-01
Each generation of high energy physics experiments is grander in scale than the previous – more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.
Application of Software Safety Analysis Methods
Energy Technology Data Exchange (ETDEWEB)
Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, S. J.; Koo, Y. H. [Doosan Heavy Industries and Construction Co., Daejeon (Korea, Republic of)
2009-05-15
A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel to the verification and validation (V and V) activities, along the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA)
Schoenaker, D.A.J.M.; Dobson, A.; Soedamah-Muthu, S.S.; Mishra, G.D.
2013-01-01
Treelet transform (TT) is a proposed alternative to factor analysis for deriving dietary patterns. Before applying this method to nutrition data, further analyses are required to assess its validity in nutritional epidemiology. We aimed to compare dietary patterns from factor analysis and TT and the
A Unified Factors Analysis Framework for Discriminative Feature Extraction and Object Recognition
Directory of Open Access Journals (Sweden)
Ningbo Hao
2016-01-01
Full Text Available Various methods for feature extraction and dimensionality reduction have been proposed in recent decades, including supervised and unsupervised methods and linear and nonlinear methods. Despite the different motivations of these methods, we present in this paper a general formulation known as factor analysis to unify them within a common framework. In factor analysis, an object can be seen as comprising content and style factors, and the objective of feature extraction and dimensionality reduction is to obtain the content factor without the style factor. There are two vital steps in the factor analysis framework: one is the design of the factor-separating objective function, including the design of the partition and weight matrices, and the other is the design of the space mapping function. In this paper, the classical Linear Discriminant Analysis (LDA) and Locality Preserving Projection (LPP) algorithms are improved based on the factor analysis framework, and LDA based on factor analysis (FA-LDA) and LPP based on factor analysis (FA-LPP) are proposed. Experimental results show the superiority of our proposed approach in classification performance compared to the classical LDA and LPP algorithms.
Analysis of mixed data methods & applications
de Leon, Alexander R
2013-01-01
A comprehensive source on mixed data analysis, Analysis of Mixed Data: Methods & Applications summarizes the fundamental developments in the field. Case studies are used extensively throughout the book to illustrate interesting applications from economics, medicine and health, marketing, and genetics. Carefully edited for smooth readability and seamless transitions between chapters. All chapters follow a common structure, with an introduction and a concluding summary, and include illustrative examples from real-life case studies in developmental toxicolog
Chromatographic methods for analysis of triazine herbicides.
Abbas, Hana Hassan; Elbashir, Abdalla A; Aboul-Enein, Hassan Y
2015-01-01
Gas chromatography (GC) and high-performance liquid chromatography (HPLC) coupled to different detectors, and in combination with different sample extraction methods, are most widely used for the analysis of triazine herbicides in different environmental samples. Nowadays, many variations and modifications of extraction and sample preparation methods such as solid-phase microextraction (SPME), hollow fiber-liquid phase microextraction (HF-LPME), stir bar sorptive extraction (SBSE), headspace solid-phase microextraction (HS-SPME), dispersive liquid-liquid microextraction (DLLME), dispersive liquid-liquid microextraction based on solidification of floating organic droplet (DLLME-SFO), ultrasound-assisted emulsification microextraction (USAEME), and others have been introduced and developed to obtain sensitive and accurate methods for the analysis of these hazardous compounds. In this review, several analytical properties such as linearity, sensitivity, repeatability, and accuracy for each developed method are discussed, and excellent results were obtained for most of the developed methods combined with GC and HPLC techniques for the analysis of triazine herbicides. This review gives an overview of recent publications on the application of GC and HPLC for the analysis of triazine herbicide residues in various samples.
HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD
Energy Technology Data Exchange (ETDEWEB)
Harold S. Blackman; David I. Gertman; Ronald L. Boring
2008-09-01
This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
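The SPAR-H calculation described above can be sketched as a nominal error rate multiplied by the PSF multipliers, with the composite adjustment the method applies when three or more negative (greater than 1) PSFs are present. The nominal rates (1E-2 diagnosis, 1E-3 action) and adjustment formula follow the SPAR-H documentation as I understand it; the PSF levels in the example are hypothetical.

```python
def spar_h_hep(nominal, psfs):
    """SPAR-H style HEP: nominal error rate times the product of PSF
    multipliers, with the composite adjustment applied when three or
    more negative (>1) PSFs are present."""
    composite = 1.0
    for m in psfs:
        composite *= m
    negative = sum(1 for m in psfs if m > 1.0)
    if negative >= 3:
        # Adjustment keeps the HEP below 1 for large composite PSFs.
        return nominal * composite / (nominal * (composite - 1.0) + 1.0)
    return min(nominal * composite, 1.0)

# Diagnosis task (nominal 1e-2) with hypothetical PSF assignments:
# extreme time pressure (x10), high stress (x2), poor ergonomics (x10).
hep = spar_h_hep(1e-2, [10.0, 2.0, 10.0])
```

Without the adjustment, the raw product 1e-2 x 200 would exceed a defensible probability; the adjusted value stays bounded in (0, 1).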
Probabilistic Analysis Methods for Hybrid Ventilation
DEFF Research Database (Denmark)
Brohus, Henrik; Frier, Christian; Heiselberg, Per
This paper discusses a general approach for the application of probabilistic analysis methods in the design of ventilation systems. The aims and scope of probabilistic versus deterministic methods are addressed with special emphasis on hybrid ventilation systems. A preliminary application of stochastic differential equations is presented, comprising a general heat balance for an arbitrary number of loads and zones in a building to determine the thermal behaviour under random conditions.
An introduction to numerical methods and analysis
Epperson, J F
2007-01-01
Praise for the First Edition: ". . . outstandingly appealing with regard to its style, contents, considerations of requirements of practice, choice of examples, and exercises." -Zentralblatt Math ". . . carefully structured with many detailed worked examples . . ." -The Mathematical Gazette ". . . an up-to-date and user-friendly account . . ." -Mathematika An Introduction to Numerical Methods and Analysis addresses the mathematics underlying approximation and scientific computing and successfully explains where approximation methods come from, why they sometimes work (or d
Methods for genetic linkage analysis using trisomies
Energy Technology Data Exchange (ETDEWEB)
Feingold, E. [Emory Univ. School of Public Health, Atlanta, GA (United States); Lamb, N.E.; Sherman, S.L. [Emory Univ., Atlanta, GA (United States)
1995-02-01
Certain genetic disorders are rare in the general population, but more common in individuals with specific trisomies. Examples of this include leukemia and duodenal atresia in trisomy 21. This paper presents a linkage analysis method for using trisomic individuals to map genes for such traits. It is based on a very general gene-specific dosage model that posits that the trait is caused by specific effects of different alleles at one or a few loci and that duplicate copies of "susceptibility" alleles inherited from the nondisjoining parent give increased likelihood of having the trait. Our mapping method is similar to identity-by-descent-based mapping methods using affected relative pairs and also to methods for mapping recessive traits using inbred individuals by looking for markers with greater than expected homozygosity by descent. In the trisomy case, one would take trisomic individuals and look for markers with greater than expected homozygosity in the chromosomes inherited from the nondisjoining parent. We present statistical methods for performing such a linkage analysis, including a test for linkage to a marker, a method for estimating the distance from the marker to the trait gene, a confidence interval for that distance, and methods for computing power and sample sizes. We also resolve some practical issues involved in implementing the methods, including how to use partially informative markers and how to test candidate genes. 20 refs., 5 figs., 1 tab.
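A minimal sketch of the "greater than expected homozygosity" idea is an exact one-sided binomial tail test on the marker outcomes in chromosomes inherited from the nondisjoining parent. This is not the authors' full likelihood machinery: the null homozygosity rate and the counts below are hypothetical, since the true null depends on marker informativeness, recombination, and the stage of nondisjunction.

```python
from math import comb

def excess_homozygosity_p(n, k, p0):
    """One-sided exact binomial P-value for observing >= k homozygous
    marker outcomes among n trisomic cases when the null rate is p0."""
    return sum(comb(n, j) * p0 ** j * (1 - p0) ** (n - j)
               for j in range(k, n + 1))

# Hypothetical: 30 trisomic individuals, 22 homozygous at the marker,
# with a null expectation of 50% for a fully informative marker.
p_value = excess_homozygosity_p(30, 22, 0.5)
```

A small P-value at a marker would flag it as a candidate for linkage to the trait locus, after which the distance-estimation methods the paper describes would apply.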
[Methods for grain size analysis of nanomedicines].
Geng, Zhi-Wang; He, Lan; Zhang, Qi-Ming; Yang, Yong-Jian
2012-07-01
As nanomedicines are developing fast in both academic and market areas, establishing suitable methods for nanomedicine analysis with proper techniques is an important subject requiring further research. The techniques that could be employed for grain size analysis of nanomedicines are reviewed. Several key techniques are discussed with their principles, scope of application, advantages, and defects. Their applications to nanomedicine analysis are discussed according to the properties of different nanomedicines, with the purpose of providing some suggestions for the control and administration of nanomedicines.
ANALYSIS OF RISK FACTORS ECTOPIC PREGNANCY
Directory of Open Access Journals (Sweden)
Budi Santoso
2017-04-01
Full Text Available Introduction: Ectopic pregnancy is a pregnancy with extrauterine implantation. This situation is a gynecologic emergency that contributes to maternal mortality. Therefore, early recognition, based on identification of the risk factors for ectopic pregnancy, is needed. Methods: The design was descriptive observational. The samples were pregnant women who had ectopic pregnancy at the Maternity Room, Emergency Unit, Dr. Soetomo Hospital, Surabaya, from 1 July 2008 to 1 July 2010. The sampling technique was total sampling using medical records. Result: Patients with ectopic pregnancy numbered 99 out of 2090 pregnant women who sought treatment in Dr. Soetomo Hospital. However, only 29 patients had traceable risk factors. Discussion: Most ectopic pregnancies were in the age group of 26-30 years, comprising 32 patients (32.32%), followed by 31-35 years with 25 patients (25.25%), 21-25 years with 18 patients (18.18%), 36-40 years with 17 patients (17.17%), 41 years and older with 4 patients (4.04%), and the fewest in the 16-20 years group with 3 patients (3.03%). A total of 12 patients with ectopic pregnancy (41.38%) had a history of abortion, and 6 patients (20.69%) each were in the groups who used family planning and who had a history of surgery; 2 patients (6.90%) had both a history of surgery and a history of abortion. The incidence rate of ectopic pregnancy was 4.73%, mostly in the second gravidity (34.34%), whereas nulliparous women had the highest prevalence at 39.39%. Acquired risk factors were: history of operations 10.34%, family planning use 20.69%, history of abortion 41.38%, history of abortion and operation 6.90%, and family planning with history of abortion 20.69%.
Institute of Scientific and Technical Information of China (English)
赵世舜; 麻海煜; 胡涛
2014-01-01
Among the many factors that affect the RMB exchange rate, seven main factors were selected: the GDP growth rate, the growth rate of the import-export balance, the growth rate of money and quasi-money supply, the growth rate of foreign exchange reserves, the China-US relative consumer price index, the inflation rate, and the China-US interest rate differential. A new variable selection method, the adaptive Lasso, was applied to select the factors influencing the RMB exchange rate. An empirical study using real data compared it with the least squares method and stepwise linear regression. The results show that the adaptive Lasso has a clear advantage over stepwise linear regression and least squares in selecting the factors that influence the RMB exchange rate: it yields the parameter estimates and the selection of influential factors at the same time.
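As a rough illustration (not the paper's data or model), the two-step adaptive Lasso can be sketched with scikit-learn: an initial OLS fit supplies the adaptive weights, and a column-rescaled ordinary Lasso then performs the selection. The seven synthetic predictors, coefficients, and penalty level below are invented assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(0)
n, p = 200, 7  # seven candidate factors, as in the abstract
X = rng.normal(size=(n, p))
beta = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8, 0.0])  # true sparse effects (toy)
y = X @ beta + rng.normal(scale=0.5, size=n)

# Step 1: initial OLS estimate supplies the adaptive weights w_j = 1/|b_j|
b_ols = LinearRegression().fit(X, y).coef_
w = 1.0 / np.abs(b_ols)

# Step 2: rescale columns by 1/w_j, run an ordinary Lasso, then rescale back
Xw = X / w
lasso = Lasso(alpha=0.05).fit(Xw, y)
b_adaptive = lasso.coef_ / w

selected = np.flatnonzero(np.abs(b_adaptive) > 1e-8)
print(selected)
```

Because the weights heavily penalize variables whose initial estimates are near zero, the adaptive Lasso tends to recover exactly the true support here, whereas a plain Lasso at the same penalty may keep noise variables.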
Institute of Scientific and Technical Information of China (English)
张锐丽; 史凤隆; 高万春
2013-01-01
Maintenance capability assessment involves many measurable indicators, and how to streamline a large number of index values is an active research problem. We used factor analysis to integrate the various indicators, accounting for their correlations, and extracted the common factors. Based on the maintenance indicators represented by the common factors, we reintegrated the original data and divided the units into groups using hierarchical clustering.
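The pipeline the abstract describes — factor extraction followed by hierarchical clustering on the factor scores — might be sketched as follows. The data, the number of factors, and the number of clusters are invented for illustration, not taken from the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# hypothetical maintenance-capability data for 12 units:
# two latent factors drive six observed indicators
f = rng.normal(size=(12, 2))
load = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                 [0.0, 1.0], [0.1, 0.9], [0.0, 0.8]])
X = f @ load.T + 0.1 * rng.normal(size=(12, 6))

# extract two common factors, then cluster the units on their factor scores
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(X)
groups = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(groups)
```

Clustering on the two factor scores rather than the six raw indicators removes the redundancy among correlated indicators before the grouping step.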
Association between Social and Demographic Factors with Feeding Methods in
Directory of Open Access Journals (Sweden)
Maryam Gholamalizadeh
2013-03-01
Full Text Available Background: Healthy nutrition plays an important role in childhood. A child's food habits will probably continue into adulthood and can increase the risk of many chronic diseases. The role of parents in child nutrition, as food providers and models of eating patterns, is recognized as the most important factor in child nutrition. Recent studies have shown that the methods parents use to feed their children play an important role in the child's diet and BMI. This paper aimed to investigate which types of parenting control practices parents use to manage their children's nutrition. Materials and Methods: A cross-sectional survey of 208 parents with children aged 3-6 years was carried out in 30 primary schools. Measures included demographic and social factors and aspects of child feeding practices. Results: Stay-at-home mothers used more modeling practices. Mothers of sons used more pressure to eat than others. Older mothers used less pressure to eat. Mothers with higher BMI used more emotion-regulation strategies and less modeling, and mothers with more education used more modeling. Conclusion: The results showed a significant relationship between demographic and social factors and aspects of the feeding practices.
Numerical methods in software and analysis
Rice, John R
1992-01-01
Numerical Methods, Software, and Analysis, Second Edition introduces science and engineering students to the methods, tools, and ideas of numerical computation. Introductory courses in numerical methods face a fundamental problem: there is too little time to learn too much. This text addresses that problem by using high-quality mathematical software; in fact, its objective is to present scientific problem solving using standard mathematical software. The book discusses numerous programs and software packages, focusing on the IMSL library (including the PROTRAN system) and ACM Algorithm
Key Factors for Selecting an Agile Method: A Systematic Literature Review
Directory of Open Access Journals (Sweden)
Mashal Kasem Alqudah
2017-04-01
Full Text Available Agile methods have become popular in recent years because the success rate of project development using Agile methods is better than structured design methods. Nevertheless, less than 50 percent of projects implemented using Agile methods are considered successful, and selecting the wrong Agile method is one of the reasons for project failure. Selecting the most appropriate Agile method is a challenging task because there are so many to choose from. In addition, potential adopters believe that migrating to an Agile method involves taking a drastic risk. Therefore, to assist project managers and other decision makers, this study aims to identify the key factors that should be considered when selecting an appropriate Agile method. A systematic literature review was performed to elicit these factors in an unbiased manner and then content analysis was used to analyze the resultant data. It was found that the nature of project, development team skills, project constraints, customer involvement and organizational culture are the key factors that should guide decision makers in the selection of an appropriate Agile method based on the value these factors have for different organizations and/or different projects.
Organic reagents in spectrophotometric methods of analysis
Energy Technology Data Exchange (ETDEWEB)
Savvin, Sergey B; Mikhailova, Alla V [V.I. Vernadsky Institute of Geochemistry and Analytical Chemistry, Russian Academy of Sciences, Moscow (Russian Federation); Shtykov, S N [Department of Chemistry, N.G. Chernyshevskii Saratov State University (Russian Federation)
2006-04-30
The role of organic, in particular complex-forming, reagents in the formation and development of spectrophotometric analysis is discussed. The prospects for the use of organic reagents in modern analytical methods are considered; attention is focused on modified and immobilised reagents, receptor molecules, and the use of nonaqueous and organised media.
Systems and methods for sample analysis
Energy Technology Data Exchange (ETDEWEB)
Cooks, Robert Graham; Li, Guangtao; Li, Xin; Ouyang, Zheng
2015-10-20
The invention generally relates to systems and methods for sample analysis. In certain embodiments, the invention provides a system for analyzing a sample that includes a probe including a material connected to a high voltage source, a device for generating a heated gas, and a mass analyzer.
Modern methods of wine quality analysis
Directory of Open Access Journals (Sweden)
Галина Зуфарівна Гайда
2015-06-01
Full Text Available In this paper, physical-chemical and enzymatic methods for the quantitative analysis of the basic wine components are reviewed. Results of the authors' own experiments on the development of enzyme- and cell-based amperometric sensors for ethanol, lactate, glucose and arginine are presented.
Single-cell analysis - Methods and protocols
Carlo Alberto Redi
2013-01-01
This is certainly a timely volume in the Methods in Molecular Biology series: we have already entered the synthetic biology era, and we thus need to be aware of the new methodological advances able to fulfill the needs of biologists, biotechnologists and nano-biotechnologists. Notably, among these, the possibility of performing single-cell analysis allows researchers to capture single-cell responses....
ASAAM: Aspectual Software Architecture Analysis Method
Tekinerdogan, B.
Software architecture analysis methods aim to predict the quality of a system before it has been developed. In general, the quality of the architecture is validated by analyzing the impact of predefined scenarios on architectural components. Hereby, it is implicitly assumed that an appropriate
Digital Forensics Analysis of Spectral Estimation Methods
Mataracioglu, Tolga
2011-01-01
Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. In today's world it is widely used to secure information. In this paper, the traditional spectral estimation methods are introduced, and the performance of each method is examined by comparing all of the spectral estimation methods. From these performance analyses, brief pros and cons of the spectral estimation methods are given. We also give a steganography demo by hiding information in a sound signal and manage to extract the information (i.e., the true frequency of the information signal) from the sound by means of the spectral estimation methods.
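A minimal sketch of two classical nonparametric spectral estimators of the kind compared here — the raw periodogram and Welch's averaged periodogram — assuming SciPy is available. The embedded 123 Hz tone stands in for the "hidden information" frequency and is an invented example, not the paper's signal.

```python
import numpy as np
from scipy.signal import periodogram, welch

fs = 1000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# a hypothetical hidden tone at 123 Hz buried in white noise
x = (np.sin(2 * np.pi * 123.0 * t)
     + 0.5 * np.random.default_rng(2).normal(size=t.size))

# raw periodogram: fine frequency resolution, high variance
f_p, Pxx = periodogram(x, fs=fs)
# Welch: averaging over segments trades resolution for lower variance
f_w, Pw = welch(x, fs=fs, nperseg=256)

print(f_p[np.argmax(Pxx)])  # frequency of the periodogram's spectral peak
```

Both estimators place their spectral peak at (or very near) the hidden tone; the choice between them is the usual resolution-versus-variance trade-off the paper's comparison concerns.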
Spectroscopic Chemical Analysis Methods and Apparatus
Hug, William F. (Inventor); Reid, Ray D. (Inventor); Bhartia, Rohit (Inventor); Lane, Arthur L. (Inventor)
2017-01-01
Spectroscopic chemical analysis methods and apparatus are disclosed which employ deep ultraviolet (e.g. in the 200 nm to 300 nm spectral range) electron beam pumped wide bandgap semiconductor lasers, incoherent wide bandgap semiconductor light emitting devices, and hollow cathode metal ion lasers to perform non-contact, non-invasive detection of unknown chemical analytes. These deep ultraviolet sources enable dramatic size, weight and power consumption reductions of chemical analysis instruments. In some embodiments, Raman spectroscopic detection methods and apparatus use ultra-narrow-band angle tuning filters, acousto-optic tuning filters, and temperature tuned filters to enable ultra-miniature analyzers for chemical identification. In some embodiments Raman analysis is conducted along with photoluminescence spectroscopy (i.e. fluorescence and/or phosphorescence spectroscopy) to provide high levels of sensitivity and specificity in the same instrument.
Practical Fourier analysis for multigrid methods
Wienands, Roman
2004-01-01
Before applying multigrid methods to a project, mathematicians, scientists, and engineers need to answer questions related to the quality of convergence, whether a development will pay out, whether multigrid will work for a particular application, and what the numerical properties are. Practical Fourier Analysis for Multigrid Methods uses a detailed and systematic description of local Fourier k-grid (k=1,2,3) analysis for general systems of partial differential equations to provide a framework that answers these questions.This volume contains software that confirms written statements about convergence and efficiency of algorithms and is easily adapted to new applications. Providing theoretical background and the linkage between theory and practice, the text and software quickly combine learning by reading and learning by doing. The book enables understanding of basic principles of multigrid and local Fourier analysis, and also describes the theory important to those who need to delve deeper into the detai...
Multiple predictor smoothing methods for sensitivity analysis.
Energy Technology Data Exchange (ETDEWEB)
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
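A toy version of the idea, not the authors' implementation: fit a crude local-averaging smoother of the model output against each input separately and compare the variance explained, which ranks inputs even when the input-output relationship is nonlinear. A real analysis would use LOESS, additive models, or the other techniques named above; the smoother and data here are simplified assumptions.

```python
import numpy as np

def smooth_r2(x, y, frac=0.2):
    """Crude univariate nearest-neighbour smoother: for each point, average y
    over the ~frac*n neighbours in x-order, then report variance explained."""
    n = len(x)
    k = max(2, int(frac * n))
    order = np.argsort(x)
    ys = y[order]
    fitted = np.array([ys[max(0, i - k // 2): i + k // 2 + 1].mean()
                       for i in range(n)])
    ss_res = np.sum((ys - fitted) ** 2)
    ss_tot = np.sum((ys - ys.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(3)
x1 = rng.uniform(-1, 1, 500)
x2 = rng.uniform(-1, 1, 500)
y = np.sin(3 * x1) + 0.1 * rng.normal(size=500)  # nonlinear in x1, flat in x2

print(smooth_r2(x1, y), smooth_r2(x2, y))
```

A linear-regression sensitivity measure would understate the importance of x1 here because sin(3*x1) is strongly nonlinear over the input range, which is exactly the situation where smoothing-based procedures are more informative.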
Simple gas chromatographic method for furfural analysis.
Gaspar, Elvira M S M; Lopes, João F
2009-04-03
A new, simple, gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water-soluble foods, using direct immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate; detection limits by GC-TOF-MS were 0.3, 1.2 and 0.9 ng mL(-1) for 2-F, 5-MF and 5-HMF, respectively. It was applied to different commercial food matrices: honey; white, demerara, brown and yellow table sugars; and white and red balsamic vinegars. This one-step, sensitive and direct method for the analysis of furfurals will help to characterise and quantify their presence in the human diet.
Heteroscedastic regression analysis method for mixed data
Institute of Scientific and Technical Information of China (English)
FU Hui-min; YUE Xiao-rui
2011-01-01
The heteroscedastic regression model was established and the heteroscedastic regression analysis method was presented for mixed data composed of complete data, type-I censored data and type-II censored data from the location-scale distribution. The best unbiased estimates of the regression coefficients, as well as the confidence limits of the location parameter and scale parameter, were given. Furthermore, the point estimates and confidence limits of percentiles were obtained. Thus, the traditional multiple regression analysis method, which is only suitable for complete data from the normal distribution, can be extended to heteroscedastic mixed data and the location-scale distribution, so the presented method has a broad range of promising applications.
Menstrual Factors, Reproductive Factors and Lung Cancer Risk: A Meta-analysis
Directory of Open Access Journals (Sweden)
Yue ZHANG
2012-12-01
Full Text Available Background and objective Epidemiological studies have suggested that menstrual and reproductive factors may influence lung cancer risk, but the results are controversial. We therefore carried out a meta-analysis to examine the associations of lung cancer in women with menstrual and reproductive factors. Methods Relevant studies were searched from the PubMed database, CNKI, WANFANG DATA and VIP INFORMATION up to January 2012, with no language restrictions. References listed in selected papers were also reviewed. We included studies that reported estimates of relative risks (RRs) with 95% confidence intervals (CIs) for the association between menstrual and reproductive factors and lung cancer risk. The pooled RRs were calculated after the heterogeneity test with the software Stata 11, and publication bias and sensitivity were evaluated at the same time. Results Twenty-five articles, representing 24 independent studies, were included in this meta-analysis. Older age at menarche in North American women (RR=0.83; 95%CI: 0.73-0.94) was associated with a significantly decreased risk of lung cancer. Longer length of menstrual cycle was also associated with decreased lung cancer risk (RR=0.72; 95%CI: 0.57-0.90). Other exposures were not significantly associated. Conclusions Our analysis provides evidence for the hypothesis that female sex hormones influence the risk of lung cancer in women, yet additional studies are warranted to extend this finding and to clarify the underlying mechanisms.
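The pooling step of such a meta-analysis can be sketched as fixed-effect inverse-variance weighting on the log scale, with each study's standard error recovered from its reported confidence interval. The study-level numbers below are illustrative placeholders, not the estimates extracted in the paper.

```python
import numpy as np

# hypothetical study-level relative risks with 95% CIs (invented numbers)
rr = np.array([0.83, 0.72, 0.95, 0.78])
lo = np.array([0.73, 0.57, 0.80, 0.62])
hi = np.array([0.94, 0.90, 1.13, 0.98])

# inverse-variance (fixed-effect) pooling on the log scale
log_rr = np.log(rr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from the CI width
w = 1.0 / se**2
pooled = np.exp(np.sum(w * log_rr) / np.sum(w))
se_pooled = 1.0 / np.sqrt(np.sum(w))
ci = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) * se_pooled)
print(round(pooled, 2), np.round(ci, 2))
```

In practice one would first test heterogeneity (as the authors did with Stata) and switch to a random-effects model such as DerSimonian-Laird when the studies are not homogeneous.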
Institute of Scientific and Technical Information of China (English)
苏菊香; 蔡连顺; 张庆华; 陈光; 毕胜; 代月; 逯川英子; 刘春波; 薛凤娇
2012-01-01
Objective To compare Demodex detection methods, to investigate the prevalence of Demodex infection among medical students, and to analyze the factors associated with Demodex infection. Methods 612 students were examined for Demodex using the transparent adhesive (cellophane) tape method, the skin-scraping method and the extrusion method, and the factors leading to Demodex infection were investigated through a questionnaire survey. Results The infection rates detected by the cellophane tape, scraping and squeezing methods were 21.73%, 14.71% and 15.35%, respectively; the detection rate of the cellophane tape method was higher than that of the other two methods. The overall Demodex infection rate among the students was 21.73%. Compared with dry skin, the infection rates of oily and mixed skin were significantly higher, whereas there was no statistically significant difference between students with healthy facial skin and those with facial disorders. Conclusion The cellophane tape method will be used in future teaching. Demodex infection among the students was mostly mild, and the factors leading to infection are closely related to personal habits, skin type, and communal living.
The determinants of the bias in Minimum Rank Factor Analysis (MRFA)
Socan, G; ten Berge, JMF; Yanai, H; Okada, A; Shigemasu, K; Kano, Y; Meulman, JJ
2003-01-01
Minimum Rank Factor Analysis (MRFA), see Ten Berge (1998), and Ten Berge and Kiers (1991), is a method of common factor analysis which yields, for any given covariance matrix Sigma, a diagonal matrix Psi of unique variances which are nonnegative and which entail a reduced covariance matrix Sigma-Psi
Integrals of random fields treated by the model correction factor method
DEFF Research Database (Denmark)
Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der
2002-01-01
The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals...... are considered to be Gaussian. Conventional FORM analysis yields the linearization point of the idealized limit-state surface. A model correction factor is then introduced to push the idealized limit-state surface onto the actual limit-state surface. A few iterations yield a good approximation of the reliability...
Institute of Scientific and Technical Information of China (English)
张敏; 郝艳青; 柳韦华; 孙铮
2013-01-01
Objective To investigate the utilization of antenatal health care services and to explore its main influencing factors and intervention methods. Methods By simple random sampling, 1,685 pregnant women hospitalized from January 2011 to July 2012 were selected. A self-designed questionnaire was administered and related data were collected from the women and their medical records, covering general demographic characteristics, means of paying medical costs, awareness of prenatal care knowledge, convenience of access to care, the degree of attention paid by family members to the pregnant woman, and utilization of antenatal care services. Results The utilization rate of antenatal health care services was 81.19% (1,368/1,685). Multivariate logistic regression analysis showed significant differences in utilization across age, means of paying medical costs, number of pregnancies, awareness of prenatal care knowledge, and degree of attention from family members. Conclusions The utilization of antenatal health care services in this area is low. Improving the antenatal care service model, strengthening health education, improving the medical insurance system, and raising the awareness and skills of community health care services are the keys to increasing utilization.
Replica Analysis for Portfolio Optimization with Single-Factor Model
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model, and compare the findings obtained from our proposed method for correlated return rates with those obtained for independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we compare our approach with analytical procedures from operations research for minimizing the investment risk.
Disruptive Event Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2000-12-28
The purpose of this report was to document the process leading to, and the results of, development of radionuclide-, exposure scenario-, and ash thickness-specific Biosphere Dose Conversion Factors (BDCFs) for the postulated postclosure extrusive igneous event (volcanic eruption) at Yucca Mountain. BDCF calculations were done for seventeen radionuclides. The selection of radionuclides included those that may be significant dose contributors during the compliance period of up to 10,000 years, as well as radionuclides of importance for up to 1 million years postclosure. The approach documented in this report takes into account human exposure during three different phases at the time of, and after, volcanic eruption. Calculations of disruptive event BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. The pathway analysis included consideration of the different exposure pathways' contributions to the BDCFs. BDCFs for volcanic eruption, when combined with the concentration of radioactivity deposited by eruption on the soil surface, allow calculation of potential radiation doses to the receptor of interest. Calculation of radioactivity deposition is outside the scope of this report, and so is the transport of contaminated ash from the volcano to the location of the receptor. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA), in which doses are calculated to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.
Nominal Performance Biosphere Dose Conversion Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
M. Wasiolek
2000-12-21
The purpose of this report was to document the process leading to development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Consideration of radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of the different exposure pathways' contributions to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.
Exploratory factor analysis of the Dizziness Handicap Inventory (German version
Directory of Open Access Journals (Sweden)
de Bruin Eling D
2010-03-01
Full Text Available Abstract Background The Dizziness Handicap Inventory (DHI) is a validated, self-report questionnaire which is widely used as an outcome measure. Previous studies supported the multidimensionality of the DHI, but not the original subscale structure. The objectives of this survey were to explore the dimensions of the Dizziness Handicap Inventory - German version, and to investigate the associations of the retained factors with items assessing functional disability and the Hospital Anxiety and Depression Scale (HADS). Secondly, we aimed to interpret the retained factors according to the International Classification of Functioning, Disability and Health (ICF). Methods Patients were recruited from a tertiary centre for vertigo, dizziness or balance disorders. They filled in two questionnaires: (1) the DHI assesses precipitating physical factors associated with dizziness/unsteadiness and the functional/emotional consequences of symptoms; (2) the HADS assesses non-somatic symptoms of anxiety and depression. In addition, patients answered the third question of the University of California Los Angeles Dizziness Questionnaire, which covers the impact of dizziness and unsteadiness on everyday activities. Principal component analysis (PCA) was performed to explore the dimensions of the DHI. Associations were estimated by Spearman correlation coefficients. Results One hundred ninety-four patients with dizziness or unsteadiness associated with a vestibular disorder, with a mean age (standard deviation) of 50.6 (13.6) years, participated. Based on eigenvalues greater than one and on the scree plot, we analysed diverse factor solutions. The 3-factor solution seems to be reliable and clinically relevant, and can partly be explained with the ICF. It explains 49.2% of the variance. Factor 1 comprises the effect of dizziness and unsteadiness on emotion and participation, factor 2 informs about specific activities or effort provoking dizziness and unsteadiness, and factor 3 focuses on self...
Evaluation of methods for modeling transcription-factor sequence specificity
Weirauch, Matthew T.; Cote, Atina; Norel, Raquel; Annala, Matti; Zhao, Yue; Riley, Todd R.; Saez-Rodriguez, Julio; Cokelaer, Thomas; Vedenko, Anastasia; Talukder, Shaheynoor; Bussemaker, Harmen J.; Morris, Quaid D.; Bulyk, Martha L.; Stolovitzky, Gustavo
2013-01-01
Genomic analyses often involve scanning for potential transcription-factor (TF) binding sites using models of the sequence specificity of DNA binding proteins. Many approaches have been developed to model and learn a protein’s binding specificity, but these methods have not been systematically compared. Here we applied 26 such approaches to in vitro protein binding microarray data for 66 mouse TFs belonging to various families. For 9 TFs, we also scored the resulting motif models on in vivo data, and found that the best in vitro–derived motifs performed similarly to motifs derived from in vivo data. Our results indicate that simple models based on mononucleotide position weight matrices learned by the best methods perform similarly to more complex models for most TFs examined, but fall short in specific cases (<10%). In addition, the best-performing motifs typically have relatively low information content, consistent with widespread degeneracy in eukaryotic TF sequence preferences. PMID:23354101
Methods for genetic linkage analysis using trisomies
Energy Technology Data Exchange (ETDEWEB)
Feingold, E.; Lamb, N.E.; Sherman, S.L. [Emory Univ., Atlanta, GA (United States)
1994-09-01
Certain genetic disorders (e.g. congenital cataracts, duodenal atresia) are rare in the general population, but more common in people with Down's syndrome. We present a method for using individuals with trisomy 21 to map genes for such traits. Our methods are analogous to methods for mapping autosomal dominant traits using affected relative pairs by looking for markers with greater than expected identity-by-descent. In the trisomy case, one would take trisomic individuals and look for markers with greater than expected reduction to homozygosity in the chromosomes inherited from the non-disjoining parent. We present statistical methods for performing such a linkage analysis, including a test for linkage to a marker, a method for estimating the distance from the marker to the gene, a confidence interval for that distance, and methods for computing power and sample sizes. The methods are described in the context of a gene-dosage model for the etiology of the disorder, but can be extended to other models. We also resolve some practical issues involved in implementing the methods, including how to use partially informative markers, how to test candidate genes, and how to handle the effect of reduced recombination associated with maternal meiosis I non-disjunction.
Normal science and its tools: Reviewing the effects of exploratory factor analysis in management
Directory of Open Access Journals (Sweden)
Luciano Rossoni
2016-06-01
Full Text Available ABSTRACT The aim of this study is to investigate how different methods of extraction, factor definition, and rotation in exploratory factor analysis affect the fit of measurement scales. For this purpose, we undertook a meta-analysis of 23 studies. Our results indicate that the Principal Components method provides greater explained variance, while the Maximum Likelihood method increases reliability. Of the rotation methods, Varimax provides greater reliability while Quartimax provides lower correlation between factors. In conclusion, this study highlights implications for quantitative research and suggests potential new studies.
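The contrast between extraction methods can be illustrated with scikit-learn: principal components maximize total explained variance, while maximum likelihood factor analysis models only the shared (common) variance, leaving item-specific noise aside. The one-factor synthetic scale below is an assumption for illustration, not data from the meta-analysis.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(4)
# synthetic scale: 6 items driven by one latent trait plus item noise
trait = rng.normal(size=(300, 1))
loadings = np.array([[0.9, 0.8, 0.85, 0.7, 0.75, 0.8]])
X = trait @ loadings + 0.5 * rng.normal(size=(300, 6))

# principal components: share of TOTAL variance captured by the first component
pca = PCA(n_components=1).fit(X)
print(pca.explained_variance_ratio_[0])

# maximum likelihood FA: communalities = share of each item's variance
# that is common (1 minus the estimated unique/noise variance)
fa = FactorAnalysis(n_components=1).fit(X)
communalities = 1 - fa.noise_variance_ / X.var(axis=0)
print(communalities.round(2))
```

Because PCA folds some item-specific noise into its first component while ML factor analysis explicitly separates it, the two methods can rank and weight items differently, which is the pattern the meta-analysis documents.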
A DECOMPOSITION METHOD OF STRUCTURAL DECOMPOSITION ANALYSIS
Institute of Scientific and Technical Information of China (English)
LI Jinghua
2005-01-01
Over the past two decades, structural decomposition analysis (SDA) has developed into a major analytical tool in the field of input-output (IO) techniques, but the method was found to suffer from one or more of the following problems: the decomposition forms used to measure the contribution of a specific determinant are not unique, owing to the existence of a multitude of equivalent forms; irrational, since the weights of different determinants do not match; or inexact, owing to the existence of large interaction terms. In this paper, a decomposition method is derived to overcome these deficiencies. We prove that the result of this approach equals the Shapley value in cooperative games, and thereby obtain some properties of the method. Beyond that, the two approaches used predominantly in the literature are shown to be approximate solutions of the method.
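A small sketch of the Shapley-value style decomposition the paper connects to: attribute the total change in an outcome to each determinant by averaging that determinant's marginal contribution over all orderings of the changes. The three-determinant multiplicative identity is an invented toy, not an actual IO relation from the paper.

```python
from itertools import permutations

def shapley_decomposition(f, x0, x1):
    """Attribute the change f(x1) - f(x0) to each determinant by averaging
    its marginal contribution over all orderings (the Shapley value)."""
    n = len(x0)
    contrib = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        x = list(x0)
        for i in order:
            before = f(x)
            x[i] = x1[i]        # switch determinant i to its final value
            contrib[i] += f(x) - before
    return [c / len(perms) for c in contrib]

# toy identity: output = intensity * structure * demand (hypothetical names)
f = lambda x: x[0] * x[1] * x[2]
x0, x1 = [2.0, 3.0, 4.0], [2.5, 3.0, 5.0]
parts = shapley_decomposition(f, x0, x1)
print(parts, sum(parts))   # contributions sum exactly to f(x1) - f(x0)
```

Averaging over all orderings is what makes the decomposition unique and free of leftover interaction terms, which addresses the non-uniqueness and inexactness problems listed in the abstract.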
Factor analysis of serogroups botanica and aurisina of Leptospira biflexa.
Cinco, M
1977-11-01
Factor analysis was performed on serovars of the Botanica and Aurisina serogroups of Leptospira biflexa. The results show the arrangement of the main serovar- and serogroup-specific factors, as well as the antigens shared with serovars of heterologous serogroups.
An Item Factor Analysis of the Mooney Problem Check List
Stewart, David W.; Deiker, Thomas
1976-01-01
Explores the factor structure of the Mooney Problem Check List (MPCL) at the junior and senior high school level by undertaking a large obverse factor analysis of item responses in three adolescent criterion groups. (Author/DEP)
Project-Method Fit: Exploring Factors That Influence Agile Method Use
Young, Diana K.
2013-01-01
While the productivity and quality implications of agile software development methods (SDMs) have been demonstrated, research concerning the project contexts where their use is most appropriate has yielded less definitive results. Most experts agree that agile SDMs are not suited for all project contexts. Several project and team factors have been…
On the Analysis Method of the Triple Test Cross Design
Institute of Scientific and Technical Information of China (English)
JinYi
1995-01-01
The analysis method of the triple test cross design is discussed in detail from the standpoint of the two-factor experiment design and the genetic models of additive-dominant effects and of epistasis effects. Two conclusions differing from previous reports are drawn: (1) the degrees of freedom of the orthogonal terms C2 and C3 are both m; (2) the denominator in the F test of C2 and C3 is the error mean square between plots.
Method of thermal derivative gradient analysis (TDGA)
Directory of Open Access Journals (Sweden)
M. Cholewa
2009-07-01
Full Text Available In this work a concept of thermal analysis is presented that uses the temperature derivatives with respect to time and direction to describe crystallization kinetics. The method of thermal derivative gradient analysis (TDGA) is intended for the investigation of alloys and metals, as well as cast composites, in the solidification range. The construction and operating characteristics of the test stand are presented, including the processing modules and probes together with the thermocouple locations. The authors present examples of result interpretation for AlSi11 alloy castings with diversified wall thickness and at different pouring temperatures.
Directory of Open Access Journals (Sweden)
Michelle Alessandra de Castro
2015-02-01
Full Text Available This study aimed to investigate the effects of factor rotation methods on the interpretability and construct validity of dietary patterns derived in a representative sample of 1,102 Brazilian adults. Dietary patterns were derived from exploratory factor analysis. Orthogonal (varimax) and oblique (promax, direct oblimin) rotations were applied. Confirmatory factor analysis assessed the construct validity of the dietary patterns derived according to two factor loading cut-offs (≥ |0.20| and ≥ |0.25|). Goodness-of-fit indexes assessed the model fit. Differences in composition and in interpretability of the first pattern were observed between varimax and promax/oblimin at cut-off ≥ |0.20|. At cut-off ≥ |0.25|, these differences were no longer observed. None of the patterns derived at cut-off ≥ |0.20| showed acceptable model fit. At cut-off ≥ |0.25|, the promax rotation produced the best model fit. The effects of factor rotation on dietary patterns differed according to the factor loading cut-off used in exploratory factor analysis.
Analysis of electron beam output factors by Monte Carlo method
Institute of Scientific and Technical Information of China (English)
迟子锋; 刘丹; 张勇; 李润霄; 景仲昊; 冯峰; 韩春
2014-01-01
Objective: To investigate the Monte Carlo calculation of output factors for electron beams in radiotherapy and to explore how each component of the linear accelerator influences the output factor. Methods: The Monte Carlo code EGS4/MCTP was used to simulate the treatment head of a Varian 23EX medical linear accelerator for 6, 9 and 18 MeV electron beams at a source-to-surface distance of 100 cm, for field sizes ranging from 2 cm × 2 cm to 25 cm × 25 cm. The output factors were measured under the same conditions with a Scanditronix Wellhofer Blue Phantom automatic scanning water phantom system, and the calculated and measured values were required to agree to within 2%. The Monte Carlo method was then used to calculate the output factors of direct and scattered particles for different energies and different beam-limiting configurations (applicator, jaws and cutout), and the influence of each component of the beam-limiting system on the output factor was discussed. Results: The calculated output factors for the 6, 9 and 18 MeV electron beams agreed with the measurements to within 1%. The electron energy and the applicator-jaw-cutout combination affect the direct and scattered particles, and hence the output factor, in a complex manner. Conclusions: The Monte Carlo code EGS4/MCTP can accurately calculate the output factors of direct and scattered particles for different energies and beam-limiting configurations; the electron energy and the beam-limiting system influence the output factor in a complex way.
Institute of Scientific and Technical Information of China (English)
路晓崇; 黄元炯; 宋朝鹏; 孙福山; 王松峰; 张铭真; 宫长荣
2015-01-01
Many factors influence the curing quality of flue-cured tobacco. To identify the key ones, the fuzzy DEMATEL (Decision Making Trial and Evaluation Laboratory) method was used to analyze the relation (cause) degree and prominence (centrality) of seventeen factors in five major categories (leaf growing environment, curing environment, leaf attributes, loading status, and operators' skill) affecting curing. The results showed that the major factors affecting the curing of flue-cured tobacco were light irradiation, atmospheric temperature, rainfall, dry-bulb temperature and wet-bulb temperature. Among them, the rainfall, light irradiation and atmospheric temperature during field growth were the fundamental factors, the control of the dry-bulb and wet-bulb temperatures in the barn was the direct factor, and the dry-bulb temperature had the greatest influence on curing.
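The prominence and relation indices referred to above come from the standard DEMATEL calculus: a direct-influence matrix A is normalized, the total-relation matrix T = D(I - D)^-1 is formed, and the row sums R and column sums C give prominence R + C and relation R - C. A minimal crisp (non-fuzzy) sketch with an invented 4-factor influence matrix:

```python
import numpy as np

# Hypothetical direct-influence matrix among four factors (0-3 scale)
A = np.array([[0, 3, 2, 1],
              [1, 0, 2, 1],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)

D = A / A.sum(axis=1).max()            # normalize by the largest row sum
T = D @ np.linalg.inv(np.eye(4) - D)   # total-relation matrix T = D(I - D)^-1

R = T.sum(axis=1)    # total influence exerted by each factor
C = T.sum(axis=0)    # total influence received by each factor
prominence = R + C   # centrality: overall involvement of the factor
relation = R - C     # cause degree: > 0 cause factor, < 0 effect factor
print(prominence, relation)
```

The fuzzy variant used in the paper replaces the crisp scores in A with fuzzy numbers that are defuzzified before this same computation.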
Neumann, Marc B
2012-09-01
Five sensitivity analysis methods based on derivatives, screening, regression, variance decomposition and entropy are introduced, applied and compared for a model predicting micropollutant degradation in drinking water treatment. The sensitivity analysis objectives considered are factors prioritisation (detecting important factors), factors fixing (detecting non-influential factors) and factors mapping (detecting which factors are responsible for causing pollutant limit exceedances). It is shown how the applicability of methods changes in view of increasing interactions between model factors and increasing non-linearity between the model output and the model factors. A high correlation is observed between the indices obtained for the objectives factors prioritisation and factors mapping due to the positive skewness of the probability distributions of the predicted residual pollutant concentrations. The entropy-based method which uses the Kullback-Leibler divergence is found to be particularly suited when assessing pollutant limit exceedances.
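A variance-decomposition sensitivity index of the kind compared in this study can be estimated with a simple pick-freeze Monte Carlo scheme. The following is a generic sketch on a toy linear model, not the micropollutant degradation model of the paper:

```python
import numpy as np

def first_order_sobol(model, n_factors, n_samples=200_000, seed=1):
    """Estimate first-order Sobol indices with a pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_factors))
    B = rng.uniform(size=(n_samples, n_factors))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_factors):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only factor i, freeze the others
        indices.append(np.mean(yB * (model(ABi) - yA)) / var)
    return np.array(indices)

# Toy model Y = X1 + 2*X2 with X ~ U(0,1): analytically S1 = 1/5, S2 = 4/5
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], n_factors=2)
print(S)
```

For factors prioritisation one ranks the factors by these indices; factors fixing instead needs total-order indices, which use a closely related estimator.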
Model-based methods for linkage analysis.
Rice, John P; Saccone, Nancy L; Corbett, Jonathan
2008-01-01
The logarithm of an odds ratio (LOD) score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential so that pedigrees or LOD curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders where the maximum LOD score statistic shares some of the advantages of the traditional LOD score approach, but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the LOD score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
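The LOD score itself is a simple likelihood ratio: for R recombinants and NR non-recombinants among phase-known meioses, LOD(theta) = log10[theta^R (1 - theta)^NR / 0.5^(R+NR)]. A minimal sketch of this classical two-point statistic (illustrative only, not the maximum-LOD-score machinery discussed for complex disorders):

```python
from math import log10

def lod(theta, recomb, nonrecomb):
    """Two-point LOD score for phase-known meioses at recombination fraction theta."""
    n = recomb + nonrecomb
    return recomb * log10(theta) + nonrecomb * log10(1 - theta) - n * log10(0.5)

# 2 recombinants out of 20 meioses; the MLE of theta is 2/20 = 0.1
theta_hat = 2 / 20
score = lod(theta_hat, 2, 18)
print(round(score, 3))  # 3.197, above the classical threshold of 3 for declaring linkage
```

The sequential character of the method comes from the fact that LOD scores from independent pedigrees simply add at each value of theta.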
Exploratory matrix factorization for PET image analysis.
Kodewitz, A; Keck, I R; Tomé, A M; Lang, E W
2010-01-01
Features are extracted from PET images employing exploratory matrix factorization techniques such as nonnegative matrix factorization (NMF). Appropriate features are fed into classifiers such as a support vector machine or a random forest tree classifier. An automatic feature extraction and classification is achieved with high classification rate which is robust and reliable and can help in an early diagnosis of Alzheimer's disease.
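The pipeline described, NMF feature extraction feeding a random-forest classifier, can be sketched with scikit-learn on synthetic nonnegative data standing in for PET intensity vectors. This is an illustration of the pipeline shape only; it makes no claim about the preprocessing or parameters used by the authors:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Two synthetic "groups" of nonnegative intensity vectors (stand-ins for PET data)
X = np.vstack([rng.gamma(2.0, 1.0, size=(100, 50)),
               rng.gamma(2.0, 1.5, size=(100, 50))])
y = np.repeat([0, 1], 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
W_tr = nmf.fit_transform(X_tr)   # nonnegative factor loadings used as features
W_te = nmf.transform(X_te)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(W_tr, y_tr)
print("test accuracy:", clf.score(W_te, y_te))
```

The appeal of NMF here is that the nonnegative components remain interpretable as additive image parts, unlike signed PCA components.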
Spatial Analysis Methods of Road Traffic Collisions
DEFF Research Database (Denmark)
Loo, Becky P. Y.; Anderson, Tessa Kate
Spatial Analysis Methods of Road Traffic Collisions centers on the geographical nature of road crashes, and uses spatial methods to provide a greater understanding of the patterns and processes that cause them. Written by internationally known experts in the field of transport geography, the book...... outlines the key issues in identifying hazardous road locations (HRLs), considers current approaches used for reducing and preventing road traffic collisions, and outlines a strategy for improved road safety. The book covers spatial accuracy, validation, and other statistical issues, as well as link...
ANALYSIS METHOD OF AUTOMATIC PLANETARY TRANSMISSION KINEMATICS
Directory of Open Access Journals (Sweden)
Józef DREWNIAK
2014-06-01
Full Text Available In the present paper, a planetary automatic transmission is modeled by means of contour graphs. The goals of the modeling are versatile: ratio calculation via algorithmic equation generation, and analysis of velocities and accelerations. Exemplary gear runs are analyzed; several drives/gears are considered in turn, discussing their functional schemes, assigned contour graphs, and the generated systems of equations and their solutions. The advantages of the method are its algorithmic approach and its generality, particular drives being special cases of the generally created model. Moreover, the method allows for further analysis and synthesis tasks, e.g. checking the isomorphism of design solutions.
Analysis of Factors Affecting the Quality of an E-commerce Website Using Factor Analysis
Directory of Open Access Journals (Sweden)
Saurabh Mishra
2014-12-01
Full Text Available The purpose of this study is to identify factors which affect the quality and effectiveness of an e-commerce website, factors which also strongly affect customer satisfaction and, ultimately, customer retention and loyalty. This research paper examines a set of 23 variables and integrates them into 4 factors which affect the quality of a website. An online questionnaire survey was conducted to generate statistics regarding the preferences of e-commerce website users. The 23 variables taken from the customer survey are grouped into 4 major factors using exploratory factor analysis: content, navigation, services and interface design. The research mainly comprises the responses of students between 18 and 25 years of age and considers different B2C commercial websites. The identified variables are important with respect to current market competition, as the service of an e-commerce website also plays a major role in ensuring customer satisfaction. Further research in this domain can be done on websites' versions for mobile devices.
Quantitative gold nanoparticle analysis methods: A review.
Yu, Lei; Andriola, Angelo
2010-08-15
Research and development in the preparation, characterization, and application of gold nanoparticles (AuNPs) have burgeoned in recent years. Many of the techniques and protocols are very mature, but two major concerns accompany the mass production and consumption of AuNP-based products. First, how many AuNPs exist in a dispersion? Second, where are the AuNPs after digestion by the environment, and how many remain? To answer these two questions, reliable and reproducible methods are needed to analyze the existence and the population of AuNPs in samples. This review summarizes the most recent chemical and particle quantitative analysis methods that have been used to characterize the concentration (in moles of gold per liter) or population (in particles per mL) of AuNPs. The methods summarized include mass spectrometry, electroanalytical methods, spectroscopic methods, and particle counting methods. These methods may count the number of AuNPs directly or analyze the total concentration of elemental gold in an AuNP dispersion.
Exploring Technostress: Results of a Large Sample Factor Analysis
Steponas Jonušauskas; Agota Giedre Raisiene
2016-01-01
With reference to the results of a large sample factor analysis, the article aims to propose a frame for examining technostress in a population. A survey and principal component analysis of a sample of 1013 individuals who use ICT in their everyday work were implemented in the research. 13 factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents’ an...
Identification of noise in linear data sets by factor analysis
Energy Technology Data Exchange (ETDEWEB)
Roscoe, B.A.; Hopke, Ph.K. (Illinois Univ., Urbana (USA))
1982-01-01
A technique which has the ability to identify bad data points after the data have been generated is classical factor analysis. The ability of classical factor analysis to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it to confidently isolate errors.
Institute of Scientific and Technical Information of China (English)
方伟成; 孙成访; 郭文显
2015-01-01
The water resources ecological footprint is an important index for measuring the sustainable development of water resources. This paper studied the water ecological footprint of Dongguan and used the LMDI method to analyze the factors influencing it. The results showed that from 2000 to 2011 the water ecological footprint of Dongguan first rose rapidly and then declined slowly. The economic effect was the main factor promoting the growth of the water ecological footprint, the water-footprint technical effect was the key factor restraining its growth, and the water-footprint structure effect and the population effect encouraged its growth to a certain extent.
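The LMDI effects of the kind reported above are computed with the logarithmic mean weight L(a, b) = (a - b)/(ln a - ln b): for an identity V = x1*x2*...*xn, the effect of factor i is L(V_T, V_0) * ln(x_i,T / x_i,0), and the effects sum exactly to the total change. A minimal sketch with invented numbers (the three-factor identity below is a generic illustration, not the paper's exact decomposition):

```python
from math import log, prod

def log_mean(a, b):
    """Logarithmic mean, with the limit value when a == b."""
    return a if a == b else (a - b) / (log(a) - log(b))

def lmdi_effects(x0, xT):
    """Additive LMDI decomposition of V = prod(x) into per-factor effects."""
    V0, VT = prod(x0), prod(xT)
    L = log_mean(VT, V0)
    return [L * log(xt / x0_) for x0_, xt in zip(x0, xT)]

# Hypothetical identity: footprint = population * (GDP/capita) * (footprint/GDP)
x0 = [8.0, 3.0, 0.50]   # base year
xT = [8.6, 4.2, 0.40]   # final year
effects = lmdi_effects(x0, xT)
print(effects, sum(effects), prod(xT) - prod(x0))  # the decomposition is exact
```

The absence of a residual term is the property that distinguishes LMDI from older index decomposition methods.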
Institute of Scientific and Technical Information of China (English)
张祎; 杨春霞; 栗保明
2011-01-01
In order to study the relationship between the design parameters of an electromagnetic railgun (EMG) and the armature muzzle velocity, an index system of the factors affecting the muzzle velocity was built. Using the grey relation entropy method, the significance of the factors influencing the scatter of the armature muzzle velocity, including capacitance, inductance, resistance, armature mass, effective rail length and discharge voltage, was analyzed. The results show that the main factors affecting the accuracy of the EMG are the discharge voltage and the capacitance, while the armature mass has the smallest effect on accuracy. The result offers a scientific basis for the optimization design of electromagnetic launch systems.
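Grey relational analysis, the basis of the entropy-weighted variant used above, ranks factor sequences by their closeness to a reference sequence through relational coefficients xi = (d_min + rho*d_max)/(d + rho*d_max). A minimal sketch of the basic (unweighted) grey relational grade, with invented data; the entropy weighting of the paper's method is omitted:

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Grey relational grade of each factor sequence against the reference."""
    # Mean-normalize each sequence so different scales are comparable
    ref = reference / reference.mean()
    fac = factors / factors.mean(axis=1, keepdims=True)
    delta = np.abs(fac - ref)                        # absolute difference sequences
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)  # grey relational coefficients
    return xi.mean(axis=1)                           # grade = mean coefficient

rng = np.random.default_rng(0)
muzzle_velocity = rng.normal(1800, 50, size=10)  # hypothetical response sequence
factor_seqs = np.vstack([
    muzzle_velocity * 1.01 + rng.normal(0, 5, 10),  # closely related factor
    rng.normal(100, 30, size=10),                   # unrelated factor
])
grades = grey_relational_grades(muzzle_velocity, factor_seqs)
print(grades)
```

A higher grade indicates a factor whose variation tracks the response more closely, which is how the significance ranking in the abstract is obtained.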
Institute of Scientific and Technical Information of China (English)
孙桂春
2016-01-01
Through experimental research, the factors influencing the determination of organic chlorine content in chemical agents were analyzed. The range of sample weights, the method of wrapping and fixing the sample, and operating steps such as the sampling speed during the determination of salt content were specified, standardizing the detection method and improving the accuracy and timeliness of the test data.
Cloud Based Development Issues: A Methodical Analysis
Directory of Open Access Journals (Sweden)
Sukhpal Singh
2012-11-01
Full Text Available Cloud based development is a challenging task for various software engineering projects, especially for those which demand extraordinary quality, reusability and security along with a general architecture. In this paper we present a report on a methodical analysis of cloud based development problems published in major computer science and software engineering journals and conferences by various researchers. Research papers were collected from different scholarly databases using search engines within a particular period of time. A total of 89 research papers were analyzed in this methodical study and categorized into four classes according to the problems addressed by them. The majority of the research papers focused on quality (24 papers) associated with cloud based development, and 16 papers focused on analysis and design. By considering the areas focused on by existing authors and the gaps among them, untouched areas of cloud based development can be identified for future research.
Numerical methods and analysis of multiscale problems
Madureira, Alexandre L
2017-01-01
This book is about numerical modeling of multiscale problems, and introduces several asymptotic analysis and numerical techniques which are necessary for a proper approximation of equations that depend on different physical scales. Aimed at advanced undergraduate and graduate students in mathematics, engineering and physics – or researchers seeking a no-nonsense approach –, it discusses examples in their simplest possible settings, removing mathematical hurdles that might hinder a clear understanding of the methods. The problems considered are given by singular perturbed reaction advection diffusion equations in one and two-dimensional domains, partial differential equations in domains with rough boundaries, and equations with oscillatory coefficients. This work shows how asymptotic analysis can be used to develop and analyze models and numerical methods that are robust and work well for a wide range of parameters.
Qualitative data analysis a methods sourcebook
Miles, Matthew B; Saldana, Johnny
2014-01-01
The Third Edition of Miles & Huberman's classic research methods text is updated and streamlined by Johnny Saldaña, author of The Coding Manual for Qualitative Researchers. Several of the data display strategies from previous editions are now presented in re-envisioned and reorganized formats to enhance reader accessibility and comprehension. The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting. Miles and Huberman's original research studies are profiled and accompanied with new examples from Saldaña's recent qualitative work. The book's most celebrated chapter, "Drawing and Verifying Conclusions," is retained and revised, and the chapter on report writing has been greatly expanded, and is now called "Writing About Qualitative Research." Comprehensive and authoritative, Qualitative Data Analysis has been elegantly revised for a new generation of qualitative r...
Digital dream analysis: a revised method.
Bulkeley, Kelly
2014-10-01
This article demonstrates the use of a digital word search method designed to provide greater accuracy, objectivity, and speed in the study of dreams. A revised template of 40 word search categories, built into the website of the Sleep and Dream Database (SDDb), is applied to four "classic" sets of dreams: The male and female "Norm" dreams of Hall and Van de Castle (1966), the "Engine Man" dreams discussed by Hobson (1988), and the "Barb Sanders Baseline 250" dreams examined by Domhoff (2003). A word search analysis of these original dream reports shows that a digital approach can accurately identify many of the same distinctive patterns of content found by previous investigators using much more laborious and time-consuming methods. The results of this study emphasize the compatibility of word search technologies with traditional approaches to dream content analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
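At its core, the word-search approach is category-frequency counting over dream reports. A toy sketch of that idea, with hypothetical categories invented here for illustration (the SDDb template itself has 40 categories not reproduced below):

```python
import re

# Hypothetical mini-template of word search categories
categories = {
    "emotion":  {"afraid", "happy", "angry", "sad"},
    "movement": {"running", "flying", "falling", "walking"},
    "water":    {"river", "ocean", "rain", "swimming"},
}

def category_frequencies(reports):
    """Fraction of reports containing at least one word from each category."""
    freqs = {}
    for name, words in categories.items():
        hits = sum(1 for r in reports
                   if any(w in re.findall(r"[a-z']+", r.lower()) for w in words))
        freqs[name] = hits / len(reports)
    return freqs

reports = ["I was running from something and felt afraid.",
           "Flying over the ocean in the rain.",
           "A quiet dinner with old friends."]
print(category_frequencies(reports))
```

Comparing such frequency profiles across dream sets is what allows the digital method to reproduce patterns found by hand-coded content analysis.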
Effect Factors of Liquid Scintillation Analysis
Institute of Scientific and Technical Information of China (English)
2008-01-01
Over the past decades, the liquid scintillation analysis (LSA) technique has remained one of the most popular experimental tools used for the quantitative analysis of radionuclides, especially low-energy β
Risk factors for progressive ischemic stroke A retrospective analysis
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
BACKGROUND: Progressive ischemic stroke has a higher fatality rate and disability rate than common cerebral infarction, so it is very significant to investigate the early predictive factors related to the occurrence of progressive ischemic stroke, the potential pathological mechanism, and the risk factors amenable to early intervention for preventing progressive ischemic stroke and ameliorating its outcome. OBJECTIVE: To analyze the possible risk factors in patients with progressive ischemic stroke, so as to provide a reference for the prevention and treatment of progressive ischemic stroke. DESIGN: A retrospective analysis. SETTING: Department of Neurology, General Hospital of Beijing Coal Mining Group. PARTICIPANTS: A total of 280 patients with progressive ischemic stroke were selected from the Department of Neurology, General Hospital of Beijing Coal Mining Group from March 2002 to June 2006, including 192 males and 88 females, with a mean age of (62±7) years. All conformed to the diagnostic standards for cerebral infarction set by the Fourth National Academic Meeting for Cerebrovascular Disease in 1995 and were confirmed by CT or MRI, were admitted within 24 hours after onset, and had a neurological deficit that progressed gradually or worsened stepwise within 72 hours after onset; aggravation of the neurological deficit was defined as a decrease in the neurological deficit score of more than 2 points. Meanwhile, 200 inpatients with non-progressive ischemic stroke (135 males and 65 females) were selected as the control group. METHODS: After admission, a univariate analysis of variance was conducted on blood pressure, history of diabetes mellitus, fever, leukocytosis, levels of blood lipids, fibrinogen, blood glucose and plasma homocysteine, cerebral arterial stenosis, and CT signs of early infarction, and the significant factors were entered into a multivariate non-conditional logistic regression analysis. MAIN OUTCOME MEASURES
Trevisan, Marcello G; Garcia, Camila M; Schuchardt, Ulf; Poppi, Ronei J
2008-01-15
In this work, the base-catalyzed transesterification of soybean oil with ethanol was monitored on-line using mid-infrared spectroscopy (MIRS), and the yield of fatty acid ethyl esters (biodiesel) was obtained by (1)H NMR spectroscopy. The MIRS monitoring, carried out for 12 min, was performed using a cylindrical internal reflectance cell of PbSe in the range of 3707-814 cm(-1) with eight co-added scans. Two different data treatment strategies were used: in the first, the models were built using the raw data; in the other, evolving factor analysis (EFA) was used to overcome the sensor time delay due to the on-line analysis, producing significantly better results. In addition, models based on partial least squares (PLS) using three batches for calibration and another for validation were compared with models with just one batch for calibration and three for validation. The models were compared with each other using the root mean square error of prediction (RMSEP) and the root mean square prediction difference (RMSPD), obtaining relative errors under 3%.
Selection of sports dance teaching methods and analysis of influencing factors
Institute of Scientific and Technical Information of China (English)
于君
2014-01-01
In sports dance teaching, classroom teaching methods are the means by which teaching effectiveness is improved and students' dancing skills are continuously developed. The selection of teaching methods and the analysis of the factors influencing that selection are an inexhaustible source of momentum for the development of students' dancing skills and dance awareness. Achieving good classroom results in sports dance teaching therefore has positive significance for developing students' interest in learning and for improving their health. On this basis, this paper takes the selection of school sports dance teaching methods as its research object and, in combination with teaching practice, analyzes the methods of sports dance classroom teaching.
Single-cell analysis - Methods and protocols
Directory of Open Access Journals (Sweden)
Carlo Alberto Redi
2013-06-01
Full Text Available This is certainly a timely volume in the Methods in Molecular Biology series: we have already entered the synthetic biology era, and thus we need to be aware of the new methodological advances able to fulfill the new needs of biologists, biotechnologists and nano-biotechnologists. Notably, among these, the possibility to perform single-cell analysis allows researchers to capture single-cell responses....
Text analysis devices, articles of manufacture, and text analysis methods
Turner, Alan E; Hetzler, Elizabeth G; Nakamura, Grant C
2013-05-28
Text analysis devices, articles of manufacture, and text analysis methods are described according to some aspects. In one aspect, a text analysis device includes processing circuitry configured to analyze initial text to generate a measurement basis usable in analysis of subsequent text, wherein the measurement basis comprises a plurality of measurement features from the initial text, a plurality of dimension anchors from the initial text and a plurality of associations of the measurement features with the dimension anchors, and wherein the processing circuitry is configured to access a viewpoint indicative of a perspective of interest of a user with respect to the analysis of the subsequent text, and wherein the processing circuitry is configured to use the viewpoint to generate the measurement basis.
Schonrock-Adema, Johanna; Heijne-Penninga, Marjolein; van Hell, Elisabeth A.; Cohen-Schotanus, Janke
2009-01-01
Background: The validation of educational instruments, in particular the employment of factor analysis, can be improved in many instances. Aims: To demonstrate the superiority of a sophisticated method of factor analysis, implying an integration of recommendations described in the factor analysis
Slope stability analysis using limit equilibrium method in nonlinear criterion.
Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu
2014-01-01
In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its description of rock mass. To overcome its shortcomings, this paper combined the Hoek-Brown criterion and the limit equilibrium method and proposed an equation for calculating the safety factor of a slope with the limit equilibrium method in the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, this paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, which reveals that there is a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact-rock parameter mi. There is a nonlinear relation between the friction angle and all Hoek-Brown parameters. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
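For orientation, the limit-equilibrium safety factor in the simplest Mohr-Coulomb setting (a dry infinite planar slope) is F = (c' + γ z cos²β tanφ')/(γ z sinβ cosβ); the paper's contribution is, in effect, to feed equivalent c and φ values derived from the Hoek-Brown parameters into limit-equilibrium expressions of this family. A generic sketch of the standard textbook formula, not the authors' equation:

```python
from math import cos, sin, tan, radians

def infinite_slope_fs(c, phi_deg, gamma, depth, beta_deg):
    """Factor of safety of a dry infinite slope with Mohr-Coulomb strength."""
    beta, phi = radians(beta_deg), radians(phi_deg)
    resisting = c + gamma * depth * cos(beta) ** 2 * tan(phi)  # shear strength on slip plane
    driving = gamma * depth * sin(beta) * cos(beta)            # shear stress on slip plane
    return resisting / driving

# c' = 10 kPa, phi' = 35 deg, unit weight 18 kN/m3, slip depth 5 m, slope angle 30 deg
F = infinite_slope_fs(10.0, 35.0, 18.0, 5.0, 30.0)
print(round(F, 2))  # 1.47
```

F > 1 indicates resisting forces exceed driving forces; the nonlinear Hoek-Brown dependence enters through how c and φ vary with GSI, σci, mi and D.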
Analysis of variation factors of Hulun Lake area by using the water balance method
Institute of Scientific and Technical Information of China (English)
张璐; 张生; 孙标; 赵胜男; 田野; 赵水霞
2016-01-01
To clarify why the decline of the water level of Hulun Lake has caused the deterioration of the surrounding ecological environment, Hulun Lake was taken as the research object, and Landsat ETM+ and OLI images were used to obtain the changes in the lake area. The period 1960-2013 was divided into four stages, and the water balance method was used to analyze the main factors affecting the water quantity of the lake. The results showed that the influencing factors and their degrees of influence differ between periods: the correlation coefficients R of runoff were 0.42, 0.93, 0.60 and 0.86, the largest of all factors in each period, while the lake area decreased by 213 km2 and the water level dropped by 3.35 m. In summary, the runoff of the inflowing rivers is the main factor controlling the water quantity of Hulun Lake, and the continued shrinkage of its water area is mainly driven by climate change and human activities.
National Research Council Canada - National Science Library
Bing-Yang Hu; Tao Wan; Wen-Zhi Zhang; Jia-Hong Dong
2016-01-01
AIM: To analyze the risk factors for pancreatic fistula after pancreaticoduodenectomy. METHODS: We conducted a retrospective analysis of 539 successive cases of pancreaticoduodenectomy performed at our hospital from March 2012 to October 2015...
Space Debris Reentry Analysis Methods and Tools
Institute of Scientific and Technical Information of China (English)
WU Ziniu; HU Ruifeng; QU Xi; WANG Xiang; WU Zhe
2011-01-01
The reentry of uncontrolled spacecraft may break up into many pieces of debris at altitudes in the range of 75-85 km. The surviving fragments can pose great hazard and risk to the ground and people. In recent years, methods and tools for predicting and analyzing debris reentry and for ground risk assessment have been studied and developed at the National Aeronautics and Space Administration (NASA), the European Space Agency (ESA) and other organizations, including the group of the present authors. This paper briefly reviews the current progress on debris reentry. We outline the Monte Carlo method for uncertainty analysis, breakup prediction, and the parameters affecting the survivability of debris. The existing analysis tools can be classified into two categories, i.e. object-oriented and spacecraft-oriented methods, the latter being more accurate than the former. Past object-oriented tools include objects of only simple shapes. For more realistic simulation, we present here an object-oriented tool, the debris reentry and ablation prediction system (DRAPS) developed by the present authors, which extends the object shapes to 15 types and includes 51 predefined motions and the relevant aerodynamic and aerothermal models. The aerodynamic and aerothermal models in DRAPS are validated using the direct simulation Monte Carlo (DSMC) method.
Optical methods for the analysis of dermatopharmacokinetics
Lademann, Juergen; Weigmann, Hans-Juergen; von Pelchrzim, R.; Sterry, Wolfram
2002-07-01
The method of tape stripping in combination with spectroscopic measurements is a simple and noninvasive method for the analysis of dermatopharmacokinetics of cosmetic products and topically applied drugs. The absorbance at 430 nm was used for the characterization of the amount of corneocytes on the tape strips. It was compared to the increase of weight of the tapes after removing them from the skin surface. The penetration profiles of two UV filter substances used in sunscreens were determined. The combined method of tape stripping and spectroscopic measurements can be also used for the investigation of the dermatopharmacokinetics of topically applied drugs passing through the skin. Differences in the penetration profiles of the steroid compound clobetasol, applied in the same concentration in different formulations on the skin are presented.
Data Analysis Methods for Library Marketing
Minami, Toshiro; Kim, Eunja
Our society is rapidly changing into an information society, in which people's needs and requests for information access vary widely from person to person. A library's mission is to provide its users, or patrons, with the most appropriate information, and to fulfill this role libraries have to know the profiles of their patrons. The aim of library marketing is to develop methods based on library data, such as circulation records, book catalogs, book-usage data, and others. In this paper we first discuss the methodology and importance of library marketing. We then demonstrate its usefulness through examples of analysis methods applied to the circulation records of Kyushu University and Guacheon Library, together with some implications obtained from these analyses. Our research is a first step toward a future in which library marketing is an indispensable tool.
Analysis and estimation of risk management methods
Directory of Open Access Journals (Sweden)
Kankhva Vadim Sergeevich
2016-05-01
Full Text Available At present, risk management is an integral part of state policy in all countries with a developed market economy. Companies dealing in consulting services and the implementation of risk management systems have carved out a niche. Unfortunately, conscious preventive risk management in Russia is still far from being a standardized process in construction companies' activity, which often leads to scandals and disapproval when projects are implemented unprofessionally. The authors present the results of an investigation into the modern understanding of existing methodology classifications and offer their own classification matrix of risk management methods. The matrix is constructed by analyzing each method in terms of its incoming and outgoing transformed information, which may include different elements of the risk control stages. The offered approach thus makes it possible to analyze the capabilities of each method.
An Analysis of the Factors and Methods of E-book Pricing in China
Institute of Scientific and Technical Information of China (English)
刘银娣
2014-01-01
Chaotic pricing in China's e-book market has, to some extent, caused disorder in the e-book sales market. This paper analyzes the current state of e-book prices in China and, on that basis, examines the factors that influence e-book pricing, including readers, the market, copyright, value, and cost, together with their relative importance. It then summarizes the e-book pricing methods used in China, including penetration pricing, differentiated pricing, bundle pricing, and odd-ending pricing.
Information technology portfolio in supply chain management using factor analysis
Directory of Open Access Journals (Sweden)
Ahmad Jaafarnejad
2013-11-01
Full Text Available The adoption of information technology (IT) along with supply chain management (SCM) has become increasingly necessary for most businesses. It enhances supply chain (SC) performance and helps companies achieve organizational competitiveness. IT systems capture and analyze information and enable management to make decisions considering a global scope across the entire SC. This paper reviews the existing literature on IT in SCM and considers pertinent criteria. Using principal component analysis (PCA), a factor analysis (FA) technique, a number of related criteria are divided into smaller groups. Finally, SC managers can develop an IT portfolio in SCM using the mean values of a few extracted components on the relevance-emergency matrix. A numerical example is provided to explain the details of the proposed method.
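The grouping step the abstract describes, criteria whose scores load on the same principal component forming one group, can be sketched with a small PCA. The 6x4 score matrix and the criteria below are fabricated for illustration and are not from the paper.

```python
import numpy as np

# PCA on a small criteria-score matrix: criteria whose scores load on the
# same dominant components can be grouped together. All data are fabricated.

def pca_loadings(X, n_components=2):
    """Return the top eigenvalues and loadings of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)           # criteria covariance matrix
    vals, vecs = np.linalg.eigh(C)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]
    return vals[order][:n_components], vecs[:, order][:, :n_components]

# Rows: rated IT systems; columns: criteria. Criteria 0-1 move together,
# as do criteria 2-3, so two components should dominate.
X = np.array([[5, 5, 1, 2],
              [4, 5, 2, 1],
              [1, 2, 5, 5],
              [2, 1, 4, 5],
              [5, 4, 1, 1],
              [1, 1, 5, 4]], float)
vals, loadings = pca_loadings(X)
explained = vals.sum() / np.cov(X - X.mean(0), rowvar=False).trace()
print(explained > 0.9)  # True: two components capture most of the variance
```

In the paper's workflow, the mean component scores would then place each criterion group on the relevance-emergency matrix.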
Kwon, Young-Hoo; Han, Ki Hoon; Como, Christopher; Lee, Sangwoo; Singhal, Kunal
2013-09-01
The purpose of this study was to assess the validity of the X-factor computation methods and to examine whether direct relationships exist between the X-factor parameters and the clubhead velocity in a group of skilled male golfers (n = 18, handicap = -0.6 +/- 2.1). Five driver trials were captured from each golfer using an optical motion capture system (250 Hz). Two plane-based methods (conventional vs. functional swing plane-based) and one Cardan rotation-based method (relative orientation) were used to compute select X-factor (end of pelvis rotation, top of backswing, ball impact (BI), and maximum), X-factor stretch (stretch and maximum stretch), and X-factor velocity (BI and maximum) parameters. The maximum clubhead velocity was extracted and normalized to golfer's body height to eliminate the effect of body size. A one-way repeated MANOVA revealed that the computation methods generated significantly different X-factor parameter values (p < 0.001). The conventional method provided substantially larger X-factor values than the other methods in the untwisting phase and the meaningfulness of select X-factor parameters generated by this method was deemed questionable. The correlation analysis revealed that the X-factor parameters were not directly related to the maximum clubhead velocity (both unnormalized and normalized).
Itahashi, S.; Yumimoto, K.; Uno, I.; Kim, S.
2012-12-01
Air quality studies based on chemical transport models have provided many important results that advance our knowledge of air pollution phenomena; however, discrepancies between modeling results and observation data remain an important issue to overcome. One such issue is the over-prediction of summertime tropospheric ozone in remote areas of Japan. This problem has been pointed out in model comparison studies at both the regional scale (e.g., MICS-Asia) and the global scale (e.g., TH-FTAP). Several reasons can be listed: (i) the model's reproducibility of the penetration of clean oceanic air masses, (ii) the correct estimation of anthropogenic NOx/VOC emissions over East Asia, and (iii) the chemical reaction scheme used in the model simulation. In this study, we attempt an inverse estimation of some important chemical reactions using a system that combines DDM (decoupled direct method) sensitivity analysis with a modeled Green's function approach. The DDM is an efficient and accurate way of performing sensitivity analysis with respect to model inputs: it calculates sensitivity coefficients representing the responsiveness of atmospheric chemical concentrations to perturbations in a model input or parameter. The inverse solutions with the Green's functions are given by a linear least-squares method but are still robust against nonlinearities. To construct the response matrix (i.e., the Green's functions), we can directly use the results of the DDM sensitivity analysis. The chemical reaction constants that have relatively large uncertainties are determined under the constraint of observed ozone concentration data over remote areas of Japan. Our inverse estimation demonstrated an underestimation of the rate constant of HNO3 production (NO2 + OH + M → HNO3 + M) in the SAPRC99 chemical scheme, and the inverse results indicated a +29.0% increment to this reaction. This estimation shows good agreement when compared
Reliability analysis method for slope stability based on sample weight
Directory of Open Access Journals (Sweden)
Zhi-gang YANG
2009-09-01
Full Text Available The single safety factor criteria for slope stability evaluation, derived from the rigid limit equilibrium method or finite element method (FEM, may not include some important information, especially for steep slopes with complex geological conditions. This paper presents a new reliability method that uses sample weight analysis. Based on the distribution characteristics of random variables, the minimal sample size of every random variable is extracted according to a small sample t-distribution under a certain expected value, and the weight coefficient of each extracted sample is considered to be its contribution to the random variables. Then, the weight coefficients of the random sample combinations are determined using the Bayes formula, and different sample combinations are taken as the input for slope stability analysis. According to one-to-one mapping between the input sample combination and the output safety coefficient, the reliability index of slope stability can be obtained with the multiplication principle. Slope stability analysis of the left bank of the Baihetan Project is used as an example, and the analysis results show that the present method is reasonable and practicable for the reliability analysis of steep slopes with complex geological conditions.
Reliability estimation in a multilevel confirmatory factor analysis framework.
Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J
2014-03-01
Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
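Of the coefficients the article compares, alpha is the simplest to illustrate. A single-level sketch follows (the article's point is precisely that this single-level version can mislead when variance exists at multiple levels); the 5x3 data matrix is made up for illustration.

```python
import numpy as np

# Single-level Cronbach's alpha from a respondents-by-items data matrix,
# a simplified sketch of one reliability coefficient the article compares.
# The data below are fabricated for illustration.

def cronbach_alpha(X):
    """X: (n_respondents, k_items). alpha = k/(k-1) * (1 - sum of item
    variances / variance of the total score)."""
    n, k = X.shape
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

X = np.array([[3, 4, 3],
              [4, 4, 5],
              [2, 3, 2],
              [5, 4, 5],
              [1, 2, 1]], float)
print(round(cronbach_alpha(X), 3))  # 0.936
```

A two-level analysis of the kind the article recommends would instead decompose each item's variance into within- and between-cluster parts before computing level-specific coefficients.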
A Multiscale Factorization Method for Simulating Mesoscopic Systems with Atomic Precision
Mansour, Andrew Abi
2013-01-01
Mesoscopic N-atom systems derive their structural and dynamical properties from processes coupled across multiple scales in space and time. An efficient method for understanding and simulating such systems from the underlying N-atom formulation is presented. The method integrates notions of multiscale analysis, Trotter factorization, and a hypothesis that the momenta conjugate to coarse-grained variables can be treated as a stationary random process. The method is demonstrated for Lactoferrin protein, Nudaurelia Capensis Omega Virus, and Cowpea Chlorotic Mottle Virus to assess its accuracy and scaling with system size.
Analysis of the three-dimensional tongue shape using a three-index factor analysis model
Zheng, Yanli; Hasegawa-Johnson, Mark; Pizza, Shamala
2003-01-01
Three-dimensional tongue shape during vowel production is analyzed using the three-mode PARAFAC (parallel factors) model. Three-dimensional MRI images of five speakers (9 vowels) are analyzed. Sixty-five virtual fleshpoints (13 segments along the rostral-caudal dimension and 5 segments along the right-left direction) are chosen based on the interpolated tongue shape images. Methods used to adjust the alignment of MRI images, to set up the fleshpoints, and to measure the position of the fleshpoints are presented. PARAFAC analysis of this 3D coordinate data results in a stable two-factor solution that explains about 70% of the variance.
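The PARAFAC model used above can be fitted with a minimal alternating-least-squares (ALS) routine. The sketch below runs on a small synthetic tensor; the function names, rank, and iteration count are illustrative choices, not the authors' implementation.

```python
import numpy as np

# Minimal PARAFAC (CP) decomposition via alternating least squares,
# demonstrated on a synthetic rank-2 tensor (all data are synthetic).

def khatri_rao(A, B):
    """Column-wise Khatri-Rao product of A (I x r) and B (J x r)."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def parafac(T, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    T1 = T.reshape(I, -1)                     # mode-1 unfolding
    T2 = T.transpose(1, 0, 2).reshape(J, -1)  # mode-2 unfolding
    T3 = T.transpose(2, 0, 1).reshape(K, -1)  # mode-3 unfolding
    for _ in range(n_iter):                   # ALS updates, one mode at a time
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Build an exact rank-2 tensor and recover it.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (5, 4, 3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = parafac(T, rank=2)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.allclose(That, T, atol=1e-6))  # True
```

In the article's setting, the three modes would correspond to speakers, vowels, and fleshpoint coordinates, and the two recovered factors to the stable tongue-shape components.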
A high-efficiency aerothermoelastic analysis method
Wan, ZhiQiang; Wang, YaoKun; Liu, YunZhen; Yang, Chao
2014-06-01
In this paper, a high-efficiency aerothermoelastic analysis method based on unified hypersonic lifting surface theory is established. The method adopts a two-way coupling form that couples the structure, aerodynamic force, aerodynamic heating, and heat conduction. The aerodynamic force is first calculated based on unified hypersonic lifting surface theory, and the Eckert reference temperature method is then used to solve the temperature field, where the transient heat conduction is solved using Fourier's law and the modal method is used for the aeroelastic correction. Finally, flutter is analyzed based on the p-k method. The aerothermoelastic behavior of a typical hypersonic low-aspect-ratio wing is then analyzed, and the results indicate the following: (1) the combined effects of the aerodynamic load and thermal load both deform the wing, and these effects increase with the flexibility, size, and flight time of the hypersonic aircraft; (2) the effect of heat accumulation should be noted, and the trajectory parameters should therefore be considered in the design of hypersonic flight vehicles to avoid hazardous conditions, such as flutter.
FACTOR ANALYSIS OF THE ELKINS HYPNOTIZABILITY SCALE
Elkins, Gary; Johnson, Aimee K.; Johnson, Alisa J.; Sliwinski, Jim
2015-01-01
Assessment of hypnotizability can provide important information for hypnosis research and practice. The Elkins Hypnotizability Scale (EHS) consists of 12 items and was developed to provide a time-efficient measure for use in both clinical and laboratory settings. The EHS has been shown to be a reliable measure with support for convergent validity with the Stanford Hypnotic Susceptibility Scale, Form C (r = .821, p < .001). The current study examined the factor structure of the EHS, which was administered to 252 adults (51.3% male; 48.7% female). Average time of administration was 25.8 minutes. Four factors selected on the basis of the best theoretical fit accounted for 63.37% of the variance. The results of this study provide an initial factor structure for the EHS. PMID:25978085
ANALYSIS OF EXTERNAL FACTORS AFFECTING THE PRICING
Directory of Open Access Journals (Sweden)
Irina A. Kiseleva
2013-01-01
Full Text Available The external factors influencing the formation of tariffs for commercial services are considered in this article. The external environment is known to be very diverse and changeable. Pricing has now become one of the key processes in the strategic development of a company. Pricing in the service sector, in turn, is highly susceptible to changes in the external environment, whose components directly or indirectly affect the market for services and alter its established economic processes. As a rule, firms providing services cannot influence changes in external factors. However, the service market is very flexible, which enables businesses to reshape their pricing strategy and adapt it to the new environment.
Institute of Scientific and Technical Information of China (English)
孔颖
2011-01-01
Brutal enforcement and weak enforcement are two extreme law-enforcement styles of the public security organs. Public security administrative enforcement is carried out through the individual acts of each enforcer, so analyzing the enforcers themselves makes it easier to find the crux of extreme enforcement. The main causes of extreme enforcement are misunderstandings in perception, a flawed grasp of the evaluation system, and a lack of enforcement capability. Accordingly, a law-based conception of enforcement can be cultivated through theoretical study and innovation in policing theory, and police enforcement methods can be improved by combining soft measures with hard measures. The soft measures include professional skills training, improvement of the enforcement environment, and psychological counseling; the hard measures include an accountability system, restraint and supervision mechanisms, and a performance evaluation system.
Factor Analysis for Spectral Reconnaissance and Situational Understanding
2016-07-11
Final report. The Army has a critical need for enhancing situational understanding for dismounted soldiers and rapidly deployed tactical teams. The work approaches NP-hard design problems by associating them with corresponding estimation problems.
A Factor Analysis of the BSRI and the PAQ.
Edwards, Teresa A.; And Others
Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhenyue [Zhejiang Univ., Hangzhou (People's Republic of China); Zha, Hongyuan [Pennsylvania State Univ., University Park, PA (United States); Simon, Horst [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2006-07-31
In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.
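A crude stand-in for the idea, factored low-rank approximation with sparsity imposed on the factors, can be sketched with a truncated SVD followed by hard-thresholding. This toy version replaces the paper's penalized-optimization and Newton-like machinery entirely; the matrix, rank, and threshold are arbitrary illustrative choices.

```python
import numpy as np

# Naive sparse low-rank approximation: truncated SVD, then hard-threshold
# small factor entries. A toy stand-in for the paper's penalized approach;
# all numbers are illustrative.

def sparse_low_rank(M, rank, threshold):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    A = U[:, :rank] * s[:rank]        # left factor, scaled by singular values
    B = Vt[:rank, :]                  # right factor
    A[np.abs(A) < threshold] = 0.0    # sparsify the factors
    B[np.abs(B) < threshold] = 0.0
    return A, B

M = np.array([[4.0, 0.1, 0.0],
              [0.2, 3.0, 0.0],
              [0.0, 0.0, 0.05]])
A, B = sparse_low_rank(M, rank=2, threshold=0.05)
err = np.linalg.norm(M - A @ B) / np.linalg.norm(M)
print(err < 0.1)  # True: the rank-2 sparse factors capture most of M
```

The penalized formulation in the paper controls the factor sparsity continuously through user-specified parameters rather than by an after-the-fact threshold.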
Factor Analysis of People Rather than Variables: Q and Other Two-Mode Factor Analytic Models.
Frederick, Brigitte N.
Factor analysis attempts to study how different objects group together to form factors with the purposes of: (1) reducing the number of factorable entities (e.g., variables) with which the researcher needs to deal; (2) searching data for qualitative and quantitative differences; and (3) testing hypotheses (R. Gorsuch, 1983). While most factor…
Chiral analysis of baryon form factors
Energy Technology Data Exchange (ETDEWEB)
Gail, T.A.
2007-11-08
This work presents an extensive theoretical investigation of the structure of the nucleon within the standard model of elementary particle physics. In particular, the long-range contributions to a number of form factors parametrizing the interactions of the nucleon with an electromagnetic probe are calculated. The theoretical framework for these calculations is chiral perturbation theory, the exact low-energy limit of Quantum Chromodynamics, which describes such long-range contributions in terms of a pion cloud. In this theory, a nonrelativistic leading-one-loop-order calculation of the form factors parametrizing the vector transition of a nucleon to its lowest-lying resonance, the Δ, a covariant calculation of the isovector and isoscalar vector form factors of the nucleon at next-to-leading one-loop order, and a covariant calculation of the isoscalar and isovector generalized vector form factors of the nucleon at leading one-loop order are performed. In order to perform consistent loop calculations in the covariant formulation of chiral perturbation theory, an appropriate renormalization scheme is defined in this work. All theoretical predictions are compared to phenomenology and to results from lattice QCD simulations. These comparisons allow a determination of the low-energy constants of the theory. Furthermore, the possibility of chiral extrapolation, i.e. the extrapolation of lattice data from simulations at large pion masses down to the small physical pion mass, is studied in detail. Statistical as well as systematic uncertainties are estimated for all results throughout this work. (orig.)
Numerical analysis method for linear induction machines.
Elliott, D. G.
1972-01-01
A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.
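The mesh-circuit idea described above, voltage coefficients for unit currents combined with mesh resistances to give simultaneous equations for the unknown currents, reduces to one linear solve. The sketch below is a hedged illustration: the coupling values, resistances, and voltages are invented, not taken from the paper.

```python
import numpy as np

# Sketch of the mesh-circuit formulation: each mesh point has a resistance,
# unit currents induce voltages at other points via a coupling matrix, and
# the unknown currents solve a linear system. All values are invented.

n = 4
R = np.diag([1.0, 1.2, 0.9, 1.1])           # mesh resistances (ohms)
Z = np.array([[0.00, 0.30, 0.10, 0.05],      # induced-voltage coefficients:
              [0.30, 0.00, 0.30, 0.10],      # Z[i, j] = voltage at point i
              [0.10, 0.30, 0.00, 0.30],      # per unit current at point j
              [0.05, 0.10, 0.30, 0.00]])
V = np.array([1.0, 0.5, 0.5, 1.0])           # specified phase voltages

I = np.linalg.solve(R + Z, V)                # solve (R + Z) I = V
print(np.allclose((R + Z) @ I, V))           # True
```

In the actual method the coefficients would be recomputed point by point to reflect the conductor's velocity and conductivity variations.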
Thermal Analysis Methods for Aerobraking Heating
Amundsen, Ruth M.; Gasbarre, Joseph F.; Dec, John A.
2005-01-01
As NASA begins exploration of other planets, a method of non-propulsively slowing vehicles at the planet, aerobraking, may become a valuable technique for managing vehicle design mass and propellant. An example of this is Mars Reconnaissance Orbiter (MRO), which will launch in late 2005 and reach Mars in March of 2006. In order to save propellant, MRO will use aerobraking to modify the initial orbit at Mars. The spacecraft will dip into the atmosphere briefly on each orbit, and during the drag pass, the atmospheric drag on the spacecraft will slow it, thus lowering the orbit apoapsis. The largest area on the spacecraft, and that most affected by the heat generated during the aerobraking process, is the solar arrays. A thermal analysis of the solar arrays was conducted at NASA Langley, to simulate their performance throughout the entire roughly 6-month period of aerobraking. Several interesting methods were used to make this analysis more rapid and robust. Two separate models were built for this analysis, one in Thermal Desktop for radiation and orbital heating analysis, and one in MSC.Patran for thermal analysis. The results from the radiation model were mapped in an automated fashion to the Patran thermal model that was used to analyze the thermal behavior during the drag pass. A high degree of automation in file manipulation as well as other methods for reducing run time were employed, since toward the end of the aerobraking period the orbit period is short, and in order to support flight operations the runs must be computed rapidly. All heating within the Patran Thermal model was combined in one section of logic, such that data mapped from the radiation model and aeroheating model, as well as skin temperature effects on the aeroheating and surface radiation, could be incorporated easily. This approach calculates the aeroheating at any given node, based on its position and temperature as well as the density and velocity at that trajectory point. Run times on
FUZZY METHOD FOR FAILURE CRITICALITY ANALYSIS
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The greatest benefit is realized from failure mode, effects and criticality analysis (FMECA) when it is done early in the design phase and tracks product changes as they evolve; design changes can then be made more economically than if the problems are discovered after the design has been completed. However, when the discovered design flaws must be prioritized for corrective action, precise information on their probability of occurrence, the effect of the failure, and their detectability is often not available. To solve this problem, this paper describes a new method, based on fuzzy sets, for prioritizing failures for corrective action in an FMECA. Its successful application to a container crane shows that the proposed method is both reasonable and practical.
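A toy version of fuzzy prioritization, in the spirit of the abstract but not the paper's actual formulation, represents occurrence, severity, and detectability as triangular fuzzy numbers, multiplies them vertex-wise, and ranks failure modes by the centroid. The failure modes and ratings are invented.

```python
# Toy fuzzy risk-priority sketch: occurrence, severity, and detectability are
# triangular fuzzy numbers (low, mode, high); their product is approximated
# vertex-wise and defuzzified by the centroid. All data are invented and this
# is not the paper's exact formulation.

def tfn_mul(a, b):
    """Vertex-wise product of two triangular fuzzy numbers."""
    return tuple(x * y for x, y in zip(a, b))

def centroid(t):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(t) / 3.0

def fuzzy_rpn(occurrence, severity, detectability):
    return centroid(tfn_mul(tfn_mul(occurrence, severity), detectability))

modes = {
    "hoist brake slip": ((2, 3, 4), (7, 8, 9), (3, 4, 5)),
    "cable fraying":    ((4, 5, 6), (8, 9, 10), (2, 3, 4)),
    "sensor drift":     ((5, 6, 7), (2, 3, 4), (5, 6, 7)),
}
ranked = sorted(modes, key=lambda m: fuzzy_rpn(*modes[m]), reverse=True)
print(ranked[0])  # cable fraying
```

The fuzzy representation lets imprecise ratings ("roughly 5, certainly between 4 and 6") enter the prioritization directly instead of being forced to a single crisp score.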
Selective spectroscopic methods for water analysis
Energy Technology Data Exchange (ETDEWEB)
Vaidya, Bikas [Iowa State Univ., Ames, IA (United States)
1997-06-24
This dissertation explores in large part the development of a few types of spectroscopic methods in the analysis of water. Methods for the determination of some of the most important properties of water like pH, metal ion content, and chemical oxygen demand are investigated in detail. This report contains a general introduction to the subject and the conclusions. Four chapters and an appendix have been processed separately. They are: chromogenic and fluorogenic crown ether compounds for the selective extraction and determination of Hg(II); selective determination of cadmium in water using a chromogenic crown ether in a mixed micellar solution; reduction of chloride interference in chemical oxygen demand determination without using mercury salts; structural orientation patterns for a series of anthraquinone sulfonates adsorbed at an aminophenol thiolate monolayer chemisorbed at gold; and the role of chemically modified surfaces in the construction of miniaturized analytical instrumentation.
Analytical methods for arsenic speciation analysis
Directory of Open Access Journals (Sweden)
Rajaković Ljubinka V.
2013-01-01
Full Text Available Arsenic exists in the form of various chemical species that differ in their physicochemical behavior, toxicity, bioavailability and biotransformation. The determination of arsenic species is an important issue for environmental, clinical and food chemistry; however, differentiating these species is a quite complex analytical task. Numerous speciation procedures have been studied, including electrochemical, chromatographic, spectrometric and hyphenated techniques. This review covers the relevant research in the field of arsenic speciation analysis, with novel applications and significant advances. The stability of arsenic species and each of the analytical steps (sample collection, storage, preservation and extraction) of the arsenic speciation methods were particularly evaluated. The analytical validation and performance of these methods are also reviewed.
Analytic standard errors for exploratory process factor analysis.
Zhang, Guangjian; Browne, Michael W; Ong, Anthony D; Chow, Sy Miin
2014-07-01
Exploratory process factor analysis (EPFA) is a data-driven latent variable model for multivariate time series. This article presents analytic standard errors for EPFA. Unlike standard errors for exploratory factor analysis with independent data, the analytic standard errors for EPFA take into account the time dependency in time series data. In addition, factor rotation is treated as the imposition of equality constraints on model parameters. Properties of the analytic standard errors are demonstrated using empirical and simulated data.
A replication of a factor analysis of motivations for trapping
Schroeder, Susan; Fulton, David C.
2015-01-01
Using a 2013 sample of Minnesota trappers, we employed confirmatory factor analysis to replicate an exploratory factor analysis of trapping motivations conducted by Daigle, Muth, Zwick, and Glass (1998). We employed the same 25 items used by Daigle et al. and tested the same five-factor structure using a recent sample of Minnesota trappers. We also compared motivations in our sample to those reported by Daigle et al.
Menstrual Factors,Reproductive Factors and Lung Cancer Risk:A Meta-analysis
Institute of Scientific and Technical Information of China (English)
Yue ZHANG; Zhihua YIN; Li SHEN; Yan WAN; Baosen ZHOU
2012-01-01
Background and objective: Epidemiological studies have suggested that menstrual and reproductive factors may influence lung cancer risk, but the results are controversial. We therefore carried out a meta-analysis to examine the associations of lung cancer in women with menstrual and reproductive factors. Methods: Relevant studies were searched from the PubMed database, CNKI, WANFANG DATA and VIP INFORMATION up to January 2012, with no language restrictions. References listed in the selected papers were also reviewed. We included studies that reported estimates of relative risks (RRs) with 95% confidence intervals (CIs) for the association between menstrual and reproductive factors and lung cancer risk. The pooled RRs were calculated after the heterogeneity test with the software Stata 11, and publication bias and sensitivity were evaluated at the same time. Results: Twenty-five articles, representing 24 independent studies, were included in this meta-analysis. Older age at menarche in North American women (RR = 0.83; 95% CI: 0.73-0.94) was associated with a significantly decreased risk of lung cancer. A longer menstrual cycle was also associated with decreased lung cancer risk (RR = 0.72; 95% CI: 0.57-0.90). Other exposures were not significantly associated. Conclusions: Our analysis provides evidence for the hypothesis that female sex hormones influence the risk of lung cancer in women, yet additional studies are warranted to extend this finding and to clarify the underlying mechanisms.
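The pooled RRs reported above come from inverse-variance weighting of per-study log relative risks. A fixed-effect sketch follows (the article also handled heterogeneity, which this omits); the three study results below are fabricated for illustration.

```python
import math

# Fixed-effect inverse-variance pooling of relative risks, the standard
# building block behind pooled RRs like those reported above. The study
# results below are fabricated for illustration.

def pool_rr(studies):
    """Each entry is (RR, lower 95% CI, upper 95% CI)."""
    num = den = 0.0
    for rr, lo, hi in studies:
        log_rr = math.log(rr)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # CI width -> SE
        w = 1.0 / se ** 2                                # inverse variance
        num += w * log_rr
        den += w
    return math.exp(num / den)

studies = [(0.80, 0.65, 0.98), (0.90, 0.70, 1.15), (0.75, 0.55, 1.02)]
pooled = pool_rr(studies)
print(round(pooled, 2))  # 0.82
```

A random-effects version would additionally estimate between-study variance (e.g., DerSimonian-Laird) and widen the weights accordingly.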
Analysis of random factors of the self-education process
Directory of Open Access Journals (Sweden)
A. A. Solodov
2016-01-01
Full Text Available The aim of the study is the statistical description of the random factors of the self-education process, namely the stage of continuous education in which there is no purposeful influence on the student by an educational organization, and the development of algorithms for estimating these factors. It is assumed that the motivations for self-education comprise intrinsic factors, which characterize the individual learner, and external factors, associated with the changing environment and emerging challenges. The phenomena available for analysis of the self-learning process (the observed data) are events relevant to this process, modeled as points on the time axis whose number and positions are assumed to be random. Each point can be associated with an unknown and unobserved factor (parameter), random or non-random, that affects the intensity with which the points appear. The purpose is to describe the observed and unobserved data and to develop algorithms for optimal estimation; such estimates can then be used to characterize an individual's self-study process or to compare different students. To analyze the statistical characteristics of the self-education process, the mathematical apparatus of the theory of random point processes is applied, which makes it possible to determine the key statistical characteristics of the unknown random factors of the process. The work constitutes a logically complete model that includes the following components: a basic statistical model of the appearance of events in the self-education process as a Poisson process, whose only characteristic is the intensity of event occurrence; methods for testing the hypothesis that the observed events follow a Poisson distribution; and a generalization of the basic model to the case in which the intensity function depends on time and on an unknown factor (random or not). Such factors are interpreted as
Analysis of the Wing Tsun Punching Methods
Directory of Open Access Journals (Sweden)
Jeff Webb
2012-07-01
Full Text Available The three punching techniques of Wing Tsun, while few in number, represent an effective approach to striking with the closed fist. At first glance, the rather short stroke of each punch would seem disproportionate to the amount of power it generates. Therefore, this article will discuss the structure and body mechanics of each punch, in addition to the various training methods employed for developing power. Two of the Wing Tsun punches, namely the lifting punch and the hooking punch, are often confused with similar punches found in Western boxing. The key differences between the Wing Tsun and boxing punches, both in form and function, will be discussed. Finally, the strategy for applying the Wing Tsun punches will serve as the greatest factor in differentiating them from the punches of other martial arts styles.
Correction factor for real-time determination of wood dust mass concentration by photometric method
Directory of Open Access Journals (Sweden)
Ankica Čavlović
2009-03-01
Full Text Available Samples of wood dust were collected in the working environment of wood machining processes for the purpose of determining correction factors for measuring mass concentration of wood dust by the photometric method. According to NIOSH Method 0600 and the NIOSH Manual of Analytical Methods for photometric measurement, a correction factor must be determined before measuring the mass concentration of different types of dust. The correction factor is defined as the ratio of mass concentration obtained by the gravimetric method to mass concentration obtained by the photometric method. The correction factor should be determined because of the influence of particle size distribution, density, particle shape and refractive index on values obtained by the photometric method. The aim of the research was to investigate the possibility of using the photometric method for the determination of mass concentration of the inhalable fraction of airborne wood dust. Sampling was conducted in several woodworking plants during the machining of wet and dry beech-wood, wet and dry oak-wood, wet fir-wood and particleboard. There is a significant correlation between the results obtained by the photometric method and values obtained by the gravimetric method (R² = 0.88), and this is the basis for using the photometric method in determining mass concentration of airborne wood dust. According to the results of this research, correction factors must be determined and used for measuring mass concentration of inhalable wood dust during the machining of different wood species and wood with different moisture content. The best correspondence between photometric and gravimetric results was obtained for the samples collected during machining of wet fir-wood (k=1). The largest correction factor should be used in determining workers' exposure to wood dust during machining of dry oak-wood (k=4.4) and particleboard (k=4.5). Only the results of 8-hour measurements of mass concentration by gravimetric methods can
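The correction factor defined in the abstract (gravimetric concentration divided by photometric concentration) can be illustrated with a short sketch; all measurement values below are hypothetical, not the paper's data:

```python
# Hypothetical paired measurements (mg/m^3) from one machining operation:
# gravimetric reference values and simultaneous photometric readings.
gravimetric = [2.8, 3.1, 2.5, 3.4, 2.9]
photometric = [0.65, 0.70, 0.55, 0.80, 0.66]

# The correction factor k is the ratio of mass concentration by the
# gravimetric method to that by the photometric method, here estimated
# as the ratio of means over the paired samples.
k = (sum(gravimetric) / len(gravimetric)) / (sum(photometric) / len(photometric))

# Applying k to correct a new real-time photometric reading:
reading = 0.72
corrected = k * reading
print(round(k, 2), round(corrected, 2))
```

In practice k would be estimated per wood species and moisture condition, as the abstract's range (k = 1 for wet fir to k = 4.5 for particleboard) shows.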
Analysis of factors affecting fattening of chickens
OBERMAJEROVÁ, Barbora
2013-01-01
Poultry meat belongs to the basic assortment of human nutrition. The meat of intensively fattened poultry is a source of easily digestible proteins, lipids, mineral substances and vitamins. The aim of this bachelor's thesis was to produce a literature review focused on the intensity of growth, carcass yield, and the quality and composition of broiler chicken meat. The thesis then describes the internal and external factors that affect them, i.e. genetic foundation, hybrid combination, s...
Analysis of significant factors for dengue fever incidence prediction.
Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak
2016-04-16
Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), the Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting
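The MAPE criterion used above to compare forecasting models can be sketched in a few lines; the monthly case counts and the two competing forecasts are invented for illustration, standing in for a climate-only model and one augmented with mosquito-infection and season terms:

```python
# Hypothetical monthly dengue case counts and two competing forecasts.
actual       = [120, 95, 80, 140, 210, 260, 310, 280, 190, 150, 110, 100]
climate_only = [100, 90, 95, 120, 180, 230, 280, 300, 220, 160, 130, 90]
with_vector  = [118, 97, 83, 135, 205, 255, 305, 285, 195, 148, 112, 98]

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(y, yhat)) / len(y)

m1, m2 = mape(actual, climate_only), mape(actual, with_vector)
print(round(m1, 1), round(m2, 1))
assert m2 < m1  # the augmented model achieves the lower MAPE in this toy example
```

AIC and BIC comparisons work analogously but additionally penalize the number of fitted parameters, so a richer model must earn its extra terms.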
Investigating product development strategy in beverage industry using factor analysis
Directory of Open Access Journals (Sweden)
Naser Azad
2013-03-01
Full Text Available Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. It is therefore important to extract the variables associated with product development in order to improve the performance measurement of firms. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model investigates 36 factors and, using factor analysis, extracts the six most influential factors: information sharing, intelligence information, exposure strategy, differentiation, research and development strategy and market survey. The first strategy, partnership, includes the sub-factors of product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, a customer-oriented approach to innovation and transmission of product development change, of which inter-agency coordination is considered the most important. Internal strengths are the most influential factors for the second strategy, intelligence information. The third factor, introducing strategy, includes four sub-criteria, of which consumer buying behavior is the most influential. Differentiation is the next important factor, with five components, of which knowledge and expertise in product innovation is the most important. Research and development strategy has four sub-criteria, of which reducing the product development cycle is the most influential; finally, market survey strategy is the last important factor, with three sub-factors, of which finding new markets plays the most important role.
The Financial Impact of Risk Factors Affecting Project Cost Contingency: Evidential Reasoning Method
Directory of Open Access Journals (Sweden)
Emmanuel Abeere-Inga
2013-07-01
Full Text Available The process of cost modeling using risk analysis for construction projects is very crucial for the achievement of project success. The purpose of this paper is to present an analysis of the financial impact of risk factors affecting key construction work sections; using a systematic risk methodology based on empirical judgment. The failure mode effect analysis (FMEA and the evidential reasoning methods are presented as qualitative and quantitative risk tools respectively. Data analysis from structured questionnaires revealed that four work sections are prone to high scope changes contemporaneous with seven risk factors. Contrary to the usual 10% contingency estimate allowed for construction projects in Ghana, an approximate overall physical contingency range of between 13.36% and 17.88% was determined using evidential reasoning methods. The likely impact of the integrated work sections and risk factors provide a clue to estimators on how to estimate and account for project cost contingency. The research concludes by recommending a framework for improving the estimation process of cost contingency through the integration of efficient risk management strategies, cost estimation and design management process.
Analysis of methods. [information systems evolution environment
Mayer, Richard J. (Editor); Ackley, Keith A.; Wells, M. Sue; Mayer, Paula S. D.; Blinn, Thomas M.; Decker, Louis P.; Toland, Joel A.; Crump, J. Wesley; Menzel, Christopher P.; Bodenmiller, Charles A.
1991-01-01
Information is one of an organization's most important assets. For this reason the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that an integral part of any information system environment would be multiple modeling methods to support the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the process of development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The purpose for the analysis of the modeling methods included in this document is to examine these methods with the goal being to include them in an integrated development support environment. To accomplish this and to develop a method for allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity
Confirmatory Factor Analysis of the ISB - Burnout Syndrome Inventory
Directory of Open Access Journals (Sweden)
Ana Maria T. Benevides-Pereira
2017-05-01
Full Text Available Aim: Burnout is a dysfunctional reaction to chronic occupational stress. The present study analyses the psychometric qualities of the Burnout Syndrome Inventory (ISB) through Confirmatory Factor Analysis (CFA). Method: An empirical study in a multi-centre, multi-occupational sample (n = 701) using the ISB. Part I assesses antecedent factors: Positive Organizational Conditions (PC) and Negative Organizational Conditions (NC). Part II assesses the syndrome: Emotional Exhaustion (EE), Dehumanization (DE), Emotional Distancing (ED) and Personal Accomplishment (PA). Results: The highest means occurred in the positive scales PC (M = 23.29, SD = 5.89) and PA (M = 14.84, SD = 4.71). Negative conditions showed the greatest variability (SD = 6.03). Reliability indexes were reasonable, with the lowest at .77 for DE and the highest at .91 for PA. The CFA revealed RMSEA = .057 and CFI = .90, with all scale regressions showing significant values (β = .73 to β = .92). Conclusion: The ISB proved a plausible instrument to evaluate burnout. The two sections maintained the initial model and confirmed the theoretical presupposition. The instrument provides a more comprehensive picture of the labour context, and either part may be used separately according to the needs and aims of the assessor.
A concise method for mine soils analysis
Energy Technology Data Exchange (ETDEWEB)
Winkler, S.; Wildeman, T.; Robinson, R.; Herron, J.
1999-07-01
A large number of abandoned hard rock mines exist in Colorado and other mountain west states, many on public property. Public pressure and resulting policy changes have become a driving force in the reclamation of these sites. Two of the key reclamation issues for these sites are the occurrence of acid forming materials (AFMs) in mine soils, and acid mine drainage (AMD) issuing from mine adits. An AMD treatment system design project for the Forest Queen mine in Colorado's San Juan mountains raised the need for a simple, usable method for analysis of mine land soils, both for suitability as a construction material, and to determine the AFM content and potential for acid release. The authors have developed a simple, stepwise, go/no-go test for the analysis of mine soils. Samples were collected from a variety of sites in the Silverton, CO area, and subjected to three tiers of tests including: paste pH, Eh, and a 10% HCl fizz test; then total digestion in HNO{sub 3}/HCl, neutralization potential, exposure to meteoric water, and the toxicity characteristic leaching procedure (TCLP). All elemental analyses were performed with an inductively-coupled plasma (ICP) spectrometer. Elimination of samples via the first two testing tiers left two remaining samples, which were subsequently subjected to column and sequential batch tests, with further elemental analysis by ICP. Based on these tests, one sample was chosen as suitable as a construction material for the Forest Queen treatment system basins. Further simplification, and testing on two pairs of independent soil samples, resulted in a final analytical method suitable for general use.
Generalized Analysis of a Distribution Separation Method
Directory of Open Access Journals (Sweden)
Peng Zhang
2016-04-01
Full Text Available Separating two probability distributions from a mixture model that is made up of the combinations of the two is essential to a wide range of applications. For example, in information retrieval (IR, there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF, where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM’s theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence, the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence. Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF approach. We prove that MMF also complies with the linear combination assumption, and then, DSM’s linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results demonstrate the advantages of our DSM approach.
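The linear-combination assumption behind DSM can be illustrated in a minimal sketch with a known mixing weight: if the mixture is p_mix = λ·p_irr + (1−λ)·p_rel, the relevance distribution follows by linear separation. The distributions and λ below are hypothetical, and this omits DSM's actual estimation of the mixing weight:

```python
# Minimal sketch of the linear-combination separation idea (assumed mixing
# weight lam is known): if p_mix = lam * p_irr + (1 - lam) * p_rel, then
# p_rel = (p_mix - lam * p_irr) / (1 - lam).
lam = 0.4
p_rel_true = [0.5, 0.3, 0.1, 0.1]   # hypothetical relevance distribution
p_irr      = [0.1, 0.2, 0.3, 0.4]   # seed irrelevance distribution

# Observed mixture distribution
p_mix = [lam * i + (1 - lam) * r for i, r in zip(p_irr, p_rel_true)]

# Recover the relevance distribution by linear separation
p_rel = [(m - lam * i) / (1 - lam) for m, i in zip(p_mix, p_irr)]
print([round(p, 3) for p in p_rel])
```

With the true λ the separation is exact; the article's analysis concerns how well this works when λ and the component distributions must be estimated, measured via Pearson correlation and the KL-, symmetrized KL- and JS-divergences.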
Housing Price Forecastability: A Factor Analysis
DEFF Research Database (Denmark)
Bork, Lasse; Møller, Stig Vinther
2016-01-01
We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future...
Housing price forecastability: A factor analysis
DEFF Research Database (Denmark)
Møller, Stig Vinther; Bork, Lasse
2017-01-01
We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future...
Multivariate factor analysis of Girgentana goat milk composition
Directory of Open Access Journals (Sweden)
Pietro Giaccone
2010-01-01
Full Text Available The interpretation of the several variables that contribute to defining milk quality is difficult due to the high degree of correlation among them. In this case, one of the best methods of statistical processing is factor analysis, which belongs to the multivariate group; this particular statistical approach was employed for our study. A total of 1485 individual goat milk samples from 117 Girgentana goats were collected fortnightly from January to July, and analysed for physical and chemical composition, and clotting properties. Milk pH and titratable acidity were within the normal range for fresh goat milk. Morning milk yield was 704 ± 323 g, with fat and protein percentages of 3.93 ± 1.23% and 3.48 ± 0.38%, respectively. The milk urea content was 43.70 ± 8.28 mg/dl. The clotting ability of Girgentana milk was quite good, with a renneting time of 16.96 ± 3.08 minutes, a rate of curd formation of 2.01 ± 1.63 minutes and a curd firmness of 25.08 ± 7.67 millimetres. Factor analysis was performed by applying orthogonal axis rotation (rotation type VARIMAX); the analysis grouped the milk components into three latent or common factors. The first, which explained 51.2% of the total covariance, was defined as “slow milks”, because it was linked to r and pH. The second latent factor, which explained 36.2% of the total covariance, was defined as “milk yield”, because it is positively correlated with the morning milk yield and the urea content, and negatively correlated with the fat percentage. The third latent factor, which explained 12.6% of the total covariance, was defined as “curd firmness”, because it is linked to protein percentage, a30 and titratable acidity. With the aim of evaluating the influence of environmental effects (stage of kidding, parity and type of kidding), factor scores were analysed with the mixed linear model. Results showed significant effects of the season of
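As a generic illustration of the extraction step in such an analysis (not a reproduction of the paper's VARIMAX-rotated solution), the sketch below simulates six correlated variables driven by two latent factors and recovers principal-factor loadings from the correlation matrix; all loadings and sample sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate milk-like data with two latent factors (hypothetical loadings):
# variables 0-2 load on factor 1, variables 3-5 on factor 2.
n = 500
f = rng.normal(size=(n, 2))
loadings_true = np.array([[0.9, 0.0], [0.8, 0.0], [0.7, 0.0],
                          [0.0, 0.9], [0.0, 0.8], [0.0, 0.7]])
x = f @ loadings_true.T + 0.4 * rng.normal(size=(n, 6))

# Principal-factor extraction from the correlation matrix:
# loadings = eigenvectors scaled by sqrt(eigenvalues).
r = np.corrcoef(x, rowvar=False)
vals, vecs = np.linalg.eigh(r)
order = np.argsort(vals)[::-1]
k = 2
load = vecs[:, order[:k]] * np.sqrt(vals[order[:k]])

# Share of total variance explained by the two retained factors,
# analogous to the percentages of covariance reported in the abstract.
explained = vals[order[:k]].sum() / vals.sum()
print(round(explained, 2))
```

A VARIMAX rotation, as used in the paper, would then rotate `load` to sharpen the simple structure without changing the explained variance.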
Water Hammer Analysis by Characteristic Method
Directory of Open Access Journals (Sweden)
A. R. Lohrasbi
2008-01-01
Full Text Available Rapid changes in the velocity of fluid in closed conduits generate large pressure surges, which are transmitted through the system at the speed of sound. When the fluid medium is a liquid, these pressure surges and related phenomena are described as water hammer. Water hammer is caused by normal operation of the system, such as valve opening or closure and pump starts and stoppages, and by abnormal conditions such as power failure. Problem statement: Water hammer causes additional pressure in water networks, which may damage pipes and connections. The likely effects of water hammer must be taken into account in the structural design of pipelines and in the design of operating procedures for pumps, valves, etc. Approach: The physical phenomena of water hammer and the mathematical model which provides the basis for design computations are described. Most water hammer analysis involves computer solution by the method of characteristics. In this study water hammer is modelled with this method, and the effect of valve opening and closure is examined with a program written for this purpose and with a numerical example. Results: The more rapid the closure of the valve, the more rapid the change in momentum and hence the greater the additional pressure developed. Conclusions/Recommendations: To prevent water hammer damage, it is recommended that valves be opened or closed slowly. Moreover, using the method of characteristics, all pipe networks can be modelled and the effects of water hammer observed.
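Before a full method-of-characteristics model, the worst-case surge for instantaneous valve closure is commonly estimated with the Joukowsky relation Δp = ρ·a·Δv; the density, wave speed and velocity change below are illustrative values, not from the paper:

```python
# Joukowsky estimate of the water-hammer surge for instantaneous valve
# closure: dp = rho * a * dv, where a is the pressure-wave speed.
rho = 1000.0   # water density, kg/m^3
a = 1200.0     # wave speed in the pipe, m/s (depends on pipe material and wall)
dv = 2.0       # change in flow velocity on closure, m/s

dp = rho * a * dv               # pressure rise, Pa
head_rise = dp / (rho * 9.81)   # equivalent head of water, m
print(dp, round(head_rise, 1))
```

The large head rise explains the recommendation above: closing the valve over a time longer than the wave round-trip time 2L/a (L being pipe length) keeps the surge well below this instantaneous-closure bound.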
Spatial methods in areal administrative data analysis
Directory of Open Access Journals (Sweden)
Haijun Ma
2006-12-01
Full Text Available Administrative data often arise as electronic copies of paid bills generated from insurance companies including the Medicare and Medicaid programs. Such data are widely seen and analyzed in the public health area, as in investigations of cancer control, health service accessibility, and spatial epidemiology. In areas like political science and education, administrative data are also important. Administrative data are sometimes more readily available as summaries over each administrative unit (county, zip code, etc.) in a particular set determined by geopolitical boundaries, or what statisticians refer to as areal data. However, the spatial dependence often present in administrative data is often ignored by health services researchers. This can lead to problems in estimating the true underlying spatial surface, including inefficient use of data and biased conclusions. In this article, we review hierarchical statistical modeling and boundary analysis (wombling) methods for areal-level spatial data that can be easily carried out using freely available statistical computing packages. We also propose a new edge-domain method designed to detect geographical boundaries corresponding to abrupt changes in the areal-level surface. We illustrate our methods using county-level breast cancer late detection data from the state of Minnesota.
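The boundary-detection idea described above, flagging edges between adjacent areal units where the surface changes abruptly, can be caricatured in a few lines; the unit names, rates and threshold are invented, and a real wombling analysis would work with model-based posterior estimates rather than raw values:

```python
# Sketch of areal boundary ("wombling") detection: flag the edges between
# adjacent units whose values differ by more than a threshold.
values = {"A": 0.12, "B": 0.15, "C": 0.55, "D": 0.58}  # e.g. late-detection rates
adjacency = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]  # shared borders

threshold = 0.25
boundaries = [(i, j) for i, j in adjacency if abs(values[i] - values[j]) > threshold]
print(boundaries)
```

Here the edges separating the low-rate units {A, B} from the high-rate units {C, D} are flagged as candidate geographical boundaries.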
Institute of Scientific and Technical Information of China (English)
程晓春
2016-01-01
Objective: To evaluate the evidence-based method applied in nasogastric-tube teaching of nursing students and its influencing factors. Methods: 254 nursing students were randomly divided into two groups: a control group of 127 students received conventional teaching, and an observation group of 127 students received evidence-based teaching; the teaching effects of the two groups were compared. Results: Nasogastric phobia was present in 92.13% and 88.19% of the two groups respectively, with no significant difference (P>0.05); the nasogastric-phobia SAS score was closely associated with trust in the teacher (P<0.05). After teaching, the SAS scores in the observation group were significantly lower than in the control group, while self-efficacy scores and teaching assessment results were significantly higher (P<0.05). The nasogastric success rate in the observation group was 96.85%, significantly higher than the 74.02% in the control group (P<0.05). Conclusion: Applying the evidence-based method in nasogastric teaching of nursing students can alleviate fear, enhance the students' sense of self-efficacy, and improve the feeding success rate and the quality of teaching.
Institute of Scientific and Technical Information of China (English)
齐力; 赵彦锋; 巫振富; 张路伟
2012-01-01
A total of 870 soil samples were collected from the north of Henan Province over a 27,955 km² area. Two subgroups of 435 samples each were used for soil property mapping of cation exchange capacity (CEC) and total nitrogen (TN), and the difference between the mapping results of the two subgroups was calculated. The stability of the Kriging method, the inverse distance weighting (IDW) method and the polygon-value-represented-by-point-value (PRP) method was compared, and the influencing factors were discussed. The results showed that: (i) the RMSE (root mean square error) and R (correlation coefficient) between measured and predicted data could not represent the stability of mapping, namely the reproducibility of the spatial pattern of soil properties, and precision validation gave different results depending on the validation approach; (ii) the stability of Kriging and IDW was significantly superior to that of PRP; the area with a relative difference lower than 0.3 did not reach 20% of the total area for the Kriging and IDW methods, but reached 51.57% for the PRP method, and the area with a high difference level was scattered in the difference maps of the former two methods but concentrated in large polygons for PRP; (iii) the stability of the mapping results was affected by both the sample distribution and high local variability of soils; sample distribution was more important for stability in the Kriging method than in the IDW and PRP methods, while in the latter two methods high variability among data values had a much stronger effect.
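The IDW method compared above can be sketched in a few lines: a prediction at an unsampled location is a distance-weighted average of nearby samples. The sample coordinates, soil values and power parameter below are hypothetical:

```python
import math

# Inverse-distance-weighted (IDW) prediction at one unsampled location,
# from hypothetical soil total-nitrogen samples given as (x, y, value).
samples = [(0.0, 0.0, 1.10), (1.0, 0.0, 0.95), (0.0, 1.0, 1.25), (1.0, 1.0, 1.05)]

def idw(x, y, pts, power=2.0):
    """Weight each sample by inverse distance to the target, raised to `power`."""
    num = den = 0.0
    for sx, sy, v in pts:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v            # exact hit on a sample: return its value
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

print(round(idw(0.25, 0.25, samples), 3))
```

Kriging replaces these fixed inverse-distance weights with weights derived from a fitted variogram, which is why its stability depends more strongly on the sample distribution.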
Institute of Scientific and Technical Information of China (English)
区颖; 俞丽芳
2012-01-01
At present, college students' overall oral English proficiency is weak, and various obstacles arise in their oral communication. Besides weak language-learning ability, psychological factors such as attitude problems, skewed motivation, lack of self-confidence, intolerance of frustration, lack of enthusiasm, excessive nervousness and introversion also play a key role. These obstacles in college students' oral English communication can be addressed through effective methods such as stimulating interest, psychological suggestion, teacher encouragement, reducing psychological pressure and making foreign friends, thereby guiding students in oral practice and improving their spoken English.
Expert System Diagnosis Dental Disease Using Certainty Factor Method
Directory of Open Access Journals (Sweden)
Whisnu Ulinnuha Setiabudi
2017-05-01
Full Text Available Technological development is growing rapidly along with increasing human needs, especially in mobile technology, where the platform most often used is Android. Android facilitates user access to information and can also serve health needs, for example detecting dental disease. One branch of computer science that can help society detect dental disease is the expert system. This research builds an expert system to diagnose dental disease using the certainty factor method. The dental disease diagnosis application diagnoses the patient based on the patient's complaints, from which the patient's likely diseases are obtained. The application is an expert system that runs on the Android platform. In the accuracy test of the system, performed on 20 patients, 19 cases matched and 1 case did not, so system testing on 20 patients yielded a 95% accuracy rate.
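A minimal sketch of the certainty factor combination rule such expert systems typically use (for two positive CFs, CF = CF1 + CF2·(1 − CF1)); the symptom-disease rules and their CF values are invented for illustration, not taken from the application described:

```python
# MYCIN-style combination of certainty factors from multiple matching
# rules (both CFs positive): CF = cf1 + cf2 * (1 - cf1).
def combine(cf1, cf2):
    return cf1 + cf2 * (1.0 - cf1)

# Hypothetical rule base: symptom -> (disease, CF assigned to the rule)
rules = {
    "throbbing pain": ("pulpitis", 0.6),
    "sensitivity to heat": ("pulpitis", 0.7),
    "swollen gums": ("gingivitis", 0.8),
}

# A patient reports two symptoms pointing at the same disease;
# each matching rule raises the combined certainty.
reported = ["throbbing pain", "sensitivity to heat"]
cf = 0.0
for symptom in reported:
    disease, rule_cf = rules[symptom]
    cf = combine(cf, rule_cf)
print(round(cf, 2))
```

The combination rule is commutative and keeps the result in [0, 1), so evidence accumulates monotonically without ever reaching full certainty.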
Multi-Spacecraft Turbulence Analysis Methods
Horbury, Tim S.; Osman, Kareem T.
Turbulence is ubiquitous in space plasmas, from the solar wind to supernova remnants, and on scales from the electron gyroradius to interstellar separations. Turbulence is responsible for transporting energy across space and between scales and plays a key role in plasma heating, particle acceleration and thermalisation downstream of shocks. Just as with other plasma processes such as shocks or reconnection, turbulence results in complex, structured and time-varying behaviour which is hard to measure with a single spacecraft. However, turbulence is a particularly hard phenomenon to study because it is usually broadband in nature: it covers many scales simultaneously. One must therefore use techniques to extract information on multiple scales in order to quantify plasma turbulence and its effects. The Cluster orbit takes the spacecraft through turbulent regions with a range of characteristics: the solar wind, magnetosheath, cusp and magnetosphere. In each, the nature of the turbulence (strongly driven or fully evolved; dominated by kinetic effects or largely on fluid scales), as well as the characteristics of the medium (thermalised or not; high or low plasma beta; sub- or super-Alfvénic), mean that particular techniques are better suited to the analysis of Cluster data in different locations. In this chapter, we consider a range of methods and how they are best applied to these different regions. Perhaps the most studied turbulent space plasma environment is the solar wind; see Bruno and Carbone [2005] and Goldstein et al. [2005] for recent reviews. This is the case for a number of reasons: it is scientifically important for cosmic ray and solar energetic particle scattering and propagation, for example. However, perhaps the most significant motivations for studying solar wind turbulence are pragmatic: large volumes of high quality measurements are available, and the stability of the solar wind on the scales of hours makes it possible to identify statistically stationary intervals to
Lactose intolerance : analysis of underlying factors
Vonk, RJ; Priebe, MG; Koetse, HA; Stellaard, F; Lenoir-Wijnkoop; Antoine, JM; Zhong, Y; Huang, CY
2003-01-01
Background We studied the degree of lactose digestion and orocecal transit time (OCTT) as possible causes for the variability of symptoms of lactose intolerance (LI) in a sample of a population with genetically determined low lactase activity. Methods Lactose digestion index (LDI) was measured by th
Factors Affecting Unemployment: A Cross Country Analysis
Directory of Open Access Journals (Sweden)
Aurangzeb
2013-01-01
Full Text Available This paper investigates macroeconomic determinants of unemployment for India, China and Pakistan for the period 1980 to 2009. The investigation was conducted through cointegration, Granger causality and regression analysis. The variables selected for the study are unemployment, inflation, gross domestic product, exchange rate and the population growth rate. The results of the regression analysis showed a significant impact of all the variables for all three countries. Pakistan's GDP showed a positive relation with the unemployment rate, the reasons being the poverty level and underutilization of foreign investment. The Granger causality results showed that bidirectional causality does not exist between any of the variables for the three countries. The cointegration results revealed that long-term relationships do exist among the variables for all the models. It is recommended that the distribution of income in Pakistan be improved in order for growth to have a positive impact on the employment rate.
Directory of Open Access Journals (Sweden)
David B. Flora
2012-03-01
Full Text Available We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables.
Flora, David B; Labrish, Cathy; Chalmers, R Philip
2012-01-01
An integration factor method for stochastic and stiff reaction–diffusion systems
Energy Technology Data Exchange (ETDEWEB)
Ta, Catherine; Wang, Dongyong; Nie, Qing, E-mail: qnie@uci.edu
2015-08-15
Stochastic effects are often present in biochemical systems involving reaction and diffusion. When the reactions are stiff, existing numerical methods for stochastic reaction–diffusion equations require either very small time steps for any explicit scheme or the solution of large nonlinear systems at each time step for implicit schemes. Here we present a class of semi-implicit integration factor methods that treat the diffusion term exactly and the reaction term implicitly for a system of stochastic reaction–diffusion equations. Our linear stability analysis shows the advantage of such methods for both small and large amplitudes of noise. Direct application of the method to several linear and nonlinear stochastic reaction–diffusion equations demonstrates good accuracy, efficiency, and stability properties. This new class of methods, which are easy to implement, will have broader applications in solving stochastic reaction–diffusion equations arising from models in biology and the physical sciences.
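The stability advantage described in the abstract can be illustrated on a single stiff variable. The sketch below is not the authors' scheme (their exponential acts on the full diffusion operator of a reaction–diffusion system, and the reaction is treated implicitly); it is a minimal one-variable integration factor step, with an explicit reaction term and an optional noise increment, compared against forward Euler for a stiff decay rate. All parameter values are illustrative.

```python
import math
import random

def euler_step(u, lam, f, dt):
    # Forward Euler: unstable for the linear decay term once lam*dt > 2.
    return u + dt * (-lam * u + f(u))

def if_step(u, lam, f, dt, sigma=0.0, rng=None):
    # Integration factor step: the stiff linear term -lam*u is integrated
    # exactly via exp(-lam*dt); the reaction f is kept explicit here.
    E = math.exp(-lam * dt)
    noise = 0.0
    if sigma and rng is not None:
        noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return E * u + (1.0 - E) / lam * f(u) + E * noise

lam, dt = 50.0, 0.1          # lam*dt = 5: far beyond the explicit limit
f = lambda u: 0.0            # pure stiff decay for the stability comparison
u_euler = u_if = 1.0
for _ in range(10):
    u_euler = euler_step(u_euler, lam, f, dt)
    u_if = if_step(u_if, lam, f, dt)
print(abs(u_euler), abs(u_if))

# A stochastic step, with the noise damped by the same exponential factor:
rng = random.Random(0)
u_noisy = if_step(1.0, lam, f, dt, sigma=0.2, rng=rng)
```

With this step size the Euler iterate grows by a factor of four per step while the integration factor iterate decays, which is the point of treating the stiff term exactly.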
Molenaar, P.C.M.
1987-01-01
Outlines a frequency domain analysis of the dynamic factor model and proposes a solution to the problem of constructing a causal filter of lagged factor loadings. The method is illustrated with applications to simulated and real multivariate time series. The latter applications involve topographic a
Confirmatory Factor Analysis of the Elementary School Success Profile for Teachers
Webber, Kristina C.; Rizo, Cynthia F.; Bowen, Natasha K.
2012-01-01
Objectives: This study examines the factor structure and scale quality of data collected with the online Elementary School Success Profile (ESSP) for Teachers from a sample of teachers of 1,145 third through fifth graders. Methods: Confirmatory factor analysis (CFA) using Mplus and weighted least squares means and variances adjusted (WLSMV)…
MULTIDIMENSIONAL RELIABILITY OF INSTRUMENT STUDENTS’ SATISFACTION USING CONFIRMATORY FACTOR ANALYSIS
Directory of Open Access Journals (Sweden)
Gaguk Margono
2014-11-01
Full Text Available The purpose of this paper is to compare the unidimensional and multidimensional reliability of an instrument measuring students’ satisfaction as internal customers. Multidimensional reliability measurement is rarely used in research. Multidimensional reliability is estimated by using Confirmatory Factor Analysis (CFA) on the Structural Equation Model (SEM). The measurements and calculations are described in this article using the students’-satisfaction instrument. A survey method with simple random sampling was used, and the instrument was tried out on 173 students. The conclusion is that the instrument measuring students’ satisfaction as internal customers has higher accuracy under the multidimensional reliability coefficient than under a unidimensional reliability coefficient. Further research should apply other multidimensional reliability formulas, including when using SEM.
Accelerated Gibbs Sampling for Infinite Sparse Factor Analysis
Energy Technology Data Exchange (ETDEWEB)
Andrzejewski, D M
2011-09-12
The Indian Buffet Process (IBP) gives a probabilistic model of sparse binary matrices with an unbounded number of columns. This construct can be used, for example, to model a fixed number of observed data points (rows) associated with an unknown number of latent features (columns). Markov Chain Monte Carlo (MCMC) methods are often used for IBP inference, and in this technical note, we provide a detailed review of the derivations of collapsed and accelerated Gibbs samplers for the linear-Gaussian infinite latent feature model. We also discuss and explain update equations for hyperparameter resampling in a 'full Bayesian' treatment and present a novel slice sampler capable of extending the accelerated Gibbs sampler to the case of infinite sparse factor analysis by allowing the use of real-valued latent features.
Efficiency limit factor analysis for the Francis-99 hydraulic turbine
Zeng, Y.; Zhang, L. X.; Guo, J. P.; Guo, Y. K.; Pan, Q. L.; Qian, J.
2017-01-01
The energy loss in a hydraulic turbine is the most direct factor affecting its efficiency. Based on the theory of inner energy loss analysis for hydraulic turbines, combined with the measurement data of the Francis-99, this paper calculates the characteristic parameters of inner energy loss and establishes a calculation model of hydraulic turbine power. Taking the start-up test conditions given by Francis-99 as a case, the transient characteristics and transformation law of the inner energy of the hydraulic turbine are studied. Further, analysing mechanical friction in the turbine, we conclude that the main component of mechanical friction loss is the rotational friction loss between the rotating runner and the water body, defined here as the inner mechanical friction loss, and we outline a method for its calculation. Our purpose is to explore ways of increasing the transformation efficiency of the water flow by analysing the energy losses in the hydraulic turbine.
Method and tool for network vulnerability analysis
Swiler, Laura Painton; Phillips, Cynthia A.
2006-03-14
A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."
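The abstract above describes attack graphs with effort-weighted edges and the identification of high-risk ("epsilon optimal") paths. As a loose illustration of that idea only, the sketch below runs an ordinary least-cost search over a small, entirely hypothetical attack graph; the patent's template-matching graph generation and epsilon-optimal path enumeration are more general than this single shortest-path query.

```python
import heapq

# Hypothetical attack graph: nodes are attack states, directed edges are
# attacker actions weighted by estimated effort (lower effort = easier for
# the attacker, hence higher risk for the defender).
attack_graph = {
    "internet":      [("dmz_web_shell", 2.0), ("phished_user", 1.0)],
    "phished_user":  [("internal_host", 3.0)],
    "dmz_web_shell": [("internal_host", 4.0), ("db_server", 6.0)],
    "internal_host": [("db_server", 2.0)],
    "db_server":     [],
}

def cheapest_attack_path(graph, start, goal):
    """Dijkstra search: returns (total effort, path) of the least-effort
    attack path, i.e. the highest-risk path to prioritise countermeasures."""
    pq = [(0.0, start, [start])]
    best = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, w in graph[node]:
            heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

cost, path = cheapest_attack_path(attack_graph, "internet", "db_server")
print(cost, path)
```

Here the phishing route costs less total attacker effort than either route through the DMZ, so it is the path a defender would harden first.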
Digital methods for mediated discourse analysis
DEFF Research Database (Denmark)
Kjær, Malene; Larsen, Malene Charlotte
2015-01-01
In this paper we discuss methodological strategies for collecting multimodal data using digital resources. The aim is to show how digital resources can provide ethnographic insights into mediated actions (Scollon, 2002) that can otherwise be difficult to observe or engage in, due to, for instance, restrictions or privately mediated settings. Having used mediated discourse analysis (Scollon, 2002; Scollon & Scollon, 2004) as a framework in two different research projects, we show how the framework, in correlation with digital resources for data gathering, provides new understandings of 1) the daily practice of health care professionals (Author 1, 2014) and 2) young people’s identity construction on social media platforms (Author 2, 2010, 2015, in press). The paper’s contribution is a methodological discussion on digital data collection using methods such as online interviewing (via e-mail or chat......
A Factor Analysis for Time Series.
1984-07-01
"Some results on multivariate autoregressive index models". Biometrika, 70, 145-156. [11] Sanchez-Albornoz, N. (1975). Los precios agricolas durante la segunda mitad del siglo XIX. Banco de Espana. [12] Sargent, T. J. and Sims, C. A. (1977). "Business cycle modeling without pretending to have too much a priori economic theory", in New Methods in Business Cycle Research: Proceedings from a Conference, ed. C. A. Sims, Minneapolis, MN: Federal
Directory of Open Access Journals (Sweden)
Lee, J. J.
2016-01-01
Full Text Available Here we provide a description of the IRT estimation method known as Normal Ogive Harmonic Analysis Robust Method (NOHARM. Although in some ways this method has been superseded by new computer programs that also adopt a specifically factor-analytic approach, its fundamental principles remain useful in certain applications, which include calculating the residual covariance matrix and rescaling the distribution of the common factor (latent trait. These principles can be applied to parameter estimates obtained by any method.
Multivariate Analysis of Clinical Factors in Restenosis after Coronary Stenting
Institute of Scientific and Technical Information of China (English)
Wen Shangyu; Mao Jieming; Guo Lijun; Zhao Yiming; Zhang Fuchun; Guo Jingxian; Cheng Mingzhe
2000-01-01
Objective: To find the independent predictors for restenosis after coronary stenting. Methods: Quantitative angiography was performed on 60 cases (67 successfully dilated lesions) after angioplasty over 6 months of follow-up, and both univariate and multivariate logistic regression analyses were done to identify the correlations of restenosis with clinical factors. Results: The total restenosis rate was 31.3% (21 of 67 lesions), and according to univariate analysis the patients who underwent coronary stenting with stents ≥3.5 mm had a lower rate of restenosis (P < 0.01). Collateral circulation to the obstruction site, high maximal inflation pressure, smoking and a smaller minimal lumen diameter after PTCA made the rate of restenosis higher (P < 0.05). Multivariate logistic regression analysis showed that coronary stenting ≥3.5 mm was associated with a low rate of restenosis, but high maximal inflation pressure and smoking made the restenosis rate higher. Conclusion: Coronary stent size, maximal inflation pressure and smoking were independent predictors of restenosis.
Directory of Open Access Journals (Sweden)
Fu Suhua
2013-09-01
Full Text Available The slope length factor is one of the parameters of the Universal Soil Loss Equation (USLE) and the Revised Universal Soil Loss Equation (RUSLE) and is sometimes calculated based on a digital elevation model (DEM). The methods for calculating the slope length factor are important because the values obtained may depend on the method of calculation. The purpose of this study was to compare the difference in the spatial distribution of the slope length factor between two methods at a watershed scale. One method used the uniform slope length factor equation (USLFE), in which the effects of slope irregularities (such as changes in slope gradient) on soil erosion by water are not considered. The other method used the segmented slope length factor equation (SSLFE), which considers the effects of slope irregularities on soil erosion by water. The Arc Macro Language (AML) Version 4 program for the Revised Universal Soil Loss Equation (RUSLE), which uses the USLFE, was chosen to calculate the slope length factor. In a parallel analysis, the AML code of RUSLE Version 4 was modified according to the SSLFE to calculate the slope length factor. Two watersheds with different slope and gully densities were chosen. The results show that the slope length factor and soil loss obtained with the USLFE method were lower than those obtained with the SSLFE method, especially on downslopes in the watershed with more frequent steep slopes and higher gully density. In addition, the slope length factor and soil loss calculated by the USLFE showed less spatial variation.
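The two calculations contrasted in this abstract can be sketched numerically. The uniform equation applies L = (λ/22.13)^m to the whole flow path, while the segmented form (after Foster and Wischmeier) weights each segment by its position downslope, so factors grow toward the bottom of the slope. The exponent and segment lengths below are illustrative only and are not taken from the study.

```python
def uniform_L(slope_length_m, m):
    """USLFE: one slope length factor for the entire flow path (metres)."""
    return (slope_length_m / 22.13) ** m

def segmented_L(segment_ends_m, m):
    """SSLFE: per-segment factors for cumulative downslope distances.
    Segment i spanning [prev, end] gets
    (end**(m+1) - prev**(m+1)) / ((end - prev) * 22.13**m)."""
    factors, prev = [], 0.0
    for end in segment_ends_m:
        num = end ** (m + 1) - prev ** (m + 1)
        den = (end - prev) * 22.13 ** m
        factors.append(num / den)
        prev = end
    return factors

m = 0.5  # a typical slope-length exponent for moderately steep slopes
print(uniform_L(60.0, m))                  # single factor for a 60 m slope
print(segmented_L([20.0, 40.0, 60.0], m))  # factors increase downslope
```

The first segment's factor equals the uniform factor for a 20 m slope (the formulas coincide for a single segment starting at zero), and the lower segments receive larger factors, consistent with the abstract's finding that the USLFE gives lower values than the SSLFE on downslope areas.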
Directory of Open Access Journals (Sweden)
Jolita Krumplytė
2011-02-01
shadow economy causes are the following: too big tax burden (in the case of illegal work – significant gross and net wage gap, gaps in legislation, frequent law changes, distrust of country’s government, dissatisfaction with the quality of work of governmental institutions. The main causes of the sprawl of the shadow economy – the inability to compete without being involved in the shadow economy, the lack of unified declaration of income and insufficient control of state institutions.
Keywords: shadow economy, tax, Lithuania, factor, cause, expert evaluation method.
In recent years, the difficult macroeconomic situation that has formed in many countries (a rapidly shrinking gross domestic product, a rising unemployment rate, a marked fall in state budget revenue) has disrupted the implementation of national economic and social policy. Alongside these problems, attention is frequently drawn to the shadow economy, defined as a phenomenon whose existence is unavoidable during periods of economic growth and which is seen to develop rapidly during stages of recession and downturn. In the scientific literature, the shadow economy is described as a complex, multifaceted set of phenomena. Notably, there is no common concept of the shadow economy: the terms "shadow economy" and "officially unrecorded economy" are often used interchangeably, or other terms and definitions are employed. In order to reduce the scale of the shadow economy and halt its rapid development, it is important to identify the factors that determine this process and
Exactly Solvable Hydrogen-like Potentials and Factorization Method
Rosas-Ortiz, J O
1998-01-01
A set of factorization energies is introduced, giving rise to a generalization of the Schrödinger and Infeld-Hull factorization for the radial Hydrogen-like Hamiltonian. An algebraic intertwining technique involving such factorization energies leads us to derive $n$-parametric families of potentials in general almost-isospectral to the Hydrogen-like radial Hamiltonians. The construction of a SUSY partner Hamiltonian using a factorization energy $\epsilon_l^{(k)}$ greater than the ground state energy of the departure Hamiltonian is explicitly performed.
Immunofluorescence in cytogenetic analysis: method and applications
Directory of Open Access Journals (Sweden)
Jeppesen Peter
2000-01-01
Full Text Available Control of the genetic information encoded by DNA in mammalian chromosomes is mediated by proteins, some of which are only transiently attached, although others are intrinsically associated with nucleic acid in the complex mixture known as chromatin. Chromatin-associated proteins range from the ubiquitous and abundant histones down to the most specific and rare of transcription factors. Although many chromatin proteins are probably excluded from highly condensed mitotic chromosomes, a number are retained throughout the cell cycle and can be detected on chromosomes in metaphase spreads. Comparing the distribution of a chromosomal protein with known cytogenetic markers on metaphase chromosomes can provide an important and potentially highly informative first source of data on the function of the protein under consideration. The aim of the present study is to summarize some of the principles involved in obtaining suitable chromosome preparations for subsequent immunolocalization of protein antigens. Some applications of the method will be included to illustrate how this approach has increased our understanding of chromosome structure and genetic regulation.
Analysis of Socio-Economic Factors Influencing Farmers' Adoption ...
African Journals Online (AJOL)
Analysis of Socio-Economic Factors Influencing Farmers' Adoption of Rice ... Farming experience, household size, farm size and extension contact ... gender, market availability, education, extension contact, labour availability and farm size.
Exploring Technostress: Results of a Large Sample Factor Analysis
Directory of Open Access Journals (Sweden)
Steponas Jonušauskas
2016-06-01
Full Text Available With reference to the results of a large-sample factor analysis, the article aims to propose a frame for examining technostress in a population. The survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the dispersion in the answers. Based on the factor analysis, the questionnaire was reframed and prepared to analyze the respondents’ answers soundly, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.
Risk Factor Analysis for Oral Precancer among Slum Dwellers in ...
African Journals Online (AJOL)
Rajasthan Dental College, Jaipur, Rajasthan, 1Dental Wing, All India Institute of Medical Sciences (AIIMS), Bhopal, 4Department of Public ... Keywords: Oral cancer, Risk factor analysis, Slum dwellers. ... hygiene aid used in India.
The application of spectral distribution of product of two random matrices in the factor analysis
Institute of Scientific and Technical Information of China (English)
Bai-suo JIN; Bai-qi MIAO; Wu-yi YE; Zhen-xiang WU
2007-01-01
In the factor analysis model with large cross-section and time-series dimensions, we propose a new method to estimate the number of factors. Specifically, if the idiosyncratic terms satisfy a linear time series model, the estimators of the parameters can be obtained in the time series model. The theoretical properties of the estimators are also explored. A simulation study and an empirical analysis are conducted.
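The abstract does not state the estimator itself, so as a loose illustration of the general task only (choosing the number of factors from the spectrum of a large sample covariance matrix), here is a simple eigenvalue-ratio heuristic on simulated data. This is not the authors' random-matrix-based method, and all dimensions and signal strengths below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, r = 300, 50, 2                      # time points, series, true factors

# Simulate an approximate factor model X = F @ L.T + noise.
F = rng.normal(size=(T, r))               # latent factors
L = rng.normal(size=(N, r)) * 3.0         # strong loadings
X = F @ L.T + rng.normal(size=(T, N))     # unit-variance idiosyncratic noise

# Eigenvalue-ratio heuristic: the estimated number of factors is where
# consecutive eigenvalues of the sample covariance drop most sharply.
eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]  # descending order
ratios = eig[:-1] / eig[1:]
k_hat = int(np.argmax(ratios[: N // 2])) + 1
print(k_hat)
```

With loadings this strong the two signal eigenvalues dwarf the noise bulk, so the largest ratio occurs at the second eigenvalue and the heuristic recovers the true factor count.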
Institute of Scientific and Technical Information of China (English)
WANG Cheng; DU Hang-gen; YIN Li-chun; HE Min; ZHANG Guo-jun; TIAN Yong; HAO Bi-lie
2013-01-01
Objective: The management of secondary normal pressure hydrocephalus (sNPH) is controversial, and many factors may affect the surgical outcome. The purpose of this study was to identify the possible factors influencing prognosis and to provide a theoretical basis for the clinical treatment of sNPH. Methods: A retrospective study was carried out to investigate the results of 31 patients with sNPH who underwent ventriculoperitoneal shunt surgery from January 2007 to December 2011. We processed the potential influencing factors by univariate analysis and the results further by multivariate logistic regression analysis. Results: Factors including age, disease duration and Glasgow coma scale (GCS) score before surgery significantly influenced the prognosis of sNPH (P < 0.05). Further logistic regression analysis showed that all three factors are independent influencing factors. Conclusion: Age, disease duration and GCS score before surgery have positive predictive value in estimating a favorable response to surgical treatment for sNPH.
Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use. Vol. 2, Analysis Methods
DEFF Research Database (Denmark)
Jensen, Kurt
This three-volume work presents a coherent description of the theoretical and practical aspects of coloured Petri nets (CP-nets). The second volume contains a detailed presentation of the analysis methods for CP-nets. They allow the modeller to investigate dynamic properties of CP-nets. The main...
Confirmatory factor analysis of the Multidimensional Inventory of Perfectionism in Sport
Madigan, Daniel J.
2016-01-01
Objectives and Method: The Multidimensional Inventory of Perfectionism in Sport (MIPS; Stoeber, Otto, & Stoll, 2006) is a commonly used measure of perfectionism in sport. However, there is limited empirical evidence supporting its subscale structure and composition. Therefore, the present study investigated the factor structure of the MIPS in a sample of 470 athletes (mean age 20.0 years).

Results: Confirmatory factor analysis showed that the data supported the hypothesized four-factor...
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
Stress intensity factor analysis of friction sliding at discontinuity interfaces and junctions
CSIR Research Space (South Africa)
Phan, AV
2003-12-01
Full Text Available A stress intensity factor (SIF) analysis for two dimensional fractures with frictional contact (crack friction) is presented. This analysis is carried out using the symmetric-Galerkin boundary element method, and a modified quarter-point crack tip...
The Pain Behaviour Checklist: factor analysis and validation.
Anciano, D
1986-11-01
A factor analysis was performed on Philips & Hunter's (1981) Pain Behaviour Checklist for headache sufferers. Three intuitively meaningful factors emerged. All were similarly associated with overall intensity; pain severity does not determine type of pain behaviour. Differences in pain behaviour emerged between migraine and tension headache groups.
48 CFR 2115.404-71 - Profit analysis factors.
2010-10-01
... account in assigning a plus weight. (5) Cost control. This factor is based on the Contractor's previously... TYPES CONTRACTING BY NEGOTIATION Contract Pricing 2115.404-71 Profit analysis factors. (a) The OPM... receive a plus weight, and poor performance or failure to comply with contract terms and conditions a zero...
Analysis on Family Factor in Construction of New Socialist Countryside
Institute of Scientific and Technical Information of China (English)
2012-01-01
This paper analyzes the family factor in the construction of the new socialist countryside. It is argued that the family plays both positive and negative roles in new socialist countryside construction. Based on this analysis, corresponding countermeasures are put forward, including bringing into play the role of the family in promoting production and carrying forward the excellent elements of family culture.
Exploratory Tobit factor analysis for multivariate censored data
Kamakura, WA; Wedel, M
2001-01-01
We propose Multivariate Tobit models with a factor structure on the covariance matrix. Such models are particularly useful in the exploratory analysis of multivariate censored data and the identification of latent variables from behavioral data. The factor structure provides a parsimonious
Connectivism in Postsecondary Online Courses: An Exploratory Factor Analysis
Hogg, Nanette; Lomicky, Carol S.
2012-01-01
This study explores 465 postsecondary students' experiences in online classes through the lens of connectivism. Downes' 4 properties of connectivism (diversity, autonomy, interactivity, and openness) were used as the study design. An exploratory factor analysis was performed. This study found a 4-factor solution. Subjects indicated that autonomy…
DuVernet, Amy M; Dierdorff, Erich C; Wilson, Mark A
2015-09-01
Work analysis is fundamental to designing effective human resource systems. The current investigation extends previous research by identifying the differential effects of common design decisions, purposes, and organizational contexts on the data generated by work analyses. The effects of 19 distinct factors that span choices of descriptor, collection method, rating scale, and data source, as well as project purpose and organizational features, are explored. Meta-analytic results cumulated from 205 articles indicate that many of these variables hold significant consequences for work analysis data. Factors pertaining to descriptor choice, collection method, rating scale, and the purpose for conducting the work analysis each showed strong associations with work analysis data. The source of the work analysis information and organizational context in which it was conducted displayed fewer relationships. Findings can be used to inform choices work analysts make about methodology and postcollection evaluations of work analysis information.
Confirmatory Factor Analysis of the Career Factors Inventory on a Community College Sample
Simon, Merril A.; Tovar, Esau
2004-01-01
A confirmatory factor analysis was conducted using AMOS 4.0 to validate the 21-item Career Factors Inventory on a community college student sample. The multidimensional inventory assesses types and levels of career indecision antecedents. The sample consisted of 512 ethnically diverse freshmen students; 46% were men and 54% were women.…
DEFF Research Database (Denmark)
Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug
2014-01-01
Four different numerical methods for calculating the quality factor and resonance wavelength of a nano or micro photonic cavity are compared. Good agreement was found for a wide range of quality factors. Advantages and limitations of the different methods are discussed.
New numerical analysis method in computational mechanics: composite element method
Institute of Scientific and Technical Information of China (English)
曾攀
2000-01-01
A new type of FEM, called CEM (composite element method), is proposed to solve the static and dynamic problems of engineering structures with high accuracy and efficiency. The core of this method is to define two sets of coordinate systems for the description of DOFs after discretizing the structure, i.e. the nodal coordinate system UFEM(ζ) for employing the conventional FEM, and the field coordinate system UCT(ζ) for utilizing classical theory. Coupling these two sets of functional expressions then yields the composite displacement field U(ζ) of the CEM. The computations of the stiffness and mass matrices follow the conventional procedure of FEM. Since the CEM inherits good properties of both the conventional FEM and the classical analytical method, it has powerful versatility for various complex geometric shapes and excellent approximation. Many examples are presented to demonstrate the ability of CEM.
Microalbuminuria: Its significance, risk factors and methods of ...
African Journals Online (AJOL)
Alasia Datonye
factors such as obesity, smoking, low birthweight, male ... found in nondiabetic obese subjects. In obese subjects ... cause increase in the excretion of albumin in urine. Albumin ... color blocks: yellow, light brown, medium brown, brick red, ...
Reliability Analysis for Tunnel Supports System by Using Finite Element Method
Directory of Open Access Journals (Sweden)
E. Bukaçi
2016-09-01
Reliability analysis is a method that can be applied to almost any geotechnical engineering problem. Using this method requires knowledge of parameter uncertainties, which can be expressed by their standard deviations. By performing reliability analysis on tunnel support design, a range of safety factors can be obtained and used to calculate the probability of failure. The problem becomes more complex when the analysis is performed with numerical methods such as the Finite Element Method. This paper shows how reliability analysis can be performed for tunnel support design, using the Point Estimate Method to calculate the reliability index. As a case study, one of the energy tunnels at the Fan Hydropower plant in Rrëshen, Albania is chosen. Values of the factor of safety and the probability of failure are calculated, and some suggestions for using reliability analysis with numerical methods are given.
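The Point Estimate Method mentioned above can be sketched compactly. The response function, the parameter means and the standard deviations below are invented placeholders standing in for an actual FEM run; the sketch only illustrates Rosenblueth's two-point scheme for uncorrelated variables:

```python
import math
from itertools import product

def factor_of_safety(cohesion, friction_deg):
    # Invented linear response standing in for an FEM evaluation
    return 0.02 * cohesion + 0.025 * friction_deg

def rosenblueth_pem(means, stds, model):
    # Evaluate the model at all 2^n combinations of mean +/- one std dev,
    # with equal weights 1/2^n for uncorrelated variables
    pts = [model(*vals)
           for vals in product(*[(m - s, m + s) for m, s in zip(means, stds)])]
    w = 1.0 / len(pts)
    mean = sum(pts) * w
    var = sum((p - mean) ** 2 for p in pts) * w
    return mean, math.sqrt(var)

mu_fs, sigma_fs = rosenblueth_pem([40.0, 30.0], [8.0, 3.0], factor_of_safety)
beta = (mu_fs - 1.0) / sigma_fs            # reliability index w.r.t. FS = 1
pf = 0.5 * math.erfc(beta / math.sqrt(2))  # P(FS < 1) under a normal assumption
```

Each random variable contributes two evaluation points, so n variables require 2^n model runs, which is why the method pairs well with expensive FEM models when n is small.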
Application of texture analysis method for mammogram density classification
Nithya, R.; Santhi, B.
2017-07-01
Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
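As a minimal illustration of the texture features named above, a gray-level co-occurrence matrix (GLCM) and three of its classical statistics can be computed directly in NumPy. The tiny 4x4 image and the single pixel offset are invented for demonstration; this is a sketch, not the paper's CAD pipeline:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    # Count co-occurrences of gray levels at a fixed pixel offset, normalized
    M = np.zeros((levels, levels))
    h, w = img.shape
    for r in range(h - dy):
        for c in range(w - dx):
            M[img[r, c], img[r + dy, c + dx]] += 1
    return M / M.sum()

def glcm_features(P):
    # Classical Haralick-style statistics of the co-occurrence matrix
    i, j = np.indices(P.shape)
    contrast = ((i - j) ** 2 * P).sum()
    energy = (P ** 2).sum()
    homogeneity = (P / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

img = np.array([[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]])
P = glcm(img)
contrast, energy, homogeneity = glcm_features(P)
```

Features like these, computed at several offsets and angles, form the input vectors that ANOVA then filters before classification.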
Area specific stripping factors for AGS. A method for extracting stripping factors from survey data
Energy Technology Data Exchange (ETDEWEB)
Aage, H.K.; Korsbech, U. [Technical Univ. of Denmark (Denmark)
2006-04-15
In order to use Airborne Gamma-ray Spectrometry (AGS) for contamination mapping, source search, etc., one must be able to eliminate the contribution to the spectra from natural radioactivity. This is generally done by a stripping technique. Until recently, the parameters for performing a stripping were measured by recording gamma spectra at special calibration sites (pads). This can be cumbersome, and the parameters may not be correct when used at low gamma energies for environmental spectra. During 2000-2001, DTU successfully tested a new technique for Carborne Gamma-ray Spectrometry (CGS) in which the spectra from the surveyed area (or from a similar area) were used for calculating the stripping parameters. It was possible to calculate usable stripping ratios for a number of low-energy windows, and weak source signals not detectable by other means were discovered with this Area Specific Stripping (ASS) technique. In this report it is shown that the ASS technique also works for AGS data, and it has been used for recent Danish AGS tests with point sources (check of calibration of AGS parameters). By using the ASS technique on the Boden data (Barents Rescue), an exercise source was detected that had not been detected by any of the teams during the exercise. The ASS technique therefore seems better for searching for radiation anomalies than any other presently known method. Experience also shows that although the stripping can be performed correctly at any altitude, there is a variation of the stripping parameters with altitude that is not yet fully understood. However, even with these odd variations, the stripping worked as expected. It was also observed that one may calculate a single common set of usable stripping factors for all altitudes from the entire data set, i.e. average a, b and c values. When those stripping factors were used, the stripping technique still worked well. (au)
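A plausible reading of the ASS idea — estimating stripping ratios directly from survey spectra rather than from calibration pads — is an ordinary least-squares fit of a low-energy window on the natural-radioactivity windows. The synthetic counts and the "true" ratios below are invented for illustration and do not come from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic survey spectra: counts in the Th, U and K windows per record
n = 500
th, u, k = rng.uniform(50, 200, (3, n))
a_true, b_true, c_true = 0.8, 1.1, 0.6
low = a_true * th + b_true * u + c_true * k + rng.normal(0, 2.0, n)

# Least-squares estimate of the stripping ratios a, b, c from the survey itself
A = np.column_stack([th, u, k])
(a, b, c), *_ = np.linalg.lstsq(A, low, rcond=None)

# Stripped (residual) counts should hover near zero where no source is present,
# so a man-made source would show up as a clear positive excursion
stripped = low - (a * th + b * u + c * k)
```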
Conditioning Analysis of Incomplete Cholesky Factorizations with Orthogonal Dropping
Energy Technology Data Exchange (ETDEWEB)
Napov, Artem [Free Univ. of Brussels (Belgium)
2013-08-01
An analysis is presented of preconditioners based on incomplete Cholesky factorization in which the neglected (dropped) components are orthogonal to the approximations being kept. A general estimate for the condition number of the preconditioned system is given which depends only on the accuracy of the individual approximations. The estimate is further improved if, for instance, only the newly computed rows of the factor are modified during each approximation step. In this latter case the estimate is also shown to be sharp. The analysis is illustrated with some existing factorizations in the context of discretized elliptic partial differential equations.
Echinacea purpurea: Pharmacology, phytochemistry and analysis methods
Directory of Open Access Journals (Sweden)
Azadeh Manayi
2015-01-01
Echinacea purpurea (Asteraceae) is a perennial medicinal herb with important immunostimulatory and anti-inflammatory properties, especially the alleviation of cold symptoms. The plant has also attracted scientists' attention to assess other aspects of its beneficial effects; for instance, antianxiety, antidepression, cytotoxic, and antimutagenic effects of the plant have been revealed in various studies. The findings of the clinical trials are controversial in terms of side effects. While some studies revealed the beneficial effects of the plant on the patients and no severe adverse effects, others have reported serious side effects including abdominal pain, angioedema, dyspnea, nausea, pruritus, rash, erythema, and urticaria. Other biological activities of the plant, such as antioxidant, antibacterial, antiviral, and larvicidal activities, have been reported in previous experimental studies. Different classes of secondary metabolites of the plant, such as alkamides, caffeic acid derivatives, polysaccharides, and glycoproteins, are believed to be biologically and pharmacologically active. Concurrent determination and single analysis of cichoric acid and alkamides have been successfully developed, mainly using high-performance liquid chromatography (HPLC) coupled with different detectors, including UV spectrophotometric, coulometric electrochemical, and electrospray ionization mass spectrometric detectors. Despite the many experiments successfully accomplished using E. purpurea, and their sometimes controversial results, many questions remain unanswered, and future investigations may aim for a complete understanding of the plant's mechanism of action using new, complementary methods.
Echinacea purpurea: Pharmacology, phytochemistry and analysis methods.
Manayi, Azadeh; Vazirian, Mahdi; Saeidnia, Soodabeh
2015-01-01
Echinacea purpurea (Asteraceae) is a perennial medicinal herb with important immunostimulatory and anti-inflammatory properties, especially the alleviation of cold symptoms. The plant has also attracted scientists' attention to assess other aspects of its beneficial effects; for instance, antianxiety, antidepression, cytotoxic, and antimutagenic effects of the plant have been revealed in various studies. The findings of the clinical trials are controversial in terms of side effects. While some studies revealed the beneficial effects of the plant on the patients and no severe adverse effects, others have reported serious side effects including abdominal pain, angioedema, dyspnea, nausea, pruritus, rash, erythema, and urticaria. Other biological activities of the plant, such as antioxidant, antibacterial, antiviral, and larvicidal activities, have been reported in previous experimental studies. Different classes of secondary metabolites of the plant, such as alkamides, caffeic acid derivatives, polysaccharides, and glycoproteins, are believed to be biologically and pharmacologically active. Concurrent determination and single analysis of cichoric acid and alkamides have been successfully developed, mainly using high-performance liquid chromatography (HPLC) coupled with different detectors, including UV spectrophotometric, coulometric electrochemical, and electrospray ionization mass spectrometric detectors. Despite the many experiments successfully accomplished using E. purpurea, and their sometimes controversial results, many questions remain unanswered, and future investigations may aim for a complete understanding of the plant's mechanism of action using new, complementary methods.
Methods for the proximate analysis of peat
Energy Technology Data Exchange (ETDEWEB)
Sheppard, J.D.; Tibbetts, T.E.; Forgeron, D.W.
1986-01-01
An investigation was conducted into methods for determining the percentages of volatile matter and ash in peat. Experiments were performed on two types of sphagnum peat, a decomposed fuel peat and a commercial horticultural grade peat. The heating apparatus consisted of both a standard programmable furnace (Fisher Coal Analyser) and a thermogravimetric analyser with a module for differential scanning calorimetry (Mettler TA 3000 system). The results indicate that the seven minute test for volatile matter at either 900 C or 950 C does not fully differentiate volatiles from fixed carbon and, depending on the degree of decomposition, up to sixty minutes at 900 C may be required. The TGA system is very useful in discriminating between different fractions of volatile matter. The relative fractions are more important in determining burning characteristics than the total percentage of volatiles. Ashing must be performed under conditions sufficiently severe to ensure complete combustion of organics. The severity that is required is mainly dependent on the degree of decomposition and sample size. Use of TGA and DSC for studying the combustion of peat provides much more information than the standard proximate analysis. 14 refs.
Mapping Cigarettes Similarities using Cluster Analysis Methods
Directory of Open Access Journals (Sweden)
Lorentz Jäntschi
2007-09-01
The aim of the research was to investigate the relationships and/or occurrences in and between chemical composition information (tar, nicotine, carbon monoxide), market information (brand, manufacturer, price) and public health information (class, health warning), as well as the clustering of a sample of cigarette data. Thirty cigarette brands were analyzed. Six categorical (including cigarette brand, manufacturer, health warnings and class) and four continuous (tar, nicotine and carbon monoxide concentrations, and package price) variables were collected for the investigation of chemical composition, market information and public health information. Multiple linear regression and two clustering techniques were applied. The study yielded interesting findings. The carbon monoxide concentration proved to be linked with the tar and nicotine concentrations. The applied clustering methods identified groups of cigarette brands that show similar characteristics. The tar and carbon monoxide concentrations were the main criteria used in clustering. Analysis of a larger sample could reveal more relevant and useful information regarding the similarities between cigarette brands.
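A minimal sketch of the clustering step, assuming a plain k-means on the three chemical variables. The tar/nicotine/CO values below are invented, not the paper's data, and the initialization is deliberately naive:

```python
import numpy as np

def kmeans(X, k, iters=100):
    # Naive k-means: take every (n//k)-th point as an initial center,
    # then alternate assignment and center updates (Lloyd's algorithm)
    centers = X[:: max(len(X) // k, 1)][:k].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) for j in range(k)])
    return labels, centers

# Invented tar, nicotine and CO values (mg per cigarette) for six brands
X = np.array([[1.0, 0.1, 2.0], [1.2, 0.1, 2.2], [8.0, 0.7, 9.0],
              [8.5, 0.8, 9.5], [14.0, 1.2, 15.0], [13.5, 1.1, 14.5]])
labels, centers = kmeans(X, k=3)
```

On these toy values the three "light / medium / strong" pairs fall into three separate clusters, mirroring the abstract's observation that tar and CO dominate the grouping.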
An Analysis of the SURF Method
Directory of Open Access Journals (Sweden)
Edouard Oyallon
2015-07-01
The SURF method (Speeded Up Robust Features) is a fast and robust algorithm for local, similarity-invariant representation and comparison of images. Similarly to many other local descriptor-based approaches, interest points of a given image are defined as salient features from a scale-invariant representation. Such a multiple-scale analysis is provided by the convolution of the initial image with discrete kernels at several scales (box filters). The second step consists in building orientation-invariant descriptors, using local gradient statistics (intensity and orientation). The main interest of the SURF approach lies in its fast computation of operators using box filters, thus enabling real-time applications such as tracking and object recognition. The SURF framework described in this paper is based on the PhD thesis of H. Bay [ETH Zurich, 2009], and more specifically on the paper co-written by H. Bay, A. Ess, T. Tuytelaars and L. Van Gool [Computer Vision and Image Understanding, 110 (2008), pp. 346–359]. An implementation is proposed and used to illustrate the approach for image matching. A short comparison with a state-of-the-art approach is also presented: the SIFT algorithm of D. Lowe [International Journal of Computer Vision, 60 (2004), pp. 91–110], with which SURF has much in common.
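The box-filter speed highlighted above comes from the integral image: once it is computed, the sum over any axis-aligned rectangle costs four array lookups, regardless of the rectangle's size. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    # ii[r, c] = sum of img[:r+1, :c+1]
    return img.cumsum(0).cumsum(1)

def box_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1+1, c0:c1+1] via four lookups in the integral image
    s = ii[r1, c1]
    if r0 > 0:
        s -= ii[r0 - 1, c1]
    if c0 > 0:
        s -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
```

Because the cost is constant in the filter size, SURF can evaluate its large box-filter approximations of Gaussian derivatives at every scale for the same price.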
Transient Analysis of Air-Core Coils by Moment Method
Fujita, Akira; Kato, Shohei; Hirai, Takao; Okabe, Shigemitu
In electric power systems the threat of lightning surges is reduced by using ground wires and arresters, but the risk of transformer failure is still high. The winding is the most common conductor configuration in electromagnetic components such as transformers, resistors and reactance devices. It is therefore important to investigate how a lightning surge propagates into a winding, but the electromagnetic coupling within a winding makes lightning surge analysis difficult. In this paper we present a transient characteristics analysis of air-core coils by the moment method in the frequency domain. We calculate the inductance from the time response and the low-frequency impedance, and compare the results with the analytical equation based on the Nagaoka factor.
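The analytical reference the authors compare against is the classical solenoid formula corrected by the Nagaoka factor. A sketch using Wheeler's well-known approximation of that factor; the coil dimensions are invented:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def air_coil_inductance(n_turns, radius, length):
    # Solenoid formula L = K * mu0 * N^2 * A / l, where K is the Nagaoka
    # factor; here K is taken from Wheeler's approximation K = 1/(1 + 0.9 r/l),
    # good to about 1% for coils that are not too short
    k_nagaoka = 1.0 / (1.0 + 0.9 * radius / length)
    area = math.pi * radius ** 2
    return k_nagaoka * MU0 * n_turns ** 2 * area / length

# Invented example: 100 turns, 1 cm radius, 10 cm long -> roughly 36 uH
L = air_coil_inductance(100, 0.01, 0.1)
```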
Analysis on total factor productivity of Chinese provincial economy
Institute of Scientific and Technical Information of China (English)
GUO Qingwang; ZHAO Zhiyun; JIA Junxue
2006-01-01
This paper applies the nonparametric DEA-Malmquist index approach to estimate total factor productivity growth, efficiency change and the rate of technological progress from 1979 to 2003, in order to analyse the total factor productivity of China's provincial economy. The evolution of the distribution dynamics of relative labor productivity, relative total factor productivity, relative efficiency and relative technological progress is analysed using kernel density estimation for the period from 1979 to 2003 in 29 provinces of China. Our analysis indicates that disparities in provincial economic growth are large and have been increasing, owing to the relatively large and increasing disparities in total factor productivity growth, especially the rate of technological progress.
Critical Security Methods : New Frameworks for Analysis
Voelkner, Nadine; Huysmans, Jef; Claudia, Aradau; Neal, Andrew
2015-01-01
Critical Security Methods offers a new approach to research methods in critical security studies. It argues that methods are not simply tools to bridge the gap between security theory and security practice. Rather, to practise methods critically means engaging in a more free and experimental interplay ...
Computational analysis of methods for reduction of induced drag
Janus, J. M.; Chatterjee, Animesh; Cave, Chris
1993-01-01
The purpose of this effort was to perform a computational flow analysis of a design concept centered around induced drag reduction and tip-vortex energy recovery. The flow model solves the unsteady three-dimensional Euler equations, discretized as a finite-volume method, utilizing a high-resolution approximate Riemann solver for cell interface flux definitions. The numerical scheme is an approximately-factored block LU implicit Newton iterative-refinement method. Multiblock domain decomposition is used to partition the field into an ordered arrangement of blocks. Three configurations are analyzed: a baseline fuselage-wing, a fuselage-wing-nacelle, and a fuselage-wing-nacelle-propfan. Aerodynamic force coefficients, propfan performance coefficients, and flowfield maps are used to qualitatively assess design efficacy. Where appropriate, comparisons are made with available experimental data.
Coarse Analysis of Microscopic Models using Equation-Free Methods
DEFF Research Database (Denmark)
Marschler, Christian
... high-dimensional models. The goal of this thesis is to investigate such high-dimensional multiscale models and extract relevant low-dimensional information from them. Recently developed mathematical tools allow one to reach this goal: a combination of so-called equation-free methods with numerical bifurcation analysis. ... Applications include the learning behavior in the barn owl's auditory system, traffic jam formation in an optimal velocity model for circular car traffic, and oscillating behavior of pedestrian groups in a counter-flow through a corridor with a narrow door. The methods do not only quantify interesting properties ... factor for the complexity of models, e.g., in real-time applications. With the increasing amount of data generated by computer simulations, a challenge is to extract valuable information from the models in order to help scientists and managers in a decision-making process. Although the dynamics ...
Basic methods of linear functional analysis
Pryce, John D
2011-01-01
Introduction to the themes of mathematical analysis, geared toward advanced undergraduate and graduate students. Topics include operators, function spaces, Hilbert spaces, and elementary Fourier analysis. Numerous exercises and worked examples. 1973 edition.
Zero modes method and form factors in quantum integrable models
Directory of Open Access Journals (Sweden)
S. Pakuliak
2015-04-01
We study integrable models solvable by the nested algebraic Bethe ansatz and possessing a GL(3)-invariant R-matrix. Assuming that the monodromy matrix of the model can be expanded into a series with respect to the inverse spectral parameter, we define the zero modes of the monodromy matrix entries as the first nontrivial coefficients of this series. Using these zero modes we establish new relations between form factors of the elements of the monodromy matrix. We prove that all of them can be obtained from the form factor of a diagonal matrix element in special limits of the Bethe parameters. As a result we obtain determinant representations for the form factors of all the entries of the monodromy matrix.
Smith, Michael; Wallace, Ken; Lewis, Loretta; Wagner, Christian
2015-11-01
The high level of uncertainty inherent in natural resource management requires planners to apply comprehensive risk analyses, often in situations where there are few resources. In this paper, we demonstrate a broadly applicable, novel and structured elicitation approach to identify important direct risk factors. This new approach combines expert calibration and fuzzy based mathematics to capture and aggregate subjective expert estimates of the likelihood that a set of direct risk factors will cause management failure. A specific case study is used to demonstrate the approach; however, the described methods are widely applicable in risk analysis. For the case study, the management target was to retain all species that characterise a set of natural biological elements. The analysis was bounded by the spatial distribution of the biological elements under consideration and a 20-year time frame. Fourteen biological elements were expected to be at risk. Eleven important direct risk factors were identified that related to surrounding land use practices, climate change, problem species (e.g., feral predators), fire and hydrological change. In terms of their overall influence, the two most important risk factors were salinisation and a lack of water which together pose a considerable threat to the survival of nine biological elements. The described approach successfully overcame two concerns arising from previous risk analysis work: (1) the lack of an intuitive, yet comprehensive scoring method enabling the detection and clarification of expert agreement and associated levels of uncertainty; and (2) the ease with which results can be interpreted and communicated while preserving a rich level of detail essential for informed decision making.
Institute of Scientific and Technical Information of China (English)
2011-01-01
By using the factor analysis method and establishing an analysis indicator system covering four aspects (crop production, poultry farming, rural life and township enterprises), the differences, features and types of factors influencing rural environmental pollution in the hilly area of Sichuan Province, China are analysed. Results show that the major factor influencing rural environmental pollution in the study area is livestock and poultry breeding, followed by crop planting, rural life and township enterprises. Hence, future pollution prevention and control should start with livestock and poultry breeding. Meanwhile, attention should be paid to the prevention and control of rural environmental pollution caused by rural life and township enterprise production.
Advanced Software Methods for Physics Analysis
Lista, L.
2006-01-01
Unprecedented data analysis complexity is experienced in modern High Energy Physics experiments. The complexity arises from the growing size of recorded data samples, the large number of data analyses performed by different users in each single experiment, and the level of complexity of each single analysis. For this reason, the requirements on software for data analysis impose a very high level of reliability. We present two concrete examples: the former from BaBar experience with the migration to a new Analysis Model with the definition of a new model for the Event Data Store, the latter about a toolkit for multivariate statistical and parametric Monte Carlo analysis developed using generic programming.
Bearing capacity analysis using the method of characteristics
Institute of Scientific and Technical Information of China (English)
Jian-Ping Sun; Zhi-Ye Zhao; Yi-Pik Cheng
2013-01-01
Using the method of characteristics, the bearing capacity of a strip footing is analyzed. The method of characteristics leads to an exact true limit load when the calculations of the three terms in the bearing capacity formula are consistent with one collapse mechanism and the soil satisfies the associated flow rule. At the same time, the method of characteristics avoids the assumption of arbitrary slip surfaces, and produces zones within which equilibrium and plastic yield are simultaneously satisfied for given boundary stresses. The exact solution without the superposition approximation can still be expressed by Terzaghi's bearing capacity equation, in which the bearing capacity factor Nγλ depends on the dimensionless parameter λ and the friction angle φ. The influence of groundwater on the bearing capacity of a shallow strip footing is considered, which indicates that when the groundwater effect is taken into account, the error induced by the superposition approximation is reduced compared with the dry soil condition. The results are presented in the form of charts which give the modified value (Nγλw/Nγλ) of the bearing capacity factor. Finally, an approximate analytical expression, which gives results in close agreement with those obtained by the numerical analysis in this paper, is suggested for practical application.
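Terzaghi's superposition formula referred to above can be sketched numerically. The code uses the classical Prandtl-Reissner expressions for Nc and Nq and Vesic's expression for Nγ (not the paper's Nγλ factor, which additionally depends on λ); the soil properties are invented:

```python
import math

def bearing_capacity_factors(phi_deg):
    # Prandtl-Reissner factors for Nq and Nc, plus Vesic (1973) for N_gamma
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    nc = (nq - 1) / math.tan(phi)
    ng = 2 * (nq + 1) * math.tan(phi)
    return nc, nq, ng

def ultimate_bearing(c, gamma, B, D, phi_deg):
    # Terzaghi-form superposition: q_u = c*Nc + q*Nq + 0.5*gamma*B*Ngamma
    nc, nq, ng = bearing_capacity_factors(phi_deg)
    q = gamma * D  # surcharge from embedment depth D
    return c * nc + q * nq + 0.5 * gamma * B * ng

# Invented example: c = 10 kPa, gamma = 18 kN/m3, B = 2 m, D = 1 m, phi = 30 deg
nc, nq, ng = bearing_capacity_factors(30)
qu = ultimate_bearing(c=10e3, gamma=18e3, B=2.0, D=1.0, phi_deg=30)
```

The superposition of the three terms is exactly the approximation whose error the abstract says the method of characteristics avoids.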
Context-specific Factors and Contraceptive Use: A Mixed Method ...
African Journals Online (AJOL)
USER
The study aimed to outline context-specific factors associated ... Male partner support can drive cultural sensitivities towards accepting use of ... Despite the increase in global contraceptive use, ... An assessment of barriers ... collection were simultaneously carried out: ... decisions, perceptions and gender dynamics among ...
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, "quasi-exact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other figures of merit of wind turbine performance may be derived from the analytical approach. The third, "approximate" approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, supporting the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
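The "analytic" approach boils down to averaging the power curve against the wind-speed density. A sketch with a Rayleigh density and a simple cubic-ramp power curve; the cut-in, rated and cut-out speeds below are invented, generic values, not the case-study turbine:

```python
import math

def rayleigh_pdf(v, v_mean):
    # Rayleigh wind-speed density parameterized by the mean speed:
    # mean = s * sqrt(pi) / 2, so s = 2 * v_mean / sqrt(pi)
    s = 2 * v_mean / math.sqrt(math.pi)
    return (2 * v / s ** 2) * math.exp(-((v / s) ** 2))

def power(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2.0e6):
    # Piecewise power curve: cubic ramp between cut-in and rated speed
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)

def capacity_factor(v_mean, p_rated=2.0e6, dv=0.01):
    # CF = E[P(v)] / P_rated, integrated numerically over the wind density
    mean_p = sum(power(v) * rayleigh_pdf(v, v_mean) * dv
                 for v in (i * dv for i in range(1, int(30 / dv))))
    return mean_p / p_rated

cf = capacity_factor(7.0)  # mean wind speed of 7 m/s
```

With a closed-form density, the same integral can be carried out analytically, which is exactly what makes the second approach attractive for deriving further performance measures.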
Factor analysis of processes of corporate culture formation at industrial enterprises of Ukraine
Directory of Open Access Journals (Sweden)
Illiashenko Sergii
2016-06-01
The authors have analyzed and synthesized the features of the formation and development of corporate culture at industrial enterprises of Ukraine and, on this basis, developed recommendations for its use in the management of strategic development. The following general scientific methods were used: the logical generalization method for studying the patterns of interaction between national culture, corporate culture and the culture of the individual; factor analysis for determining the factors influencing corporate culture formation, by level of occurrence; and the comparative method for trend analysis of corporate culture development at the appropriate levels. The results of the analysis show that macro- and microfactors are external, while mesofactors (adaptability of business and corporate governance, corporate ethics, corporate social responsibility and personnel policies, corporate finance) are internal to an enterprise. The authors have identified areas for each of the factors, itemized obstacles to the establishment and development of corporate culture at Ukrainian industrial enterprises, and proposed recommendations for managing these processes.
Yang, Haixuan; Seoighe, Cathal
2016-01-01
Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm. PMID:27741311
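The max-norm normalization discussed above exploits the scaling ambiguity of NMF: any column of W can be rescaled as long as the matching row of H is rescaled inversely, leaving the product unchanged. A sketch with plain Lee-Seung multiplicative updates on random data (not the gene expression data of the paper):

```python
import numpy as np

def nmf(V, k, iters=500, seed=0):
    # Lee-Seung multiplicative updates for V ~= W @ H, all entries nonnegative
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W, H = rng.random((n, k)) + 0.1, rng.random((k, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def max_norm_columns(W, H):
    # Resolve the scaling ambiguity: divide each column of W by its maximum,
    # multiplying the matching row of H so that W @ H is unchanged
    scale = W.max(axis=0)
    return W / scale, H * scale[:, None]

V = np.random.default_rng(1).random((20, 10))
W, H = nmf(V, 3)
Wn, Hn = max_norm_columns(W, H)
```

After normalization every column of Wn peaks at 1, so cluster assignments read directly off the dominant entries without the arbitrary scaling affecting the comparison.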
Jolita Krumplytė
2011-01-01
In the scientific literature the shadow economy is defined as a complex, multifaceted set of phenomena whose existence is determined by various factors and causes. The article examines the shadow economy from a tax administration perspective. The author’s chosen ...
The application of Positive Matrix Factorization (PMF) to eco-efficiency analysis.
Wu, Jiaying; Wu, Zhijun; Holländer, Robert
2012-05-15
A new method for weighting and aggregating eco-efficiency indicators is of the utmost importance if researchers in the field are to provide simplified and physically meaningful information to policy makers. To date, there is still considerable debate over which weighting and aggregation methods to use in this context. We apply a new variant of factor analysis, Positive Matrix Factorization (PMF), to a simple eco-efficiency analysis case study. PMF constrains its solutions to be non-negative, providing two important advantages over traditional factor analysis (FA) or principal component analysis (PCA): the rotational ambiguity of the solution space is reduced, and all the results are guaranteed to be physically meaningful. We suggest that PMF is a better choice than either FA or PCA for eco-efficiency indicators, especially when dealing with complex socio-economic and environmental data.
Identification of noise in linear data sets by factor analysis
Energy Technology Data Exchange (ETDEWEB)
Roscoe, B.A.; Hopke, P.K.
1981-01-01
The approach to classical factor analysis described in this paper, i.e. performing the analysis for varying numbers of factors without prior assumptions about the number of factors, prevents one from obtaining erroneous results due to assumptions inherent in the computer code. Identification of a factor containing most of the variance of one variable with little variance of other variables pinpoints a possible difficulty in the data, if the singularity has no obvious physical significance. Examination of the factor scores will determine whether the problem is isolated to a few samples or spread over all the samples. Having this information, one may then go back to the raw data and take the appropriate corrective action. Classical factor analysis has the ability to identify several types of errors in data after it has been generated. It is therefore ideally suited for scanning large data sets. The ease of the identification technique makes it a beneficial tool to use before the reduction and analysis of large data sets and should, in the long run, save time and effort.
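The diagnostic described — a factor that carries most of one variable's variance and little of the others' — can be sketched as a scan of correlation-matrix eigenvectors for "singleton" components. The data below are synthetic (three correlated variables plus one pure-noise variable), and the 0.8 concentration threshold is an arbitrary illustration choice:

```python
import numpy as np

def singleton_factors(X, thresh=0.8):
    # Eigendecompose the correlation matrix and flag components whose
    # squared loadings are concentrated (> thresh) on a single variable
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)
    flagged = []
    for j in range(len(vals)):
        load2 = vecs[:, j] ** 2  # unit eigenvector: entries sum to 1
        if load2.max() > thresh:
            flagged.append((int(np.argmax(load2)), float(vals[j])))
    return flagged

rng = np.random.default_rng(0)
common = rng.normal(size=(200, 1))
X = np.hstack([common + 0.1 * rng.normal(size=(200, 3)),  # 3 correlated vars
               rng.normal(size=(200, 1))])                # 1 pure-noise var
flags = singleton_factors(X)
```

Here the flagged component points at the noise variable, which is the kind of singularity the abstract suggests inspecting in the raw data.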
Using exploratory factor analysis in personality research: Best-practice recommendations
Directory of Open Access Journals (Sweden)
Sumaya Laher
2010-03-01
Orientation: Exploratory factor analysis is the method of choice with objective personality instruments, particularly to establish the construct validity and construct equivalence of trait-based instruments. Research purpose: This article presents more objective methods to determine the number of factors, most notably parallel analysis and Velicer’s minimum average partial (MAP). The benefits of rotation are also discussed. The article argues for more consistent use of Procrustes rotation and congruence coefficients in factor analytic studies. Motivation for the study: Exploratory factor analysis is often criticised for not being rigorous and objective enough in terms of the methods used to determine the number of factors, the rotations to be used and, ultimately, the validity of the factor structure. Research design, approach and method: The article adopts a theoretical stance to discuss the best-practice recommendations for factor analytic research in the field of psychology. Following this, an example located within personality assessment, using the NEO-PI-R specifically, is presented. A total of 425 students at the University of the Witwatersrand completed the NEO-PI-R. These responses were subjected to a principal components analysis using varimax rotation. The rotated solution was subjected to a Procrustes rotation with Costa and McCrae’s (1992) matrix as the target matrix. Congruence coefficients were also computed. Main findings: The example indicates the use of the methods recommended in the article and demonstrates an objective way of determining the number of factors. It also provides an example of Procrustes rotation with coefficients of agreement as an indication of how factor analytic results may be presented more rigorously in local research. Practical/managerial implications: It is hoped that the recommendations in this article will have best-practice implications for both researchers and practitioners in the field who employ factor analysis.
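Parallel analysis, one of the objective factor-retention methods recommended above, keeps only those factors whose eigenvalues exceed the eigenvalues obtained from random data of the same shape. A minimal NumPy sketch, with simulated two-factor data (the sample sizes, loadings, and 95th-percentile criterion are illustrative assumptions):

```python
import numpy as np

def parallel_analysis(X, n_sims=100, seed=0):
    """Horn's parallel analysis: count observed eigenvalues exceeding
    the 95th percentile of eigenvalues from random normal data."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        Z = rng.normal(size=(n, p))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    threshold = np.percentile(sims, 95, axis=0)
    return int(np.sum(obs > threshold))

# Two correlated blocks of three items each -> a two-factor structure.
rng = np.random.default_rng(2)
n = 400
f = rng.normal(size=(n, 2))
load = np.zeros((2, 6))
load[0, :3] = 0.8   # factor 1 drives items 1-3
load[1, 3:] = 0.8   # factor 2 drives items 4-6
X = f @ load + 0.5 * rng.normal(size=(n, 6))
print(parallel_analysis(X))  # typically 2 for this simulated structure
```

Unlike the Kaiser eigenvalue-greater-than-one rule, the retention threshold here adapts to the sample size and number of items, which is precisely why parallel analysis is considered more objective.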
Solving Generalised Riccati Differential Equations by Homotopy Analysis Method
Directory of Open Access Journals (Sweden)
J. Vahidi
2013-07-01
In this paper, the quadratic Riccati differential equation is solved by means of an analytic technique, namely the homotopy analysis method (HAM). Comparisons are made between the homotopy analysis method, Adomian’s decomposition method (ADM), and the exact solution. The results reveal that the proposed method is very effective and simple.
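As a point of reference for such comparisons (this is a numerical check, not the paper's HAM derivation): the quadratic Riccati equation y' = 1 + y², y(0) = 0, has the exact solution y = tan(t), so any approximate solver can be validated against the closed form. A plain RK4 stepper in pure Python:

```python
import math

def rk4(f, y0, t0, t1, steps):
    """Classical fourth-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

riccati = lambda t, y: 1 + y * y        # quadratic Riccati right-hand side
approx = rk4(riccati, 0.0, 0.0, 1.0, 1000)
exact = math.tan(1.0)                   # closed-form solution at t = 1
print(abs(approx - exact))              # small truncation error
```

Series methods such as HAM or ADM would be benchmarked against the same closed form, typically by comparing truncated-series values at fixed points in t.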
21 CFR 163.5 - Methods of analysis.
2010-04-01
... CONSUMPTION CACAO PRODUCTS General Provisions § 163.5 Methods of analysis. Shell and cacao fat content in cacao products shall be determined by the following methods of analysis prescribed in “Official Methods..._locations.html. (a) Shell content—12th ed. (1975), methods 13.010-13.014, under the heading “Shell in...
Biological risk factors for suicidal behaviors: a meta-analysis.
Chang, B P; Franklin, J C; Ribeiro, J D; Fox, K R; Bentley, K H; Kleiman, E M; Nock, M K
2016-09-13
Prior studies have proposed a wide range of potential biological risk factors for future suicidal behaviors. Although strong evidence exists for biological correlates of suicidal behaviors, it remains unclear if these correlates are also risk factors for suicidal behaviors. We performed a meta-analysis to integrate the existing literature on biological risk factors for suicidal behaviors and to determine their statistical significance. We conducted a systematic search of PubMed, PsycInfo and Google Scholar for studies that used a biological factor to predict either suicide attempt or death by suicide. Inclusion criteria included studies with at least one longitudinal analysis using a biological factor to predict either of these outcomes in any population through 2015. From an initial screen of 2541 studies, we identified 94 cases. Random effects models were used for both meta-analyses and meta-regression. The combined effect of biological factors produced statistically significant but relatively weak prediction of suicide attempts (weighted mean odds ratio (wOR)=1.41; CI: 1.09-1.81) and suicide death (wOR=1.28; CI: 1.13-1.45). After accounting for publication bias, prediction was nonsignificant for both suicide attempts and suicide death. Only two factors remained significant after accounting for publication bias: cytokines (wOR=2.87; CI: 1.40-5.93) and low levels of fish oil nutrients (wOR=1.09; CI: 1.01-1.19). Our meta-analysis revealed that currently known biological factors are weak predictors of future suicidal behaviors. This conclusion should be interpreted within the context of the limitations of the existing literature, including long follow-up intervals and a lack of tests of interactions with other risk factors. Future studies addressing these limitations may more effectively test for potential biological risk factors.
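The random-effects pooling behind weighted mean odds ratios like those above can be sketched with the DerSimonian–Laird estimator on log odds ratios. The study effect sizes and standard errors below are made up for illustration; they are not the paper's data.

```python
import math

# Hypothetical per-study log odds ratios and their standard errors.
log_or = [0.40, 0.10, 0.65, 0.25, -0.05]
se = [0.20, 0.15, 0.30, 0.25, 0.18]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1 / s ** 2 for s in se]
fe = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)

# Cochran's Q heterogeneity statistic and DerSimonian-Laird tau^2.
Q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_or))
df = len(log_or) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)  # between-study variance, floored at zero

# Random-effects weights fold tau^2 into each study's variance.
w_re = [1 / (s ** 2 + tau2) for s in se]
re = sum(wi * y for wi, y in zip(w_re, log_or)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
lo, hi = re - 1.96 * se_re, re + 1.96 * se_re
print(f"pooled OR = {math.exp(re):.2f}, 95% CI [{math.exp(lo):.2f}, {math.exp(hi):.2f}]")
```

Pooling on the log scale and exponentiating at the end is what yields a weighted mean odds ratio with an asymmetric confidence interval, as reported in the abstract.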