WorldWideScience

Sample records for factor analysis method

  1. A comparison of confirmatory factor analysis methods: Oblique multiple group method versus confirmatory common factor method

    NARCIS (Netherlands)

    Stuive, Ilse

    2007-01-01

    Confirmatory Factor Analysis (CFA) is a frequently used method when researchers have a specific assumption about the assignment of items to one or more subtests and want to investigate whether this assignment is also supported by the collected research data. The most commonly used

  2. Comparative Analysis Of Dempster Shafer Method With Certainty Factor Method For Diagnose Stroke Diseases

    Directory of Open Access Journals (Sweden)

    Erwin Kuit Panggabean

    2018-02-01

    Full Text Available The development of artificial intelligence technology has allowed expert systems to be applied in detecting disease using programming languages. One application is providing information about a disease that has recently become a major concern in Indonesian society, namely stroke. The expert system methods used are Dempster-Shafer and certainty factor, and this study compares the two methods for stroke diagnosis. Based on the analysis results, the certainty factor method is better than Dempster-Shafer and more accurate in handling the knowledge representation of stroke disease according to the disease symptoms obtained from a hospital in Medan city, reflecting the distinct algorithms underlying the two methods.

  3. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well-known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
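    The same EM-based idea can be sketched outside SPSS. The Python fragment below is a minimal, illustrative EM estimator of the mean vector and covariance matrix for multivariate-normal data with values missing at random; the resulting EM correlation matrix can then be fed to any factor-analysis routine that accepts a correlation matrix as input. It is not the authors' SPSS macro, and the function name and defaults are hypothetical.

```python
import numpy as np

def em_mean_cov(X, n_iter=200, tol=1e-8):
    """EM estimates of mean and covariance for normal data with NaN-coded
    missing values (rows that are entirely missing should be dropped first)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        sum_x, sum_xx = np.zeros(p), np.zeros((p, p))
        for row in X:
            miss, obs = np.isnan(row), ~np.isnan(row)
            x, c = row.copy(), np.zeros((p, p))
            if miss.any():
                s_oo = sigma[np.ix_(obs, obs)]
                s_mo = sigma[np.ix_(miss, obs)]
                w = np.linalg.solve(s_oo, row[obs] - mu[obs])
                x[miss] = mu[miss] + s_mo @ w            # conditional mean of missing block
                c[np.ix_(miss, miss)] = (sigma[np.ix_(miss, miss)]
                                         - s_mo @ np.linalg.solve(s_oo, s_mo.T))
            sum_x += x
            sum_xx += np.outer(x, x) + c                 # E[x x^T] = x_hat x_hat^T + cond. cov
        mu_new = sum_x / n
        sigma_new = sum_xx / n - np.outer(mu_new, mu_new)
        converged = np.max(np.abs(sigma_new - sigma)) < tol
        mu, sigma = mu_new, sigma_new
        if converged:
            break
    d = np.sqrt(np.diag(sigma))
    em_corr = sigma / np.outer(d, d)                     # EM correlation matrix for EFA input
    return mu, sigma, em_corr
```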

  4. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods of integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
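    For orientation, the classical first-order differential method of factor analysis and the Caputo fractional derivative, the standard tool for differentiation of non-integral order with memory, can be written as follows. These are generic textbook forms given only as background; they are not necessarily the exact construction used by the authors.

```latex
% Classical (first-order) differential method: contribution of factor a
% to the change of the indicator y = f(a, b)
\Delta y_a \approx \left.\frac{\partial f}{\partial a}\right|_{(a_0,\,b_0)} \Delta a

% Caputo derivative of non-integral order \alpha (0 < \alpha < 1), which
% weights the whole history of the factor and thereby models memory effects
\left({}^{C}\!D^{\alpha}_{a_0} f\right)(a)
  = \frac{1}{\Gamma(1-\alpha)} \int_{a_0}^{a} \frac{f'(t)}{(a-t)^{\alpha}}\, dt
```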

  5. Methods of selecting factors in the analysis of the real estates market

    OpenAIRE

    Jasińska, Elżbieta; Preweda, Edward

    2006-01-01

    In the paper the problem of selecting the method of choosing factors in factor analysis is presented. For a database of 61 real estate properties, the process of singling out the factors was carried out with the use of all the methods proposed in the STATISTICA 6.0 package. Particular attention was paid to the number of factors distinguished and the efficiency of the subsequent methods for the analysis of the real estate market.

  6. A Comparison of Distribution Free and Non-Distribution Free Factor Analysis Methods

    Science.gov (United States)

    Ritter, Nicola L.

    2012-01-01

    Many researchers recognize that factor analysis can be conducted on both correlation matrices and variance-covariance matrices. Although most researchers extract factors from non-distribution free or parametric methods, researchers can also extract factors from distribution free or non-parametric methods. The nature of the data dictates the method…

  7. [A factor analysis method for contingency table data with unlimited multiple choice questions].

    Science.gov (United States)

    Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie

    2016-02-01

    The purpose of this study is to propose a method of factor analysis for analyzing contingency tables developed from the data of unlimited multiple-choice questions. This method assumes that the element of each cell of the contingency table has a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, i.e., the standardized difference between the sample and predicted proportions, is used to select items. The proposed method was applied to real product impression research data on advertised chips and energy drinks. The results of the analysis showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.

  8. Qualitative and quantitative methods for human factor analysis and assessment in NPP. Investigations and results

    International Nuclear Information System (INIS)

    Hristova, R.; Kalchev, B.; Atanasov, D.

    2005-01-01

    We consider here two basic groups of methods for analysis and assessment of the human factor in the NPP area and give some results from performed analyses as well. The human factor is the human interaction with the design equipment and with the working environment, taking into account human capabilities and limits. Within the qualitative methods for human factor analysis, concepts and structural methods for classifying information connected with the human factor are considered. Emphasis is given to the HPES method for human factor analysis in NPP. Methods for quantitative assessment of human reliability are considered. These methods allow probabilities to be assigned to the elements of the already structured information about human performance. This part includes an overview of classical methods for human reliability assessment (HRA, THERP), and methods taking into account specific information about human capabilities and limits and about the man-machine interface (CHR, HEART, ATHEANA). Quantitative and qualitative results concerning human factor influence on the occurrence of initiating events in the Kozloduy NPP are presented. (authors)

  9. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  10. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  11. Quantitative EDXS analysis of organic materials using the ζ-factor method

    International Nuclear Information System (INIS)

    Fladischer, Stefanie; Grogger, Werner

    2014-01-01

    In this study we successfully applied the ζ-factor method to perform quantitative X-ray analysis of organic thin films consisting of light elements. With its ability to intrinsically correct for X-ray absorption, this method significantly improved the quality of the quantification as well as the accuracy of the results compared to conventional techniques in particular regarding the quantification of light elements. We describe in detail the process of determining sensitivity factors (ζ-factors) using a single standard specimen and the involved parameter optimization for the estimation of ζ-factors for elements not contained in the standard. The ζ-factor method was then applied to perform quantitative analysis of organic semiconducting materials frequently used in organic electronics. Finally, the results were verified and discussed concerning validity and accuracy. - Highlights: • The ζ-factor method is used for quantitative EDXS analysis of light elements. • We describe the process of determining ζ-factors from a single standard in detail. • Organic semiconducting materials are successfully quantified
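    In its simplest thin-film form, and neglecting the absorption correction the authors emphasize, the ζ-factor method relates measured characteristic intensities to mass fractions and mass thickness via ρt·C_i = ζ_i·I_i/D_e together with ΣC_i = 1. The sketch below, with hypothetical input values, illustrates that relation only; it is not the authors' implementation.

```python
import numpy as np

def zeta_quantify(intensities, zeta_factors, dose):
    """Simplified thin-film zeta-factor quantification (no absorption correction).

    intensities  -- characteristic X-ray counts I_i for each element
    zeta_factors -- element-specific sensitivity factors zeta_i
    dose         -- total electron dose D_e
    Returns (mass fractions C_i, mass thickness rho*t).
    """
    zi = np.asarray(zeta_factors, float) * np.asarray(intensities, float)
    return zi / zi.sum(), zi.sum() / dose

# hypothetical two-element film
conc, rho_t = zeta_quantify(intensities=[12000, 8000], zeta_factors=[1.2, 0.9], dose=5.0e9)
print(conc, rho_t)
```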

  12. Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement

    Science.gov (United States)

    Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.

    2018-04-01

    Students' academic achievement cannot be separated from the influence of two sets of factors, namely internal and external factors. The internal factors (factors within the student) consist of intelligence (X1), health (X2), interest (X3), and motivation (X4). The external factors consist of family environment (X5), school environment (X6), and society environment (X7). The objects of this research are eighth grade students of the 2016/2017 school year at SMPN 1 Jiwan Madiun, sampled by using simple random sampling. Primary data were obtained by distributing questionnaires. The method used in this study is binary logistic regression analysis, which aims to identify the internal and external factors that affect student achievement and the direction of their effects. Path analysis was used to determine the factors that influence student achievement directly, indirectly or totally. Based on the results of binary logistic regression, the variables that affect student achievement are interest and motivation. Based on the results obtained by path analysis, the factors that have a direct impact on student achievement are students' interest (59%) and students' motivation (27%), while the factors that have indirect influences on student achievement are family environment (97%) and school environment (37%).
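    As an illustration of the binary logistic step described above, the hedged Python sketch below fits a logistic model to synthetic data with seven predictors standing in for X1-X7 and reports odds ratios. The data, achievement threshold, and coefficients are invented for the example, and the path-analysis step is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-ins for X1..X7 (intelligence, health, interest, motivation,
# family, school, society environment); y = 1 for "high" achievement.
X = rng.normal(size=(200, 7))
y = (0.9 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(scale=0.7, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])   # >1 increases, <1 decreases the odds of "high"
print(dict(zip([f"X{i}" for i in range(1, 8)], odds_ratios.round(2))))
```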

  13. Effect of abiotic and biotic stress factors analysis using machine learning methods in zebrafish.

    Science.gov (United States)

    Gutha, Rajasekar; Yarrappagaari, Suresh; Thopireddy, Lavanya; Reddy, Kesireddy Sathyavelu; Saddala, Rajeswara Reddy

    2018-03-01

    In order to understand the mechanisms underlying stress responses, a meta-analysis of transcriptome data was performed to identify differentially expressed genes (DEGs) and their biological, molecular and cellular mechanisms in response to stressors. The present study is aimed at identifying the effects of abiotic and biotic stress factors, and it is found that several stress-responsive genes are common to both abiotic and biotic stress factors in zebrafish. The meta-analysis of microarray studies revealed that almost 4.7%, i.e., 108 common DEGs, are differentially regulated between abiotic and biotic stresses. This shows that there is a global coordination and fine-tuning of gene regulation in response to these two types of challenges. We also applied the dimension reduction methods principal component analysis and partial least squares discriminant analysis, which are able to segregate abiotic and biotic stresses into separate entities. The supervised machine learning model, recursive-support vector machine, could classify abiotic and biotic stresses with 100% accuracy using a subset of DEGs. Besides these methods, the random forests decision tree model classified five out of eight stress conditions with high accuracy. Finally, functional enrichment analysis revealed the gene ontology terms, transcription factors and miRNAs involved in the regulation of stress responses. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Study on Performance Shaping Factors (PSFs) Quantification Method in Human Reliability Analysis (HRA)

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Jang, Inseok Jang; Seong, Poong Hyun; Park, Jinkyun; Kim, Jong Hyun

    2015-01-01

    The purposes of HRA implementation are 1) to achieve the human factors engineering (HFE) design goal of providing operator interfaces that will minimize personnel errors and 2) to conduct an integrated activity to support probabilistic risk assessment (PRA). For these purposes, various HRA methods have been developed, such as the technique for human error rate prediction (THERP), simplified plant analysis risk human reliability assessment (SPAR-H), the cognitive reliability and error analysis method (CREAM) and so on. In performing HRA, the conditions that influence human performance have been represented via several context factors called performance shaping factors (PSFs). PSFs are aspects of the human's individual characteristics, environment, organization, or task that specifically decrement or improve human performance, thus respectively increasing or decreasing the likelihood of human errors. Most HRA methods evaluate the weightings of PSFs by expert judgment, and explicit guidance for evaluating the weightings is not provided. It has been widely known that the performance of the human operator is one of the critical factors determining the safe operation of NPPs. HRA methods have been developed to identify the possibility and mechanism of human errors. In performing HRA methods, the effect of PSFs which may increase or decrease human error should be investigated. However, the effects of PSFs have so far been estimated by expert judgment. Accordingly, in order to estimate the effects of PSFs objectively, a quantitative framework to estimate PSFs by using PSF profiles is introduced in this paper

  15. Identification of advanced human factors engineering analysis, design and evaluation methods

    International Nuclear Information System (INIS)

    Plott, C.; Ronan, A. M.; Laux, L.; Bzostek, J.; Milanski, J.; Scheff, S.

    2006-01-01

    NUREG-0711 Rev. 2, 'Human Factors Engineering Program Review Model,' provides comprehensive guidance to the Nuclear Regulatory Commission (NRC) in assessing the human factors practices employed by license applicants for nuclear power plant control room designs. As software-based human-system interface (HSI) technologies supplant traditional hardware-based technologies, the NRC may encounter new HSI technologies or seemingly unconventional approaches to human factors design, analysis, and evaluation methods which NUREG-0711 does not anticipate. A comprehensive survey was performed to identify advanced human factors engineering analysis, design and evaluation methods, tools, and technologies that the NRC may encounter in near-term future licensee applications. A review was conducted to identify human factors methods, tools, and technologies relevant to each review element of NUREG-0711. Additionally, emerging trends in technology with the potential to impact review elements, such as augmented cognition and various wireless tools and technologies, were identified. The purpose of this paper is to provide an overview of the survey results and to highlight issues that could be revised or adapted to address emerging trends. (authors)

  16. Factors Influencing Achievement in Undergraduate Social Science Research Methods Courses: A Mixed Methods Analysis

    Science.gov (United States)

    Markle, Gail

    2017-01-01

    Undergraduate social science research methods courses tend to have higher than average rates of failure and withdrawal. Lack of success in these courses impedes students' progression through their degree programs and negatively impacts institutional retention and graduation rates. Grounded in adult learning theory, this mixed methods study…

  17. Source apportionment of PAH in Hamilton Harbour suspended sediments: comparison of two factor analysis methods.

    Science.gov (United States)

    Sofowote, Uwayemi M; McCarry, Brian E; Marvin, Christopher H

    2008-08-15

    A total of 26 suspended sediment samples collected over a 5-year period in Hamilton Harbour, Ontario, Canada and surrounding creeks were analyzed for a suite of polycyclic aromatic hydrocarbons and sulfur heterocycles. Hamilton Harbour sediments contain relatively high levels of polycyclic aromatic compounds and heavy metals due to emissions from industrial and mobile sources. Two receptor modeling methods using factor analyses were compared to determine the profiles and relative contributions of pollution sources to the harbor; these methods are principal component analyses (PCA) with multiple linear regression analysis (MLR) and positive matrix factorization (PMF). Both methods identified four factors and gave excellent correlation coefficients between predicted and measured levels of 25 aromatic compounds; both methods predicted similar contributions from coal tar/coal combustion sources to the harbor (19 and 26%, respectively). One PCA factor was identified as contributions from vehicular emissions (61%); PMF was able to differentiate vehicular emissions into two factors, one attributed to gasoline emissions sources (28%) and the other to diesel emissions sources (24%). Overall, PMF afforded better source identification than PCA with MLR. This work constitutes one of the few examples of the application of PMF to the source apportionment of sediments; the addition of sulfur heterocycles to the analyte list greatly aided in the source identification process.
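    A rough flavor of the factorization step can be given in a few lines of Python. The sketch below uses scikit-learn's NMF only as a simple non-negative stand-in for PMF (true PMF additionally weights each matrix entry by its measurement uncertainty), and the concentration matrix is synthetic; sample and compound counts mirror the study, but none of the values are the authors' data.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical samples-by-compounds concentration matrix (26 sediments, 25 compounds).
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(26, 25))

model = NMF(n_components=4, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)       # source contributions per sample
F = model.components_            # source profiles (factor loadings)
percent = 100 * G.sum(axis=0) / G.sum()   # rough share attributed to each factor
print(percent.round(1))
```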

  18. Factor analysis

    CERN Document Server

    Gorsuch, Richard L

    2013-01-01

    Comprehensive and comprehensible, this classic covers the basic and advanced topics essential for using factor analysis as a scientific tool in psychology, education, sociology, and related areas. Emphasizing the usefulness of the techniques, it presents sufficient mathematical background for understanding and sufficient discussion of applications for effective use. This includes not only theory but also the empirical evaluations of the importance of mathematical distinctions for applied scientific analysis.

  19. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
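    A minimal sketch of nonnegativity-constrained alternating least squares, the standard (unbiased) baseline that the bias-offsetting approach described above modifies, is shown below. It factors D ≈ C·Sᵀ using SciPy's NNLS solver and illustrates only the constrained ALS framework, not the patented biased-ALS scheme itself.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_als(D, k, n_iter=100, seed=0):
    """Factor D (m x n) as C (m x k) @ S.T (k x n) with C, S >= 0 via alternating least squares."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    S = np.abs(rng.standard_normal((n, k)))
    C = np.zeros((m, k))
    for _ in range(n_iter):
        for i in range(m):               # solve rows of C with S held fixed
            C[i], _ = nnls(S, D[i])
        for j in range(n):               # solve rows of S with C held fixed
            S[j], _ = nnls(C, D[:, j])
    return C, S
```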

  20. Analyzing the Impacts of Alternated Number of Iterations in Multiple Imputation Method on Explanatory Factor Analysis

    Directory of Open Access Journals (Sweden)

    Duygu KOÇAK

    2017-11-01

    Full Text Available The study aims to identify the effects of the number of iterations used in the multiple imputation method, one of the methods used to cope with missing values, on the results of factor analysis. With this aim, artificial datasets of different sample sizes were created. Missing values at random and missing values completely at random were created in various ratios by deleting data. For the data with values missing at random, a second variable was generated at the ordinal scale level, and datasets with different ratios of missing values were obtained based on the levels of this variable. The data were generated using the “psych” package in R software, while the “dplyr” package was used to create codes that would delete values according to predetermined conditions of the missing value mechanism. Different datasets were generated by applying different iteration numbers. Explanatory factor analysis was conducted on the completed datasets, and the factors and total explained variances are presented. These values were first evaluated against the number of factors and the total variance explained for the complete datasets. The results indicate that the multiple imputation method yields a better performance in cases of values missing at random compared to datasets with values missing completely at random. Also, it was found that increasing the number of iterations in both missing-value conditions decreases the difference from the results obtained with the complete datasets.
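    The manipulation described above can be mimicked in Python (rather than the R packages the authors used) with scikit-learn's iterative imputer, whose max_iter parameter plays the role of the iteration number, followed by a factor extraction. This is only an illustrative stand-in for the study's procedure; the function name and defaults are assumptions.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.decomposition import FactorAnalysis

def efa_after_imputation(X_missing, n_factors, max_iter):
    """Impute NaNs iteratively, then fit a factor model; returns the loadings."""
    X_complete = IterativeImputer(max_iter=max_iter, random_state=0).fit_transform(X_missing)
    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(X_complete)
    return fa.components_

# compare solutions obtained with different iteration numbers, e.g.
# loadings_5  = efa_after_imputation(X_missing, n_factors=3, max_iter=5)
# loadings_50 = efa_after_imputation(X_missing, n_factors=3, max_iter=50)
```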

  1. Factors Analysis And Profit Achievement For Trading Company By Using Rough Set Method

    Directory of Open Access Journals (Sweden)

    Muhammad Ardiansyah Sembiring

    2017-06-01

    Full Text Available This research analyzes the financial reports of a trading company, which are intimately related to the factors that determine the company's profit. The result of this research is new knowledge and the performance of the derived rules. The analysis follows the data mining process and uses the Rough Set method to analyze performance. This research will assist company managers in drawing intact and objective conclusions. The Rough Set method also defines the rule discovery process, starting with the formation of the Decision System, Equivalence Classes, Discernibility Matrix, Discernibility Matrix Modulo D, Reduction, and General Rules. The Rough Set method is an effective model for performing analysis in the company. Keywords: Data Mining, General Rules, Profit, Rough Set.

  2. A probabilistic analysis method to evaluate the effect of human factors on plant safety

    International Nuclear Information System (INIS)

    Ujita, H.

    1987-01-01

    A method to evaluate the effect of human factors on probabilistic safety analysis (PSA) is developed. The main features of the method are as follows: 1. A time-dependent multibranch tree is constructed to treat the time dependency of human error probability. 2. A sensitivity analysis is done to determine uncertainty in the PSA due to the branch time of human error occurrence, the human error data source, the extraneous act probability, and the human recovery probability. The method is applied to a large-break loss-of-coolant accident of a boiling water reactor (BWR-5). As a result, core melt probability and risk do not depend on the number of time branches, which means that a small number of branches is sufficient. These values do depend on the first branch time and the human error probability

  3. Evaluation of Parallel Analysis Methods for Determining the Number of Factors

    Science.gov (United States)

    Crawford, Aaron V.; Green, Samuel B.; Levy, Roy; Lo, Wen-Juo; Scott, Lietta; Svetina, Dubravka; Thompson, Marilyn S.

    2010-01-01

    Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria…
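    A compact sketch of Horn's parallel analysis in its PA-PCA form (observed correlation-matrix eigenvalues compared against eigenvalues from random data, using either the mean or the 95th-percentile criterion) is given below. The principal-axis variant (PA-PAF) studied in the article would additionally replace the correlation diagonal with communality estimates and is not shown; function name and defaults are illustrative.

```python
import numpy as np

def parallel_analysis_pca(data, n_iter=100, percentile=95, seed=0):
    """Horn's parallel analysis with PCA eigenvalues (PA-PCA)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        r = rng.standard_normal((n, p))                      # uncorrelated reference data
        rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    return {
        "mean_criterion": int(np.sum(obs > rand.mean(axis=0))),
        "95th_percentile_criterion": int(np.sum(obs > np.percentile(rand, percentile, axis=0))),
    }
```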

  4. Factors and methods of analysis and estimation of furniture making enterprises competitiveness

    Directory of Open Access Journals (Sweden)

    Vitaliy Aleksandrovich Zhigarev

    2015-06-01

    Full Text Available Objective: to describe the author's methodology for estimating the competitiveness of furniture-making enterprises, with a view to carrying out an economic evaluation of the efficiency of furniture production, evaluating the internal component of furniture production efficiency, and identifying factors influencing the efficiency of furniture-making companies and areas for improving it through improvements in the product range and the production and sales policy of the enterprise. The research subject is modern methods and principles of competitiveness management applicable in a rapidly changing market environment. Methods: in general, the research methodology consists of six stages differentiated by methods, objectives, and required outcomes. The first stage of the research was to study the nature of demand within the target market of a furniture-making enterprise. The second stage was to study the expenditures of a furniture-making enterprise for implementing individual production and sales strategies. The third stage was to study competition in the market. The fourth stage was the analysis of the possibilities of a furniture-making enterprise in producing and selling furniture in terms of combinations of factor values. The fifth stage was the re-examination of demand with a view to its distribution according to the factor space. The final, sixth stage was the processing of data obtained at the previous stages and carrying out the necessary calculations. Results: in general, the above methodology of economic evaluation of the efficiency of furniture production, based on the previously developed model, gives the managers of enterprises an algorithm for assessing both the market and firm-level components of furniture production efficiency, allowing the subsequent identification and evaluation of the efficiency factors and the development of measures to improve furniture production and sale efficiency, as well as assortment rationalization and production and sales policy

  5. SUPERPIXEL BASED FACTOR ANALYSIS AND TARGET TRANSFORMATION METHOD FOR MARTIAN MINERALS DETECTION

    Directory of Open Access Journals (Sweden)

    X. Wu

    2018-04-01

    Full Text Available The factor analysis and target transformation (FATT) is an effective method to test for the presence of a particular mineral on the Martian surface. It has been used both with thermal infrared (Thermal Emission Spectrometer, TES) and near-infrared (Compact Reconnaissance Imaging Spectrometer for Mars, CRISM) hyperspectral data. FATT derives a set of orthogonal eigenvectors from a mixed system and typically selects the first 10 eigenvectors for a least-squares fit to the library mineral spectra. However, minerals present in only a limited number of pixels will be ignored because of their weak spectral features compared with the full image signatures. Here, we propose a superpixel-based FATT method to detect mineral distributions on Mars. The simple linear iterative clustering (SLIC) algorithm is used to partition the CRISM image into multiple connected, spectrally homogeneous image regions, enhancing weak signatures by increasing their proportion in a mixed system. A least-squares fit is used in the target transformation and performed for each region iteratively. Finally, the distribution of the specific minerals in the image is obtained, where a fitting residual below a threshold represents presence and otherwise absence. We validate our method by identifying carbonates in a well-analysed CRISM image of Nili Fossae on Mars. Our experimental results indicate that the proposed method works well on both simulated and real data sets.
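    The workflow can be sketched roughly as follows: SLIC superpixels over the hyperspectral cube, then, per region, the leading eigenvectors of the region's spectra and a least-squares target transformation of a library spectrum, with a residual threshold deciding presence. Everything below (segment count, compactness, eigenvector count, threshold) is an assumed illustrative setting rather than the authors' configuration, and the code assumes scikit-image 0.19 or newer for the channel_axis argument.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_fatt(cube, library_spectrum, n_segments=300, n_eigen=10, threshold=0.05):
    """Sketch of superpixel-based FATT. cube: (rows, cols, bands); library_spectrum: (bands,)."""
    labels = slic(cube, n_segments=n_segments, compactness=0.1, channel_axis=-1)
    detection = np.zeros(cube.shape[:2], dtype=bool)
    for lab in np.unique(labels):
        mask = labels == lab
        pixels = cube[mask]                                   # (n_pix, bands) region spectra
        centered = pixels - pixels.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        E = vt[:min(n_eigen, vt.shape[0])].T                  # leading eigenvectors (bands, k)
        coef, *_ = np.linalg.lstsq(E, library_spectrum, rcond=None)
        rel_residual = (np.linalg.norm(library_spectrum - E @ coef)
                        / np.linalg.norm(library_spectrum))
        detection[mask] = rel_residual < threshold            # presence if the fit is good
    return detection
```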

  6. Prediction of quality attributes of chicken breast fillets by using Vis/NIR spectroscopy combined with factor analysis method

    Science.gov (United States)

    Visible/near-infrared (Vis/NIR) spectroscopy with wavelength range between 400 and 2500 nm combined with factor analysis method was tested to predict quality attributes of chicken breast fillets. Quality attributes, including color (L*, a*, b*), pH, and drip loss were analyzed using factor analysis ...

  7. Quantification method analysis of the relationship between occupant injury and environmental factors in traffic accidents.

    Science.gov (United States)

    Ju, Yong Han; Sohn, So Young

    2011-01-01

    Injury analysis following a vehicle crash is one of the most important research areas. However, most injury analyses have focused on one-dimensional injury variables, such as the AIS (Abbreviated Injury Scale) or the IIS (Injury Impairment Scale), considered one at a time in relation to various traffic accident factors. Such studies cannot reflect the various injury phenomena that appear simultaneously. In this paper, we apply quantification method II to the NASS (National Automotive Sampling System) CDS (Crashworthiness Data System) to find the relationship between the categorical injury phenomena, such as the injury scale, injury position, and injury type, and the various traffic accident condition factors, such as speed, collision direction, vehicle type, and seat position. Our empirical analysis indicated the importance of safety devices, such as restraint equipment and airbags. In addition, we found that narrow impact, ejection, air bag deployment, and higher speed are associated with more severe than minor injury to the thigh, ankle, and leg in terms of dislocation, abrasion, or laceration. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Human factors analysis and design methods for nuclear waste retrieval systems. Human factors design methodology and integration plan

    Energy Technology Data Exchange (ETDEWEB)

    Casey, S.M.

    1980-06-01

    The purpose of this document is to provide an overview of the recommended activities and methods to be employed by a team of human factors engineers during the development of a nuclear waste retrieval system. This system, as it is presently conceptualized, is intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository. This document, and the others in this series, have been developed for the purpose of implementing human factors engineering principles during the design and construction of the retrieval system facilities and equipment. The methodology presented has been structured around a basic systems development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Within each of these phases, the recommended activities of the human engineering team have been stated, along with descriptions of the human factors engineering design techniques applicable to the specific design issues. Explicit examples of how the techniques might be used in the analysis of human tasks and equipment required in the removal of spent fuel canisters have been provided. Only those techniques having possible relevance to the design of the waste retrieval system have been reviewed. This document is intended to provide the framework for integrating human engineering with the rest of the system development effort. The activities and methodologies reviewed in this document have been discussed in the general order in which they will occur, although the time frame (the total duration of the development program in years and months) in which they should be performed has not been discussed.

  9. Human factors analysis and design methods for nuclear waste retrieval systems. Human factors design methodology and integration plan

    International Nuclear Information System (INIS)

    Casey, S.M.

    1980-06-01

    The purpose of this document is to provide an overview of the recommended activities and methods to be employed by a team of human factors engineers during the development of a nuclear waste retrieval system. This system, as it is presently conceptualized, is intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository. This document, and the others in this series, have been developed for the purpose of implementing human factors engineering principles during the design and construction of the retrieval system facilities and equipment. The methodology presented has been structured around a basic systems development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Within each of these phases, the recommended activities of the human engineering team have been stated, along with descriptions of the human factors engineering design techniques applicable to the specific design issues. Explicit examples of how the techniques might be used in the analysis of human tasks and equipment required in the removal of spent fuel canisters have been provided. Only those techniques having possible relevance to the design of the waste retrieval system have been reviewed. This document is intended to provide the framework for integrating human engineering with the rest of the system development effort. The activities and methodologies reviewed in this document have been discussed in the general order in which they will occur, although the time frame (the total duration of the development program in years and months) in which they should be performed has not been discussed

  10. Reliability Analysis of a Composite Wind Turbine Blade Section Using the Model Correction Factor Method: Numerical Study and Validation

    DEFF Research Database (Denmark)

    Dimitrov, Nikolay Krasimirov; Friis-Hansen, Peter; Berggreen, Christian

    2013-01-01

    by the composite failure criteria. Each failure mode has been considered in a separate component reliability analysis, followed by a system analysis which gives the total probability of failure of the structure. The Model Correction Factor method used in connection with FORM (First-Order Reliability Method) proved...

  11. Franck-Condon Factors for Diatomics: Insights and Analysis Using the Fourier Grid Hamiltonian Method

    Science.gov (United States)

    Ghosh, Supriya; Dixit, Mayank Kumar; Bhattacharyya, S. P.; Tembe, B. L.

    2013-01-01

    Franck-Condon factors (FCFs) play a crucial role in determining the intensities of the vibrational bands in electronic transitions. In this article, a relatively simple method to calculate the FCFs is illustrated. An algorithm for the Fourier Grid Hamiltonian (FGH) method for computing the vibrational wave functions and the corresponding energy…

  12. Assessment of modern methods of human factor reliability analysis in PSA studies

    International Nuclear Information System (INIS)

    Holy, J.

    2001-12-01

    The report is structured as follows: Classical terms and objects (Probabilistic safety assessment as a framework for human reliability assessment; Human failure within the PSA model; Basic types of operator failure modelled in a PSA study and analyzed by HRA methods; Qualitative analysis of human reliability; Quantitative analysis of human reliability used; Process of analysis of nuclear reactor operator reliability in a PSA study); New terms and objects (Analysis of dependences; Errors of omission; Errors of commission; Error forcing context); and Overview and brief assessment of human reliability analysis (Basic characteristics of the methods; Assets and drawbacks of the use of each of HRA method; History and prospects of the use of the methods). (P.A.)

  13. The Intersectoral Collaboration Document for Cancer Risk Factors Reduction: Method and Stakeholder Analysis

    Directory of Open Access Journals (Sweden)

    Ali-Asghar Kolahi

    2016-03-01

    Full Text Available Background and Objective: Cancers are one of the most important public health issues and the third leading cause of mortality after cardiovascular diseases and injuries in Iran. The most common cancers reported in recent years have included skin, stomach, breast, colon, bladder, leukemia, and esophagus, respectively. Control of cancer, as one of the three main health system priorities of Iran, needs a specific roadmap and clear task definition for the organizations involved. This study provides a stakeholder analysis, including determination of the roles of the Ministry of Health and Medical Education, as the custodian of national health, and the duties of other beneficiary organizations in reducing the risk of cancer, with a scientific approach and systematic methodology for cooperation. Materials and Methods: This health system research project was performed with the participation of the Social Determinants of Health Research Center of Shahid Beheshti University of Medical Sciences, the Office of Non-Communicable Diseases of the Ministry of Health and Medical Education, and other stakeholders in 2013. At first, the strategic committee was established and the stakeholders were identified and analyzed. Then quantitative data on the incidence, prevalence, and burden of all types of cancers were collected by searching national databases. Finally, with a qualitative approach, a systematic review of studies, documents and reports was conducted, together with a review of the national strategic plans of Iran and other countries and the experts' views regarding management of cancer risk factors. The roles and responsibilities of each stakeholder were then analyzed in practice. The risk factors were identified and effective evidence-based interventions were determined for each cancer; finally, the role of the Ministry of Health was set as either responsible party or collaborator, and the role of the other organizations was separately clarified in each

  14. Comparison of Seven Methods for Boolean Factor Analysis and Their Evaluation by Information Gain

    Czech Academy of Sciences Publication Activity Database

    Frolov, A.; Húsek, Dušan; Polyakov, P.Y.

    2016-01-01

    Roč. 27, č. 3 (2016), s. 538-550 ISSN 2162-237X R&D Projects: GA MŠk ED1.1.00/02.0070 Institutional support: RVO:67985807 Keywords : associative memory * bars problem (BP) * Boolean factor analysis (BFA) * data mining * dimension reduction * Hebbian learning rule * information gain * likelihood maximization (LM) * neural network application * recurrent neural network * statistics Subject RIV: IN - Informatics, Computer Science Impact factor: 6.108, year: 2016

  15. Comparing 3 dietary pattern methods--cluster analysis, factor analysis, and index analysis--With colorectal cancer risk: The NIH-AARP Diet and Health Study.

    Science.gov (United States)

    Reedy, Jill; Wirfält, Elisabet; Flood, Andrew; Mitrou, Panagiota N; Krebs-Smith, Susan M; Kipnis, Victor; Midthune, Douglas; Leitzmann, Michael; Hollenbeck, Albert; Schatzkin, Arthur; Subar, Amy F

    2010-02-15

    The authors compared dietary pattern methods (cluster analysis, factor analysis, and index analysis) with colorectal cancer risk in the National Institutes of Health (NIH)-AARP Diet and Health Study (n = 492,306). Data from a 124-item food frequency questionnaire (1995-1996) were used to identify 4 clusters for men (3 clusters for women), 3 factors, and 4 indexes. Comparisons were made with adjusted relative risks and 95% confidence intervals, distributions of individuals in clusters by quintile of factor and index scores, and health behavior characteristics. During 5 years of follow-up through 2000, 3,110 colorectal cancer cases were ascertained. In men, the vegetables and fruits cluster, the fruits and vegetables factor, the fat-reduced/diet foods factor, and all indexes were associated with reduced risk; the meat and potatoes factor was associated with increased risk. In women, reduced risk was found with the Healthy Eating Index-2005 and increased risk with the meat and potatoes factor. For men, beneficial health characteristics were seen with all fruit/vegetable patterns, diet foods patterns, and indexes, while poorer health characteristics were found with meat patterns. For women, findings were similar except that poorer health characteristics were seen with diet foods patterns. Similarities were found across methods, suggesting basic qualities of healthy diets. Nonetheless, findings vary because each method answers a different question.

  16. Qualitative and quantitative methods for human factor analysis and assessment in NPP. Investigations and results

    International Nuclear Information System (INIS)

    Hristova, R.; Kalchev, B.; Atanasov, D.

    2005-01-01

    A description of the most frequently used approaches for human reliability assessment is given. The relation between different human factor causes for human-induced events in Kozloduy NPP during the period 2000 - 2003 is discussed. A comparison between the contributions of the causal factors for event occurrences in Kozloduy NPP and a Japanese NPP is presented. It can be concluded that for both NPPs the most important causal factors are: 1) written procedures and documents; 2) man-machine interface; 3) environmental working conditions; 4) working practice; 5) training and qualification; 6) supervising methods

  17. Model Correction Factor Method

    DEFF Research Database (Denmark)

    Christensen, Claus; Randrup-Thomsen, Søren; Morsing Johannesen, Johannes

    1997-01-01

    The model correction factor method is proposed as an alternative to traditional polynomial based response surface techniques in structural reliability considering a computationally time consuming limit state procedure as a 'black box'. The class of polynomial functions is replaced by a limit...... of the model correction factor method, is that in simpler form not using gradient information on the original limit state function or only using this information once, a drastic reduction of the number of limit state evaluations is obtained together with good approximations on the reliability. Methods...

  18. Exchange factor method: an alternative zonal formulation for analysis of radiating enclosures containing participating media

    International Nuclear Information System (INIS)

    Larsen, M.E.

    1983-01-01

    The exchange factor method (EFM) is introduced and compared to the zone method (ZM). In both the EFM and ZM the region of interest is discretized into volume and surface elements, each considered to be isothermal, which are small enough to give the required resolution. A suitable set of state variables for the system is composed of the surface element radiosities and the gas element emissive powers. The EFM defines exchange factors as dimensionless total-exchange areas for radiant interchange between volume and surface elements by all possible absorption/re-emission paths, but excluding wall reflections. In the EFM, the exchange factors replace the direct-exchange areas of the ZM and are used to write energy balances for each area and volume element in the system. As in the ZM, the radiant energy balance equations result in a set of algebraic equations linear in the system state variables. The distinguishing feature of the EFM is that exchange factors may be measurable quantities. Relationships between the EFM exchange factors and the ZM direct-exchange areas are presented. EFM conservation and reciprocity laws, analogous to those of the ZM, are also included. Temperature and heat flux distributions, predicted using the EFM, for two- and three-dimensional enclosures containing absorbing/emitting, isotropically scattering, and conducting media are included. An application of the EFM is proposed which calls for the measurement of exchange factors in a scale model of the enclosure to be analyzed. The measurement of these factors in an enclosure containing an isotropically scattering medium is discussed. The effects of isotropic scattering and absorption/re-emission processes are shown to be indistinguishable in their contribution to exchange factor paths

  19. Analysis of factors affecting the development of food crop varieties bred by mutation method in China

    International Nuclear Information System (INIS)

    Wang Zhidong; Hu Ruifa

    2002-01-01

    The research developed a production function for crop varieties bred by the mutation method in order to explore factors affecting the development of new varieties. It is found that research investment, human capital and radiation facilities were the most important factors affecting the development and cultivation area of new varieties bred through the mutation method. It is concluded that not all institutions involved in breeding activities using the mutation method need to have radiation facilities, and that the national government only needs to invest in those key research institutes which have strong research capacities. The saved research budgets can be used to entrust the institutes with stronger research capacities with irradiating additional breeding materials developed by the institutes with weaker research capacities, thereby creating more opportunities to breed better varieties

  20. Experimental analysis on removal factor of smear method in measurement of surface contamination

    International Nuclear Information System (INIS)

    Sugiura, Nobuyuki; Taira, Junichi; Takenaka, Keisuke; Yamanaka, Kazuo; Sugai, Kenji; Kosako, Toshiso

    2007-01-01

    The smear test is one of the important ways to measure surface contamination. Loose contamination under high background radiation, which is more significant when handling unsealed radioisotopes, can be evaluated by this method. The removal factor is defined as the ratio of the activity removed from the surface by one smear to the whole activity of the removable surface contamination. The removal factor varies greatly with the quality and condition of the surface material. In this study, the values of the removal factor for several typical surface conditions were evaluated experimentally and the practical application of those values was considered. The smear must be pressed with moderate pressure when wiping the surface; a pressure of 1.0 kg to 1.5 kg per filter paper is recommended, since the removal factor showed lower values when wiping with a pressure below 1.0 kg. A removal factor of 0.5 could be applied, with statistical allowance, to the smooth surfaces of linoleum, concrete coated with paint or epoxy resin, stainless steel, and glass. (author)
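    As a worked illustration of how the recommended removal factor enters the evaluation, the removable surface activity is commonly estimated as the net smear count rate divided by the counting efficiency, the removal factor, and the wiped area. In the sketch below the 0.5 removal factor is the value recommended in the abstract, while the 100 cm² wiped area and the 30% counting efficiency are assumed example values.

```python
def removable_surface_activity(net_count_rate_cps, counting_efficiency,
                               removal_factor=0.5, wiped_area_cm2=100.0):
    """Removable surface activity in Bq/cm^2 estimated from a smear measurement."""
    return net_count_rate_cps / (counting_efficiency * removal_factor * wiped_area_cm2)

# example: 25 cps net, 30 % counting efficiency, standard 100 cm^2 wipe
print(removable_surface_activity(25.0, 0.30))   # ~1.7 Bq/cm^2
```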

  1. Speeding Fermat's factoring method

    Science.gov (United States)

    McKee, James

    A factoring method is presented which, heuristically, splits composite n in O(n^{1/4+epsilon}) steps. There are two ideas: an integer approximation to sqrt(q/p) provides an O(n^{1/2+epsilon}) algorithm in which n is represented as the difference of two rational squares; observing that if a prime m divides a square, then m^2 divides that square, a heuristic speed-up to O(n^{1/4+epsilon}) steps is achieved. The method is well-suited for use with small computers: the storage required is negligible, and one never needs to work with numbers larger than n itself.
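    The baseline difference-of-squares idea that the paper accelerates can be stated in a few lines. The sketch below is plain Fermat factorization, i.e. the O(n^{1/2+epsilon}) starting point, not McKee's O(n^{1/4+epsilon}) refinement.

```python
import math

def fermat_factor(n):
    """Plain Fermat factorization of an odd composite n = a^2 - b^2 = (a-b)(a+b)."""
    assert n > 1 and n % 2 == 1
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n          # candidate b^2
        b = math.isqrt(b2)
        if b * b == b2:         # perfect square found
            return a - b, a + b
        a += 1

print(fermat_factor(5959))      # (59, 101)
```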

  2. Factor analysis methods and validity evidence: A systematic review of instrument development across the continuum of medical education

    Science.gov (United States)

    Wetzel, Angela Payne

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not been previously identified. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from 64 articles measuring a variety of constructs published throughout the peer-reviewed medical education literature indicate significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods, including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize

  3. Methods and analysis of factors impact on the efficiency of the photovoltaic generation

    International Nuclear Information System (INIS)

    Li Tianze; Zhang Xia; Jiang Chuan; Hou Luan

    2011-01-01

    First of all, the thesis elaborates on two important breakthroughs which happened in the field of the application of solar energy in the 1950s. In the 21st century, the development of solar photovoltaic power generation will have the following characteristics: continued high growth of industrial development, significantly reduced cost of solar cells, large-scale high-tech development of photovoltaic industries, breakthroughs in thin-film cell technology, and rapid development of building-integrated solar PV connected to the grid. The paper gives a theoretical analysis of the principles of solar cells. On this basis, we study the conversion efficiency of solar cells, identify the factors impacting the efficiency of photovoltaic generation, address the technical problems limiting solar cell conversion efficiency through the development of new technology, and open up new ways to improve solar cell conversion efficiency. Finally, connecting with practice, the paper proposes policies and legislation to encourage the use of renewable energy, development strategy, basic applied research, etc.

  4. Methods and analysis of factors impact on the efficiency of the photovoltaic generation

    Science.gov (United States)

    Tianze, Li; Xia, Zhang; Chuan, Jiang; Luan, Hou

    2011-02-01

    First of all, the thesis elaborates on two important breakthroughs which happened in the field of the application of solar energy in the 1950s. In the 21st century, the development of solar photovoltaic power generation will have the following characteristics: continued high growth of industrial development, significantly reduced cost of solar cells, large-scale high-tech development of photovoltaic industries, breakthroughs in thin-film cell technology, and rapid development of building-integrated solar PV connected to the grid. The paper gives a theoretical analysis of the principles of solar cells. On this basis, we study the conversion efficiency of solar cells, identify the factors impacting the efficiency of photovoltaic generation, address the technical problems limiting solar cell conversion efficiency through the development of new technology, and open up new ways to improve solar cell conversion efficiency. Finally, connecting with practice, the paper proposes policies and legislation to encourage the use of renewable energy, development strategy, basic applied research, etc.

  5. Foundations of factor analysis

    CERN Document Server

    Mulaik, Stanley A

    2009-01-01

    Introduction: Factor Analysis and Structural Theories; Brief History of Factor Analysis as a Linear Model; Example of Factor Analysis. Mathematical Foundations for Factor Analysis: Introduction; Scalar Algebra; Vectors; Matrix Algebra; Determinants; Treatment of Variables as Vectors; Maxima and Minima of Functions. Composite Variables and Linear Transformations: Introduction; Composite Variables; Unweighted Composite Variables; Differentially Weighted Composites; Matrix Equations; Multi

  6. Human factors analysis and design methods for nuclear waste retrieval systems. Volume II. A compendium of human factors design data

    International Nuclear Information System (INIS)

    Casey, S.M.

    1980-04-01

    This document is a compilation of human factors engineering design recommendations and data, selected and organized to assist in the design of a nuclear waste retrieval system. Design guidelines from a variety of sources have been evaluated, edited, and expanded for inclusion in this document, and, where appropriate, portions of text from selected sources have been included in their entirety. A number of human factors engineering guidelines for equipment designers have been written over the past three decades, each tailored to the needs of the specific system being designed. In the case of this particular document, a review of the preliminary human operator functions involved in each phase of the retrieval process was performed, resulting in the identification of areas of design emphasis upon which this document should be based. Documents containing information and design data on each of these areas were acquired, and data and design guidelines related to the previously identified areas of emphasis were extracted and reorganized. For each system function, actions were first assigned to operator and/or machine, and the operator functions were then described. Separate lists of operator functions were developed for each of the areas of retrieval activities - survey and mapping, remining, floor flange emplacement, plug and canister overcoring, plug and canister removal and transport, and CWSRS activity. These functions and the associated man-machine interface were grouped into categories based on task similarity, and the principal topics of human factors design emphasis were extracted. These topic areas are reflected in the contents of the 12 sections of this document

  7. Unbiased proteomics analysis demonstrates significant variability in mucosal immune factor expression depending on the site and method of collection.

    Directory of Open Access Journals (Sweden)

    Kenzie M Birse

    Full Text Available Female genital tract secretions are commonly sampled by lavage of the ectocervix and vaginal vault or via a sponge inserted into the endocervix for evaluating inflammation status and immune factors critical for HIV microbicide and vaccine studies. This study uses a proteomics approach to comprehensively compare the efficacy of these methods, which sample from different compartments of the female genital tract, for the collection of immune factors. Matching sponge and lavage samples were collected from 10 healthy women and were analyzed by tandem mass spectrometry. Data were analyzed by a combination of differential protein expression analysis, hierarchical clustering and pathway analysis. Of the 385 proteins identified, endocervical sponge samples collected nearly twice as many unique proteins as cervicovaginal lavage (111 vs. 61), with 55% of proteins common to both (213). Each method/site identified 73 unique proteins that have roles in host immunity according to their gene ontology. Sponge samples were enriched for specific inflammation pathways, including acute phase response proteins (p = 3.37×10^-24) and the LXR/RXR immune activation pathway (p = 8.82×10^-22), while the role of IL-17A in psoriasis pathway (p = 5.98×10^-4) and the complement system pathway (p = 3.91×10^-3) were enriched in lavage samples. Many host defense factors were differentially enriched (p < 0.05) between sites, including known/potential antimicrobial factors (n = 21), S100 proteins (n = 9), and immune regulatory factors such as serpins (n = 7). Immunoglobulins (n = 6) were collected at comparable levels of abundance at each site, although 25% of those identified were unique to sponge samples. This study demonstrates significant differences in the types and quantities of immune factors and inflammation pathways collected by each sampling technique. Therefore, clinical studies that measure mucosal immune activation or factors assessing HIV transmission should utilize

  8. A multifactorial analysis of obesity as CVD risk factor: Use of neural network based methods in a nutrigenetics context

    Directory of Open Access Journals (Sweden)

    Valavanis Ioannis K

    2010-09-01

    Full Text Available Abstract Background Obesity is a multifactorial trait, which comprises an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology beneath obesity and identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake in calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used towards the analysis of the available data. These corresponded to i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. Results PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify the most important factors among the initial 63 variables describing genetic variations, nutrition and gender, able to classify a subject into one of the BMI related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold Cross Validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under receiver operating characteristics curve were utilized to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model was characterized by a mean accuracy of 61.46% in the 3-CV testing sets

  9. A multifactorial analysis of obesity as CVD risk factor: use of neural network based methods in a nutrigenetics context.

    Science.gov (United States)

    Valavanis, Ioannis K; Mougiakakou, Stavroula G; Grimaldi, Keith A; Nikita, Konstantina S

    2010-09-08

    Obesity is a multifactorial trait, which comprises an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology beneath obesity and identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake in calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used towards the analysis of the available data. These corresponded to i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify the most important factors among the initial 63 variables describing genetic variations, nutrition and gender, able to classify a subject into one of the BMI related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold Cross Validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under receiver operating characteristics curve were utilized to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model was characterized by a mean accuracy of 61.46% in the 3-CV testing sets. The ANN based methods revealed factors
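
    Neither PDM-ANN nor GA-ANN is a standard library routine, but the core workflow the two records describe (a feed-forward classifier for normal vs. overweight scored with 3-fold cross-validation) can be sketched generically in Python with scikit-learn; the data below are random placeholders, not the nutrigenetics data set, and the feature-selection wrapper (parameter-decreasing loop or genetic algorithm) would sit outside this loop.

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 63))            # placeholder: 63 genetic/nutrition/gender features
y = rng.integers(0, 2, size=300)          # placeholder: 0 = normal BMI, 1 = overweight

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))

# 3-fold cross-validation, reporting accuracy and area under the ROC curve
scores = cross_validate(clf, X, y, cv=3, scoring=["accuracy", "roc_auc"])
print("3-CV accuracy :", np.round(scores["test_accuracy"], 3))
print("3-CV ROC AUC  :", np.round(scores["test_roc_auc"], 3))
```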

  10. Factors in Agile Methods Adoption

    Directory of Open Access Journals (Sweden)

    Samia Abdalhamid

    2017-05-01

    Full Text Available There are many factors that can affect the process of adopting Agile methods during software development. This paper illustrates the critical factors in Agile methods adoption in software organizations. To present the success and failure factors, an exploratory study of the critical success and failure factors reported in existing studies was carried out. Dimensions and factors are introduced using success and failure dimensions, and a mind map was used to clarify these factors.

  11. Factor analysis on hazards for safety assessment in decommissioning workplace of nuclear facilities using a semantic differential method

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kwan-Seong [Korea Atomic Energy Research Institute, 1045 Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of)], E-mail: ksjeongl@kaeri.re.kr; Lim, Hyeon-Kyo [Chungbuk National University, 410 Sungbong-ro, Heungduk-gu, Cheongju, Chungbuk 361-763 (Korea, Republic of)

    2009-10-15

    The decommissioning of nuclear facilities must be accomplished according to their structural conditions and radiological characteristics. An effective risk analysis requires basic knowledge about possible risks, the characteristics of potential hazards, and a comprehensive understanding of the associated cause-effect relationships within the decommissioning of nuclear facilities. The hazards associated with a decommissioning plan are important not only because they may be a direct cause of harm to workers but also because their occurrence may, indirectly, result in increased radiological and non-radiological hazards. Workers need to be protected by eliminating or reducing the radiological and non-radiological hazards that may arise during routine decommissioning activities as well as during accidents. Therefore, to prepare the safety assessment for the decommissioning of nuclear facilities, the radiological and non-radiological hazards should be systematically identified and classified. Using a semantic differential method based on a screening factor and a risk perception factor, the radiological and non-radiological hazards are screened and identified.

  12. Analysis of factors influencing fire damage to concrete using nonlinear resonance vibration method

    Energy Technology Data Exchange (ETDEWEB)

    Park, Gang Kyu; Park, Sun Jong; Kwak, Hyo Gyoung [Civil and Environmental Engineering, Korea Advanced Institute of Science and Technology, KAIST, Daejeon (Korea, Republic of); Yim, Hong Jae [Dept. of Construction and Disaster Prevention Engineering, Kyungpook National University, Sangju (Korea, Republic of)

    2015-04-15

    In this study, the effects of different mix proportions and fire scenarios (exposure temperatures and post-fire-curing periods) on fire-damaged concrete were analyzed using a nonlinear resonance vibration method based on nonlinear acoustics. The hysteretic nonlinearity parameter was obtained, which can sensitively reflect the damage level of fire-damaged concrete. In addition, a splitting tensile strength test was performed on each fire-damaged specimen to evaluate the residual property. Using the results, a prediction model for estimating the residual strength of fire-damaged concrete was proposed on the basis of the correlation between the hysteretic nonlinearity parameter and the ratio of splitting tensile strength.

  13. Prioritization of the Factors Affecting Bank Efficiency Using Combined Data Envelopment Analysis and Analytical Hierarchy Process Methods

    Directory of Open Access Journals (Sweden)

    Mehdi Fallah Jelodar

    2016-01-01

    Full Text Available Bank branches have a vital role in the economy of all countries. They collect assets from various sources and put them in the hands of those sectors that need liquidity. Given the limited financial and human resources and capital, the unlimited and evolving needs of customers, and the strong competition between banks and financial and credit institutions, the purpose of this study is to determine which of the factors affecting performance, creating value, and increasing shareholder dividends are superior to others, and consequently which factors managers should pay more attention to. Therefore, in this study, the factors affecting performance (efficiency) in the areas of management, personnel, finance, and customers were segmented, and the results were ranked using both Data Envelopment Analysis and the Analytic Hierarchy Process. In both methods, leadership style in the area of management; recruitment and resource allocation in the area of finance; employees' satisfaction, dignity, and self-actualization in the area of employees; and meeting the new needs of customers received the greatest weights.

  14. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  15. Factorization method of quadratic template

    Science.gov (United States)

    Kotyrba, Martin

    2017-07-01

    Multiplication of two numbers is a one-way function in mathematics. Any attempt to decompose the outcome back into its factors is called factorization. There are many methods, such as Fermat's factorization, Dixon's method, the quadratic sieve and GNFS, which use sophisticated techniques for fast factorization. All the above methods use the same basic formula, differing only in how it is applied. This article discusses a newly designed factorization method. An efficient implementation of this method in programs is not the aim here; the article only presents the method and clearly defines its properties.
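
    As background for the classical methods the article cites, Fermat's factorization searches for a representation n = a² − b² = (a − b)(a + b). A minimal Python sketch of that baseline (not the quadratic-template method proposed in the article):

```python
import math

def fermat_factor(n: int):
    """Classical Fermat factorization for an odd composite n.
    Searches for a, b with n = a*a - b*b, i.e. n = (a - b) * (a + b)."""
    assert n % 2 == 1, "Fermat's method is stated for odd n"
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n          # candidate b^2
        b = math.isqrt(b2)
        if b * b == b2:         # perfect square found
            return a - b, a + b
        a += 1

print(fermat_factor(5959))      # (59, 101)
```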

  16. Benefit Evaluation of Wind Turbine Generators in Wind Farms Using Capacity-Factor Analysis and Economic-Cost Methods

    DEFF Research Database (Denmark)

    Chen, Zhe; Wang, L.; Yeh, T-H.

    2009-01-01

    Due to the recent price spike of the international oil and the concern of global warming, the development and deployment of renewable energy become one of the most important energy policies around the globe. Currently, there are different capacities and hub heights for commercial wind turbine gen...... height for WTGs that have been installed in Taiwan. Important outcomes affecting wind cost of energy in comparison with economic results using the proposed economic-analysis methods for different WFs are also presented.......Due to the recent price spike of the international oil and the concern of global warming, the development and deployment of renewable energy become one of the most important energy policies around the globe. Currently, there are different capacities and hub heights for commercial wind turbine...... generators (WTGs). To fully capture wind energy, different wind farms (WFs) should select adequate capacity of WTGs to effectively harvest wind energy and maximize their economic benefit. To establish selection criterion, this paper first derives the equations for capacity factor (CF) and pairing performance...
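
    The capacity factor at the heart of the paper's selection criterion is, in its simplest form, the ratio of the energy a turbine actually delivers over a period to what it would deliver at continuous rated power. A generic illustration with made-up numbers (the paper's own pairing-performance equations are more elaborate):

```python
def capacity_factor(energy_mwh: float, rated_power_mw: float, hours: float) -> float:
    """Basic capacity factor: actual energy output / maximum possible output."""
    return energy_mwh / (rated_power_mw * hours)

# Hypothetical 2 MW turbine producing 5,100 MWh over one year (8,760 h)
print(f"CF = {capacity_factor(5100, 2.0, 8760):.2%}")   # ~29.11%
```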

  17. The factorization method and supersymmetry

    International Nuclear Information System (INIS)

    Alves, N.A.; Drigo Filho, E.

    1988-01-01

    Applying the factorization method, we generalize the harmonic - oscillator and the Coulomb potentials, both in arbitrary dimensions. We also show that this method allows the determination of the superpotentials and the supersymmetric partners associated with each of those systems. (author) [pt

  18. Factor analysis and scintigraphy

    International Nuclear Information System (INIS)

    Di Paola, R.; Penel, C.; Bazin, J.P.; Berche, C.

    1976-01-01

    The goal of factor analysis is usually to achieve a reduction of a large set of data, extracting essential features without a previous hypothesis. Due to the development of computerized systems, the use of larger samplings, the possibility of sequential data acquisition and the increase in dynamic studies, the problem of data compression can now be encountered routinely. Thus, results obtained for the compression of scintigraphic images were presented first. Then the possibilities offered by factor analysis for scan processing were discussed. Finally, the use of this analysis for multidimensional studies, and especially dynamic studies, was considered for compression and processing [fr

  19. Reliability Analysis of Offshore Jacket Structures with Wave Load on Deck using the Model Correction Factor Method

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Friis-Hansen, P.; Nielsen, J.S.

    2006-01-01

    failure/collapse of jacket type platforms with wave in deck loads using the so-called Model Correction Factor Method (MCFM). A simple representative model for the RSR measure is developed and used in the MCFM technique. A realistic example is evaluated and it is seen that it is possible to perform...

  20. First course in factor analysis

    CERN Document Server

    Comrey, Andrew L

    2013-01-01

    The goal of this book is to foster a basic understanding of factor analytic techniques so that readers can use them in their own research and critically evaluate their use by other researchers. Both the underlying theory and correct application are emphasized. The theory is presented through the mathematical basis of the most common factor analytic models and several methods used in factor analysis. On the application side, considerable attention is given to the extraction problem, the rotation problem, and the interpretation of factor analytic results. Hence, readers are given a background of

  1. An easy guide to factor analysis

    CERN Document Server

    Kline, Paul

    2014-01-01

    Factor analysis is a statistical technique widely used in psychology and the social sciences. With the advent of powerful computers, factor analysis and other multivariate methods are now available to many more people. An Easy Guide to Factor Analysis presents and explains factor analysis as clearly and simply as possible. The author, Paul Kline, carefully defines all statistical terms and demonstrates step-by-step how to work out a simple example of principal components analysis and rotation. He further explains other methods of factor analysis, including confirmatory and path analysis, a
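
    As a rough, generic illustration of the workflow such introductory texts walk through (extract a small number of factors, then rotate), the following Python sketch fits an exploratory factor model with a varimax rotation on synthetic placeholder data; it assumes scikit-learn ≥ 0.24, which added the rotation option.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Placeholder data: 200 observations of 6 observed variables driven by 2 latent factors
latent = rng.normal(size=(200, 2))
loadings_true = rng.normal(size=(2, 6))
X = latent @ loadings_true + 0.5 * rng.normal(size=(200, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(X)

print("Estimated loadings (variables x factors):")
print(np.round(fa.components_.T, 2))                     # rotated loading matrix
print("Estimated unique variances:", np.round(fa.noise_variance_, 2))
```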

  2. Factor Analysis for Clustered Observations.

    Science.gov (United States)

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of total sums of the squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)

  3. Decomposition Analysis of the Factors that Influence Energy Related Air Pollutant Emission Changes in China Using the SDA Method

    Directory of Open Access Journals (Sweden)

    Shichun Xu

    2017-09-01

    Full Text Available We decompose the factors affecting China's energy-related air pollutant (NOx, PM2.5, and SO2) emission changes into different effects using structural decomposition analysis (SDA). We find that, from 2005 to 2012, investment increased NOx, PM2.5, and SO2 emissions by 14.04, 7.82 and 15.59 Mt respectively, and consumption increased these emissions by 11.09, 7.98, and 12.09 Mt respectively. Export and import slightly increased the emissions on the whole, but the rate of the increase has slowed down, possibly reflecting the shift in China's foreign trade structure. Energy intensity largely reduced NOx, PM2.5, and SO2 emissions by 12.49, 14.33 and 23.06 Mt respectively, followed by emission efficiency, which reduced these emissions by 4.57, 9.08, and 17.25 Mt respectively. Input-output efficiency slightly reduced the emissions. At the sectoral and sub-sectoral levels, consumption is a great driving factor in agriculture and commerce, whereas investment is a great driving factor in transport, construction, and some industrial subsectors such as iron and steel, nonferrous metals, building materials, coking, and power and heating supply. Energy intensity increases emissions in transport, chemical products and manufacturing, but decreases emissions in all other sectors and subsectors. Some policy implications arising from our study results are discussed.
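
    Full structural decomposition analysis works on input-output tables and is considerably more involved, but the underlying idea, splitting an emission change into the parts attributable to each driver, can be illustrated with a toy two-factor decomposition (intensity effect vs. activity effect, averaging the two polar forms); all numbers below are invented.

```python
import numpy as np

# Hypothetical sectoral data for two years: emission intensity (Mt per unit activity)
# and activity level per sector. Values are illustrative only.
intensity_0 = np.array([0.8, 0.5, 0.3])
intensity_1 = np.array([0.6, 0.45, 0.28])
activity_0  = np.array([10.0, 20.0, 15.0])
activity_1  = np.array([12.0, 24.0, 20.0])

def decompose(i0, i1, a0, a1):
    """Two-factor decomposition of the emission change E = sum(intensity * activity),
    averaging the two polar (Laspeyres/Paasche-type) forms so the effects sum exactly."""
    intensity_effect = 0.5 * (((i1 - i0) * a0).sum() + ((i1 - i0) * a1).sum())
    activity_effect  = 0.5 * ((i0 * (a1 - a0)).sum() + (i1 * (a1 - a0)).sum())
    return intensity_effect, activity_effect

ie, ae = decompose(intensity_0, intensity_1, activity_0, activity_1)
total = (intensity_1 * activity_1).sum() - (intensity_0 * activity_0).sum()
print(f"intensity effect = {ie:.2f}, activity effect = {ae:.2f}, total change = {total:.2f}")
# The two effects sum exactly to the total change.
```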

  4. Investigating the Factors Affecting theZahedan’s Aquifer Hydrogeochemistry Using Foctor Analysis, Saturation Indices and Composite Diagrams’ Methods

    Directory of Open Access Journals (Sweden)

    J. Dowlati

    2014-12-01

    Full Text Available Zahedan aquifer is located in the northern part of the Zahedan watershed. It is essential to evaluate the quality of its groundwater resources because they provide part of the drinking, agricultural and industrial water of this city. In order to carry out groundwater quality monitoring, assess the controlling processes and determine the sources of the cations and anions in the groundwater, 26 wells were sampled and water quality parameters were measured. The results of the analysis showed that almost all of the samples were very saline, and electrical conductivity varied from 1,359 to 12,620 μS cm−1. In the Zahedan aquifer, sodium was the predominant cation and chloride and sulfate the predominant anions, and sodium-chloride (Na-Cl) and sodium-sulfate (Na-SO4) were the dominant water types. The factor analysis of the sample results indicates that two factors, natural and human, controlled about 83.30% and 74.37% of the quality variations of the groundwater in October and February, respectively. The first and major factor, related to the natural processes of ion exchange and dissolution, had positive loadings of EC, Ca2+, Mg2+, Na+, Cl-, K+ and SO42- and controls 65.25% of the quality variations of the groundwater in October and 58.82% in February. The second factor, related to Ca2+ and NO3-, constituted 18.05% of the quality variations in October and 15.56% in February, and, given the urban development and limited agricultural development in the aquifer, is attributed to human activities. For the samples collected in October, the saturation indices of the calcite, gypsum and dolomite minerals showed saturated conditions, while in February calcite and dolomite showed saturated conditions for more than 60% and 90% of samples, respectively, and the gypsum index revealed under-saturated conditions for almost all samples. The unsaturated condition of the Zahedan groundwater aquifer results from insufficient time for water retained in the aquifer to dissolve the minerals

  5. "Factor Analysis Using ""R"""

    Directory of Open Access Journals (Sweden)

    A. Alexander Beaujean

    2013-02-01

    Full Text Available R (R Development Core Team, 2011) is a very powerful tool for analyzing data that is gaining in popularity due to its cost (it is free) and flexibility (it is open-source). This article gives a general introduction to using R (i.e., loading the program, using functions, importing data). Then, using data from Canivez, Konold, Collins, and Wilson (2009), this article walks the user through how to use the program to conduct factor analysis, from both exploratory and confirmatory approaches.

  6. Combining analysis of variance and three‐way factor analysis methods for studying additive and multiplicative effects in sensory panel data

    DEFF Research Database (Denmark)

    Romano, Rosaria; Næs, Tormod; Brockhoff, Per Bruun

    2015-01-01

    Data from descriptive sensory analysis are essentially three‐way data with assessors, samples and attributes as the three ways in the data set. Because of this, there are several ways that the data can be analysed. The paper focuses on the analysis of sensory characteristics of products while...... in the use of the scale with reference to the existing structure of relationships between sensory descriptors. The multivariate assessor model will be tested on a data set from milk. Relations between the proposed model and other multiplicative models like parallel factor analysis and analysis of variance...

  7. The Infinitesimal Jackknife with Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…

  8. Missing in space: an evaluation of imputation methods for missing data in spatial analysis of risk factors for type II diabetes.

    Science.gov (United States)

    Baker, Jannah; White, Nicole; Mengersen, Kerrie

    2014-11-20

    Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables, to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs) as a case study. We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The most appropriate imputation method depends upon the application and is not necessarily the most complex one. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
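
    The kind of comparison the authors describe, mean imputation against a model-based alternative judged on held-out values, can be prototyped with scikit-learn's imputers; the sketch below uses synthetic data and is not the authors' Bayesian spatial model.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(1)
X_full = rng.multivariate_normal([0, 0, 0],
                                 [[1, .6, .3], [.6, 1, .4], [.3, .4, 1]], size=500)

# Knock out 20% of the entries at random to mimic missing survey covariates
X_miss = X_full.copy()
mask = rng.random(X_miss.shape) < 0.2
X_miss[mask] = np.nan

for name, imputer in [("mean", SimpleImputer(strategy="mean")),
                      ("iterative", IterativeImputer(random_state=0))]:
    X_imp = imputer.fit_transform(X_miss)
    rmse = np.sqrt(np.mean((X_imp[mask] - X_full[mask]) ** 2))
    print(f"{name:9s} imputation RMSE on held-out entries: {rmse:.3f}")
```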

  9. The surface analysis methods

    International Nuclear Information System (INIS)

    Deville, J.P.

    1998-01-01

    Nowadays, there are a lot of surfaces analysis methods, each having its specificity, its qualities, its constraints (for instance vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article appeals to non specialists. It gives some elements of choice according to the studied information, the sensitivity, the use constraints or the answer to a precise question. After having recalled the fundamental principles which govern these analysis methods, based on the interaction between radiations (ultraviolet, X) or particles (ions, electrons) with matter, two methods will be more particularly described: the Auger electron spectroscopy (AES) and x-rays photoemission spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easier for use and probably the most productive for the analysis of surface of industrial materials or samples submitted to treatments in aggressive media. (O.M.)

  10. Analysis of factors controlling soil phosphorus loss with surface runoff in Huihe National Nature Reserve by principal component and path analysis methods.

    Science.gov (United States)

    He, Jing; Su, Derong; Lv, Shihai; Diao, Zhaoyan; Bu, He; Wo, Qiang

    2018-01-01

    Phosphorus (P) loss with surface runoff contributes P input to freshwater and accelerates its eutrophication. Many studies have focused on factors affecting P loss with surface runoff from soils, but rarely on the relationships among these factors. In the present study, a rainfall simulation of P loss with surface runoff was conducted in Huihe National Nature Reserve, in Hulunbeier grassland, China, and the relationships between P loss with surface runoff, soil properties, and rainfall conditions were examined. Principal component analysis and path analysis were used to analyze the direct and indirect effects on P loss with surface runoff. The results showed that P loss with surface runoff was closely correlated with soil electrical conductivity, soil pH, soil Olsen P, soil total nitrogen (TN), soil total phosphorus (TP), and soil organic carbon (SOC). The main driving factors influencing P loss with surface runoff were soil TN, soil pH, soil Olsen P, and soil water content. Path analysis and determination coefficient analysis indicated that the standard multiple regression equation for P loss with surface runoff and each main factor was Y = 7.429 - 0.439 soil TN - 6.834 soil pH + 1.721 soil Olsen-P + 0.183 soil water content (r = 0.487, p < 0.05), where Y is the P loss with surface runoff. The effect of the physical and chemical properties of undisturbed soils on P loss with surface runoff was discussed, and soil water content and soil Olsen P had strongly positive influences on the P loss with surface runoff.
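
    The regression step reported above is an ordinary least-squares fit of P loss on the four soil variables; a generic sketch with synthetic placeholder data (not the study's measurements) could look like this.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40
# Hypothetical predictor columns: soil TN, soil pH, soil Olsen P, soil water content
X = np.column_stack([rng.normal(2.0, 0.5, n),      # TN
                     rng.normal(7.5, 0.3, n),      # pH
                     rng.normal(12.0, 3.0, n),     # Olsen P
                     rng.normal(25.0, 5.0, n)])    # water content (%)
# Synthetic response with arbitrary coefficients plus noise
y = 7.4 - 0.4 * X[:, 0] - 6.8 * X[:, 1] + 1.7 * X[:, 2] + 0.2 * X[:, 3] + rng.normal(0, 1, n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients (TN, pH, Olsen P, water):", np.round(coef, 3))
```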

  11. Methods of Multivariate Analysis

    CERN Document Server

    Rencher, Alvin C

    2012-01-01

    Praise for the Second Edition "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere."-IIE Transactions Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life sit

  12. A Novel Method of Evaluating Key Factors for Success in a Multifaceted Critical Care Fellowship Using Data Envelopment Analysis.

    Science.gov (United States)

    Tiwari, Vikram; Kumar, Avinash B

    2018-01-01

    The current system of summative multi-rater evaluations and standardized tests to determine readiness to graduate from critical care fellowships has limitations. We sought to pilot the use of data envelopment analysis (DEA) to assess what aspects of the fellowship program contribute the most to an individual fellow's success. DEA is a nonparametric, operations research technique that uses linear programming to determine the technical efficiency of an entity based on its relative usage of resources in producing the outcome. Retrospective cohort study. Critical care fellows (n = 15) in an Accreditation Council for Graduate Medical Education (ACGME) accredited fellowship at a major academic medical center in the United States. After obtaining institutional review board approval for this retrospective study, we analyzed the data of 15 anesthesiology critical care fellows from academic years 2013-2015. The input-oriented DEA model develops a composite score for each fellow based on multiple inputs and outputs. The inputs included the didactic sessions attended, the ratio of clinical duty works hours to the procedures performed (work intensity index), and the outputs were the Multidisciplinary Critical Care Knowledge Assessment Program (MCCKAP) score and summative evaluations of fellows. A DEA efficiency score that ranged from 0 to 1 was generated for each of the fellows. Five fellows were rated as DEA efficient, and 10 fellows were characterized in the DEA inefficient group. The model was able to forecast the level of effort needed for each inefficient fellow, to achieve similar outputs as their best performing peers. The model also identified the work intensity index as the key element that characterized the best performers in our fellowship. DEA is a feasible method of objectively evaluating peer performance in a critical care fellowship beyond summative evaluations alone and can potentially be a powerful tool to guide individual performance during the fellowship.
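
    For readers unfamiliar with the technique, an input-oriented DEA efficiency score can be obtained from a small linear program per decision-making unit (DMU). The sketch below is a generic CCR-type envelopment model with invented inputs and outputs, not the fellowship data or the authors' exact model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs (e.g. fellows), columns = inputs / outputs
inputs  = np.array([[20, 1.2], [25, 0.9], [18, 1.5], [30, 1.1]], dtype=float)
outputs = np.array([[55, 4.1], [60, 3.8], [50, 4.5], [58, 3.9]], dtype=float)
n_dmu, n_in = inputs.shape
n_out = outputs.shape[1]

def dea_efficiency(o: int) -> float:
    """Input-oriented CCR efficiency of DMU o. Variables: [theta, lambda_1..lambda_n]."""
    c = np.zeros(1 + n_dmu)
    c[0] = 1.0                                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(n_in):                        # sum_j lambda_j x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(n_out):                       # sum_j lambda_j y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])
    bounds = [(None, None)] + [(0, None)] * n_dmu
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
    return res.x[0]

for o in range(n_dmu):
    print(f"DMU {o}: efficiency = {dea_efficiency(o):.3f}")
```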

  13. Communication Network Analysis Methods.

    Science.gov (United States)

    Farace, Richard V.; Mabee, Timothy

    This paper reviews a variety of analytic procedures that can be applied to network data, discussing the assumptions and usefulness of each procedure when applied to the complexity of human communication. Special attention is paid to the network properties measured or implied by each procedure. Factor analysis and multidimensional scaling are among…

  14. The Hull Method for Selecting the Number of Common Factors

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  15. Analysis of factors controlling sediment phosphorus flux potential of wetlands in Hulun Buir grassland by principal component and path analysis method.

    Science.gov (United States)

    He, Jing; Su, Derong; Lv, Shihai; Diao, Zhaoyan; Ye, Shengxing; Zheng, Zhirong

    2017-11-08

    Phosphorus (P) flux potential can predict the trend of phosphorus release from wetland sediments to water and provide scientific parameters for further monitoring and management of phosphorus flux from wetland sediments to overlying water. Many studies have focused on factors affecting sediment P flux potential at the sediment-water interface, but rarely on the relationships among these factors. In the present study, an experiment on sediment P flux potential at the sediment-water interface was conducted in six wetlands in Hulun Buir grassland, China, and the relationships among sediment P flux potential at the sediment-water interface, sediment physical properties, and sediment chemical characteristics were examined. Principal component analysis and path analysis were used to analyze these data in terms of correlation coefficients and direct and indirect effects on sediment P flux potential at the sediment-water interface. Results indicated that the major factors affecting sediment P flux potential at the sediment-water interface were the amount of organophosphate-degrading bacteria in the sediment, Ca-P content, and total phosphorus concentration. The factors directly influencing sediment P flux potential were sediment Ca-P content, Olsen-P content, SOC content, and sediment Al-P content. The factors indirectly influencing sediment P flux potential at the sediment-water interface were sediment Olsen-P content, sediment SOC content, sediment Ca-P content, and sediment Al-P content. The standard multiple regression describing the relationship between sediment P flux potential at the sediment-water interface and its major effect factors was Y = 5.849 - 1.025X1 - 1.995X2 + 0.188X3 - 0.282X4 (r = 0.9298, p < 0.01, n = 96), where Y is sediment P flux potential at the sediment-water interface, X1 is sediment Ca-P content, X2 is sediment Olsen-P content, X3 is sediment SOC content, and X4 is sediment Al-P content. Therefore, future research will focus on these sediment properties to analyze the

  16. On the factorization method in quantum mechanics

    OpenAIRE

    Rosas-Ortiz, J. Oscar

    1998-01-01

    New exactly solvable problems have already been studied by using a modification of the factorization method introduced by Mielnik. We review this method and its connection with the traditional factorization method. The survey includes the discussion on a generalization of the factorization energies used in the traditional Infeld and Hull method.

  17. Influencing factors and kinetics analysis on the leaching of iron from boron carbide waste-scrap with ultrasound-assisted method.

    Science.gov (United States)

    Li, Xin; Xing, Pengfei; Du, Xinghong; Gao, Shuaibo; Chen, Chen

    2017-09-01

    In this paper, the ultrasound-assisted leaching of iron from boron carbide waste-scrap was investigated and the optimization of different influencing factors was also performed. The factors investigated were acid concentration, liquid-solid ratio, leaching temperature, ultrasonic power and frequency. The leaching of iron with the conventional method at various temperatures was also performed. The results show that the maximum iron leaching ratios are 87.4% and 94.5% for 80 min of leaching with the conventional method and 50 min of leaching with ultrasound assistance, respectively. The leaching of waste-scrap with the conventional method fits the chemical reaction-controlled model. The leaching with ultrasound assistance fits the chemical reaction-controlled model and the diffusion-controlled model for the first and second stages, respectively. The assistance of ultrasound can greatly improve the iron leaching ratio, accelerate the leaching rate, shorten the leaching time and lower the residual iron, compared with the conventional method. The advantages of ultrasound-assisted leaching were also confirmed by the SEM-EDS analysis and elemental analysis of the raw material and leached solid samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Factors affecting construction performance: exploratory factor analysis

    Science.gov (United States)

    Soewin, E.; Chinda, T.

    2018-04-01

    The present work attempts to develop a multidimensional performance evaluation framework for a construction company by considering all relevant measures of performance. Based on previous studies, this study hypothesizes nine key factors, with a total of 57 associated items. The hypothesized factors, with their associated items, are then used to develop a questionnaire survey to gather data. Exploratory factor analysis (EFA) was applied to the collected data, which gave rise to 10 factors with 57 items affecting construction performance. The findings further reveal the items constituting ten key performance factors (KPIs), namely: 1) Time, 2) Cost, 3) Quality, 4) Safety & Health, 5) Internal Stakeholder, 6) External Stakeholder, 7) Client Satisfaction, 8) Financial Performance, 9) Environment, and 10) Information, Technology & Innovation. The analysis helps to develop a multi-dimensional performance evaluation framework for an effective measurement of construction performance. The 10 key performance factors can be broadly categorized into economic, social, environmental, and technology aspects. It is important to understand a multi-dimensional performance evaluation framework that includes all key factors affecting the construction performance of a company, so that management can effectively plan and implement a performance development plan that matches the mission and vision of the company.

  19. Obstetric anal sphincter injury, risk factors and method of delivery - an 8-year analysis across two tertiary referral centers.

    LENUS (Irish Health Repository)

    Hehir, Mark P

    2013-10-01

    Obstetric anal sphincter injury (OASIS) represents a major cause of maternal morbidity and is a risk factor for the development of fecal incontinence. We set out to analyze the incidence of OASIS and its association with mode of delivery in two large obstetric hospitals across an 8-year study period.

  20. THE LATENT INTERCONNECTION OF THE FACTORS OF ATHEROSCLEROSIS PROGRESSION WITH A THICKNESS OF INTIMA-MEDIA BY USE OF MULTIDIMENSIONAL STATISTICAL METHODS ANALYSIS

    Directory of Open Access Journals (Sweden)

    A. P. Shavrin

    2011-01-01

    Full Text Available The aim was to study the latent relationships between indicators of intima-media thickness (IMT) and infectious, immune, inflammatory and metabolic factors in patients with varying degrees of severity of vascular changes, using multivariate methods of statistical analysis. Materials and methods. The study included 220 patients (mean age 43.9 ± 0.5 years) who were divided into 3 groups. Group 1 consisted of patients with no risk factors for cardiovascular disease (CVD), the 2nd of patients with such risk factors, and the 3rd of patients with atherosclerotic plaques in the carotid artery. Every patient underwent a comprehensive examination, which included vascular ultrasound on an Aloka 5000 system with measurement of IMT, a lipid panel, determination of C-reactive protein and cytokines (tumor necrosis factor-α, interferon-γ, interleukin-1, -8, -4), and antibodies to cytomegalovirus (CMV), herpes simplex virus type 1, C. pneumoniae, H. pylori and group A β-hemolytic streptococcus. The immune system status was assessed by indicators of innate and acquired immunity. Results. According to cluster analysis, all groups of patients showed close relationships between IMT and infectious, immune and metabolic markers, and in patients with atherosclerotic plaques links with indicators of inflammation were additionally found. Factor analysis revealed latent variables consisting of IMT indices and, in group 1, blood lipids; in the 2nd, infectious factors (CMV, C. pneumoniae) and immune parameters. In the 3rd group the vascular wall was linked with infectious, immune and inflammatory indices, blood lipids, and systolic and diastolic blood pressure. Conclusion. The closest relationship between the studied parameters and the vascular wall was observed in patients with risk factors of cardiovascular disease, and in the

  1. Methods for RNA Analysis

    DEFF Research Database (Denmark)

    Olivarius, Signe

    of the transcriptome, 5’ end capture of RNA is combined with next-generation sequencing for high-throughput quantitative assessment of transcription start sites by two different methods. The methods presented here allow for functional investigation of coding as well as noncoding RNA and contribute to future...... RNAs rely on interactions with proteins, the establishment of protein-binding profiles is essential for the characterization of RNAs. Aiming to facilitate RNA analysis, this thesis introduces proteomics- as well as transcriptomics-based methods for the functional characterization of RNA. First, RNA...

  2. Determining the Number of Factors in P-Technique Factor Analysis

    Science.gov (United States)

    Lo, Lawrence L.; Molenaar, Peter C. M.; Rovine, Michael

    2017-01-01

    Determining the number of factors is a critical first step in exploratory factor analysis. Although various criteria and methods for determining the number of factors have been evaluated in the usual between-subjects R-technique factor analysis, there is still question of how these methods perform in within-subjects P-technique factor analysis. A…
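
    For context, the simplest conventional criteria for choosing the number of factors inspect the eigenvalues of the correlation matrix (Kaiser's eigenvalue-greater-than-one rule, or the scree). A generic sketch on synthetic data, not specific to P-technique (within-subjects) analysis:

```python
import numpy as np

rng = np.random.default_rng(3)
# Placeholder: 300 observations of 8 variables driven by two underlying factors
F = rng.normal(size=(300, 2))
L = rng.normal(size=(2, 8))
X = F @ L + rng.normal(scale=0.7, size=(300, 8))

R = np.corrcoef(X, rowvar=False)             # correlation matrix of the variables
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

print("eigenvalues:", np.round(eigvals, 2))
print("Kaiser rule suggests", int(np.sum(eigvals > 1.0)), "factor(s)")
# A scree plot of these eigenvalues would be inspected for an 'elbow' in practice.
```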

  3. Analysis and research of influence factors ranking of fuzzy language translation accuracy in literary works based on catastrophe progression method

    Directory of Open Access Journals (Sweden)

    Wei Dong

    2017-02-01

    Full Text Available This paper researches the problem of decline in translation accuracy caused by language “vagueness” in literary translation, and proposes to use the catastrophe model for importance ranking of various factors affecting the fuzzy language translation accuracy in literary works, and finally gives out the order of factors to be considered before translation. The multi-level evaluation system can be used to construct the relevant catastrophe progression model, and the normalization formula can be used to calculate the relative membership degree of each system and evaluation index, and make evaluation combined with the evaluation criteria table. The results show that, in the fuzzy language translation, in order to improve the translation accuracy, there is a need to consider the indicators ranking: A2 fuzzy language context → A1 words attribute → A3 specific meaning of digital words; B2 fuzzy semantics, B3 blur color words → B1 multiple meanings of words → B4 fuzzy digital words; C3 combination with context and cultural background, C4 specific connotation of color words → C1 combination with words emotion, C2 selection of words meaning → C5 combination with digits and language background.

  4. Observer variation factor on advanced method for accurate, robust, and efficient spectral fitting of java based magnetic resonance user interface for MRS data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Suk Jun [Dept. of Biomedical Laboratory Science, College of Health Science, Cheongju University, Cheongju (Korea, Republic of); Yu, Seung Man [Dept. of Radiological Science, College of Health Science, Gimcheon University, Gimcheon (Korea, Republic of)

    2016-06-15

    The purpose of this study was to examine the measurement error factors in the AMARES method of jMRUI for magnetic resonance spectroscopy (MRS) quantitative analysis, comparing skilled and unskilled observers, and to identify the reasons for differences between independent observers. A point-resolved spectroscopy sequence was used to acquire magnetic resonance spectroscopy data from the liver of 10-week-old male Sprague-Dawley rats. The ratio of the methylene protons ((-CH2-)n) at 1.3 ppm to the water proton (H2O) at 4.7 ppm was calculated with LCModel software to provide the reference data. Seven unskilled observers calculated total lipid (methylene/water) using the jMRUI AMARES technique twice, one week apart, and intraclass correlation coefficient (ICC) statistical analysis was conducted with SPSS software. The inter-observer reliability (ICC, Cronbach's alpha) was less than 0.1. The average total lipid value of the seven observers (0.096±0.038) was 50% higher than the LCModel reference value. The jMRUI AMARES analysis method needs to minimize the presence of residual metabolites, by identifying the metabolite MRS profile, in order to obtain the same results as LCModel.

  5. Time dependent view factor methods

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.

    1998-03-01

    View factors have been used for treating radiation transport between opaque surfaces bounding a transparent medium for several decades. However, in recent years they have been applied to problems involving intense bursts of radiation in enclosed volumes such as in the laser fusion hohlraums. In these problems, several aspects require treatment of time dependence

  6. Review of human factors guidelines and methods

    International Nuclear Information System (INIS)

    Rhodes, W.; Szlapetis, I.; Hay, T.; Weihrer, S.

    1995-04-01

    The review examines the use of human factors guidelines and methods in high technology applications, with emphasis on application to the nuclear industry. An extensive literature review was carried out identifying over 250 applicable documents, with 30 more documents identified during interviews with experts in human factors. Surveys were sent to 15 experts, of which 11 responded. The survey results indicated guidelines used and why these were favoured. Thirty-three of the most applicable guideline documents were described in detailed annotated bibliographies. A bibliographic list containing over 280 references was prepared. Thirty guideline documents were rated for their completeness, validity, applicability and practicality. The experts survey indicated the use of specific techniques. Ten human factors methods of analysis were described in general summaries, including procedures, applications, and specific techniques. Detailed descriptions of the techniques were prepared and each technique rated for applicability and practicality. Recommendations for further study of areas of importance to human factors in the nuclear field in Canada are given. (author). 8 tabs., 2 figs

  7. Review of human factors guidelines and methods

    Energy Technology Data Exchange (ETDEWEB)

    Rhodes, W; Szlapetis, I; Hay, T; Weihrer, S [Rhodes and Associates Inc., Toronto, ON (Canada)

    1995-04-01

    The review examines the use of human factors guidelines and methods in high technology applications, with emphasis on application to the nuclear industry. An extensive literature review was carried out identifying over 250 applicable documents, with 30 more documents identified during interviews with experts in human factors. Surveys were sent to 15 experts, of which 11 responded. The survey results indicated guidelines used and why these were favoured. Thirty-three of the most applicable guideline documents were described in detailed annotated bibliographies. A bibliographic list containing over 280 references was prepared. Thirty guideline documents were rated for their completeness, validity, applicability and practicality. The experts survey indicated the use of specific techniques. Ten human factors methods of analysis were described in general summaries, including procedures, applications, and specific techniques. Detailed descriptions of the techniques were prepared and each technique rated for applicability and practicality. Recommendations for further study of areas of importance to human factors in the nuclear field in Canada are given. (author). 8 tabs., 2 figs.

  8. Analysis on the factors affecting the preparation of TIO2-ADUN composite sol by sol-gel method

    International Nuclear Information System (INIS)

    Wang Hui; Yin Rongcai; Liu Jinhong

    2010-01-01

    With C2H2O5 and water as solvents, TBT as the precursor and HNO3 as the activator, the process for preparing a TiO2-ADUN composite sol by the sol-gel method was studied. The influence of different reaction conditions on the sol-gel time was analyzed in this study. The optimal reaction conditions are: reaction temperature 20-25 °C; pH value of the reaction mixture 2-5; HNO3 volume in the reaction mixture 0.3-0.5 ml; molar ratios of alcohol to TBT and of water to TBT of 10 and 2-3, respectively. A concentrated ADUN solution, with Ti sol, urea and water as additives, is dispersed uniformly; the mixtures are prepared by an external mill. (authors)

  9. Human factors analysis and design methods for nuclear waste retrieval systems. Volume III. User's guide for the computerized event-tree analysis technique

    International Nuclear Information System (INIS)

    Casey, S.M.; Deretsky, Z.

    1980-08-01

    This document provides detailed instructions for using the Computerized Event-Tree Analysis Technique (CETAT), a program designed to assist a human factors analyst in predicting event probabilities in complex man-machine configurations found in waste retrieval systems. The instructions contained herein describe how to (a) identify the scope of a CETAT analysis, (b) develop operator performance data, (c) enter an event-tree structure, (d) modify a data base, and (e) analyze event paths and man-machine system configurations. Designed to serve as a tool for developing, organizing, and analyzing operator-initiated event probabilities, CETAT simplifies the tasks of the experienced systems analyst by organizing large amounts of data and performing cumbersome and time consuming arithmetic calculations. The principal uses of CETAT in the waste retrieval development project will be to develop models of system reliability and evaluate alternative equipment designs and operator tasks. As with any automated technique, however, the value of the output will be a function of the knowledge and skill of the analyst using the program
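
    The core computation such an event-tree tool automates, combining branch probabilities along each path of the tree, is easy to illustrate. A minimal generic sketch with hypothetical events and probabilities (not taken from the report):

```python
from itertools import product

# Hypothetical event tree: each event has (probability of success, probability of failure)
events = {
    "operator detects alarm":  (0.95, 0.05),
    "operator selects action": (0.90, 0.10),
    "equipment responds":      (0.98, 0.02),
}

# Enumerate every path (success/failure outcome for each event) and its probability
for outcomes in product((0, 1), repeat=len(events)):
    p = 1.0
    labels = []
    for (name, probs), branch in zip(events.items(), outcomes):
        p *= probs[branch]
        labels.append(f"{name}: {'ok' if branch == 0 else 'fail'}")
    print(f"P = {p:.5f}  |  " + "; ".join(labels))
```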

  10. Methods for Risk Analysis

    International Nuclear Information System (INIS)

    Alverbro, Karin

    2010-01-01

    Many decision-making situations today affect humans and the environment. In practice, many such decisions are made without an overall view and prioritise one or other of the two areas. Now and then these two areas of regulation come into conflict, e.g. the best alternative as regards environmental considerations is not always the best from a human safety perspective and vice versa. This report was prepared within a major project with the aim of developing a framework in which both the environmental aspects and the human safety aspects are integrated, and decisions can be made taking both fields into consideration. The safety risks have to be analysed in order to be successfully avoided and one way of doing this is to use different kinds of risk analysis methods. There is an abundance of existing methods to choose from and new methods are constantly being developed. This report describes some of the risk analysis methods currently available for analysing safety and examines the relationships between them. The focus here is mainly on human safety aspects

  11. Factor analysis of multivariate data

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A.; Mahadevan, R.

    A brief introduction to factor analysis is presented. A FORTRAN program, which can perform the Q-mode and R-mode factor analysis and the singular value decomposition of a given data matrix is presented in Appendix B. This computer program, uses...

  12. Method of signal analysis

    International Nuclear Information System (INIS)

    Berthomier, Charles

    1975-01-01

    A method capable of handling the amplitude and frequency-time laws of a certain kind of geophysical signals is described here. This method is based upon the analytical signal idea of Gabor and Ville, which is constructed either in the time domain by adding an imaginary part to the real signal (in-quadrature signal), or in the frequency domain by suppressing negative frequency components. The instantaneous frequency of the initial signal is then defined as the time derivative of the phase of the analytical signal, and its amplitude, or envelope, as the modulus of this complex signal. The method is applied to three types of magnetospheric signals: chorus, whistlers and pearls. The results obtained by analog and numerical calculations are compared to results obtained by classical systems using filters, i.e. based upon a different definition of the concept of frequency. The precision with which the frequency-time laws are determined then leads to the examination of the principle of the method and to a definition of the instantaneous power density spectrum attached to the signal, and to the first consequences of this definition. In this way, a two-dimensional representation of the signal is introduced which is less deformed by the properties of the analysis system than the usual representation, and which moreover has the advantage of being obtainable practically in real time [fr
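
    The analytical-signal construction described here, adding an in-quadrature imaginary part and then reading the envelope from the modulus and the instantaneous frequency from the phase derivative, corresponds to the Hilbert transform. A generic Python sketch on a synthetic chirp, purely for illustration:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                                    # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
# Synthetic test signal: amplitude-modulated chirp sweeping from ~50 Hz to ~150 Hz
x = (1.0 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.cos(2 * np.pi * (50 * t + 50 * t**2))

z = hilbert(x)                                 # analytic signal (negative frequencies removed)
envelope = np.abs(z)                           # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) / (2 * np.pi) * fs  # instantaneous frequency (Hz)

print("envelope range:", envelope.min().round(2), "-", envelope.max().round(2))
print("instantaneous frequency near start / end:",
      inst_freq[5].round(1), "/", inst_freq[-5].round(1), "Hz")
```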

  13. Combining structural-thermal coupled field FE analysis and the Taguchi method to evaluate the relative contributions of multi-factors in a premolar adhesive MOD restoration.

    Science.gov (United States)

    Lin, Chun-Li; Chang, Yen-Hsiang; Lin, Yi-Feng

    2008-08-01

    The aim of this study was to determine the relative contribution of changes in restorative material, cavity dimensions, adhesive layer adaptation, and load conditions on the biomechanical response of an adhesive Class II MOD restoration during oral temperature changes. A validated finite-element (FE) model was used to perform the structural-thermal coupled field analyses and the Taguchi method was employed to identify the significance of each design factor in controlling the stress. The results indicated that thermal expansion in restorative material amplified the thermal effect and dominated the tooth stress value (69%) at high temperatures. The percentage contributions of the load conditions, cavity depth, and cement modulus increased the effect on tooth stress values 46%, 32%, and 14%, respectively, when the tooth temperature was returned to 37 degrees C. Load conditions were also the main factor influencing the resin cement stress values, irrespective of temperature changes. Increased stress values occurred with composite resin, lateral force, a deeper cavity, and a higher luting cement modulus. The combined use of FE analysis and the Taguchi method efficiently identified that a deeper cavity might increase the risk of a restored tooth fracture, as well as a ceramic inlay with a lower thermal expansion, attaining a proper occlusal adjustment to reduce the lateral occlusal force and low modulus luting material application to obtain a better force-transmission mechanism are recommended.
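
    The Taguchi-style ranking used in the paper amounts to attributing the variation of the response over an orthogonal array of analyses to the individual design factors. A generic sketch of that bookkeeping, using a two-level L4 array and invented stress responses rather than the paper's finite-element results:

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors (e.g. material, cavity depth, load direction)
design = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
response = np.array([42.0, 55.0, 61.0, 50.0])   # hypothetical peak stress (MPa) per run

grand_mean = response.mean()
ss_total = ((response - grand_mean) ** 2).sum()

# Percent contribution of each factor = its between-level sum of squares / total sum of squares
for f in range(design.shape[1]):
    ss_factor = 0.0
    for level in (0, 1):
        sel = design[:, f] == level
        ss_factor += sel.sum() * (response[sel].mean() - grand_mean) ** 2
    print(f"factor {f}: contribution = {100 * ss_factor / ss_total:.1f}%")
```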

  14. Methods for geochemical analysis

    Science.gov (United States)

    Baedecker, Philip A.

    1987-01-01

    The laboratories for analytical chemistry within the Geologic Division of the U.S. Geological Survey are administered by the Office of Mineral Resources. The laboratory analysts provide analytical support to those programs of the Geologic Division that require chemical information and conduct basic research in analytical and geochemical areas vital to the furtherance of Division program goals. Laboratories for research and geochemical analysis are maintained at the three major centers in Reston, Virginia, Denver, Colorado, and Menlo Park, California. The Division has an expertise in a broad spectrum of analytical techniques, and the analytical research is designed to advance the state of the art of existing techniques and to develop new methods of analysis in response to special problems in geochemical analysis. The geochemical research and analytical results are applied to the solution of fundamental geochemical problems relating to the origin of mineral deposits and fossil fuels, as well as to studies relating to the distribution of elements in varied geologic systems, the mechanisms by which they are transported, and their impact on the environment.

  15. Multiple factor analysis by example using R

    CERN Document Server

    Pagès, Jérôme

    2014-01-01

    Multiple factor analysis (MFA) enables users to analyze tables of individuals and variables in which the variables are structured into quantitative, qualitative, or mixed groups. Written by the co-developer of this methodology, Multiple Factor Analysis by Example Using R brings together the theoretical and methodological aspects of MFA. It also includes examples of applications and details of how to implement MFA using an R package (FactoMineR).The first two chapters cover the basic factorial analysis methods of principal component analysis (PCA) and multiple correspondence analysis (MCA). The
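
    The MFA described in the book is implemented in R (FactoMineR); as a conceptual companion, the numpy sketch below shows the core MFA idea of balancing variable groups by their first singular value before a global PCA. The group sizes and data are arbitrary, and the within-group scaling is only defined up to a constant common to all groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical groups of quantitative variables measured on the same 20 individuals.
groups = [rng.normal(size=(20, 4)), rng.normal(size=(20, 6))]

weighted_blocks = []
for X in groups:
    Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize within the group
    s1 = np.linalg.svd(Xc, compute_uv=False)[0]          # first singular value of the group
    weighted_blocks.append(Xc / s1)                      # MFA weighting: balance the groups

# Global (unstandardized) PCA on the concatenated, weighted table.
Z = np.hstack(weighted_blocks)
U, S, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
scores = U * S                                           # coordinates of the individuals
print(scores[:3, :2])                                    # first two MFA dimensions
```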

  16. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available The basic statistical methods used in conducting the genetic analysis of human traits. We studied by segregation analysis, linkage analysis and allelic associations. Developed software for the implementation of these methods support.

  17. Assessment on the leakage hazard of landfill leachate using three-dimensional excitation-emission fluorescence and parallel factor analysis method.

    Science.gov (United States)

    Pan, Hongwei; Lei, Hongjun; Liu, Xin; Wei, Huaibin; Liu, Shufang

    2017-09-01

    A large number of simple and informal landfills exist in developing countries, which pose tremendous soil and groundwater pollution threats. Early warning and monitoring of landfill leachate pollution status is of great importance. However, there is a shortage of affordable and effective tools and methods. In this study, a soil column experiment was performed to simulate the pollution status of leachate using three-dimensional excitation-emission fluorescence (3D-EEMF) and parallel factor analysis (PARAFAC) models. The sum of squared residuals (SSR) and principal component analysis (PCA) were used to determine the optimal number of components for PARAFAC. A one-way analysis of variance showed that the component scores of the soil column leachate were significantly influenced by landfill leachate (p < 0.05), and the ratio of the component scores of soil receiving landfill leachate to that of natural soil could be used to evaluate the leakage status of landfill leachate. Furthermore, a hazard index (HI) and a hazard evaluation standard were established. A case study of the Kaifeng landfill indicated a low hazard (level 5) by the use of HI. In summary, HI is presented as a tool to evaluate landfill pollution status and for the guidance of municipal solid waste management. Copyright © 2017 Elsevier Ltd. All rights reserved.
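
    PARAFAC models of EEM-type data cubes can be fitted with general-purpose tensor libraries. The sketch below assumes the tensorly package and a random stand-in tensor; the rank of 3 is purely illustrative, not the component number selected in the study.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical EEM-like data cube: samples x excitation x emission (random stand-in).
rng = np.random.default_rng(1)
X = tl.tensor(rng.random((10, 30, 40)))

# Fit a 3-component PARAFAC model (rank chosen for illustration only; in practice it
# would be selected with SSR, split-half or core-consistency checks as in the abstract).
cp = parafac(X, rank=3, n_iter_max=200, tol=1e-8)
scores, excitation_loadings, emission_loadings = cp.factors

print(scores.shape, excitation_loadings.shape, emission_loadings.shape)
# (10, 3) (30, 3) (40, 3): per-sample component scores and spectral loadings
```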

  18. Analysis of radiation-natural convection interactions in 1-G and low-G environments using the discrete exchange factor method

    International Nuclear Information System (INIS)

    Kassemi, M.

    1990-01-01

    In this paper a new numerical method is presented for the analysis of combined natural convection and radiation heat transfer which has application in many engineering situations such as materials processing, combustion and fire research. Because of the recent interest in the performance of these engineering processes in the low-gravity environment of space, attention is devoted to both 1-g and low-g applications. The numerical study is based on a two-dimensional mathematical model represented by a set of coupled nonlinear partial differential equations for conservation of mass, momentum, and energy and the integro-differential equations which describe radiative heat transfer. Radiative exchange is formulated using the discrete exchange factor method (DEF). This method considers point to point exchange and provides accurate results over a wide range of radiation parameters. The desirable features of DEF are briefly described. Our numerical results show that radiation significantly influences the flow and heat transfer in the enclosure. In both low-g and 1-g applications, radiation modifies the temperature profiles and enhances the convective heat transfer at the cold wall. In a low-g environment, convection is weak, and radiation can easily become the dominant heat transfer mode. It is also shown that in the top-heated enclosure, volumetric heating by radiation gives rise to an intricate cell pattern in the cavity

  19. Integrating human factors into process hazard analysis

    International Nuclear Information System (INIS)

    Kariuki, S.G.; Loewe, K.

    2007-01-01

    A comprehensive process hazard analysis (PHA) needs to address human factors. This paper describes an approach that systematically identifies human error in process design and the human factors that influence its production and propagation. It is deductive in nature and therefore considers human error as a top event. The combinations of different factors that may lead to this top event are analysed. It is qualitative in nature and is used in combination with other PHA methods. The method has an advantage because it does not look at the operator error as the sole contributor to the human failure within a system but as a combination of all underlying factors

  20. SWOT ANALYSIS ON SAMPLING METHOD

    Directory of Open Access Journals (Sweden)

    CHIS ANCA OANA

    2014-07-01

    Full Text Available Audit sampling involves the application of audit procedures to less than 100% of items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As a widely used audit technique, in both its statistical and nonstatistical form, the method is very important for auditors. It should be applied correctly for a fair view of financial statements, to satisfy the needs of all financial users. In order to be applied correctly, the method must be understood by all its users and mainly by auditors. Otherwise the risk of not applying it correctly would cause loss of reputation and discredit, litigation and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of incorrectly applying it is pretty high. The SWOT analysis is a technique that shows the advantages, disadvantages, threats and opportunities. We applied SWOT analysis in studying the sampling method, from the perspective of three players: the audit company, the audited entity and users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort and money. The disadvantages of the method are the difficulty of applying and fully understanding it. Being largely used as an audit method and being a factor of a correct audit opinion, the sampling method’s advantages, disadvantages, threats and opportunities must be understood by auditors.

  1. Factors Influencing Acceptance Of Contraceptive Methods

    Directory of Open Access Journals (Sweden)

    Anita Gupta

    1997-04-01

    Full Text Available Research Problem: What are the factors influencing acceptance of contraceptive methods? Objective: To study the determinants influencing contraceptive acceptance. Study design: Population-based cross-sectional study. Setting: Rural area of East Delhi. Participants: Married women in the reproductive age group. Sample: Stratified sampling technique was used to draw the sample. Sample Size: 328 married women of reproductive age group. Study Variables: Socio-economic status, type of contraceptive, family size, male child. Outcome Variables: Acceptance of contraceptives. Statistical Analysis: By proportions. Result: Prevalence of use of contraception at the time of data collection was 40.5%. Tubectomy and vasectomy were the most commonly used methods (59.4%, n = 133). Educational status of the women positively influenced contraceptive acceptance but income did not. Desire for more children was the single most important deterrent to accepting contraception. Recommendations: (i) Traditional methods of contraception should be given more attention. (ii) Couples should be brought into the contraceptive use net at an early stage of marriage.

  2. Human factors methods in DOE nuclear facilities

    International Nuclear Information System (INIS)

    Bennett, C.T.; Banks, W.W.; Waters, R.J.

    1993-01-01

    The US Department of Energy (DOE) is in the process of developing a series of guidelines for the use of human factors standards, procedures, and methods to be used in nuclear facilities. This paper discusses the philosophy and process being used to develop a DOE human factors methods handbook to be used during the design cycle. The following sections will discuss: (1) basic justification for the project; (2) human factors design objectives and goals; and (3) role of human factors engineering (HFE) in the design cycle

  3. Lithuanian Population Aging Factors Analysis

    Directory of Open Access Journals (Sweden)

    Agnė Garlauskaitė

    2015-05-01

    Full Text Available The aim of this article is to identify the factors that determine the aging of Lithuania’s population and to assess the influence of these factors. The article presents an analysis of Lithuanian population aging factors, which consists of two main parts: the first describes the aging of the population and its characteristics in theoretical terms; the second part is dedicated to the assessment of the trends and demographic factors that influence population aging and to the analysis of the determinants of the aging of the population of Lithuania. After the analysis it is concluded that the decline in the birth rate and the increase in the number of emigrants compared to immigrants have the greatest impact on the aging of the population, so a lot of attention should be paid to the management of these demographic processes.

  4. Methods of nonlinear analysis

    CERN Document Server

    Bellman, Richard Ernest

    1970-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation;methods for low-rank mat

  5. Development of advanced MCR task analysis methods

    International Nuclear Information System (INIS)

    Na, J. C.; Park, J. H.; Lee, S. K.; Kim, J. K.; Kim, E. S.; Cho, S. B.; Kang, J. S.

    2008-07-01

    This report describes a task analysis methodology for advanced HSI designs. Task analyses were performed using procedure-based hierarchical task analysis and task decomposition methods. The results from the task analysis were recorded in a database. Using the TA results, we developed a static prototype of the advanced HSI and human factors engineering verification and validation methods for an evaluation of the prototype. In addition to the procedure-based task analysis methods, workload estimation based on the analysis of task performance time and analyses for the design of information structure and interaction structures will be necessary

  6. Transforming Rubrics Using Factor Analysis

    Science.gov (United States)

    Baryla, Ed; Shelley, Gary; Trainor, William

    2012-01-01

    Student learning and program effectiveness is often assessed using rubrics. While much time and effort may go into their creation, it is equally important to assess how effective and efficient the rubrics actually are in terms of measuring competencies over a number of criteria. This study demonstrates the use of common factor analysis to identify…

  7. Bayesian methods for data analysis

    CERN Document Server

    Carlin, Bradley P.

    2009-01-01

    Approaches for statistical inference Introduction Motivating Vignettes Defining the Approaches The Bayes-Frequentist Controversy Some Basic Bayesian Models The Bayes approach Introduction Prior Distributions Bayesian Inference Hierarchical Modeling Model Assessment Nonparametric Methods Bayesian computation Introduction Asymptotic Methods Noniterative Monte Carlo Methods Markov Chain Monte Carlo Methods Model criticism and selection Bayesian Modeling Bayesian Robustness Model Assessment Bayes Factors via Marginal Density Estimation Bayes Factors

  8. Analysis of the similarity factors of the villages in the areas of the nuclear power plants from the premature death-rate performed by fuzzy logic method

    International Nuclear Information System (INIS)

    Letkovicova, M.; Rehak, R.; Korec, J.; Mihaly, B.; Prikazsky, V.

    1998-01-01

    Our paper examines the surrounding areas of NPPs from the proportion of premature death-rate, which is one of the complex indicators of the health situation of the population. Special attention is focused on the NPP in Bohunice (SE-EBO), which has been in operation for the last 30 years, and the NPP Mochovce (SE-EMO), which was still under construction when the data were collected. WHO considers every death of an individual before 65 years of age a premature death case, except death cases of children younger than 1 year. Because of the diversity of the population, this factor is standardized for the population of the Slovak Republic (SR) as well as for the European population. The objective of the work is to prove that even long-term production of energy in an NPP does not evoke health problems for the population living in the surrounding areas that could be recorded through an analysis of premature death cases. Using the fuzzy logic method when searching for similar objects and evaluating the influence of the NPP on its surrounding area seems more natural than the classical accumulation method, which separates objects into groups. When using the classical accumulation method, the objects in a particular accumulation group are more similar than two objects in different accumulation groups. When using the fuzzy logic method the similarity is defined more naturally. Within the observed regions of the NPPs, the percentage of directly standardized premature death cases is almost identical with the average for the SR. The most closely observed region of SE-EMO, the zone up to 5 kilometers, even shows the lowest percentage. We also did not record any areas with unfavourable values either from the perspective of wind streams or from that of the local water stream recipients of SE-EBO, Manivier and Dudvah. The region of SE-EMO is also within the SR average; unfavourable coherent areas of premature death cases are nonexistent. Galanta city region comes out of the comparison with the relatively worse

  9. Exploratory factor analysis in Rehabilitation Psychology: a content analysis.

    Science.gov (United States)

    Roberson, Richard B; Elliott, Timothy R; Chang, Jessica E; Hill, Jessica N

    2014-11-01

    Our objective was to examine the use and quality of exploratory factor analysis (EFA) in articles published in Rehabilitation Psychology. Trained raters examined 66 separate exploratory factor analyses in 47 articles published between 1999 and April 2014. The raters recorded the aim of the EFAs, the distributional statistics, sample size, factor retention method(s), extraction and rotation method(s), and whether the pattern coefficients, structure coefficients, and the matrix of association were reported. The primary use of the EFAs was scale development, but the most widely used extraction and rotation method was principal component analysis, with varimax rotation. When determining how many factors to retain, multiple methods (e.g., scree plot, parallel analysis) were used most often. Many articles did not report enough information to allow for the duplication of their results. EFA relies on authors' choices (e.g., factor retention rules, extraction and rotation methods), and few articles adhered to all of the best practices. The current findings are compared to other empirical investigations into the use of EFA in published research. Recommendations for improving EFA reporting practices in rehabilitation psychology research are provided.
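
    One of the retention methods mentioned above, parallel analysis, compares the observed eigenvalues of the correlation matrix with eigenvalues obtained from random data of the same size. A minimal sketch on invented questionnaire data (not the reviewed studies' data) is:

```python
import numpy as np

def parallel_analysis(data, n_sims=500, seed=0):
    """Retain factors whose observed eigenvalue exceeds the 95th percentile
    of eigenvalues from random data of the same shape (Horn's criterion)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.empty((n_sims, p))
    for i in range(n_sims):
        sim = rng.normal(size=(n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    threshold = np.percentile(sim_eigs, 95, axis=0)
    return int(np.sum(obs_eigs > threshold)), obs_eigs, threshold

rng = np.random.default_rng(1)
example = rng.normal(size=(300, 8))          # hypothetical 8-item questionnaire
n_factors, obs, thr = parallel_analysis(example)
print(n_factors)
```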

  10. Identification of key factors affecting the water pollutant concentration in the sluice-controlled river reaches of the Shaying River in China via statistical analysis methods.

    Science.gov (United States)

    Dou, Ming; Zhang, Yan; Zuo, Qiting; Mi, Qingbin

    2015-08-01

    The construction of sluices creates a strong disturbance in water environmental factors within a river. The change in water pollutant concentrations of sluice-controlled river reaches (SCRRs) is more complex than that of natural river segments. To determine the key factors affecting water pollutant concentration changes in SCRRs, river reaches near the Huaidian Sluice in the Shaying River of China were selected as a case study, and water quality monitoring experiments based on different regulating modes were implemented in 2009 and 2010. To identify the key factors affecting the change rates for the chemical oxygen demand of permanganate (CODMn) and ammonia nitrogen (NH3-N) concentrations in the SCRRs of the Huaidian Sluice, partial correlation analysis, principal component analysis and principal factor analysis were used. The results indicate four factors, i.e., the inflow quantity from upper reaches, opening size of sluice gates, water pollutant concentration from upper reaches, and turbidity before the sluice, which are the common key factors for the CODMn and NH3-N concentration change rates. Moreover, the dissolved oxygen before a sluice is a key factor for the CODMn concentration change rate, and the water depth before a sluice is a key factor for the NH3-N concentration change rate. Multiple linear regressions between the water pollutant concentration change rate and the key factors were established, and the quantitative relationship between the CODMn and NH3-N concentration change rates and the key affecting factors was analyzed. Finally, the mechanism of action for the key factors affecting the water pollutant concentration changes was analyzed. The results reveal that the inflow quantity from upper reaches, opening size of sluice gates, the CODMn concentration from upper reaches and dissolved oxygen before the sluice have a negative influence and the turbidity before the sluice has a positive
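
    The multiple linear regressions relating a concentration change rate to its key factors amount to an ordinary least-squares fit. The sketch below uses invented values; only the predictor names follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40  # hypothetical number of monitored regulation events

# Invented predictors: inflow from upper reaches, gate opening, upstream
# concentration, turbidity before the sluice (standardized, synthetic values).
X = rng.normal(size=(n, 4))
change_rate = X @ np.array([-0.6, -0.4, -0.3, 0.5]) + rng.normal(scale=0.1, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, change_rate, rcond=None)
print("intercept:", coef[0])
print("slopes (inflow, gate opening, upstream conc., turbidity):", coef[1:])
```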

  11. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. A final goal is decision making. The paper describes the models and methods of multivariate analysis

  12. Multivariate analysis methods in physics

    International Nuclear Information System (INIS)

    Wolter, M.

    2007-01-01

    A review of multivariate methods based on statistical training is given. Several multivariate methods useful in high-energy physics analysis are discussed. Selected examples from current research in particle physics are discussed, both from the on-line trigger selection and from the off-line analysis. Statistical training methods are also presented and some new applications are suggested

  13. Methods in algorithmic analysis

    CERN Document Server

    Dobrushkin, Vladimir A

    2009-01-01

    …helpful to any mathematics student who wishes to acquire a background in classical probability and analysis … This is a remarkably beautiful book that would be a pleasure for a student to read, or for a teacher to make into a year's course.-Harvey Cohn, Computing Reviews, May 2010

  14. Complementing Gender Analysis Methods.

    Science.gov (United States)

    Kumar, Anant

    2016-01-01

    The existing gender analysis frameworks start with a premise that men and women are equal and should be treated equally. These frameworks give emphasis on equal distribution of resources between men and women and believe that this will bring equality which is not always true. Despite equal distribution of resources, women tend to suffer and experience discrimination in many areas of their lives such as the power to control resources within social relationships, and the need for emotional security and reproductive rights within interpersonal relationships. These frameworks believe that patriarchy as an institution plays an important role in women's oppression, exploitation, and it is a barrier in their empowerment and rights. Thus, some think that by ensuring equal distribution of resources and empowering women economically, institutions like patriarchy can be challenged. These frameworks are based on proposed equality principle which puts men and women in competing roles. Thus, the real equality will never be achieved. Contrary to the existing gender analysis frameworks, the Complementing Gender Analysis framework proposed by the author provides a new approach toward gender analysis which not only recognizes the role of economic empowerment and equal distribution of resources but suggests to incorporate the concept and role of social capital, equity, and doing gender in gender analysis which is based on perceived equity principle, putting men and women in complementing roles that may lead to equality. In this article the author reviews the mainstream gender theories in development from the viewpoint of the complementary roles of gender. This alternative view is argued based on existing literature and an anecdote of observations made by the author. While criticizing the equality theory, the author offers equity theory in resolving the gender conflict by using the concept of social and psychological capital.

  15. Analysis method on the factors of slope disaster occurring place with its application. Shamen saigai hassei chiten ni okeru soin bunseki no ichishuho to oyosei ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Setojima, M [Kokusai Kogyo Co. Ltd., Tokyo (Japan); Shiraishi, K [Public works Research Institute, Tokyo (Japan)

    1992-06-10

    A proposal is made on a method to analyze a slope disaster in detail from the viewpoint of 'where the disaster occurred' (analyzing in a line pattern the apertures of a collapse and the main scarps in a landslide). Methodological emphasis was placed on superimposing aerial photograph images taken before and after a disaster over imaged information on factors based on land conditions, including topographical characteristics. The applicability of this method was discussed using a landslide location in Niigata Prefecture as an example. This paper describes the results of the discussion under the following items: imaging the aerial photographs and correcting their geometrical distortion; extracting items of line-pattern information from the aerial photographs and classifying the land coverage; and analyzing the factors causing the landslide using an image superimposing process. The paper also mentions that this method may be applied effectively to disasters occurring on the ground surface. 11 refs., 8 figs., 2 tabs.

  16. Factor Analysis, AMMI Stability Value (ASV) Parameter and GGE Bi-Plot Graphical Method of Quantitative and Qualitative Traits in Potato Genotypes

    Directory of Open Access Journals (Sweden)

    Davood Hassanpanah

    2016-10-01

    Full Text Available Quantitative and qualitative traits and the stability of marketable tuber yield of 14 promising potato clones, along with three commercial cultivars (Agria, Marfona and Savalan) as checks, were evaluated at the Ardabil Agricultural and Natural Resources Research Station during 2013 and 2014. The experiment was based on a randomized complete block design with four replications. During the growing period and after harvest, traits like main stem number per plant, plant height, tuber number and weight per plant, total and marketable tuber yield, dry matter percentage, baking type, hollow heart, tuber inner ring and discoloration of raw tuber flesh after 24 hours were measured. Combined ANOVA for the quantitative traits showed that there were significant differences among the promising clones as to total and marketable tuber yield, tuber number and weight per plant, plant height, tuber mean weight, main stem number per plant and dry matter percentage, and in their interactions with year for total and marketable tuber yield. Clone 9 (397078-3), with the least marketable tuber yield, differed significantly from clones 4 (397045-13), 1 (397031-16), 3 (397031-11), 6 (397009-8) and 12 (397067-6) in 2013 and from clone 4 (397045-13) and the Agria cultivar in 2014. Clones 4 (397045-13), 1 (397031-16) and 12 (397067-6) had uniform tubers, yellow to dark-yellow skin and light-yellow to yellow flesh color, tuber shape of oval-round and round, shallow to mid-shallow eyes, no tuber inner ring, hollow heart or tuber inner crack, and mid-late maturity. They were selected for home consumption as chips, french fries and frying. Based on the results of factor analysis, "tuber yield", "number of tubers" and "plant structure and quality" were named as the first, second and third quality-determining factors, respectively. In this experiment, the GGE Bi-plot model and the AMMI Stability Value (ASV) parameter were acceptable methods for the selection of marketable tuber yield stability which found to

  17. The Influencing Factor Analysis on the Performance Evaluation of Assembly Line Balancing Problem Level 1 (SALBP-1) Based on ANOVA Method

    Science.gov (United States)

    Chen, Jie; Hu, Jiangnan

    2017-06-01

    Industry 4.0 and lean production have become the focus of manufacturing. A current issue is to analyse the performance of assembly line balancing. This study focuses on distinguishing the factors influencing assembly line balancing. The one-way ANOVA method is applied to explore the degree of significance of the distinguished factors, and a regression model is built to find the key points. The maximal task time (tmax), the quantity of tasks (n), and the degree of convergence of the precedence graph (conv) are critical for the performance of assembly line balancing. The conclusions will benefit lean production in manufacturing.
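
    A one-way ANOVA of the kind applied above can be run directly with scipy; the three groups below are invented line-efficiency samples grouped by an assumed level of the task-count factor, purely for illustration.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
# Hypothetical line-efficiency samples for three levels of the task count n.
small  = rng.normal(0.90, 0.03, size=20)
medium = rng.normal(0.87, 0.03, size=20)
large  = rng.normal(0.82, 0.03, size=20)

f_stat, p_value = f_oneway(small, medium, large)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # small p -> the factor is significant
```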

  18. STOCHASTIC METHODS IN RISK ANALYSIS

    Directory of Open Access Journals (Sweden)

    Vladimíra OSADSKÁ

    2017-06-01

    Full Text Available In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods highly depend on the practical experience and knowledge of the evaluator and that, therefore, stochastic methods should be introduced. The new risk analysis methods should consider the uncertainties in input values. We show how large the impact on the results of the analysis can be by solving a practical example of FMECA with uncertainties modelled using Monte Carlo sampling.
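
    Propagating input uncertainty through an FMECA reduces to sampling the uncertain scores and recomputing the risk measure. The Monte Carlo sketch below uses invented triangular ranges for one failure mode, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Uncertain FMECA scores for one failure mode, modelled as triangular
# distributions (low, mode, high) instead of single deterministic values (invented).
severity   = rng.triangular(6, 7, 9, size=n)
occurrence = rng.triangular(2, 4, 6, size=n)
detection  = rng.triangular(3, 5, 8, size=n)

rpn = severity * occurrence * detection        # risk priority number per sample

print("mean RPN:", rpn.mean())
print("90% interval:", np.percentile(rpn, [5, 95]))
print("P(RPN > 200):", (rpn > 200).mean())     # probability the mode exceeds a threshold
```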

  19. COMPETITIVE INTELLIGENCE ANALYSIS - SCENARIOS METHOD

    Directory of Open Access Journals (Sweden)

    Ivan Valeriu

    2014-07-01

    Full Text Available Keeping a company among the top performing players in the relevant market depends not only on its ability to develop continually, sustainably and in a balanced way, to the standards set by the customer and the competition, but also on its ability to protect its strategic information and to know in advance the strategic information of the competition. In addition, given that economic markets, regardless of their profile, enable interconnection not only among domestic companies, but also between domestic companies and foreign companies, the issue of economic competition moves from national economies to the field of interest of regional and international economic organizations. The stake for each economic player is to keep ahead of the competition and to be always prepared to face market challenges. Therefore, it needs to know as early as possible how to react to others’ strategy in terms of research, production and sales. If a competitor is planning to produce more and cheaper, then it must be prepared to counteract this move quickly. Competitive intelligence helps to evaluate the capabilities of competitors in the market, legally and ethically, and to develop response strategies. One of the main goals of competitive intelligence is to provide early warning and prevention of surprises that could have a major impact on the market share, reputation, turnover and profitability of a company in the medium and long term. This paper presents some aspects of competitive intelligence, mainly in terms of information analysis and intelligence generation. The presentation is theoretical and addresses a structured method of information analysis - the scenarios method - in a version that combines several types of analysis in order to reveal some interconnecting aspects of the factors governing the activity of a company.

  20. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis

  1. Basic methods of isotope analysis

    International Nuclear Information System (INIS)

    Ochkin, A.V.; Rozenkevich, M.B.

    2000-01-01

    The bases of the most widely applied methods of isotope analysis are briefly presented. The possibilities and analytical characteristics of the mass-spectrometric, spectral, radiochemical and special methods of isotope analysis, including the application of magnetic resonance, chromatography and refractometry, are considered

  2. Immunoautoradiographic analysis of epidermal growth factor receptors: a sensitive method for the in situ identification of receptor proteins and for studying receptor specificity

    International Nuclear Information System (INIS)

    Fernandez-Pol, J.A.

    1982-01-01

    The use of an immunoautoradiographic system for the detection and analysis of epidermal growth factor (EGF) receptors in human epidermoid carcinoma A-431 cells is reported. By utilizing this technique, the interaction between EGF and its membrane receptor in A-431 cells can be rapidly visualized. The procedure is simple, rapid, and very sensitive, and it provides conclusive evidence that the 150K dalton protein is the receptor for EGF in A-431 cells. In summary, the immunoautoradiographic procedure brings to the analysis of hormone receptor proteins the power that the radioimmunoassay technique has brought to the analysis of hormones. Thus, this assay system is potentially applicable across a wide spectrum of fields in nuclear medicine and biology

  3. Human factors analysis of incident/accident report

    International Nuclear Information System (INIS)

    Kuroda, Isao

    1992-01-01

    Human factors analysis of accidents/incidents involves difficulties of not only a technical but also a psychosocial nature. This report introduces some experiments with the 'variation diagram method', which can be extended to operational and managerial factors. (author)

  4. Probabilistic methods for rotordynamics analysis

    Science.gov (United States)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
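
    For a single-degree-of-freedom model m x'' + c x' + k x = 0 with m > 0, the Routh-Hurwitz conditions reduce to positive damping and stiffness, so a probability of instability can be estimated by plain sampling. The distributions below are assumptions for illustration; the paper's fast probability integration and adaptive importance sampling methods are far more efficient than this brute-force sketch.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# Uncertain effective damping and stiffness (cross-coupling effects may drive
# them negative); means and spreads are assumed, not taken from the paper.
c = rng.normal(loc=50.0, scale=30.0, size=n)     # N*s/m
k = rng.normal(loc=1.0e5, scale=2.0e4, size=n)   # N/m

# Routh-Hurwitz for m*s^2 + c*s + k with m > 0: stable iff c > 0 and k > 0.
unstable = (c <= 0) | (k <= 0)
print("estimated probability of instability:", unstable.mean())
```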

  5. Human factors estimation methods using physiological informations

    International Nuclear Information System (INIS)

    Takano, Ken-ichi; Yoshino, Kenji; Nakasa, Hiroyasu

    1984-01-01

    To enhance operational safety in nuclear power plants, it is necessary to decrease abnormal phenomena due to human errors. In particular, it is essential to basically understand human behaviors in the work environment of plant maintenance workers, inspectors, and operators. From the above standpoint, this paper presents the results of a literature survey on the present status of human factors engineering technology applicable to the nuclear power plant and also discusses the following items: (1) Application fields where ergonomical evaluation is needed for worker safety. (2) Basic methodology for investigating human performance. (3) Features of physiological information analysis among various types of ergonomical techniques. (4) Necessary conditions for the application of in-situ physiological measurement to the nuclear power plant. (5) Availability of physiological information analysis. (6) Effectiveness of the human factors engineering methodology, especially physiological information analysis, in the case of application to the nuclear power plant. The above discussion leads to the demonstration of the high applicability of physiological information analysis to nuclear power plants, in order to improve work performance. (author)

  6. Factor analysis improves the selection of prescribing indicators

    DEFF Research Database (Denmark)

    Rasmussen, Hanne Marie Skyggedal; Søndergaard, Jens; Sokolowski, Ineta

    2006-01-01

    OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was us...... appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns....

  7. Analysis of Precision of Activation Analysis Method

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Nørgaard, K.

    1973-01-01

    The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials, is tested by the statistic T...

  8. ANALYSIS OF THE FACTORS AFFECTING THE AVERAGE

    Directory of Open Access Journals (Sweden)

    Carmen BOGHEAN

    2013-12-01

    Full Text Available Productivity in agriculture most relevantly and concisely expresses the economic efficiency of using the factors of production. Labour productivity is affected by a considerable number of variables (including the relationship system and interdependence between factors, which differ in each economic sector and influence it, giving rise to a series of technical, economic and organizational idiosyncrasies. The purpose of this paper is to analyse the underlying factors of the average work productivity in agriculture, forestry and fishing. The analysis will take into account the data concerning the economically active population and the gross added value in agriculture, forestry and fishing in Romania during 2008-2011. The distribution of the average work productivity per factors affecting it is conducted by means of the u-substitution method.

  9. Analysis apparatus and method of analysis

    International Nuclear Information System (INIS)

    1976-01-01

    A continuous streaming method developed for the execution of immunoassays is described in this patent. In addition, a suitable apparatus for the method was developed whereby magnetic particles are automatically employed for the consecutive analysis of a series of liquid samples via the RIA technique

  10. Nonlinear programming analysis and methods

    CERN Document Server

    Avriel, Mordecai

    2012-01-01

    This text provides an excellent bridge between principal theories and concepts and their practical implementation. Topics include convex programming, duality, generalized convexity, analysis of selected nonlinear programs, techniques for numerical solutions, and unconstrained optimization methods.

  11. Chemical methods of rock analysis

    National Research Council Canada - National Science Library

    Jeffery, P. G; Hutchison, D

    1981-01-01

    A practical guide to the methods in general use for the complete analysis of silicate rock material and for the determination of all those elements present in major, minor or trace amounts in silicate...

  12. Confirmatory factor analysis using Microsoft Excel.

    Science.gov (United States)

    Miles, Jeremy N V

    2005-11-01

    This article presents a method for using Microsoft (MS) Excel for confirmatory factor analysis (CFA). CFA is often seen as an impenetrable technique, and thus, when it is taught, there is frequently little explanation of the mechanisms or underlying calculations. The aim of this article is to demonstrate that this is not the case; it is relatively straightforward to produce a spreadsheet in MS Excel that can carry out simple CFA. It is possible, with few or no programming skills, to effectively program a CFA analysis and, thus, to gain insight into the workings of the procedure.
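
    The calculation such a spreadsheet carries out, minimizing the maximum-likelihood discrepancy between the sample covariance S and the implied covariance Sigma = lambda lambda' + diag(psi) for a one-factor model, can be sketched as follows. This is a generic illustration on simulated data, not the article's Excel workbook.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Simulate 4 indicators of one latent factor (loadings and noise level assumed).
n, loadings_true = 500, np.array([0.8, 0.7, 0.6, 0.5])
factor = rng.normal(size=(n, 1))
data = factor @ loadings_true[None, :] + rng.normal(scale=0.5, size=(n, 4))
S = np.cov(data, rowvar=False)
p = S.shape[0]

def ml_discrepancy(theta):
    lam, psi = theta[:p], np.exp(theta[p:])        # log-parametrize uniquenesses > 0
    sigma = np.outer(lam, lam) + np.diag(psi)      # model-implied covariance
    # F_ML = log|Sigma| + tr(S Sigma^-1) - log|S| - p
    return (np.linalg.slogdet(sigma)[1] + np.trace(S @ np.linalg.inv(sigma))
            - np.linalg.slogdet(S)[1] - p)

theta0 = np.concatenate([np.full(p, 0.5), np.zeros(p)])
fit = minimize(ml_discrepancy, theta0, method="L-BFGS-B")
print("estimated loadings:", np.round(fit.x[:p], 2))   # sign is arbitrary
```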

  13. Novel absorptivity centering method utilizing normalized and factorized spectra for analysis of mixtures with overlapping spectra in different matrices using built-in spectrophotometer software.

    Science.gov (United States)

    Lotfy, Hayam Mahmoud; Omran, Yasmin Rostom

    2018-07-05

    A novel, simple, rapid, accurate, and economical spectrophotometric method, namely absorptivity centering (a-Centering), has been developed and validated for the simultaneous determination of mixtures with partially and completely overlapping spectra in different matrices, using either the normalized or the factorized spectrum and built-in spectrophotometer software, without the need for a specially purchased program. Mixture I (Mix I), composed of Simvastatin (SM) and Ezetimibe (EZ), is the one with partially overlapping spectra, formulated as tablets, while mixture II (Mix II), formed by Chloramphenicol (CPL) and Prednisolone acetate (PA), is the one with completely overlapping spectra, formulated as eye drops. These procedures do not require any separation steps. Resolution of the spectrally overlapping binary mixtures has been achieved by recovering the zero-order (D0) spectrum of each drug; absorbance was then recorded at their maxima of 238, 233.5, 273 and 242.5 nm for SM, EZ, CPL and PA, respectively. Calibration graphs were established with good correlation coefficients. The method shows significant advantages such as simplicity and minimal data manipulation, besides maximum reproducibility and robustness. Moreover, it was validated according to ICH guidelines. Selectivity was tested using laboratory-prepared mixtures. Accuracy, precision and repeatability were found to be within the acceptable limits. The proposed method is good enough to be applied to the assay of the drugs in their combined formulations without any interference from excipients. The obtained results were statistically compared with those of the reported and official methods by applying the t-test and F-test at the 95% confidence level, concluding that there is no significant difference with regard to accuracy and precision. Generally, this method could be used successfully for routine quality control testing. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Development and validation of a method for the determination of regulated fragrance allergens by High-Performance Liquid Chromatography and Parallel Factor Analysis 2.

    Science.gov (United States)

    Pérez-Outeiral, Jessica; Elcoroaristizabal, Saioa; Amigo, Jose Manuel; Vidal, Maider

    2017-12-01

    This work presents the development and validation of a multivariate method for quantitation of 6 potentially allergenic substances (PAS) related to fragrances by ultrasound-assisted emulsification microextraction coupled with HPLC-DAD and PARAFAC2 in the presence of other 18 PAS. The objective is the extension of a previously proposed univariate method to be able to determine the 24 PAS currently considered as allergens. The suitability of the multivariate approach for the qualitative and quantitative analysis of the analytes is discussed through datasets of increasing complexity, comprising the assessment and validation of the method performance. PARAFAC2 showed to adequately model the data facing up different instrumental and chemical issues, such as co-elution profiles, overlapping spectra, unknown interfering compounds, retention time shifts and baseline drifts. Satisfactory quality parameters of the model performance were obtained (R 2 ≥0.94), as well as meaningful chromatographic and spectral profiles (r≥0.97). Moreover, low errors of prediction in external validation standards (below 15% in most cases) as well as acceptable quantification errors in real spiked samples (recoveries from 82 to 119%) confirmed the suitability of PARAFAC2 for resolution and quantification of the PAS. The combination of the previously proposed univariate approach, for the well-resolved peaks, with the developed multivariate method allows the determination of the 24 regulated PAS. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. A study of environmental polluting factors by neutron activation method

    International Nuclear Information System (INIS)

    Paunoiu, C.; Doca, C.

    2004-01-01

    The paper presents: a) some important factors of environmental pollution; b) the theoretical aspects of Neutron Activation Analysis (NAA) used in the study of environmental pollution; c) the NAA-specific hardware and software facilities existing at the Institute for Nuclear Research; d) a direct application of the NAA method in the study of environmental pollution for the city of Pitesti by the analysis of some ground and vegetation samples; e) results and conclusions. (authors)

  16. Seismic design and analysis methods

    International Nuclear Information System (INIS)

    Varpasuo, P.

    1993-01-01

    Seismic load is in many areas of the world the most important loading situation from the point of view of structural strength. Taking this into account, it is understandable that there has been a strong allocation of resources to seismic analysis during the past ten years. This study focuses on three areas: (1) random vibrations; (2) soil-structure interaction and (3) methods for determining the structural response. The solution of random vibration problems is clarified with the aid of applications in this study; as regards the mathematical treatment and formulations, it is deemed sufficient to give the relevant sources. In the soil-structure interaction analysis the focus has been on the significance of frequency-dependent impedance functions. The results show that describing the soil with frequency-dependent impedance functions decreases the structural response, and it is thus always the preferred method when compared to more conservative analysis types. Of the methods for determining the structural response, the following four were tested: (1) the time history method; (2) the complex frequency-response method; (3) the response spectrum method and (4) the equivalent static force method. The time history method appeared to be the most accurate, and the complex frequency-response method had the widest area of application. (orig.). (14 refs., 35 figs.)

  17. Diagnostic methods of tubal factor in infertility

    International Nuclear Information System (INIS)

    Korzon, T.; Mielnik, J.; Gosciniak, W.

    1993-01-01

    The diagnostic methods for the tubal factor in infertility are presented. PJ, PK, HSG and pelviscopy are discussed in detail. These examinations constitute the basic ones in infertility. We turned our attention to technical details and possible mistakes which may occur while performing them; such misinterpretations may lead to an absolutely wrong conclusion and diagnosis. The authors have wide experience in performing the discussed examinations, which allows them to share their opinion. Over the years several thousand PK and HSG examinations have been carried out, as well as 1000 laparoscopies. (author)

  18. Analysis Of Factors Affecting Gravity-Induced Deflection For Large And Thin Wafers In Flatness Measurement Using Three-Point-Support Method

    Directory of Open Access Journals (Sweden)

    Liu Haijun

    2015-12-01

    Full Text Available Accurate flatness measurement of silicon wafers is affected greatly by the gravity-induced deflection (GID of the wafers, especially for large and thin wafers. The three-point-support method is a preferred method for the measurement, in which the GID uniquely determined by the positions of the supports could be calculated and subtracted. The accurate calculation of GID is affected by the initial stress of the wafer and the positioning errors of the supports. In this paper, a finite element model (FEM including the effect of initial stress was developed to calculate GID. The influence of the initial stress of the wafer on GID calculation was investigated and verified by experiment. A systematic study of the effects of positioning errors of the support ball and the wafer on GID calculation was conducted. The results showed that the effect of the initial stress could not be neglected for ground wafers. The wafer positioning error and the circumferential error of the support were the most influential factors while the effect of the vertical positioning error was negligible in GID calculation.

  19. This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, 30 research observations through census method. Method of analysis used in the study is multiple linear regressions. The results of analysis showed that the factors of labor, innovation and promotion have positive and significant influence on the business success of small business ‘processed rotan’ simultaneously. The analysis also showed that partially labor has positive and significant influence on the business success, yet innovation and promotion have insignificant and positive influence on the business success.

    OpenAIRE

    Nasution, Inggrita Gusti Sari; Muchtar, Yasmin Chairunnisa

    2013-01-01

    This research is to study the factors which influence the business success of small business ‘processed rotan’. The data employed in the study are primary data within the period of July to August 2013, 30 research observations through census method. Method of analysis used in the study is multiple linear regressions. The results of analysis showed that the factors of labor, innovation and promotion have positive and significant influence on the business success of small busine...

  20. Contribution of lymph node staging method and prognostic factors in malignant ovarian sex cord-stromal tumors: A world wide database analysis.

    Science.gov (United States)

    Wang, Jieyu; Li, Jun; Chen, Ruifang; Lu, Xin

    2018-07-01

    To investigate the clinicopathologic prognostic factors in patients with malignant sex cord-stromal tumors (SCSTs) with lymph node dissection, and at the same time, to evaluate the influence of the log odds of positive lymph nodes (LODDS) on their survival. Patients diagnosed with malignant SCSTs who underwent lymph node dissection were extracted from the 1988-2013 Surveillance, Epidemiology, and End Results (SEER) database. Overall survival (OS) and cancer-specific survival (CSS) were estimated by Kaplan-Meier curves. The Cox proportional hazards regression model was used to identify independent predictors of survival. 576 patients with malignant SCSTs and with lymphadenectomy were identified, including 468 (81.3%) patients with granulosa cell tumors (GCTs) and 80 (13.9%) patients with Sertoli-Leydig cell tumors (SLCTs). 399 (69.3%) patients and 118 (20.5%) patients were in the LODDS < -1 group and -1 ≤ LODDS < -0.5 group, respectively. The 10-year OS rate was 80.9% and CSS was 87.2% in the LODDS < -0.5 group, whereas the survival rates for other groups were 68.5% and 73.3%. On multivariate analysis, age 50 years or less (p < 0.001), tumor size of 10 cm or less (p < 0.001), early-stage disease (p < 0.001), and GCT histology (p ≤ 0.001) were the significant prognostic factors for improved survival. LODDS < -0.5 was associated with a favorable prognosis (OS: p = 0.051; CSS:P = 0.055). Younger age, smaller tumor size, early stage, and GCT histologic type are independent prognostic factors for improved survival in patients with malignant SCST with lymphadenectomy. Stratified LODDS could be regarded as an effective value to assess the lymph node status, and to predict the survival status of patients. Copyright © 2018 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.

  1. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.
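
    The dependence on the kernel width noted above is easy to reproduce with a plain Gaussian-kernel PCA (the MAF variant additionally uses a spatial-lag covariance, omitted here). The data below are random stand-ins, not the South Greenland geochemistry.

```python
import numpy as np

def kernel_pca(X, sigma, n_components=2):
    """Gaussian-kernel PCA: build the kernel matrix, double-centre it,
    eigendecompose, and return the projected scores of the training points."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T          # squared distances
    K = np.exp(-d2 / (2 * sigma**2))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                         # double centring
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, idx] * np.sqrt(eigvals[idx])         # projected scores

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 2))                              # stand-in for sample data
for sigma in (0.1, 1.0, 10.0):                             # the width clearly changes the factors
    scores = kernel_pca(X, sigma)
    print(sigma, scores.std(axis=0))
```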

  2. A factor analysis to detect factors influencing building national brand

    Directory of Open Access Journals (Sweden)

    Naser Azad

    Full Text Available Developing a national brand is one of the most important issues for the development of a brand. In this study, we present a factor analysis to detect the most important factors in building a national brand. The proposed study uses factor analysis to extract the most influential factors, and the sample has been chosen from two major automakers in Iran called Iran Khodro and Saipa. The questionnaire was designed on a Likert scale and distributed among 235 experts. Cronbach's alpha is calculated as 84%, which is well above the minimum desirable limit of 0.70. The implementation of factor analysis provides six factors including "cultural image of customers", "exciting characteristics", "competitive pricing strategies", "perception image" and "previous perceptions".
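
    The reliability figure quoted above (Cronbach's alpha of 0.84) is a simple function of item and total-score variances. The sketch below computes it on invented Likert responses; only the respondent count follows the abstract.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(8)
latent = rng.normal(size=(235, 1))                         # 235 respondents, as in the study
# Invented 10-item questionnaire: items share a common latent component plus noise,
# rounded and clipped to a 1-5 Likert scale.
responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(235, 10))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```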

  3. "Why did you really do it?" A mixed-method analysis of the factors underpinning motivations to register as a body donor.

    Science.gov (United States)

    Cornwall, Jon; Poppelwell, Zoe; McManus, Ruth

    2018-05-15

    Individuals who register as body donors do so for various reasons, with aiding medical science a common motivation. Despite awareness of several key reasons for donation, there are few in-depth explorations of these motivations to contextualize persons' reasons for donating. This study undertakes a mixed-method exploration of motivations for body donation to facilitate deeper understanding of the reasons underpinning donor registration. A survey of all newly registered body donors at a New Zealand university was performed over a single year. The survey included basic demographic information, a categorical question on reason for donation, a free-text question on donation motivation, and a free-text question allowing "other" comments on body donation. Basic statistical analysis was performed on demographic and categorical data, and thematic analysis used on free-text responses. From 169 registrants, 126 people (average age 70.5 years; 72 female) returned completed surveys (response rate 75%). Categorical data indicate a primary motivation of aiding medical science (86%). Fifty-one respondents (40%) provided free-text data on motivation, with other comments related to motivation provided by forty-one (33%). Common themes included reference to usefulness, uniqueness (pathophysiology and anatomy), gift-giving, kinship, and impermanence of the physical body. Consistent with previous studies, the primary reason for body donation was aiding medical science, however underpinning this was a complex layer of themes and sub-themes shaping motivations for choices. Findings provide important information that can guide development of robust informed consent processes, aid appropriate thanksgiving service delivery, and further contextualize the importance of medical professionals in body donation culture. Anat Sci Educ. © 2018 American Association of Anatomists. © 2018 American Association of Anatomists.

  4. Limitations of systemic accident analysis methods

    Directory of Open Access Journals (Sweden)

    Casandra Venera BALAN

    2016-12-01

    Full Text Available In terms of system theory, the description of complex accidents is not limited to the analysis of the sequence of events/individual conditions, but highlights nonlinear functional characteristics and frames human or technical performance in relation to the normal functioning of the system under safe conditions. Thus, the research of the system entities as a whole is no longer an abstraction of a concrete situation, but goes beyond the theoretical limits set by analyses based on linear methods. Despite the issues outlined above, the hypothesis that there is no complete method for accident analysis is supported by the nonlinearity of the considered functions or restrictions, which imposes a broad view of the elements introduced in the analysis so that it can identify elements corresponding to nominal parameters or trigger factors.

  5. EXPLORATORY FACTOR ANALYSIS (EFA) IN CONSUMER BEHAVIOR AND MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    Marcos Pascual Soler

    2012-06-01

    Full Text Available Exploratory Factor Analysis (EFA) is one of the most widely used statistical procedures in social research. The main objective of this work is to describe the most common practices used by researchers in the consumer behavior and marketing area. Through a literature review methodology, the practices of EFA in five consumer behavior and marketing journals (2000-2010) were analyzed. Then, the choices made by the researchers concerning factor model, retention criteria, rotation, factor interpretation and other issues relevant to factor analysis were analyzed. The results suggest that researchers routinely conduct analyses using questionable methods. Suggestions for improving the use of factor analysis and the reporting of results are presented and a checklist (Exploratory Factor Analysis Checklist, EFAC) is provided to help editors, reviewers, and authors improve the reporting of exploratory factor analysis.

  6. Flows method in global analysis

    International Nuclear Information System (INIS)

    Duong Minh Duc.

    1994-12-01

    We study the gradient flows method for W^{r,p}(M,N), where M and N are Riemannian manifolds and r may be less than m/p. We localize some global analysis problems by constructing gradient flows which only change the value of any u in W^{r,p}(M,N) in a local chart of M. (author). 24 refs.

  7. Analysis of Bernstein's factorization circuit

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Tomlinson, J.; Tromer, E.; Zheng, Y.

    2002-01-01

    In [1], Bernstein proposed a circuit-based implementation of the matrix step of the number field sieve factorization algorithm. These circuits offer an asymptotic cost reduction under the measure "construction cost x run time". We evaluate the cost of these circuits, in agreement with [1], but argue

  8. An unconventional method of quantitative microstructural analysis

    International Nuclear Information System (INIS)

    Rastani, M.

    1995-01-01

    The experiment described here introduces a simple methodology which could be used to replace the time-consuming and expensive conventional methods of metallographic and quantitative analysis of the effect of thermal treatment on microstructure. The method is ideal for the microstructural evaluation of tungsten filaments and other wire samples, such as copper wire, which can be conveniently coiled. Ten such samples were heat treated by ohmic resistance at temperatures which were expected to span the recrystallization range. After treatment, the samples were evaluated in the elastic recovery test. The normalized elastic recovery factor was defined in terms of the measured deflections. Experimentally it has been shown that the elastic recovery factor depends on the degree of recrystallization; in other words, this factor is used to determine the fraction of unrecrystallized material. Because the elastic recovery method examines the whole filament rather than just one section through the filament, as in the metallographic method, it measures the degree of recrystallization more accurately. The method also requires considerably less time and expense than the conventional method.

  9. An Analysis of Construction Accident Factors Based on Bayesian Network

    OpenAIRE

    Yunsheng Zhao; Jinyong Pei

    2013-01-01

    In this study, we present an analysis of construction accident factors based on a Bayesian network. First, accident cases are analyzed to build a fault tree, which makes it possible to identify all the factors causing the accidents; the factors are then analyzed qualitatively and quantitatively with the Bayesian network method, and finally a safety management program is determined to guide safety operations. The results of this study show that bad condition of geological environment has the largest posterio...
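
    The study derives its network structure and probabilities from fault-tree analysis of real accident cases, which are not reproduced in the abstract. As a minimal sketch of how a posterior such as "bad geological condition given an accident" is obtained from such a network, the toy example below enumerates a two-cause, one-effect network with made-up probabilities.

```python
# Toy Bayesian network: two root causes (G = bad geological condition,
# M = poor safety management) and one effect (A = accident).
# All probabilities are illustrative, not taken from the study.
p_g = 0.3                      # P(G = 1)
p_m = 0.2                      # P(M = 1)
p_a_given = {                  # P(A = 1 | G, M)
    (0, 0): 0.01, (0, 1): 0.10,
    (1, 0): 0.25, (1, 1): 0.60,
}

def joint(g, m, a):
    """Joint probability P(G=g, M=m, A=a) under the toy network."""
    pg = p_g if g else 1 - p_g
    pm = p_m if m else 1 - p_m
    pa = p_a_given[(g, m)] if a else 1 - p_a_given[(g, m)]
    return pg * pm * pa

# Posterior P(G = 1 | A = 1) by enumerating the joint distribution
num = sum(joint(1, m, 1) for m in (0, 1))
den = sum(joint(g, m, 1) for g in (0, 1) for m in (0, 1))
print(f"P(bad geology | accident) = {num / den:.3f}")
```

    Ranking causes by such posteriors is what allows the safety management program to target the most influential factor first.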

  10. Quantitative analysis method for ship construction quality

    Directory of Open Access Journals (Sweden)

    FU Senzong

    2017-03-01

    Full Text Available The excellent performance of a ship is assured by the accurate evaluation of its construction quality. For a long time, research into the construction quality of ships has mainly focused on qualitative analysis due to a shortage of process data, which results from limited samples, varied process types and non-standardized processes. Aiming at predicting and controlling the influence of the construction process on the construction quality of ships, this article proposes a quantitative reliability analysis flow path and a fuzzy calculation method for the ship construction process. Based on the process-quality factor model proposed by the Function-Oriented Quality Control (FOQC) method, we combine fuzzy mathematics with the expert grading method to deduce formulations calculating the fuzzy process reliability of the ordinal connection model, series connection model and mixed connection model. The quantitative analysis method is applied in analyzing the process reliability of a ship's shaft gear box installation, which proves the applicability and effectiveness of the method. The analysis results can be a useful reference for setting key quality inspection points and optimizing key processes.
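
    The paper's fuzzy formulations are not given in the abstract, so the following sketch only illustrates the underlying idea under simple assumptions: expert grades on a 0-1 reliability scale are aggregated into a crisp step reliability (a weighted mean stands in for the fuzzy scheme), and steps connected in series multiply.

```python
import numpy as np

def crisp_reliability(expert_grades, weights=None):
    """Aggregate expert grades (0-1 scale) for one process step into a crisp
    reliability value; a weighted mean stands in for the paper's fuzzy scheme."""
    grades = np.asarray(expert_grades, dtype=float)
    w = np.ones_like(grades) if weights is None else np.asarray(weights, dtype=float)
    return float(np.average(grades, weights=w))

def series_reliability(step_reliabilities):
    """Series connection model: the process chain works only if every step works."""
    return float(np.prod(step_reliabilities))

# Illustrative installation chain with three steps, each graded by three experts
# (all numbers are made up, not taken from the shaft gear box case study).
steps = [
    crisp_reliability([0.95, 0.90, 0.92]),
    crisp_reliability([0.88, 0.93, 0.90]),
    crisp_reliability([0.97, 0.96, 0.94]),
]
print(f"series process reliability = {series_reliability(steps):.3f}")
```

    In such a chain, the step with the lowest aggregated reliability is the natural candidate for a key quality inspection point.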

  11. Data Analysis Methods for Paleogenomics

    DEFF Research Database (Denmark)

    Avila Arcos, Maria del Carmen

    The work presented in this thesis is the result of research carried out during a three-year PhD at the Centre for GeoGenetics, Natural History Museum of Denmark, University of Copenhagen, under supervision of Professor Tom Gilbert. The PhD was funded by the Danish National Research Foundation (Danmarks Grundforskningfond) 'Centre of Excellence in GeoGenetics' grant, with additional funding provided by the Danish Council for Independent Research 'Sapere Aude' programme. The thesis comprises five chapters, all of which represent different projects that involved the analysis of massive amounts of data, thanks to the introduction of NGS and the implementation of data analysis methods specific for each project. Chapters 1 to 3 have been published in peer-reviewed journals and Chapter 4 is currently in review. Chapter 5 consists of a manuscript describing initial results of an ongoing research project...

  12. A New Boron Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Weitman, J; Daaverhoeg, N; Farvolden, S

    1970-07-01

    In connection with fast neutron (n, α) cross section measurements a novel boron analysis method has been developed. The boron concentration is inferred from the mass spectrometrically determined number of helium atoms produced in the thermal and epithermal B-10 (n, α) reaction. The relation between helium amount and boron concentration is given, including corrections for self-shielding effects and background levels. Direct and diffusion losses of helium are calculated and losses due to gettering, adsorption and HF-ionization in the release stage are discussed. A series of boron determinations is described and the results are compared with those obtained by other methods, showing excellent agreement. The lower limit of boron concentration which can be measured varies with type of sample. In e.g. steel, concentrations below 10^-5 % boron in samples of 0.1-1 gram may be determined.
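
    As a rough illustration of the relation between helium amount and boron concentration (ignoring the self-shielding, background and helium-loss corrections the report applies), a thin-sample estimate follows N_He ≈ N_B10 · σ_th · Φ, so the boron content can be back-calculated from the measured helium. All numerical inputs below are assumptions, not values from the report.

```python
# Rough estimate of boron concentration from mass-spectrometric helium counts,
# using N_He ≈ N_B10 * sigma_thermal * fluence (thin sample, no self-shielding).
# Irradiation and measurement values below are illustrative assumptions.
N_A = 6.022e23               # atoms/mol
sigma = 3837e-24             # B-10 (n,alpha) thermal cross-section, cm^2
abundance_b10 = 0.199        # natural B-10 atom fraction
molar_mass_b = 10.81         # g/mol, natural boron

fluence = 1.0e18             # thermal neutron fluence, n/cm^2 (assumed)
helium_atoms = 2.0e12        # helium atoms measured by mass spectrometry (assumed)
sample_mass = 0.5            # g (assumed steel sample)

# Invert N_He = N_B * abundance * sigma * fluence for the number of boron atoms
n_boron = helium_atoms / (abundance_b10 * sigma * fluence)
boron_grams = n_boron * molar_mass_b / N_A
print(f"boron concentration ~ {100 * boron_grams / sample_mass:.2e} wt%")
```

    With these assumed values the estimate lands near 10^-5 wt%, consistent with the detection limit quoted for steel samples above.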

  13. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    of the model stays high at longer horizons. The estimated factors are strongly statistically significant according to a bootstrap resampling method which takes into account that the factors are estimated regressors. The simple three-factor model also contains substantial out-of-sample predictive power...

  14. Gravimetric and titrimetric methods of analysis

    International Nuclear Information System (INIS)

    Rives, R.D.; Bruks, R.R.

    1983-01-01

    Gravimetric and titrimetric methods of analysis are considered. Methods of complexometric titration are mentioned, as well as methods of increasing sensitivity in titrimetry. Gravimetry and titrimetry are applied during analysis for traces of geological materials

  15. Amplification of the epidermal growth factor receptor gene in glioblastoma: an analysis of the relationship between genotype and phenotype by CISH method.

    Science.gov (United States)

    Miyanaga, Tomomi; Hirato, Junko; Nakazato, Yoichi

    2008-04-01

    We examined epidermal growth factor receptor (EGFR) overexpression and EGFR gene amplification using immunohistochemistry (IHC) and chromogenic in situ hybridization (CISH) in 109 glioblastomas, including 98 primary glioblastomas and 11 secondary glioblastomas. EGFR overexpression and EGFR gene amplification were found in 33% and 24% of glioblastoma, respectively, and all of those cases were primary glioblastoma. Large ischemic necrosis was significantly more frequent in primary glioblastomas than in secondary glioblastomas (54% vs. 18%), but pseudopalisading necrosis was not (65% vs. 54%). EGFR gene amplification was detected significantly more frequently in cases with both types of necrosis. Although glioblastomas with EGFR gene amplification invariably exhibited EGFR overexpression at the level of the whole tumor, tumor cells with EGFR gene amplification did not always show EGFR overexpression at the level of individual tumor cells. Cases of "strong" EGFR overexpression on IHC could be regarded as having EGFR gene amplification, and cases without EGFR overexpression could not. Cases of "weak" EGFR overexpression should be tested with CISH to confirm the presence of EGFR gene amplification. We found that 54% of glioblastomas with EGFR gene amplification were composed of areas with and without EGFR gene amplification; however, there were no obvious differences in morphology between tumor cells with and without EGFR gene amplification. Although small cell architecture might be associated with EGFR gene amplification at the level of the whole tumor, it did not always suggest amplification of the EGFR gene at the level of individual tumor cells. In one case, it seemed to suggest that a clone with EGFR gene amplification may arise in pre-existing tumor tissue and extend into the surrounding area. In cases of overall EGFR amplification, CISH would be a useful tool to decide the tumor border in areas infiltrated by tumor cells.

  16. Convergence Improvement of Response Matrix Method with Large Discontinuity Factors

    International Nuclear Information System (INIS)

    Yamamoto, Akio

    2003-01-01

    In the response matrix method, a numerical divergence problem has been reported when extremely small or large discontinuity factors are utilized in the calculations. In this paper, an alternative response matrix formulation to solve the divergence problem is discussed, and properties of iteration matrixes are investigated through eigenvalue analyses. In the conventional response matrix formulation, partial currents between adjacent nodes are assumed to be discontinuous, and outgoing partial currents are converted into incoming partial currents by the discontinuity factor matrix. Namely, the partial currents of the homogeneous system (i.e., homogeneous partial currents) are treated in the conventional response matrix formulation. In this approach, the spectral radius of an iteration matrix for the partial currents may exceed unity when an extremely small or large discontinuity factor is used. Contrary to this, an alternative response matrix formulation using heterogeneous partial currents is discussed in this paper. In the latter approach, partial currents are assumed to be continuous between adjacent nodes, and discontinuity factors are directly considered in the coefficients of a response matrix. From the eigenvalue analysis of the iteration matrix for the one-group, one-dimensional problem, the spectral radius for the heterogeneous partial current formulation does not exceed unity even if an extremely small or large discontinuity factor is used in the calculation; numerical stability of the alternative formulation is superior to the conventional one. The numerical stability of the heterogeneous partial current formulation is also confirmed by the two-dimensional light water reactor core analysis. Since the heterogeneous partial current formulation does not require any approximation, the converged solution exactly reproduces the reference solution when the discontinuity factors are directly derived from the reference calculation
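
    The stability argument above rests on whether the spectral radius of the partial-current iteration matrix stays below unity. The sketch below shows only that generic eigenvalue check on small illustrative matrices; it does not reproduce the paper's one-group, one-dimensional response matrix formulation, and the matrix entries are made up.

```python
import numpy as np

def spectral_radius(iteration_matrix):
    """Largest eigenvalue magnitude; the fixed-point iteration converges only if it is < 1."""
    return float(np.max(np.abs(np.linalg.eigvals(iteration_matrix))))

# Illustrative 4x4 iteration matrices standing in for the homogeneous- and
# heterogeneous-partial-current formulations (entries are invented).
conventional = np.array([
    [0.0, 0.6, 0.0, 0.0],
    [1.9, 0.0, 0.3, 0.0],   # a large off-diagonal term mimics an extreme discontinuity factor
    [0.0, 0.3, 0.0, 0.6],
    [0.0, 0.0, 0.6, 0.0],
])
alternative = 0.45 * np.eye(4) + 0.1 * np.ones((4, 4))

for name, m in [("conventional", conventional), ("alternative", alternative)]:
    rho = spectral_radius(m)
    print(f"{name}: spectral radius = {rho:.3f} -> {'diverges' if rho >= 1 else 'converges'}")
```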

  17. Novel Method of Production Decline Analysis

    Science.gov (United States)

    Xie, Shan; Lan, Yifei; He, Lei; Jiao, Yang; Wu, Yong

    2018-02-01

    ARPS decline curves are the most commonly used in oil and gas fields due to their minimal data requirements and ease of application. Prediction of production decline based on ARPS analysis relies on a known decline type. However, when the decline exponents are very close under different decline types, it is difficult to recognize the decline trend of the matched curves directly. Because of these difficulties, a new dynamic decline prediction model is introduced, based on the simulation results of multi-factor response experiments and using multiple linear regression of the influence factors. First, interaction experimental schemes are designed according to a study of the factors affecting production decline. Based on the simulated results, the annual decline rate is predicted by the decline model. The new method is then applied to the A gas field of the Ordos Basin as an example to illustrate its reliability. The results confirm that the new model can directly predict the decline tendency without needing to recognize the decline type, and it also offers high accuracy. Finally, the new method improves the evaluation of gas well production decline in low-permeability gas reservoirs and provides technical support for further understanding of tight gas field development laws.
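
    For orientation, the classical ARPS forms being distinguished are exponential (b = 0), hyperbolic (0 < b < 1) and harmonic (b = 1), with the hyperbolic rate q(t) = q_i (1 + b·D_i·t)^(-1/b). The sketch below fits that curve to synthetic rates; it is not the paper's multi-factor regression model, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def arps(t, qi, di, b):
    """General ARPS hyperbolic decline; b -> 0 recovers exponential, b = 1 harmonic."""
    b = max(b, 1e-6)                       # avoid division by zero in the exponential limit
    return qi * (1.0 + b * di * t) ** (-1.0 / b)

# Synthetic monthly production rates with noise (illustrative only)
t = np.arange(0, 60, dtype=float)          # months
rng = np.random.default_rng(0)
q_obs = arps(t, 1200.0, 0.08, 0.5) * (1 + 0.03 * rng.standard_normal(t.size))

popt, _ = curve_fit(arps, t, q_obs, p0=[1000.0, 0.05, 0.3],
                    bounds=([0, 0, 0], [np.inf, 1.0, 1.0]))
qi, di, b = popt
print(f"fitted qi={qi:.0f}, Di={di:.3f} 1/month, b={b:.2f}")
print(f"annual decline rate ~ {100 * (1 - arps(12, qi, di, b) / qi):.1f} %")
```

    When fitted b values under different decline types are nearly identical, as the abstract notes, the fitted curve alone cannot identify the decline style, which is the gap the regression-based model is meant to close.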

  18. Analysis of technological, institutional and socioeconomic factors ...

    African Journals Online (AJOL)

    Analysis of technological, institutional and socioeconomic factors that influences poor reading culture among secondary school students in Nigeria. ... Proliferation and availability of smart phones, chatting culture and social media were identified as technological factors influencing poor reading culture among secondary ...

  19. A Bayesian Nonparametric Approach to Factor Analysis

    DEFF Research Database (Denmark)

    Piatek, Rémi; Papaspiliopoulos, Omiros

    2018-01-01

    This paper introduces a new approach for the inference of non-Gaussian factor models based on Bayesian nonparametric methods. It relaxes the usual normality assumption on the latent factors, widely used in practice, which is too restrictive in many settings. Our approach, on the contrary, does no...

  20. Hand function evaluation: a factor analysis study.

    Science.gov (United States)

    Jarus, T; Poremba, R

    1993-05-01

    The purpose of this study was to investigate hand function evaluations. Factor analysis with varimax rotation was used to assess the fundamental characteristics of the items included in the Jebsen Hand Function Test and the Smith Hand Function Evaluation. The study sample consisted of 144 subjects without disabilities and 22 subjects with Colles fracture. Results suggest a four factor solution: Factor I--pinch movement; Factor II--grasp; Factor III--target accuracy; and Factor IV--activities of daily living. These categories differentiated the subjects without Colles fracture from the subjects with Colles fracture. A hand function evaluation consisting of these four factors would be useful. Such an evaluation that can be used for current clinical purposes is provided.

  1. Analysis methods (from 301 to 351)

    International Nuclear Information System (INIS)

    Analysis methods of materials used in the nuclear field (uranium, plutonium and their compounds, zirconium, magnesium, water...) and determination of impurities. Only reliable methods are selected [fr

  2. A survey on critical factors influencing new advertisement methods

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-02-01

    Full Text Available Soft drink beverages are an important part of many people's diets, and many prefer a soft drink to water at dinner. This business has therefore been one of the longest-lasting sectors, with little change in the products themselves over many years. However, new methods of advertisement play an important role in increasing market share. In this paper, we study the impact of new methods of advertisement on product development. The study designs a questionnaire for one of the Iranian soft drink producers, consisting of 274 questions on a Likert scale, and uses factor analysis (FA) to analyze the results. The study selects 250 people who live in the city of Tehran, Iran, and the Cronbach alpha has been calculated as 0.88, which is well above the minimum desirable limit. According to our results, there were six important factors influencing product development: modern advertisement techniques, emotional impact, market leadership strategy, pricing strategy, product life cycle and supply entity. The most important factor loadings in these six components include the impact of social values, persuading unaware and uninformed customers, the ability to monopolize production, improving pricing techniques, the product life cycle and the negative impact of excessive advertising.
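
    Since the reliability of the questionnaire is summarized by a Cronbach alpha of 0.88, a minimal sketch of how that coefficient is computed from a respondents-by-items matrix may help; the simulated Likert data below are illustrative and unrelated to the survey.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# 250 simulated respondents answering 20 five-point Likert items
rng = np.random.default_rng(42)
trait = rng.normal(3.0, 0.8, size=(250, 1))                       # shared latent attitude
answers = np.clip(np.round(trait + rng.normal(0, 0.7, size=(250, 20))), 1, 5)
print(f"Cronbach alpha = {cronbach_alpha(answers):.2f}")
```

    Values above roughly 0.7 are conventionally taken as acceptable internal consistency, which is why the reported 0.88 is described as well above the minimum desirable limit.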

  3. Left ventricular wall motion abnormalities evaluated by factor analysis as compared with Fourier analysis

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Ikuno, Yoshiyasu; Nishikimi, Toshio

    1986-01-01

    Factor analysis was applied to multigated cardiac pool scintigraphy to evaluate its ability to detect left ventricular wall motion abnormalities in 35 patients with old myocardial infarction (MI), and in 12 control cases with normal left ventriculography. All cases were also evaluated by conventional Fourier analysis. In most cases with normal left ventriculography, the ventricular and atrial factors were extracted by factor analysis. In cases with MI, the third factor was obtained in the left ventricle corresponding to wall motion abnormality. Each case was scored according to the coincidence of findings of ventriculography and those of factor analysis or Fourier analysis. Scores were recorded for three items; the existence, location, and degree of asynergy. In cases of MI, the detection rate of asynergy was 94 % by factor analysis, 83 % by Fourier analysis, and the agreement in respect to location was 71 % and 66 %, respectively. Factor analysis had higher scores than Fourier analysis, but this was not significant. The interobserver error of factor analysis was less than that of Fourier analysis. Factor analysis can display locations and dynamic motion curves of asynergy, and it is regarded as a useful method for detecting and evaluating left ventricular wall motion abnormalities. (author)

  4. comparison of elastic-plastic FE method and engineering method for RPV fracture mechanics analysis

    International Nuclear Information System (INIS)

    Sun Yingxue; Zheng Bin; Zhang Fenggang

    2009-01-01

    This paper describes the FE analysis of elastic-plastic fracture mechanics for a crack in the RPV beltline region using the ABAQUS code. The stress intensity factor and J-integral of the crack under PTS transients are calculated and evaluated, and the results are compared with those obtained by the engineering analysis method. It is shown that the results of the engineering analysis method are slightly larger than those of the 3D elastic-plastic fracture mechanics FE analysis; thus the engineering analysis method is more conservative than the elastic-plastic fracture mechanics method. (authors)

  5. DTI analysis methods : Voxel-based analysis

    NARCIS (Netherlands)

    Van Hecke, Wim; Leemans, Alexander; Emsell, Louise

    2016-01-01

    Voxel-based analysis (VBA) of diffusion tensor imaging (DTI) data permits the investigation of voxel-wise differences or changes in DTI metrics in every voxel of a brain dataset. It is applied primarily in the exploratory analysis of hypothesized group-level alterations in DTI parameters, as it does

  6. Substoichiometric method in the simple radiometric analysis

    International Nuclear Information System (INIS)

    Ikeda, N.; Noguchi, K.

    1979-01-01

    The substoichiometric method is applied to simple radiometric analysis. Two methods - the standard reagent method and the standard sample method - are proposed. The validity of the principle of the methods is verified experimentally in the determination of silver by the precipitation method, or of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)

  7. Analysis of Economic Factors Affecting Stock Market

    OpenAIRE

    Xie, Linyin

    2010-01-01

    This dissertation concentrates on analysis of economic factors affecting Chinese stock market through examining relationship between stock market index and economic factors. Six economic variables are examined: industrial production, money supply 1, money supply 2, exchange rate, long-term government bond yield and real estate total value. Stock market comprises fixed interest stocks and equities shares. In this dissertation, stock market is restricted to equity market. The stock price in thi...

  8. Chemical methods of rock analysis

    National Research Council Canada - National Science Library

    Jeffery, P. G; Hutchison, D

    1981-01-01

    .... Such methods include those based upon spectrophotometry, flame emission spectrometry and atomic absorption spectroscopy, as well as gravimetry, titrimetry and the use of ion-selective electrodes...

  9. Risk factors for the undermined coal bed mining method

    Energy Technology Data Exchange (ETDEWEB)

    Arad, V. [Petrosani Univ., Petrosani (Romania). Dept. of Mining Engineering; Arad, S. [Petrosani Univ., Petrosani (Romania). Dept of Electrical Engineering

    2009-07-01

    The Romanian mining industry has been in a serious decline and is undergoing ample restructuring. Analyses of reliability and risk are most important during the early stages of a project in guiding the decision as to whether or not to proceed and in helping to establish design criteria. A technical accident occurred in 2008 at the Petrila coal mine involving an explosion during the exploitation of a coal seam. Over time a series of technical accidents, such as explosions and ignitions of methane gas, roof blowing phenomena, self-ignition of coal and hazardous combustion, have occurred. This paper presents an analysis of the factors that led to this accident as well as an analysis of factors related to the mining method. Specifically, the paper discusses the geomechanical characteristics of rocks and coal; the geodynamic phenomenon at working face 431; the spontaneous combustion phenomenon; gas accumulation; and the pressure and the height of the undermined coal bed. It was concluded that for the specific conditions encountered in Petrila colliery, the undermined bed height should be between 5 and 7 metres, depending on the geomechanical characteristics of the coal and surrounding rocks. 8 refs., 1 tab., 3 figs.

  10. Probabilistic methods in combinatorial analysis

    CERN Document Server

    Sachkov, Vladimir N

    2014-01-01

    This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist

  11. Methods in quantitative image analysis.

    Science.gov (United States)

    Oberholzer, M; Ostreicher, M; Christen, H; Brühlmann, M

    1996-05-01

    The main steps of image analysis are image capturing, image storage (compression), correcting imaging defects (e.g. non-uniform illumination, electronic noise, glare effect), image enhancement, segmentation of objects in the image and image measurements. Digitisation is performed by a camera. The most modern types include a frame-grabber converting the analog signal into digital (numerical) information. The numerical information consists of the grey values describing the brightness of every point within the image, called a pixel. The information is stored in bits. Eight bits are summarised in one byte. Therefore, grey values can take a value between 0 and 255 (2^8 = 256 levels). The human eye seems to be quite content with a display of 6-bit images (corresponding to 64 different grey values). In a digitised image, the pixel grey values can vary within regions that are uniform in the original scene: the image is noisy. The noise is mainly manifested in the background of the image. For an optimal discrimination between different objects or features in an image, uniformity of illumination in the whole image is required. These defects can be minimised by shading correction [subtraction of a background (white) image from the original image, pixel per pixel, or division of the original image by the background image]. The brightness of an image represented by its grey values can be analysed for every single pixel or for a group of pixels. The most frequently used pixel-based image descriptors are optical density, integrated optical density, the histogram of the grey values, mean grey value and entropy. The distribution of the grey values existing within an image is one of the most important characteristics of the image. However, the histogram gives no information about the texture of the image. The simplest way to improve the contrast of an image is to expand the brightness scale by spreading the histogram out to the full available range. Rules for transforming the grey value
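
    The shading correction described above (subtracting a background image from the original, pixel per pixel, or dividing the original by the background) can be written in a few lines; the arrays below are synthetic stand-ins for a captured image and its empty-field background.

```python
import numpy as np

def shading_correct(image, background, mode="divide"):
    """Correct non-uniform illumination using an empty-field background image.

    mode="subtract": image - background (additive shading)
    mode="divide":   image / background, rescaled (multiplicative shading)
    """
    image = image.astype(float)
    background = background.astype(float)
    if mode == "subtract":
        corrected = image - background
    else:
        corrected = image / np.maximum(background, 1e-6) * background.mean()
    # Clip back to the 8-bit grey-value range 0..255
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Synthetic 64x64 8-bit image: a bright object on an uneven illumination gradient
y, x = np.mgrid[0:64, 0:64]
background = (120 + 60 * x / 63).astype(np.uint8)
image = background.copy()
image[20:40, 20:40] = np.minimum(image[20:40, 20:40] + 80, 255)

flat = shading_correct(image, background)
print(flat.min(), flat.max())   # object now stands out against a uniform background
```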

  12. Moyer's method of mixed dentition analysis: a meta-analysis ...

    African Journals Online (AJOL)

    The applicability of tables derived from the data Moyer used to other ethnic groups has ... This implies that Moyer's method of prediction may have population variations. ... Key Words: meta-analysis, mixed dentition analysis, Moyer's method

  13. Factor Economic Analysis at Forestry Enterprises

    Directory of Open Access Journals (Sweden)

    M.Yu. Chik

    2018-03-01

    Full Text Available The article examines the importance of economic analysis based on a review of the scientific works of domestic and foreign scientists. The influence of factors on the change in the cost of harvesting timber products is calculated by cost item. The influence of factors on the change in costs per 1 UAH of sold products is determined using the full cost of sold products. Variable and fixed costs and their distribution are identified, since they affect the calculation of the impact of factors on cost changes per 1 UAH of sold products. The paper summarizes the general results of calculating the influence of factors on cost changes per 1 UAH of sold products. Based on the results of the analysis, a list of reserves for reducing the cost of production at forestry enterprises is proposed. The main sources of reserves for reducing the prime cost of forest products at forestry enterprises are investigated on the basis of the conducted factor analysis.

  14. Text mining factor analysis (TFA) in green tea patent data

    Science.gov (United States)

    Rahmawati, Sela; Suprijadi, Jadi; Zulhanif

    2017-03-01

    Factor analysis has become one of the most widely used multivariate statistical procedures in applied research across a multitude of domains. There are two main types of analysis based on factor analysis: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). Both EFA and CFA aim to model the relationships among a group of indicators and a latent variable, but they differ fundamentally in the a priori restrictions placed on the factor model. The method is applied to patent data in the green tea technology sector to determine the development of green tea technology in the world. Patent analysis is useful in identifying future technological trends in a specific field of technology. The patent database was obtained from the European Patent Organization (EPO). In this paper, the CFA model is applied to nominal data obtained from a presence-absence matrix; for nominal data, the CFA is based on the tetrachoric correlation matrix. Meanwhile, the EFA model is applied to titles from the dominant technology sector, after the titles are first pre-processed using text mining.

  15. An SPSS R-Menu for Ordinal Factor Analysis

    Directory of Open Access Journals (Sweden)

    Mario Basto

    2012-01-01

    Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge in programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.

  16. Applying critical analysis - main methods

    Directory of Open Access Journals (Sweden)

    Miguel Araujo Alonso

    2012-02-01

    Full Text Available What is the usefulness of critical appraisal of literature? Critical analysis is a fundamental condition for the correct interpretation of any study that is subject to review. In epidemiology, in order to learn how to read a publication, we must be able to analyze it critically. Critical analysis allows us to check whether a study fulfills certain previously established methodological inclusion and exclusion criteria. This is frequently used in conducting systematic reviews, although eligibility criteria are generally limited to the study design. Critical analysis of literature can be done implicitly while reading an article, as in reading for personal interest, or can be conducted in a structured manner, using explicit and previously established criteria. The latter is done when formally reviewing a topic.

  17. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  18. Biological stability of drinking water: controlling factors, methods and challenges

    Directory of Open Access Journals (Sweden)

    Emmanuelle ePrest

    2016-02-01

    Full Text Available Biological stability of drinking water refers to the concept of providing consumers with drinking water of same microbial quality at the tap as produced at the water treatment facility. However, uncontrolled growth of bacteria can occur during distribution in water mains and premise plumbing, and can lead to hygienic (e.g. development of opportunistic pathogens), aesthetic (e.g. deterioration of taste, odour, colour) or operational (e.g. fouling or biocorrosion of pipes) problems. Drinking water contains diverse microorganisms competing for limited available nutrients for growth. Bacterial growth and interactions are regulated by factors such as (i) type and concentration of available organic and inorganic nutrients, (ii) type and concentration of residual disinfectant, (iii) presence of predators such as protozoa and invertebrates, (iv) environmental conditions such as water temperature, and (v) spatial location of microorganisms (bulk water, sediment or biofilm). Water treatment and distribution conditions in water mains and premise plumbing affect each of these factors and shape bacterial community characteristics (abundance, composition, viability) in distribution systems. Improved understanding of bacterial interactions in distribution systems and of environmental conditions impact is needed for better control of bacterial communities during drinking water production and distribution. This article reviews (i) existing knowledge on biological stability controlling factors and (ii) how these factors are affected by drinking water production and distribution conditions. In addition, (iii) the concept of biological stability is discussed in light of experience with well-established and new analytical methods, enabling high throughput analysis and in-depth characterization of bacterial communities in drinking water. We discuss how knowledge gained from novel techniques will improve design and monitoring of water treatment and distribution systems in order to

  19. Biological Stability of Drinking Water: Controlling Factors, Methods, and Challenges

    Science.gov (United States)

    Prest, Emmanuelle I.; Hammes, Frederik; van Loosdrecht, Mark C. M.; Vrouwenvelder, Johannes S.

    2016-01-01

    Biological stability of drinking water refers to the concept of providing consumers with drinking water of same microbial quality at the tap as produced at the water treatment facility. However, uncontrolled growth of bacteria can occur during distribution in water mains and premise plumbing, and can lead to hygienic (e.g., development of opportunistic pathogens), aesthetic (e.g., deterioration of taste, odor, color) or operational (e.g., fouling or biocorrosion of pipes) problems. Drinking water contains diverse microorganisms competing for limited available nutrients for growth. Bacterial growth and interactions are regulated by factors, such as (i) type and concentration of available organic and inorganic nutrients, (ii) type and concentration of residual disinfectant, (iii) presence of predators, such as protozoa and invertebrates, (iv) environmental conditions, such as water temperature, and (v) spatial location of microorganisms (bulk water, sediment, or biofilm). Water treatment and distribution conditions in water mains and premise plumbing affect each of these factors and shape bacterial community characteristics (abundance, composition, viability) in distribution systems. Improved understanding of bacterial interactions in distribution systems and of environmental conditions impact is needed for better control of bacterial communities during drinking water production and distribution. This article reviews (i) existing knowledge on biological stability controlling factors and (ii) how these factors are affected by drinking water production and distribution conditions. In addition, (iii) the concept of biological stability is discussed in light of experience with well-established and new analytical methods, enabling high throughput analysis and in-depth characterization of bacterial communities in drinking water. We discuss how knowledge gained from novel techniques will improve design and monitoring of water treatment and distribution systems in order

  20. Biological Stability of Drinking Water: Controlling Factors, Methods, and Challenges

    KAUST Repository

    Prest, Emmanuelle I.

    2016-02-01

    Biological stability of drinking water refers to the concept of providing consumers with drinking water of same microbial quality at the tap as produced at the water treatment facility. However, uncontrolled growth of bacteria can occur during distribution in water mains and premise plumbing, and can lead to hygienic (e.g., development of opportunistic pathogens), aesthetic (e.g., deterioration of taste, odor, color) or operational (e.g., fouling or biocorrosion of pipes) problems. Drinking water contains diverse microorganisms competing for limited available nutrients for growth. Bacterial growth and interactions are regulated by factors, such as (i) type and concentration of available organic and inorganic nutrients, (ii) type and concentration of residual disinfectant, (iii) presence of predators, such as protozoa and invertebrates, (iv) environmental conditions, such as water temperature, and (v) spatial location of microorganisms (bulk water, sediment, or biofilm). Water treatment and distribution conditions in water mains and premise plumbing affect each of these factors and shape bacterial community characteristics (abundance, composition, viability) in distribution systems. Improved understanding of bacterial interactions in distribution systems and of environmental conditions impact is needed for better control of bacterial communities during drinking water production and distribution. This article reviews (i) existing knowledge on biological stability controlling factors and (ii) how these factors are affected by drinking water production and distribution conditions. In addition, (iii) the concept of biological stability is discussed in light of experience with well-established and new analytical methods, enabling high throughput analysis and in-depth characterization of bacterial communities in drinking water. We discuss how knowledge gained from novel techniques will improve design and monitoring of water treatment and distribution systems in order

  1. Environmental Performance in Countries Worldwide: Determinant Factors and Multivariate Analysis

    Directory of Open Access Journals (Sweden)

    Isabel Gallego-Alvarez

    2014-11-01

    Full Text Available The aim of this study is to analyze the environmental performance of countries and the variables that can influence it. At the same time, we performed a multivariate analysis using the HJ-biplot, an exploratory method that looks for hidden patterns in the data, obtained from the usual singular value decomposition (SVD) of the data matrix, to contextualize the countries grouped by geographical areas and the variables relating to environmental indicators included in the environmental performance index. The sample used comprises 149 countries of different geographic areas. The findings obtained from the empirical analysis emphasize that socioeconomic factors, such as economic wealth and education, as well as institutional factors represented by the style of public administration, in particular control of corruption, are determinant factors of environmental performance in the countries analyzed. In contrast, no effect on environmental performance was found for factors relating to the internal characteristics of a country or political factors.
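
    The SVD step behind the HJ-biplot can be sketched as follows. The marker scaling used here (both row and column markers carry the singular values, so countries and indicators are both represented with high quality) is the usual reading of the HJ-biplot rather than a quotation from the paper, and the data matrix is random rather than the 149-country indicator set.

```python
import numpy as np

def hj_biplot_markers(X, n_axes=2):
    """Row and column markers for an HJ-biplot from the SVD of the
    column-standardized data matrix (both marker sets weighted by the
    singular values; assumption, see lead-in text)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)    # standardize indicators
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    rows = U[:, :n_axes] * s[:n_axes]                   # country markers
    cols = Vt.T[:, :n_axes] * s[:n_axes]                # indicator markers
    explained = s[:n_axes] ** 2 / np.sum(s ** 2)
    return rows, cols, explained

# Illustrative matrix: 149 countries x 6 environmental / socioeconomic indicators
rng = np.random.default_rng(7)
X = rng.normal(size=(149, 6))
rows, cols, explained = hj_biplot_markers(X)
print("variance explained by first two axes:", np.round(explained, 3))
```

    Plotting the row and column markers on the same pair of axes is what lets the method relate country clusters to the indicators that drive them.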

  2. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose

  3. Hybrid methods for cybersecurity analysis :

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Warren Leon,; Dunlavy, Daniel M.

    2014-01-01

    Early 2010 saw a significant change in adversarial techniques aimed at network intrusion: a shift from malware delivered via email attachments toward the use of hidden, embedded hyperlinks to initiate sequences of downloads and interactions with web sites and network servers containing malicious software. Enterprise security groups were well poised and experienced in defending the former attacks, but the new types of attacks were larger in number, more challenging to detect, dynamic in nature, and required the development of new technologies and analytic capabilities. The Hybrid LDRD project was aimed at delivering new capabilities in large-scale data modeling and analysis to enterprise security operators and analysts and understanding the challenges of detection and prevention of emerging cybersecurity threats. Leveraging previous LDRD research efforts and capabilities in large-scale relational data analysis, large-scale discrete data analysis and visualization, and streaming data analysis, new modeling and analysis capabilities were quickly brought to bear on the problems in email phishing and spear phishing attacks in the Sandia enterprise security operational groups at the onset of the Hybrid project. As part of this project, a software development and deployment framework was created within the security analyst workflow tool sets to facilitate the delivery and testing of new capabilities as they became available, and machine learning algorithms were developed to address the challenge of dynamic threats. Furthermore, researchers from the Hybrid project were embedded in the security analyst groups for almost a full year, engaged in daily operational activities and routines, creating an atmosphere of trust and collaboration between the researchers and security personnel. The Hybrid project has altered the way that research ideas can be incorporated into the production environments of Sandia's enterprise security groups, reducing time to deployment from months and

  4. An integrating factor matrix method to find first integrals

    International Nuclear Information System (INIS)

    Saputra, K V I; Quispel, G R W; Van Veen, L

    2010-01-01

    In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.
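
    For orientation, the classical two-dimensional Lotka-Volterra system without constant terms (the paper treats the harder case with constant terms) already admits a well-known first integral, which can be verified by direct differentiation; this standard example is included only to illustrate what a first integral is.

```latex
% Classical 2-D Lotka-Volterra system and its well-known first integral
\begin{align*}
  \dot{x} &= x(\alpha - \beta y), \qquad
  \dot{y}  = y(\delta x - \gamma), \\[4pt]
  H(x,y)  &= \delta x - \gamma \ln x + \beta y - \alpha \ln y, \\[4pt]
  \frac{dH}{dt}
          &= \Bigl(\delta - \frac{\gamma}{x}\Bigr)\dot{x}
           + \Bigl(\beta - \frac{\alpha}{y}\Bigr)\dot{y}
           = (\delta x - \gamma)(\alpha - \beta y)
           + (\beta y - \alpha)(\delta x - \gamma) = 0 .
\end{align*}
```

    An integrating factor method generalizes this kind of construction by supplying the conditions under which such a conserved quantity H exists.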

  5. An Effective Method to Accurately Calculate the Phase Space Factors for β⁻β⁻ Decay

    International Nuclear Information System (INIS)

    Horoi, Mihai; Neacsu, Andrei

    2016-01-01

    Accurate calculations of the electron phase space factors are necessary for reliable predictions of double-beta decay rates and for the analysis of the associated electron angular and energy distributions. We present an effective method to calculate these phase space factors that takes into account the distorted Coulomb field of the daughter nucleus, yet it allows one to easily calculate the phase space factors with good accuracy relative to the most exact methods available in the recent literature.

  6. Microparticle analysis system and method

    Science.gov (United States)

    Morrison, Dennis R. (Inventor)

    2007-01-01

    A device for analyzing microparticles is provided which includes a chamber with an inlet and an outlet for respectively introducing and dispensing a flowing fluid comprising microparticles, a light source for providing light through the chamber and a photometer for measuring the intensity of light transmitted through individual microparticles. The device further includes an imaging system for acquiring images of the fluid. In some cases, the device may be configured to identify and determine a quantity of the microparticles within the fluid. Consequently, a method for identifying and tracking microparticles in motion is contemplated herein. The method involves flowing a fluid comprising microparticles in laminar motion through a chamber, transmitting light through the fluid, measuring the intensities of the light transmitted through the microparticles, imaging the fluid a plurality of times and comparing at least some of the intensities of light between different images of the fluid.

  7. Factor analysis for exercise stress radionuclide ventriculography

    International Nuclear Information System (INIS)

    Hirota, Kazuyoshi; Yasuda, Mitsutaka; Oku, Hisao; Ikuno, Yoshiyasu; Takeuchi, Kazuhide; Takeda, Tadanao; Ochi, Hironobu

    1987-01-01

    Using factor analysis, a new image processing technique in exercise stress radionuclide ventriculography, changes in factors associated with exercise were evaluated in 14 patients with angina pectoris or old myocardial infarction, and in 12 control cases with normal left ventriculography. The patients were imaged in the left anterior oblique projection, and three factor images were presented on a color coded scale. Abnormal factors (AF) were observed in 6 patients before exercise, 13 during exercise, and 4 after exercise. In 7 patients, the occurrence of AF was associated with exercise. Five of them became free from AF after exercise. Three patients showing AF before exercise had aggravation of AF during exercise. Overall, the occurrence or aggravation of AF was associated with exercise in ten (71 %) of the patients. The other three patients, however, had disappearance of AF during exercise. In the last patient, none of the AF was observed throughout the study. In view of a high incidence of AF associated with exercise, the factor analysis may have potential for evaluating cardiac reserve from the viewpoint of left ventricular wall motion abnormality. (Namekawa, K.)

  8. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1980-01-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of the energy loss of the incident particle with penetration depth, and X-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle. (orig.)

  9. Boolean Factor Analysis by Attractor Neural Network

    Czech Academy of Sciences Publication Activity Database

    Frolov, A. A.; Húsek, Dušan; Muraviev, I. P.; Polyakov, P.Y.

    2007-01-01

    Roč. 18, č. 3 (2007), s. 698-707 ISSN 1045-9227 R&D Projects: GA AV ČR 1ET100300419; GA ČR GA201/05/0079 Institutional research plan: CEZ:AV0Z10300504 Keywords : recurrent neural network * Hopfield-like neural network * associative memory * unsupervised learning * neural network architecture * neural network application * statistics * Boolean factor analysis * dimensionality reduction * features clustering * concepts search * information retrieval Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.769, year: 2007

  10. Correction factor for hair analysis by PIXE

    International Nuclear Information System (INIS)

    Montenegro, E.C.; Baptista, G.B.; Castro Faria, L.V. de; Paschoa, A.S.

    1979-06-01

    The application of the Particle Induced X-ray Emission (PIXE) technique to analyse quantitatively the elemental composition of hair specimens brings about some difficulties in the interpretation of the data. The present paper proposes a correction factor to account for the effects of energy loss of the incident particle with penetration depth, and x-ray self-absorption when a particular geometrical distribution of elements in hair is assumed for calculational purposes. The correction factor has been applied to the analysis of hair contents Zn, Cu and Ca as a function of the energy of the incident particle.(Author) [pt

  11. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2003-07-25

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports (BSC 2003 [DIRS 160964]; BSC 2003 [DIRS 160965]; BSC 2003 [DIRS 160976]; BSC 2003 [DIRS 161239]; BSC 2003 [DIRS 161241]) contain detailed description of the model input parameters. This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs and conversion factors for the TSPA. The BDCFs will be used in performance assessment for calculating annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle activity in groundwater and the annual dose from beta- and photon-emitting radionuclides.

  12. Infinitesimal methods of mathematical analysis

    CERN Document Server

    Pinto, J S

    2004-01-01

    This modern introduction to infinitesimal methods is a translation of the book Métodos Infinitesimais de Análise Matemática by José Sousa Pinto of the University of Aveiro, Portugal and is aimed at final year or graduate level students with a background in calculus. Surveying modern reformulations of the infinitesimal concept with a thoroughly comprehensive exposition of important and influential hyperreal numbers, the book includes previously unpublished material on the development of hyperfinite theory of Schwartz distributions and its application to generalised Fourier transforms and harmon

  13. Current status of methods for shielding analysis

    International Nuclear Information System (INIS)

    Engle, W.W.

    1980-01-01

    Current methods used in shielding analysis and recent improvements in those methods are discussed. The status of methods development is discussed based on needs cited at the 1977 International Conference on Reactor Shielding. Additional areas where methods development is needed are discussed

  14. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M.A. Wasiolek

    2005-04-28

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis

  15. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standards. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objectives of this analysis are to develop BDCFs for the

  16. Investigation of evaluation methods for human factors education effectiveness

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Fujimoto, Junzo; Sasou Kunihide; Hasegawa, Naoko

    2004-01-01

    Education effectiveness commensurate with the investment is required in the stream of deregulation of the electric power industry. Therefore, evaluation methods for human factors education effectiveness, capable of observing the process by which a human factors culture pervades an organization, were investigated through surveys of education-effectiveness research at universities and of actual in-house education at industrial companies. As a result, the content of the evaluation was found to be the change in attitudes toward human factors and improvement proposals for workplaces, considering the purpose of human factors education. A questionnaire was found to be a suitable evaluation format. In addition, evaluation is desirable both just after education and after some period back in the workplace. Hereafter, data will be collected using these two kinds of questionnaires in human factors education courses at CRIEPI and some education courses at utilities. Thus, an education effectiveness evaluation method suitable for human factors will be established. (author)

  17. Analysis of related risk factors for pancreatic fistula after pancreaticoduodenectomy

    Directory of Open Access Journals (Sweden)

    Qi-Song Yu

    2016-08-01

    Full Text Available Objective: To explore the related risk factors for pancreatic fistula after pancreaticoduodenectomy to provide theoretical evidence for effectively preventing the occurrence of pancreatic fistula. Methods: A total of 100 patients who were admitted to our hospital from January, 2012 to January, 2015 and had undergone pancreaticoduodenectomy were included in the study. The related risk factors for developing pancreatic fistula were collected for single-factor and Logistic multi-factor analysis. Results: Among the included patients, 16 had pancreatic fistula, and the total occurrence rate was 16% (16/100). The single-factor analysis showed that the upper abdominal operation history, preoperative bilirubin, pancreatic texture, pancreatic duct diameter, intraoperative amount of bleeding, postoperative hemoglobin, and application of somatostatin after operation were the risk factors for developing pancreatic fistula (P<0.05). The multi-factor analysis showed that the upper abdominal operation history, the soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin were the independent risk factors for developing pancreatic fistula (OR=4.162, 6.104, 5.613, 4.034, P<0.05). Conclusions: The occurrence of pancreatic fistula after pancreaticoduodenectomy is closely associated with the upper abdominal operation history, the soft pancreatic texture, small pancreatic duct diameter, and low postoperative hemoglobin; therefore, effective measures should be taken to reduce the occurrence of pancreatic fistula according to the patients’ own conditions.

  18. Chapter 11. Community analysis-based methods

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  19. A comparison study on detection of key geochemical variables and factors through three different types of factor analysis

    Science.gov (United States)

    Hoseinzade, Zohre; Mokhtari, Ahmad Reza

    2017-10-01

    Large numbers of variables have been measured to explain different phenomena. Factor analysis has widely been used in order to reduce the dimension of datasets. Additionally, the technique has been employed to highlight underlying factors hidden in a complex system. As geochemical studies benefit from multivariate assays, application of this method is widespread in geochemistry. However, the conventional protocols in implementing factor analysis have some drawbacks in spite of their advantages. In the present study, a geochemical dataset including 804 soil samples, collected from a mining area in central Iran in order to search for MVT-type Pb-Zn deposits, was considered to outline geochemical analysis through various fractal methods. Routine factor analysis, sequential factor analysis, and staged factor analysis were applied to the dataset, after opening the data with an alr (additive log-ratio) transformation, to extract the mineralization factor in the dataset. A comparison between these methods indicated that sequential factor analysis most clearly revealed the MVT paragenesis elements in surface samples, with nearly 50% of the variation in F1. In addition, staged factor analysis gave acceptable results while being easy to apply. It could detect mineralization-related elements, with larger factor loadings assigned to these elements, resulting in a clearer expression of the mineralization.
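
    The data-opening and factor-extraction steps described above can be illustrated with a brief sketch. The data layout, the number of factors, and the use of scikit-learn's FactorAnalysis as a stand-in for the routine/sequential/staged variants are all assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical compositional geochemical data: rows = soil samples,
# columns = element concentrations (closed composition, all positive).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(804, 10))

# Additive log-ratio (alr) transform: log of each part divided by a chosen
# divisor part (here the last column), which "opens" the closed data.
alr = np.log(X[:, :-1] / X[:, [-1]])

# Extract latent factors from the opened data; the loadings indicate which
# elements move together (e.g., a mineralization-related factor).
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(alr)
loadings = fa.components_.T          # (variables x factors)
print(loadings.round(2))
```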

  20. Analysis of Increased Information Technology Outsourcing Factors

    Directory of Open Access Journals (Sweden)

    Brcar Franc

    2013-01-01

    Full Text Available The study explores the field of IT outsourcing. The narrow field of research is to build a model of IT outsourcing based on influential factors. The purpose of this research is to determine the influential factors on IT outsourcing expansion. A survey was conducted with 141 large-sized Slovenian companies. Data were statistically analyzed using binary logistic regression. The final model contains five factors: (1) management’s support; (2) knowledge on IT outsourcing; (3) improvement of efficiency and effectiveness; (4) quality improvement of IT services; and (5) innovation improvement of IT. Managers can immediately use the results of this research in their decision-making. Increased performance of each individual organization is to the benefit of the entire society. The examination of IT outsourcing with the methods used is the first such research in Slovenia.

  1. Confirmatory factor analysis applied to the Force Concept Inventory

    Science.gov (United States)

    Eaton, Philip; Willoughby, Shannon D.

    2018-06-01

    In 1995, Huffman and Heller used exploratory factor analysis to draw into question the factors of the Force Concept Inventory (FCI). Since then several papers have been published examining the factors of the FCI on larger sets of student responses, and understandable factors were extracted as a result. However, none of these proposed factor models has been verified against independent sets of data to confirm that it is not unique to its original sample. This paper seeks to confirm the factor models proposed by Scott et al. in 2012, and Hestenes et al. in 1992, as well as another expert model proposed within this study, through the use of confirmatory factor analysis (CFA) and a sample of 20 822 postinstruction student responses to the FCI. Upon application of CFA using the full sample, all three models were found to fit the data with acceptable global fit statistics. However, when CFA was performed using these models on smaller sample sizes, the models proposed by Scott et al. and Eaton and Willoughby were found to be far more stable than the model proposed by Hestenes et al. The goodness of fit of these models to the data suggests that the FCI can be scored on factors that are not unique to a single class. These scores could then be used to comment on how instruction methods affect the performance of students along a single factor, and more in-depth analyses of curriculum changes may be possible as a result.

  2. DISRUPTIVE EVENT BIOSPHERE DOSE CONVERSION FACTOR ANALYSIS

    International Nuclear Information System (INIS)

    M.A. Wasiolek

    2005-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The Biosphere Model Report (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2005 [DIRS 172827]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis'' (Figure 1-1). The objective of this analysis was to develop the BDCFs for the volcanic

  3. Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture

    Science.gov (United States)

    West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID

    2011-09-27

    Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.

  4. Analysis of mineral phases in coal utilizing factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, P.K.

    1982-01-01

    The mineral phase inclusions of coal are discussed. The contribution of these to a coal sample is determined utilizing several techniques. Neutron activation analysis in conjunction with coal washability studies has produced some information on the general trends of elemental variation in the mineral phases. These results have been enhanced by the use of various statistical techniques. The target transformation factor analysis is specifically discussed and shown to be able to produce elemental profiles of the mineral phases in coal. A data set consisting of physically fractionated coal samples was generated. These samples were analyzed by neutron activation analysis and then their elemental concentrations examined using TTFA. Information concerning the mineral phases in coal can thus be acquired from factor analysis even with limited data. Additional data may permit the resolution of additional mineral phases as well as refinement of those already identified.

  5. Parametric Methods for Order Tracking Analysis

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm

    2017-01-01

    Order tracking analysis is often used to find the critical speeds at which structural resonances are excited by a rotating machine. Typically, order tracking analysis is performed via non-parametric methods. In this report, however, we demonstrate some of the advantages of using a parametric method...

  6. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    Science.gov (United States)

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods, used in pharmaceutical analysis, consists of several components. The analysis of the most important sources of variability of the quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effect of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate-count methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. A Beginner’s Guide to Factor Analysis: Focusing on Exploratory Factor Analysis

    Directory of Open Access Journals (Sweden)

    An Gie Yong

    2013-10-01

    Full Text Available The following paper discusses exploratory factor analysis and gives an overview of the statistical technique and how it is used in various research designs and applications. A basic outline of how the technique works and its criteria, including its main assumptions are discussed as well as when it should be used. Mathematical theories are explored to enlighten students on how exploratory factor analysis works, an example of how to run an exploratory factor analysis on SPSS is given, and finally a section on how to write up the results is provided. This will allow readers to develop a better understanding of when to employ factor analysis and how to interpret the tables and graphs in the output.
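
    Since the article walks readers through running an EFA in SPSS, a comparable minimal run in Python may help readers who prefer code. The item data are simulated, the three-factor structure is assumed, and scikit-learn's FactorAnalysis (with the varimax rotation option available in recent versions) stands in for the SPSS procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical item-response matrix: 300 respondents x 9 Likert-type items.
rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 3))             # three underlying traits
loading = np.kron(np.eye(3), np.ones((1, 3)))  # each trait drives 3 items
items = latent @ loading + 0.5 * rng.normal(size=(300, 9))

# Fit a three-factor model with a varimax rotation and inspect the loadings;
# items loading strongly (|loading| > 0.4 is a common rule of thumb) on the
# same factor are interpreted as measuring the same construct.
efa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
efa.fit(items)
print(np.round(efa.components_.T, 2))   # rows = items, columns = factors
```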

  8. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    Science.gov (United States)

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  9. Nominal Performance Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the Total System Performance Assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the groundwater exposure scenario, and the development of conversion factors for assessing compliance with the groundwater protection standard. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA-LA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the ''Biosphere Model Report'' in Figure 1-1, contain detailed description of the model input parameters, their development, and the relationship between the parameters and specific features events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the groundwater exposure scenario. The objectives of this analysis are to develop BDCFs for the groundwater exposure scenario for the three climate states considered in the TSPA-LA as well as conversion factors for evaluating compliance with the groundwater protection standard. The BDCFs will be used in performance assessment for calculating all-pathway annual doses for a given concentration of radionuclides in groundwater. The conversion factors will be used for calculating gross alpha particle

  10. The delayed neutron method of uranium analysis

    International Nuclear Information System (INIS)

    Wall, T.

    1989-01-01

    The technique of delayed neutron analysis (DNA) is discussed. The DNA rig installed on the MOATA reactor, the assay standards and the types of samples which have been assayed are described. Of the total sample throughput of about 55,000 units since the uranium analysis service began, some 78% has been concerned with analysis of uranium ore samples derived from mining and exploration. Delayed neutron analysis provides a high sensitivity, low cost uranium analysis method for both uranium exploration and other applications. It is particularly suitable for analysis of large batch samples and for non-destructive analysis over a wide range of matrices. 8 refs., 4 figs., 3 tabs

  11. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-07-21

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis report (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in

  12. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. A. Wasiolek

    2003-01-01

    This analysis report, ''Disruptive Event Biosphere Dose Conversion Factor Analysis'', is one of the technical reports containing documentation of the ERMYN (Environmental Radiation Model for Yucca Mountain Nevada) biosphere model for the geologic repository at Yucca Mountain, its input parameters, and the application of the model to perform the dose assessment for the repository. The biosphere model is one of a series of process models supporting the Total System Performance Assessment (TSPA) for the Yucca Mountain repository. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of the two reports that develop biosphere dose conversion factors (BDCFs), which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2003 [DIRS 164186]) describes in detail the conceptual model as well as the mathematical model and lists its input parameters. Model input parameters are developed and described in detail in five analysis report (BSC 2003 [DIRS 160964], BSC 2003 [DIRS 160965], BSC 2003 [DIRS 160976], BSC 2003 [DIRS 161239], and BSC 2003 [DIRS 161241]). The objective of this analysis was to develop the BDCFs for the volcanic ash exposure scenario and the dose factors (DFs) for calculating inhalation doses during volcanic eruption (eruption phase of the volcanic event). The volcanic ash exposure scenario is hereafter referred to as the volcanic ash scenario. For the volcanic ash scenario, the mode of radionuclide release into the biosphere is a volcanic eruption through the repository with the resulting entrainment of contaminated waste in the tephra and the subsequent atmospheric transport and dispersion of contaminated material in the biosphere. The biosphere process

  13. Seismic analysis response factors and design margins of piping systems

    International Nuclear Information System (INIS)

    Shieh, L.C.; Tsai, N.C.; Yang, M.S.; Wong, W.L.

    1985-01-01

    The objective of the simplified methods project of the Seismic Safety Margins Research Program is to develop a simplified seismic risk methodology for general use. The goal is to reduce seismic PRA costs to roughly 60 man-months over a 6 to 8 month period, without compromising the quality of the product. To achieve the goal, it is necessary to simplify the calculational procedure of the seismic response. The response factor approach serves this purpose. The response factor relates the median level response to the design data. Through a literature survey, we identified the various seismic analysis methods adopted in the U.S. nuclear industry for the piping system. A series of seismic response calculations was performed. The response factors and their variabilities for each method of analysis were computed. A sensitivity study of the effect of piping damping, in-structure response spectra envelope method, and analysis method was conducted. In addition, design margins, which relate the best-estimate response to the design data, are also presented.

  14. Radiochemistry and nuclear methods of analysis

    International Nuclear Information System (INIS)

    Ehmann, W.D.; Vance, D.

    1991-01-01

    This book provides both the fundamentals of radiochemistry as well as specific applications of nuclear techniques to analytical chemistry. It includes such areas of application as radioimmunoassay and activation techniques using very short-lived indicator radionuclides. It emphasizes the current nuclear methods of analysis such as neutron activation, PIXE, nuclear reaction analysis, Rutherford backscattering, isotope dilution analysis, and others.

  15. Exploratory Bi-Factor Analysis: The Oblique Case

    Science.gov (United States)

    Jennrich, Robert I.; Bentler, Peter M.

    2012-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford ("Psychometrika" 47:41-54, 1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler ("Psychometrika" 76:537-549, 2011) introduced an exploratory form of bi-factor…

  16. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  17. Constructing an Intelligent Patent Network Analysis Method

    Directory of Open Access Journals (Sweden)

    Chao-Chan Wu

    2012-11-01

    Full Text Available Patent network analysis, an advanced method of patent analysis, is a useful tool for technology management. This method visually displays all the relationships among the patents and enables the analysts to intuitively comprehend the overview of a set of patents in the field of the technology being studied. Although patent network analysis possesses relative advantages different from traditional methods of patent analysis, it is subject to several crucial limitations. To overcome the drawbacks of the current method, this study proposes a novel patent analysis method, called the intelligent patent network analysis method, to make a visual network with great precision. Based on artificial intelligence techniques, the proposed method provides an automated procedure for searching patent documents, extracting patent keywords, and determining the weight of each patent keyword in order to generate a sophisticated visualization of the patent network. This study proposes a detailed procedure for generating an intelligent patent network that is helpful for improving the efficiency and quality of patent analysis. Furthermore, patents in the field of Carbon Nanotube Backlight Unit (CNT-BLU) were analyzed to verify the utility of the proposed method.
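
    The keyword extraction and weighting step can be sketched with standard text tooling. The paper's specific AI techniques are not detailed in the abstract, so TF-IDF weighting and a keyword co-occurrence matrix are used here as assumed stand-ins, and the patent snippets are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical patent abstracts (stand-ins for CNT-BLU patent documents).
patents = [
    "carbon nanotube cathode for backlight unit emission",
    "field emission backlight unit with nanotube electron source",
    "phosphor layer for carbon nanotube backlight luminance",
]

# Weight keywords per document with TF-IDF, then build a simple keyword
# co-occurrence matrix that could serve as input to a patent-network graph.
vec = TfidfVectorizer(stop_words="english")
W = vec.fit_transform(patents)             # documents x keywords (sparse)
cooccurrence = (W.T @ W).toarray()         # keyword x keyword edge weights
terms = vec.get_feature_names_out()

top = np.argsort(W.toarray().sum(axis=0))[::-1][:5]
print("top keywords:", [terms[i] for i in top])
print("co-occurrence matrix shape:", cooccurrence.shape)
```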

  18. Exploratory Bi-factor Analysis: The Oblique Case

    OpenAIRE

    Jennrich, Robert L.; Bentler, Peter M.

    2011-01-01

    Bi-factor analysis is a form of confirmatory factor analysis originally introduced by Holzinger and Swineford (1937). The bi-factor model has a general factor, a number of group factors, and an explicit bi-factor structure. Jennrich and Bentler (2011) introduced an exploratory form of bi-factor analysis that does not require one to provide an explicit bi-factor structure a priori. They use exploratory factor analysis and a bi-factor rotation criterion designed to produce a rotated loading mat...

  19. Scale-Free Nonparametric Factor Analysis: A User-Friendly Introduction with Concrete Heuristic Examples.

    Science.gov (United States)

    Mittag, Kathleen Cage

    Most researchers using factor analysis extract factors from a matrix of Pearson product-moment correlation coefficients. A method is presented for extracting factors in a non-parametric way, by extracting factors from a matrix of Spearman rho (rank correlation) coefficients. It is possible to factor analyze a matrix of association such that…
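
    A minimal sketch of the nonparametric variant described above: factors are extracted from a matrix of Spearman rank correlations rather than Pearson correlations. The data and the number of retained factors are hypothetical, and a simple principal-axis-style eigendecomposition stands in for whatever extraction method a researcher would normally prefer.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ordinal/skewed data: 200 cases x 6 variables.
rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(200, 6)).astype(float)

# Matrix of Spearman rank correlations instead of Pearson correlations.
rho, _ = spearmanr(X)                       # 6 x 6 association matrix

# Principal-axis-style extraction: eigendecomposition of the rho matrix;
# loadings are eigenvectors scaled by the square roots of the eigenvalues.
eigval, eigvec = np.linalg.eigh(rho)
order = np.argsort(eigval)[::-1]
k = 2                                        # number of factors retained
loadings = eigvec[:, order[:k]] * np.sqrt(eigval[order[:k]])
print(np.round(loadings, 2))
```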

  20. Analysis of the principal factors that intervene in the quantification of planar images of uniform distributions of 99mTc by the conjugate view method with background correction by simple subtraction

    International Nuclear Information System (INIS)

    Mora Araya, Luis Diego

    2013-01-01

    The activity of uniform distributions of 99m Tc was quantified by the conjugate view method. The calibration and transmission factors needed for the quantification were calculated. The dependence of the estimated number of counts within the source region, and the variability of the transmission factor, were determined as a function of the size chosen for the region of interest, keeping its geometry constant. The images from all acquisitions were corrected for environmental background radiation and for scattered radiation using the dual energy window (DEW) method, and the impact of these corrections on the images was checked both qualitatively and quantitatively. The acquisition used to obtain the calibration factor was performed with the same configuration and conditions as the acquisition used for quantification; the same volume and geometry were used to contain the 99m Tc activity distribution, and the same attenuating medium was used, so that the calibration factor was obtained under exactly the same circumstances as the quantification. The behavior of the estimated calibration factor of the gamma camera was analyzed as a function of the decay and attenuation corrections applied, and the dependence of the calibration and transmission factors on the region of interest used in the corresponding images was analyzed. The behavior of the activity estimate was then determined for all possible combinations of the studied factors entering the conjugate view quantification algorithm, namely the size of the region of interest corresponding to the source region, the transmission factor, the calibration factor, and the background correction by simple subtraction. The resulting activity estimates were compared, and a tendency was established indicating which combinations of the studied factors
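
    The conjugate-view quantification algorithm referred to above combines the background-corrected anterior and posterior counts, the transmission factor, and the calibration factor. The sketch below uses the standard geometric-mean conjugate-view estimate with simple background subtraction; the function name and all numerical values are invented for illustration and are not taken from the thesis.

```python
import numpy as np

def conjugate_view_activity(counts_ant, counts_post, bkg_ant, bkg_post,
                            transmission, calibration, t_acq):
    """Estimate activity (MBq) of a 99mTc distribution from opposed planar
    images using the conjugate-view method with simple background subtraction.

    transmission : fraction of photons transmitted through the attenuator
    calibration  : system sensitivity, counts per second per MBq
    t_acq        : acquisition time in seconds
    """
    # Background-corrected count rates in the source ROI for each view.
    rate_ant = max(counts_ant - bkg_ant, 0.0) / t_acq
    rate_post = max(counts_post - bkg_post, 0.0) / t_acq
    # Geometric mean of the opposed views, corrected for attenuation via the
    # measured transmission factor and converted with the calibration factor.
    return np.sqrt(rate_ant * rate_post / transmission) / calibration

# Hypothetical numbers: ROI counts, background counts, 60 s acquisition.
print(conjugate_view_activity(120_000, 95_000, 15_000, 12_000,
                              transmission=0.35, calibration=90.0,
                              t_acq=60.0))
```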

  1. Quantitative Risk Analysis: Method And Process

    Directory of Open Access Journals (Sweden)

    Anass BAYAGA

    2010-03-01

    Full Text Available Recent and past studies (King III report, 2009: 73-75; Stoney, 2007; Committee of Sponsoring Organisation-COSO, 2004; Bartell, 2003; Liebenberg and Hoyt, 2003; Reason, 2000; Markowitz, 1957) lament that although the introduction of quantifying risk to enhance the degree of objectivity in finance, for instance, was quite parallel to its development in the manufacturing industry, it is not the same in Higher Education Institutions (HEI). In this regard, the objective of the paper was to demonstrate the methods and process of Quantitative Risk Analysis (QRA) through the likelihood of occurrence of risk (phase I). This paper serves as the first of a two-phased study, which sampled one hundred (100) risk analysts in a University in the greater Eastern Cape Province of South Africa. The analysis of the likelihood of occurrence of risk by logistic regression and percentages was conducted to investigate whether or not there was a significant difference between groups (analysts) in respect of QRA. The Hosmer and Lemeshow test was non-significant with a chi-square (X2 = 8.181; p = 0.300), which indicated that there was a good model fit, since the data did not significantly deviate from the model. The study concluded that to derive an overall likelihood rating that indicates the probability that a potential risk may be exercised within the construct of an associated threat environment, the following governing factors must be considered: (1) threat source motivation and capability, (2) nature of the vulnerability, and (3) existence and effectiveness of current controls (methods and process).
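
    The logistic-regression and Hosmer-Lemeshow steps mentioned in the abstract can be reproduced on synthetic data. The predictors below are invented stand-ins for the governing factors, and the decile-based Hosmer-Lemeshow computation is a common textbook formulation rather than the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.stats import chi2

# Hypothetical risk data: three governing factors vs. risk occurrence.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 3))     # e.g., threat motivation, vulnerability, controls
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 2])))
y = rng.binomial(1, p_true)

model = LogisticRegression().fit(X, y)
p_hat = model.predict_proba(X)[:, 1]

# Hosmer-Lemeshow test: split cases into deciles of predicted probability and
# compare observed vs. expected event counts with a chi-square statistic.
order = np.argsort(p_hat)
hl = 0.0
for g in np.array_split(order, 10):
    obs, exp, n = y[g].sum(), p_hat[g].sum(), len(g)
    hl += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
print("HL statistic:", round(hl, 3), " p =", round(1 - chi2.cdf(hl, df=8), 3))
```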

  2. Factorization method for simulating QCD at finite density

    International Nuclear Information System (INIS)

    Nishimura, Jun

    2003-01-01

    We propose a new method for simulating QCD at finite density. The method is based on a general factorization property of distribution functions of observables, and it is therefore applicable to any system with a complex action. The so-called overlap problem is completely eliminated by the use of constrained simulations. We test this method in a Random Matrix Theory for finite density QCD, where we are able to reproduce the exact results for the quark number density. (author)

  3. Analysis of IFR driver fuel hot channel factors

    International Nuclear Information System (INIS)

    Ku, J.Y.; Chang, L.K.; Mohr, D.

    1994-01-01

    Thermal-hydraulic uncertainty factors for Integral Fast Reactor (IFR) driver fuels have been determined based primarily on the database obtained from the predecessor fuels used in the IFR prototype, Experimental Breeder Reactor II. The uncertainty factors were applied to the hot channel factors (HCFs) analyses to obtain separate overall HCFs for fuel and cladding for steady-state analyses. A ''semistatistical horizontal method'' was used in the HCFs analyses. The uncertainty factor of the fuel thermal conductivity dominates the effects considered in the HCFs analysis; the uncertainty in fuel thermal conductivity will be reduced as more data are obtained to expand the currently limited database for the IFR ternary metal fuel (U-20Pu-10Zr). A set of uncertainty factors to be used for transient analyses has also been derived.

  4. Analysis of IFR driver fuel hot channel factors

    International Nuclear Information System (INIS)

    Ku, J.Y.; Chang, L.K.; Mohr, D.

    2004-01-01

    Thermal-hydraulic uncertainty factors for Integral Fast Reactor (IFR) driver fuels have been determined based primarily on the database obtained from the predecessor fuels used in the IFR prototype, Experimental Breeder Reactor II. The uncertainty factors were applied to the hot channel factors (HCFs) analyses to obtain separate overall HCFs for fuel and cladding for steady-state analyses. A 'semistatistical horizontal method' was used in the HCFs analyses. The uncertainty factor of the fuel thermal conductivity dominates the effects considered in the HCFs analysis; the uncertainty in fuel thermal conductivity will be reduced as more data are obtained to expand the currently limited database for the IFR ternary metal fuel (U-20Pu-10Zr). A set of uncertainty factors to be used for transient analyses has also been derived. (author)

  5. Text analysis methods, text analysis apparatuses, and articles of manufacture

    Science.gov (United States)

    Whitney, Paul D; Willse, Alan R; Lopresti, Charles A; White, Amanda M

    2014-10-28

    Text analysis methods, text analysis apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a text analysis method includes accessing information indicative of data content of a collection of text comprising a plurality of different topics, using a computing device, analyzing the information indicative of the data content, and using results of the analysis, identifying a presence of a new topic in the collection of text.

  6. An alternative method for centrifugal compressor loading factor modelling

    Science.gov (United States)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods; performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function - the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova had proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed. The calculation error lies in the range of plus or minus 1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
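
    A two-point linear performance sketch, following the idea that the loading-factor line is fixed by its design-point value and its value at zero flow rate. The function, variable names, and all numbers are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

def loading_factor(phi, phi_design, psi_design, psi_zero):
    """Linear loading-factor performance fixed by two points: the value at
    zero flow rate (psi_zero) and the value at the design flow coefficient
    (psi_design at phi_design)."""
    slope = (psi_design - psi_zero) / phi_design
    return psi_zero + slope * phi

phi = np.linspace(0.0, 0.40, 5)      # flow coefficient at the impeller exit
print(loading_factor(phi, phi_design=0.30, psi_design=0.55,
                     psi_zero=0.75).round(3))
```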

  7. Nuclear analysis methods. Rudiments of radiation protection

    International Nuclear Information System (INIS)

    Roth, E.

    1998-01-01

    The nuclear analysis methods are generally used to analyse radioactive elements, but they can also be used for chemical analysis, in fields such as the analysis and characterization of traces. The principles of radiation protection (ALARA) are explained, the biological effects of ionizing radiations are given, and the elements and units used in radiation protection are summarized in tables. A part of this article is devoted to how to apply radiation protection in a nuclear analysis laboratory. (N.C.)

  8. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2004-09-08

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this

  9. Scalable group level probabilistic sparse factor analysis

    DEFF Research Database (Denmark)

    Hinrich, Jesper Løve; Nielsen, Søren Føns Vind; Riis, Nicolai Andre Brogaard

    2017-01-01

    Many data-driven approaches exist to extract neural representations of functional magnetic resonance imaging (fMRI) data, but most of them lack a proper probabilistic formulation. We propose a scalable group level probabilistic sparse factor analysis (psFA) allowing spatially sparse maps, component...... pruning using automatic relevance determination (ARD) and subject specific heteroscedastic spatial noise modeling. For task-based and resting state fMRI, we show that the sparsity constraint gives rise to components similar to those obtained by group independent component analysis. The noise modeling...... shows that noise is reduced in areas typically associated with activation by the experimental design. The psFA model identifies sparse components and the probabilistic setting provides a natural way to handle parameter uncertainties. The variational Bayesian framework easily extends to more complex...

  10. Disruptive Event Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    M. Wasiolek

    2004-01-01

    This analysis report is one of the technical reports containing documentation of the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis report describes the development of biosphere dose conversion factors (BDCFs) for the volcanic ash exposure scenario, and the development of dose factors for calculating inhalation dose during volcanic eruption. A graphical representation of the documentation hierarchy for the ERMYN is presented in Figure 1-1. This figure shows the interrelationships among the products (i.e., analysis and model reports) developed for biosphere modeling and provides an understanding of how this analysis report contributes to biosphere modeling. This report is one of two reports that develop biosphere BDCFs, which are input parameters for the TSPA model. The ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) describes in detail the ERMYN conceptual model and mathematical model. The input parameter reports, shown to the right of the Biosphere Model Report in Figure 1-1, contain detailed descriptions of the model input parameters, their development and the relationship between the parameters and specific features, events and processes (FEPs). This report describes biosphere model calculations and their output, the BDCFs, for the volcanic ash exposure scenario. This analysis receives direct input from the outputs of the ''Biosphere Model Report'' (BSC 2004 [DIRS 169460]) and from the five analyses that develop parameter values for the biosphere model (BSC 2004 [DIRS 169671]; BSC 2004 [DIRS 169672]; BSC 2004 [DIRS 169673]; BSC 2004 [DIRS 169458]; and BSC 2004 [DIRS 169459]). The results of this report are further analyzed in the ''Biosphere Dose Conversion Factor Importance and Sensitivity Analysis''. The objective of this analysis was to develop the BDCFs for the volcanic ash

  11. A continuous exchange factor method for radiative exchange in enclosures with participating media

    International Nuclear Information System (INIS)

    Naraghi, M.H.N.; Chung, B.T.F.; Litkouhi, B.

    1987-01-01

    A continuous exchange factor method for analysis of radiative exchange in enclosures is developed. In this method two types of exchange functions are defined, direct exchange function and total exchange function. Certain integral equations relating total exchange functions to direct exchange functions are developed. These integral equations are solved using Gaussian quadrature integration method. The results obtained based on the present approach are found to be more accurate than those of the zonal method
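
    The report's integral equations relating total to direct exchange functions are solved with Gaussian quadrature; the generic Nystrom-type discretization below illustrates the idea on a Fredholm equation of the second kind. The kernel, source term, and node count are placeholders and are not the exchange functions of the report.

```python
import numpy as np

# Solve a Fredholm integral equation of the second kind,
#   f(x) = g(x) + integral_0^1 K(x, y) f(y) dy,
# with Gauss-Legendre quadrature (Nystrom method).
n = 16
nodes, weights = np.polynomial.legendre.leggauss(n)   # nodes on [-1, 1]
x = 0.5 * (nodes + 1.0)                                # map to [0, 1]
w = 0.5 * weights

K = np.exp(-np.abs(x[:, None] - x[None, :]))           # assumed kernel
g = np.ones_like(x)                                    # assumed source term

# Discretized system: (I - K diag(w)) f = g, solved for f at the nodes.
A = np.eye(n) - K * w[None, :]
f = np.linalg.solve(A, g)
print(f.round(4))
```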

  12. Recent Progress on the Factorization Method for Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Bastian Harrach

    2013-01-01

    The Factorization Method was introduced by Kirsch for inverse scattering problems and extended to electrical impedance tomography (EIT) by Brühl and Hanke. Since these pioneering works, substantial progress has been made on the theoretical foundations of the method. The necessary assumptions have been weakened, and the proofs have been considerably simplified. In this work, we aim to summarize this progress and present a state-of-the-art formulation of the Factorization Method for EIT with continuous data. In particular, we formulate the method for general piecewise analytic conductivities and give short and self-contained proofs.

  13. Exploratory Factor Analysis With Small Samples and Missing Data.

    Science.gov (United States)

    McNeish, Daniel

    2017-01-01

    Exploratory factor analysis (EFA) is an extremely popular method for determining the underlying factor structure for a set of variables. Due to its exploratory nature, EFA is notorious for being conducted with small sample sizes, and recent reviews of psychological research have reported that between 40% and 60% of applied studies have 200 or fewer observations. Recent methodological studies have addressed small size requirements for EFA models; however, these models have only considered complete data, which are the exception rather than the rule in psychology. Furthermore, the extant literature on missing data techniques with small samples is scant, and nearly all existing studies focus on topics that are not of primary interest to EFA models. Therefore, this article presents a simulation to assess the performance of various missing data techniques for EFA models with both small samples and missing data. Results show that deletion methods do not extract the proper number of factors and estimate the factor loadings with severe bias, even when data are missing completely at random. Predictive mean matching is the best method overall when considering extracting the correct number of factors and estimating factor loadings without bias, although 2-stage estimation was a close second.

  14. Importance of development factors in company dealing with cataphoresis coating method

    Directory of Open Access Journals (Sweden)

    Dorota Klimecka-Tatar

    2014-06-01

    Full Text Available The main aim of the results presented in this paper is an analysis of the most important factors in the company's activity. A questionnaire survey was carried out among persons employed by the company, whose main line of business is the cataphoresis method of anti-corrosion coating. The validity of the Toyota roof elements was also assessed in the paper. Based on the research, the quality factor was indicated as the most important factor of the company's mission.

  15. Microlocal methods in the analysis of the boundary element method

    DEFF Research Database (Denmark)

    Pedersen, Michael

    1993-01-01

    The application of the boundary element method in numerical analysis is based upon the use of boundary integral operators stemming from multiple layer potentials. The regularity properties of these operators are vital in the development of boundary integral equations and error estimates. We show...

  16. Statistical methods for categorical data analysis

    CERN Document Server

    Powers, Daniel

    2008-01-01

    This book provides a comprehensive introduction to methods and models for categorical data analysis and their applications in social science research. Companion website also available, at https://webspace.utexas.edu/dpowers/www/

  17. Development of Ultraviolet Spectrophotometric Method for Analysis ...

    African Journals Online (AJOL)

    HP

    Method for Analysis of Lornoxicam in Solid Dosage Forms. Sunit Kumar Sahoo ... testing. Mean recovery was 100.82 % for tablets. Low values of % RSD indicate ... Saharty E, Refaat YS, Khateeb ME. Stability-Indicating Spectrophotometric ...

  18. Tensor-Dictionary Learning with Deep Kruskal-Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew J.; Pu, Yunchen; Sun, Yannan; Spell, Gregory; Carin, Lawrence

    2017-04-20

    We introduce new dictionary learning methods for tensor-variate data of any order. We represent each data item as a sum of Kruskal decomposed dictionary atoms within the framework of beta-process factor analysis (BPFA). Our model is nonparametric and can infer the tensor-rank of each dictionary atom. This Kruskal-Factor Analysis (KFA) is a natural generalization of BPFA. We also extend KFA to a deep convolutional setting and develop online learning methods. We test our approach on image processing and classification tasks achieving state of the art results for 2D & 3D inpainting and Caltech 101. The experiments also show that atom-rank impacts both overcompleteness and sparsity.

  19. An introduction to numerical methods and analysis

    CERN Document Server

    Epperson, James F

    2013-01-01

    Praise for the First Edition: ". . . outstandingly appealing with regard to its style, contents, considerations of requirements of practice, choice of examples, and exercises." -Zentralblatt MATH ". . . carefully structured with many detailed worked examples." -The Mathematical Gazette. The Second Edition of the highly regarded An Introduction to Numerical Methods and Analysis provides a fully revised guide to numerical approximation. The book continues to be accessible and expertly guides readers through the many available techniques of numerical methods and analysis. An Introduction to

  20. Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Cowell, Andrew J [Kennewick, WA; Gregory, Michelle L [Richland, WA; Baddeley, Robert L [Richland, WA; Paulson, Patrick R [Pasco, WA; Tratz, Stephen C [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2012-03-20

    Hypothesis analysis methods, hypothesis analysis devices, and articles of manufacture are described according to some aspects. In one aspect, a hypothesis analysis method includes providing a hypothesis, providing an indicator which at least one of supports and refutes the hypothesis, using the indicator, associating evidence with the hypothesis, weighting the association of the evidence with the hypothesis, and using the weighting, providing information regarding the accuracy of the hypothesis.

  1. Human Modeling for Ground Processing Human Factors Engineering Analysis

    Science.gov (United States)

    Stambolian, Damon B.; Lawrence, Brad A.; Stelges, Katrine S.; Steady, Marie-Jeanne O.; Ridgwell, Lora C.; Mills, Robert E.; Henderson, Gena; Tran, Donald; Barth, Tim

    2011-01-01

    There have been many advancements and accomplishments over the last few years using human modeling for human factors engineering analysis for design of spacecraft. The key methods used for this are motion capture and computer generated human models. The focus of this paper is to explain the human modeling currently used at Kennedy Space Center (KSC), and to explain the future plans for human modeling for future spacecraft designs

  2. Improving Your Exploratory Factor Analysis for Ordinal Data: A Demonstration Using FACTOR

    Directory of Open Access Journals (Sweden)

    James Baglin

    2014-06-01

    Full Text Available Exploratory factor analysis (EFA) methods are used extensively in the field of assessment and evaluation. Due to EFA's widespread use, common methods and practices have come under close scrutiny. A substantial body of literature has been compiled highlighting problems with many of the methods and practices used in EFA, and, in response, many guidelines have been proposed with the aim to improve application. Unfortunately, implementing recommended EFA practices has been restricted by the range of options available in commercial statistical packages and, perhaps, by an absence of clear, practical 'how-to' demonstrations. Consequently, this article describes the application of methods recommended to get the most out of your EFA. The article focuses on dealing with the common situation of analysing ordinal data as derived from Likert-type scales. These methods are demonstrated using the free, stand-alone, easy-to-use and powerful EFA package FACTOR (http://psico.fcep.urv.es/utilitats/factor/; Lorenzo-Seva & Ferrando, 2006). The demonstration applies the recommended techniques using an accompanying dataset, based on the Big 5 personality test. The outcomes obtained by the EFA using the recommended procedures through FACTOR are compared to the default techniques currently available in SPSS.
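
    One of the practices FACTOR supports is choosing the number of factors with parallel analysis; a bare-bones version is sketched below. FACTOR would normally pair this with polychoric correlations for Likert items, but plain Pearson correlations are used here to keep the sketch short, and the simulated item responses are hypothetical.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain factors whose sample eigenvalues
    exceed the mean eigenvalues of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    real_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand_eig = np.zeros(p)
    for _ in range(n_iter):
        noise = rng.normal(size=(n, p))
        rand_eig += np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    rand_eig /= n_iter
    return int(np.sum(real_eig > rand_eig)), real_eig, rand_eig

# Hypothetical Likert-type responses: 250 respondents x 10 items.
rng = np.random.default_rng(3)
items = rng.integers(1, 6, size=(250, 10)).astype(float)
k, _, _ = parallel_analysis(items)
print("Factors suggested by parallel analysis:", k)
```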

  3. Relating Actor Analysis Methods to Policy Problems

    NARCIS (Netherlands)

    Van der Lei, T.E.

    2009-01-01

    For a policy analyst the policy problem is the starting point for the policy analysis process. During this process the policy analyst structures the policy problem and makes a choice for an appropriate set of methods or techniques to analyze the problem (Goeller 1984). The methods of the policy

  4. Scale factor measure method without turntable for angular rate gyroscope

    Science.gov (United States)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method that requires no turntable is designed for the angular rate gyroscope. A test system is built that consists of a test device, a data acquisition circuit, and data processing software based on the LabVIEW platform. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the gyroscope under test. By shaking the test device around its edge, which is parallel to the input axis of the gyroscope, the scale factor of the gyroscope under test can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. When the scale factor of a quartz MEMS gyroscope is measured multiple times by this method, the difference is less than 0.2%; compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system are therefore good.
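
    In essence, the method infers the unknown scale factor from the ratio of the two gyroscopes' outputs while both sense the same shaking motion. The sketch below assumes synchronized, bias-free output records; signal names, noise levels, and scale-factor values are invented for illustration.

```python
import numpy as np

def scale_factor_without_turntable(v_meas, v_std, sf_std):
    """Estimate the scale factor of the gyroscope under test from the output
    of a co-mounted standard gyroscope with a known scale factor.
    A least-squares fit of the two output series gives the output ratio;
    multiplying by the standard scale factor yields the unknown one."""
    ratio = np.dot(v_meas, v_std) / np.dot(v_std, v_std)   # least-squares slope
    return ratio * sf_std

# Hypothetical synchronized output records (volts) during edge shaking.
t = np.linspace(0.0, 5.0, 1000)
rate = 20.0 * np.sin(2 * np.pi * 1.5 * t)        # common angular rate, deg/s
sf_std, sf_true = 0.010, 0.012                    # V per (deg/s)
v_std = sf_std * rate + 1e-4 * np.random.default_rng(2).normal(size=t.size)
v_meas = sf_true * rate + 1e-4 * np.random.default_rng(4).normal(size=t.size)

print(round(scale_factor_without_turntable(v_meas, v_std, sf_std), 5))
```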

  5. Fast sweeping method for the factored eikonal equation

    Science.gov (United States)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation, and the second factor is the necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of iterations for the Gauss-Seidel iterations is independent of the mesh size, (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
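
    Written out, the factored form replaces the original eikonal equation with an equation for the correction factor; the notation below (T for traveltime, s for slowness) is assumed, since the abstract does not fix symbols.

```latex
% Factored eikonal equation: write the traveltime T as the product
% T(x) = T_0(x) \tau(x), where T_0 solves a simple eikonal equation
% (e.g., distance from the source over a constant velocity) and \tau is
% the correction factor computed by fast sweeping.
\begin{align}
  |\nabla T(x)| &= s(x), \qquad T = T_0\,\tau,\\
  \left| \tau(x)\,\nabla T_0(x) + T_0(x)\,\nabla \tau(x) \right| &= s(x),
\end{align}
% where s(x) is the slowness; the second equation is discretized upwind
% and solved for \tau with Gauss-Seidel sweeps that honor causality.
```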

  6. Nodal method for fast reactor analysis

    International Nuclear Information System (INIS)

    Shober, R.A.

    1979-01-01

    In this paper, a nodal method applicable to fast reactor diffusion theory analysis has been developed. This method has been shown to be accurate and efficient in comparison to highly optimized finite difference techniques. The use of an analytic solution to the diffusion equation as a means of determining accurate coupling relationships between nodes has been shown to be highly accurate and efficient in specific two-group applications, as well as in the current multigroup method

  7. Factors of Selection of the Stock Allocation Method

    Directory of Open Access Journals (Sweden)

    Rohov Heorhii K.

    2014-03-01

    Full Text Available The article describes the results of the author's study of the factors behind strategic decisions on the selection of stock allocation methods by public joint stock companies in Ukraine. The author used the Random Forest apparatus for building classification trees as well as informal methods. The article analyses the reasons that restrain public allocation of stock. It shows a significant influence on the selection of a stock allocation method of such factors as capital concentration, the balance value of corporate rights, the sector of the economy, and significant participation of collective investment institutions or the state in the authorised capital. The resulting hierarchical model for classifying the factors of the issuing policy of joint stock companies finds logical justification in specific features of the institutional environment; however, it does not fit into the framework of the classical concept of the market economy. The model could be used both for setting the goals of corporate financial strategies and for improving state regulation of securities issuers. A prospect for further studies in this direction is identifying how the factors behind the selection of the stock allocation method transform under conditions of a stock market revival.
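
    A classification-tree treatment of this kind of problem can be sketched with a random forest on synthetic company data. The factor names, coding, and response variable below are invented stand-ins for the study's survey data, and impurity-based feature importances replace the author's combination of formal and informal methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical company-level data: explanatory factors vs. the chosen
# stock allocation method (0 = private placement, 1 = public offering).
rng = np.random.default_rng(5)
n = 300
X = np.column_stack([
    rng.random(n),                  # capital concentration
    rng.random(n),                  # balance value of corporate rights
    rng.integers(0, 5, n),          # sector of the economy (coded)
    rng.integers(0, 2, n),          # state / investment-fund participation
])
y = (0.6 * X[:, 0] + 0.3 * X[:, 3] + 0.2 * rng.random(n) > 0.6).astype(int)

# Fit a random forest and rank the factors by impurity-based importance.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
names = ["concentration", "balance value", "sector", "state/fund share"]
for name, imp in zip(names, clf.feature_importances_):
    print(f"{name:>17}: {imp:.3f}")
```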

  8. Inelastic analysis methods for piping systems

    International Nuclear Information System (INIS)

    Boyle, J.T.; Spence, J.

    1980-01-01

    The analysis of pipework systems which operate in an environment where local inelastic strains are evident is one of the most demanding problems facing the stress analyst in the nuclear field. The spatial complexity of even the most modest system makes a detailed analysis using finite element techniques beyond the scope of current computer technology. For this reason the emphasis has been on simplified methods. It is the aim of this paper to provide a reasonably complete, state-of-the-art review of inelastic pipework analysis methods and to attempt to highlight areas where reliable information is lacking and further work is needed. (orig.)

  9. Human factors estimation methods in nuclear power plant

    International Nuclear Information System (INIS)

    Takano, Kenichi; Yoshino, Kenji; Nagasaka, Akihiko

    1986-01-01

The definitions and models of mental workload are investigated; the simplest and most reasonable is the single-channel model, in which the channel has limited capacity. The capacity depends on the time required for the brain's information processing, such as recognition by the eyes or ears, judgement based on memory or experience, and the resulting actions. In this paper the mental workload is defined as the time needed for such information processing relative to the total capacity. Based on this definition, a model experiment is carried out: the main task is the simple addition of two digits displayed on a CRT, with the presentation speed varied from 10 cycles/min to 60 cycles/min. Four techniques for measuring the mental workload, (1) the task time analysis method, (2) the physiological method, (3) the secondary task method, and (4) the subjective method, are examined with respect to sensitivity and validity. The values obtained by the physiological method, the secondary task method and the subjective method are compared with the task time analysis results, because the task time analysis method is most faithful to the definition. (author)
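Under this single-channel definition, workload is simply the ratio of time demanded to time available; a minimal sketch (the function and step names are illustrative, not taken from the paper):

```python
def mental_workload(processing_times_s, available_time_s):
    """Workload as the fraction of channel capacity consumed (single-channel model).

    processing_times_s: durations of the recognition/judgement/action steps required
    within one task cycle; available_time_s: time budget of the cycle. The paper
    defines workload as the time needed for information processing relative to the
    total capacity; the numbers below are purely illustrative.
    """
    return sum(processing_times_s) / available_time_s

# e.g. a 6 s task cycle whose steps take 1.2 s, 2.0 s and 0.8 s -> workload ~ 0.67
print(mental_workload([1.2, 2.0, 0.8], 6.0))
```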

  10. Task analysis methods applicable to control room design review (CDR)

    International Nuclear Information System (INIS)

    Moray, N.P.; Senders, J.W.; Rhodes, W.

    1985-06-01

This report presents the results of a research study conducted in support of the human factors engineering program of the Atomic Energy Control Board in Canada. It contains five products which may be used by the Atomic Energy Control Board in relation to Task Analysis of jobs in CANDU nuclear power plants: 1. a detailed method for preparing for a task analysis; 2. a Task Data Form for recording task analysis data; 3. a detailed method for carrying out task analyses; 4. a guide to assessing alternative methods for performing task analyses, if such are proposed by utilities or consultants; and 5. an annotated bibliography on task analysis. In addition, a short explanation of the origins, nature and uses of task analysis is provided, with some examples of its cost effectiveness. 35 refs

  11. Implicitly Weighted Methods in Robust Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 44, č. 3 (2012), s. 449-462 ISSN 0924-9907 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robustness * high breakdown point * outlier detection * robust correlation analysis * template matching * face recognition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.767, year: 2012

  12. Excitation methods for energy dispersive analysis

    International Nuclear Information System (INIS)

    Jaklevic, J.M.

    1976-01-01

    The rapid development in recent years of energy dispersive x-ray fluorescence analysis has been based primarily on improvements in semiconductor detector x-ray spectrometers. However, the whole analysis system performance is critically dependent on the availability of optimum methods of excitation for the characteristic x rays in specimens. A number of analysis facilities based on various methods of excitation have been developed over the past few years. A discussion is given of the features of various excitation methods including charged particles, monochromatic photons, and broad-energy band photons. The effects of the excitation method on background and sensitivity are discussed from both theoretical and experimental viewpoints. Recent developments such as pulsed excitation and polarized photons are also discussed

  13. Analysis of thermal power calibration method

    International Nuclear Information System (INIS)

    Zagar, T.; Ravnik, M.; Persic, A.

    2000-01-01

Reactor power in research reactors is usually calibrated with an accuracy of 10%. Calorimetric power calibration can be significantly wrong if it is performed under uncontrolled conditions. To measure correct temperature-rise rates, the measurement should be performed at low reactor power, with the concrete and air temperatures equal to the bulk water temperature. Under controlled conditions the corrections for heat loss are around 2%, but if the calorimetric calibration is performed at high reactor power or with the water temperature lower than the concrete temperature, the error can be as big as 30%. This calibration is usually performed for some arbitrarily chosen control rod configuration and is strictly valid for that configuration only. By introducing flux perturbation factors, a correction to the signal from the nuclear instrumentation is made which practically eliminates the sensitivity to the control rod configuration. The analysis presented shows that the correction can be up to 15% in the most unfavourable case, when the control rod closest to the detector is fully inserted

  14. A strategy for evaluating pathway analysis methods.

    Science.gov (United States)

    Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques

    2017-10-13

Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by the same method applied to a sub-dataset of the original dataset. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified by a particular method applied to a dataset from one experiment differ from those identified by the same method applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy. Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth
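The abstract does not give the exact formulas for recall and discrimination, so the sketch below simply uses set overlap between lists of significant pathways as an assumed stand-in, just to make the two metrics concrete:

```python
def overlap(a, b):
    """Fraction of pathways in list a that are also found in list b (illustrative)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a) if a else 0.0

def recall_metric(full_hits, subset_hits):
    """Consistency between pathways found on the full dataset and on a sub-dataset.

    The exact definition used in the paper is not given in the abstract; a simple
    set overlap is assumed here for illustration.
    """
    return overlap(full_hits, subset_hits)

def discrimination_metric(hits_exp1, hits_exp2):
    """Specificity: how different the pathway calls are between two unrelated experiments."""
    return 1.0 - overlap(hits_exp1, hits_exp2)

# toy usage with hypothetical pathway identifiers
full = ["p53 signaling", "apoptosis", "cell cycle"]
sub = ["apoptosis", "cell cycle"]
other = ["olfactory transduction"]
print(recall_metric(full, sub), discrimination_metric(full, other))
```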

  15. A human factor analysis of a radiotherapy accident

    International Nuclear Information System (INIS)

    Thellier, S.

    2009-01-01

Since September 2005, I.R.S.N. has studied radiotherapy treatment activities from the angle of human and organizational factors in order to improve the reliability of radiotherapy treatment. Drawing on its experience of incident analysis in the nuclear industry, I.R.S.N. analysed and published in March 2008, for the first time in France, a detailed study of a radiotherapy accident from the angle of human and organizational factors. The method used for the analysis is based on interviews and on documents kept by the hospital. This analysis aimed at identifying the causes of the difference recorded between the dose prescribed by the radiotherapist and the dose effectively received by the patient. Neither verbal nor written communication (intra-service meetings and protocols of treatment) allowed information to be transmitted correctly in order to permit radiographers to adjust the irradiation zones correctly. The analysis highlighted the fact that, during the preparation and the carrying out of the treatment, various factors led planned controls not to be performed. Finally, it highlighted the fact that unresolved questions persist in the report on this accident, due to a lack of traceability of a certain number of key actions. The article concludes that there must be improvement in three areas: cooperation between the practitioners, control of the actions and traceability of the actions. (author)

  16. Instrumental methods of analysis, 7th edition

    International Nuclear Information System (INIS)

    Willard, H.H.; Merritt, L.L. Jr.; Dean, J.A.; Settle, F.A. Jr.

    1988-01-01

    The authors have prepared an organized and generally polished product. The book is fashioned to be used as a textbook for an undergraduate instrumental analysis course, a supporting textbook for graduate-level courses, and a general reference work on analytical instrumentation and techniques for professional chemists. Four major areas are emphasized: data collection and processing, spectroscopic instrumentation and methods, liquid and gas chromatographic methods, and electrochemical methods. Analytical instrumentation and methods have been updated, and a thorough citation of pertinent recent literature is included

  17. Application of Software Safety Analysis Methods

    International Nuclear Information System (INIS)

    Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C.; Lee, S. J.; Koo, Y. H.

    2009-01-01

    A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel to the verification and validation (V and V) activities, along the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA)

  18. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  19. Advanced analysis methods in particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C.; /Fermilab

    2010-10-01

    Each generation of high energy physics experiments is grander in scale than the previous - more powerful, more complex and more demanding in terms of data handling and analysis. The spectacular performance of the Tevatron and the beginning of operations of the Large Hadron Collider, have placed us at the threshold of a new era in particle physics. The discovery of the Higgs boson or another agent of electroweak symmetry breaking and evidence of new physics may be just around the corner. The greatest challenge in these pursuits is to extract the extremely rare signals, if any, from huge backgrounds arising from known physics processes. The use of advanced analysis techniques is crucial in achieving this goal. In this review, I discuss the concepts of optimal analysis, some important advanced analysis methods and a few examples. The judicious use of these advanced methods should enable new discoveries and produce results with better precision, robustness and clarity.

  20. Time Series Factor Analysis with an Application to Measuring Money

    NARCIS (Netherlands)

    Gilbert, Paul D.; Meijer, Erik

    2005-01-01

    Time series factor analysis (TSFA) and its associated statistical theory is developed. Unlike dynamic factor analysis (DFA), TSFA obviates the need for explicitly modeling the process dynamics of the underlying phenomena. It also differs from standard factor analysis (FA) in important respects: the

  1. Methods for determining radionuclide retardation factors: status report

    International Nuclear Information System (INIS)

    Relyea, J.F.; Serne, R.J.; Rai, D.

    1980-04-01

    This report identifies a number of mechanisms that retard radionuclide migration, and describes the static and dynamic methods that are used to study such retardation phenomena. Both static and dynamic methods are needed for reliable safety assessments of underground nuclear-waste repositories. This report also evaluates the extent to which the two methods may be used to diagnose radionuclide migration through various types of geologic media, among them unconsolidated, crushed, intact, and fractured rocks. Adsorption is one mechanism that can control radionuclide concentrations in solution and therefore impede radionuclide migration. Other mechanisms that control a solution's radionuclide concentration and radionuclide migration are precipitation of hydroxides and oxides, oxidation-reduction reactions, and the formation of minerals that might include the radionuclide as a structural element. The retardation mechanisms mentioned above are controlled by such factors as surface area, cation exchange capacity, solution pH, chemical composition of the rock and of the solution, oxidation-reduction potential, and radionuclide concentration. Rocks and ground waters used in determining retardation factors should represent the expected equilibrium conditions in the geologic system under investigation. Static test methods can be used to rapidly screen the effects of the factors mentioned above. Dynamic (or column) testing, is needed to assess the effects of hydrodynamics and the interaction of hydrodynamics with the other important parameters. This paper proposes both a standard method for conducting batch Kd determinations, and a standard format for organizing and reporting data. Dynamic testing methods are not presently developed to the point that a standard methodology can be proposed. Normal procedures are outlined for column experimentation and the data that are needed to analyze a column experiment are identified
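The link between the batch sorption measurements and migration is the retardation factor; a minimal sketch of that arithmetic, using the standard distribution-coefficient and saturated-porous-medium relations (the report's own expressions are not quoted in this abstract, so these formulas and numbers are assumptions):

```python
def batch_kd(c_initial, c_final, solution_volume_ml, solid_mass_g):
    """Distribution coefficient Kd (mL/g) from a batch sorption test.

    Kd = (activity sorbed per gram of solid) / (activity per mL remaining in solution),
    with c_initial and c_final the solution concentrations before and after contact.
    """
    sorbed_per_g = (c_initial - c_final) * solution_volume_ml / solid_mass_g
    return sorbed_per_g / c_final

def retardation_factor(kd_ml_per_g, bulk_density_g_per_cm3, porosity):
    """Standard retardation factor for saturated porous flow: R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density_g_per_cm3 / porosity) * kd_ml_per_g

# illustrative values only
kd = batch_kd(c_initial=100.0, c_final=20.0, solution_volume_ml=30.0, solid_mass_g=1.0)
print(kd, retardation_factor(kd, bulk_density_g_per_cm3=1.6, porosity=0.3))
```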

  2. Review of strain buckling: analysis methods

    International Nuclear Information System (INIS)

    Moulin, D.

    1987-01-01

    This report represents an attempt to review the mechanical analysis methods reported in the literature to account for the specific behaviour that we call buckling under strain. In this report, this expression covers all buckling mechanisms in which the strains imposed play a role, whether they act alone (as in simple buckling under controlled strain), or whether they act with other loadings (primary loading, such as pressure, for example). Attention is focused on the practical problems relevant to LMFBR reactors. The components concerned are distinguished by their high slenderness ratios and by rather high thermal levels, both constant and variable with time. Conventional static buckling analysis methods are not always appropriate for the consideration of buckling under strain. New methods must therefore be developed in certain cases. It is also hoped that this review will facilitate the coding of these analytical methods to aid the constructor in his design task and to identify the areas which merit further investigation

  3. Analysis of mixed data methods & applications

    CERN Document Server

    de Leon, Alexander R

    2013-01-01

A comprehensive source on mixed data analysis, Analysis of Mixed Data: Methods & Applications summarizes the fundamental developments in the field. Case studies are used extensively throughout the book to illustrate interesting applications from economics, medicine and health, marketing, and genetics. Carefully edited for smooth readability and seamless transitions between chapters. All chapters follow a common structure, with an introduction and a concluding summary, and include illustrative examples from real-life case studies in developmental toxicolog

  4. Scope-Based Method Cache Analysis

    DEFF Research Database (Denmark)

    Huber, Benedikt; Hepp, Stefan; Schoeberl, Martin

    2014-01-01

The quest for time-predictable systems has led to the exploration of new hardware architectures that simplify analysis and reasoning in the temporal domain, while still providing competitive performance. For the instruction memory, the method cache is a conceptually attractive solution...

  5. Radiation analysis devices, radiation analysis methods, and articles of manufacture

    Science.gov (United States)

    Roybal, Lyle Gene

    2010-06-08

Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and to combine the radiation count data of individual sections to determine whether a selected radioactive material is present in the area of interest. The amount of radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in that individual section. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising determining one or more correction factors based on a calibration of a radiation analysis device, measuring radiation received by the radiation analysis device using the one or more correction factors, and presenting information relating to an amount of radiation measured by the radiation analysis device having one of a plurality of specified radiation energy levels of a range of interest.
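The patent abstract does not state the decision rule used to combine the sections, so the sketch below assumes a simple counting-statistics criterion on the summed counts, only to illustrate why the combination can succeed where single sections cannot:

```python
import numpy as np

def material_present(section_counts, section_background, detection_sigma=3.0):
    """Decide whether a radioactive material is present in the whole area of interest
    by combining per-section counts that are individually too weak to decide.

    A simple criterion (net counts exceeding detection_sigma standard deviations of
    the summed background) is assumed for illustration; the patent does not specify
    the actual decision rule or correction factors.
    """
    total = np.sum(section_counts)
    background = np.sum(section_background)
    net = total - background
    return net > detection_sigma * np.sqrt(background)

# nine weak sections: each is within statistical fluctuation of its own background,
# but the combined net count crosses the detection threshold
print(material_present([130, 128, 141, 135, 129, 137, 132, 140, 134], [115] * 9))
```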

  6. The annual averaged atmospheric dispersion factor and deposition factor according to methods of atmospheric stability classification

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hae Sun; Jeong, Hyo Joon; Kim, Eun Han; Han, Moon Hee; Hwang, Won Tae [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-09-15

    This study analyzes the differences in the annual averaged atmospheric dispersion factor and ground deposition factor produced using two classification methods of atmospheric stability, which are based on a vertical temperature difference and the standard deviation of horizontal wind direction fluctuation. Daedeok and Wolsong nuclear sites were chosen for an assessment, and the meteorological data at 10 m were applied to the evaluation of atmospheric stability. The XOQDOQ software program was used to calculate atmospheric dispersion factors and ground deposition factors. The calculated distances were chosen at 400 m, 800 m, 1,200 m, 1,600 m, 2,400 m, and 3,200 m away from the radioactive material release points. All of the atmospheric dispersion factors generated using the atmospheric stability based on the vertical temperature difference were shown to be higher than those from the standard deviation of horizontal wind direction fluctuation. On the other hand, the ground deposition factors were shown to be same regardless of the classification method, as they were based on the graph obtained from empirical data presented in the Nuclear Regulatory Commission's Regulatory Guide 1.111, which is unrelated to the atmospheric stability for the ground level release. These results are based on the meteorological data collected over the course of one year at the specified sites; however, the classification method of atmospheric stability using the vertical temperature difference is expected to be more conservative.
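For reference, a sketch of the two classification schemes being compared; the class breakpoints are commonly cited regulatory-guide values and are included only as illustrative assumptions (they should be checked against the guide actually applied to the site data):

```python
# Pasquill stability class (A-G) from the two schemes compared in the study:
# vertical temperature difference (delta-T) and sigma-theta. Breakpoints below are
# commonly cited values given for illustration only.

DELTA_T_BOUNDS = [(-1.9, "A"), (-1.7, "B"), (-1.5, "C"), (-0.5, "D"), (1.5, "E"), (4.0, "F")]
SIGMA_THETA_BOUNDS = [(22.5, "A"), (17.5, "B"), (12.5, "C"), (7.5, "D"), (3.8, "E"), (2.1, "F")]

def stability_from_delta_t(dt_per_100m_degc):
    """Class from the vertical temperature gradient (degC per 100 m)."""
    for upper, cls in DELTA_T_BOUNDS:
        if dt_per_100m_degc <= upper:
            return cls
    return "G"

def stability_from_sigma_theta(sigma_theta_deg):
    """Class from the standard deviation of horizontal wind direction fluctuation (deg)."""
    for lower, cls in SIGMA_THETA_BOUNDS:
        if sigma_theta_deg >= lower:
            return cls
    return "G"

# the two schemes can disagree for the same hour of data, which is what drives the
# differences in the annual averaged dispersion factors reported in the study
print(stability_from_delta_t(-0.8), stability_from_sigma_theta(5.0))
```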

  7. Thermal disadvantage factor calculation by the multiregion collision probability method

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2004-01-01

A multi-region collision probability formulation that is capable of applying the white boundary condition directly is presented and applied to thermal neutron transport problems. The disadvantage factors computed are compared with their counterparts calculated by S_N methods with both direct and indirect application of the white boundary condition. The results of the ABH method and of the collision probability method with indirect application of the white boundary condition are also considered, and comparisons with benchmark Monte Carlo results are carried out. The studies show that the proposed formulation is capable of calculating the thermal disadvantage factor with sufficient accuracy without resorting to the fictitious scattering outer shell approximation associated with the indirect application of the white boundary condition in collision probability solutions

  8. Piping dynamic analysis by the synthesis method

    International Nuclear Information System (INIS)

    Bezler, P.; Curreri, J.R.

    1976-01-01

    Since piping systems are a frequent source of noise and vibrations, their efficient dynamic analysis is imperative. As an alternate to more conventional analyses methods, an application of the synthesis method to piping vibrations analyses is demonstrated. Specifically, the technique is illustrated by determining the normal modes and natural frequencies of a composite bend from the normal mode and natural frequency data of two component parts. A comparison of the results to those derived for the composite bend by other techniques is made

  9. Probabilistic Analysis Methods for Hybrid Ventilation

    DEFF Research Database (Denmark)

    Brohus, Henrik; Frier, Christian; Heiselberg, Per

This paper discusses a general approach for the application of probabilistic analysis methods in the design of ventilation systems. The aims and scope of probabilistic versus deterministic methods are addressed, with special emphasis on hybrid ventilation systems. A preliminary application of stochastic differential equations is presented, comprising a general heat balance for an arbitrary number of loads and zones in a building to determine the thermal behaviour under random conditions.

  10. ANALYSIS OF RISK FACTORS ECTOPIC PREGNANCY

    Directory of Open Access Journals (Sweden)

    Budi Santoso

    2017-04-01

Full Text Available Introduction: Ectopic pregnancy is a pregnancy with extrauterine implantation. This situation is a gynecologic emergency that contributes to maternal mortality. Therefore, early recognition, based on identification of the risk factors for ectopic pregnancy, is needed. Methods: The design was descriptive observational. The samples were pregnant women who had ectopic pregnancy at the Maternity Room, Emergency Unit, Dr. Soetomo Hospital, Surabaya, from 1 July 2008 to 1 July 2010. The sampling technique was total sampling using medical records. Result: Patients with ectopic pregnancy numbered 99 out of 2090 pregnant women who sought treatment in Dr. Soetomo Hospital. However, only 29 patients had traceable risk factors. Discussion: Most ectopic pregnancies were in the age group of 26-30 years, comprising 32 patients (32.32%), followed by the age group of 31-35 years with 25 patients (25.25%), 18 patients in the age group 21-25 years (18.18%), 17 patients in the age group 36-40 years (17.17%), 4 patients in the age group of 41 years and over (4.04%), and the fewest in the age group of 16-20 years with 3 patients (3.03%). A total of 12 patients with ectopic pregnancy (41.38%) had a history of abortion, and 6 patients (20.69%) each were in the group of ectopic pregnancy patients who used family planning and in the group who used family planning as well as in the group of ectopic pregnancy patients with a history of surgery. There were 2 patients (6.90%) in the group of ectopic pregnancy patients who had both a history of surgery and a history of abortion. The incidence rate of ectopic pregnancy was 4.73%, mostly in the second gravidity (34.34%), whereas nulliparous women had the highest prevalence of 39.39%. Acquired risk factors were: history of operations 10.34%, family planning 20.69%, history of abortion 41.38%, history of abortion and operation 6.90%, and family planning together with a history of abortion 20.69%.

  11. Dirac equation in low dimensions: The factorization method

    Energy Technology Data Exchange (ETDEWEB)

    Sánchez-Monroy, J.A., E-mail: antosan@if.usp.br [Instituto de Física, Universidade de São Paulo, 05508-090, São Paulo, SP (Brazil); Quimbay, C.J., E-mail: cjquimbayh@unal.edu.co [Departamento de Física, Universidad Nacional de Colombia, Bogotá, D. C. (Colombia); CIF, Bogotá (Colombia)

    2014-11-15

    We present a general approach to solve the (1+1) and (2+1)-dimensional Dirac equations in the presence of static scalar, pseudoscalar and gauge potentials, for the case in which the potentials have the same functional form and thus the factorization method can be applied. We show that the presence of electric potentials in the Dirac equation leads to two Klein–Gordon equations including an energy-dependent potential. We then generalize the factorization method for the case of energy-dependent Hamiltonians. Additionally, the shape invariance is generalized for a specific class of energy-dependent Hamiltonians. We also present a condition for the absence of the Klein paradox (stability of the Dirac sea), showing how Dirac particles in low dimensions can be confined for a wide family of potentials. - Highlights: • The low-dimensional Dirac equation in the presence of static potentials is solved. • The factorization method is generalized for energy-dependent Hamiltonians. • The shape invariance is generalized for energy-dependent Hamiltonians. • The stability of the Dirac sea is related to the existence of supersymmetric partner Hamiltonians.

  12. Chemical analysis by nuclear methods. v. 2

    International Nuclear Information System (INIS)

    Alfassi, Z.B.

    1998-01-01

'Chemical Analysis by Nuclear Methods' is the work of several renowned authors in the field of nuclear chemistry and radiochemistry, compiled by Alfassi, Z.B. and translated into a Farsi version collected in two volumes. The second volume consists of the following chapters: ion recoil scattering and elastic scattering are dealt with in the eleventh chapter; the twelfth chapter is devoted to nuclear reaction analysis using charged particles; X-ray emission is discussed in the thirteenth chapter; the fourteenth chapter is about the use of ion microprobes; X-ray fluorescence analysis is discussed in the fifteenth chapter; alpha, beta and gamma ray scattering in chemical analysis are dealt with in chapter sixteen; Moessbauer spectroscopy and positron annihilation are discussed in chapters seventeen and eighteen. The last two chapters are about isotope dilution analysis and radioimmunoassay

  13. Numerical methods in software and analysis

    CERN Document Server

    Rice, John R

    1992-01-01

    Numerical Methods, Software, and Analysis, Second Edition introduces science and engineering students to the methods, tools, and ideas of numerical computation. Introductory courses in numerical methods face a fundamental problem-there is too little time to learn too much. This text solves that problem by using high-quality mathematical software. In fact, the objective of the text is to present scientific problem solving using standard mathematical software. This book discusses numerous programs and software packages focusing on the IMSL library (including the PROTRAN system) and ACM Algorithm

  14. The Empirical Verification of an Assignment of Items to Subtests : The Oblique Multiple Group Method Versus the Confirmatory Common Factor Method

    NARCIS (Netherlands)

    Stuive, Ilse; Kiers, Henk A.L.; Timmerman, Marieke E.; ten Berge, Jos M.F.

    2008-01-01

    This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of

  15. DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    MICULEAC Melania Elena

    2014-06-01

Full Text Available The deterministic methods are the quantitative methods whose goal is to express, through numerical quantification, the mechanisms by which factorial and causal relations of influence and propagation of effects are created, in cases where the phenomenon can be expressed through a direct functional cause-effect relation. Functional and deterministic relations are causal relations in which a well-defined value of the resulting phenomenon corresponds to a certain value of the characteristic. They can express directly the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.

  16. Nonlinear structural analysis using integrated force method

    Indian Academy of Sciences (India)

A new formulation termed the Integrated Force Method (IFM) was proposed by Patnaik ... designated ``Structure (n, m)'', where (n, m) are the force and displacement degrees of ..... Patnaik S N, Yadagiri S 1976 Frequency analysis of structures.

  17. Modern methods of wine quality analysis

    Directory of Open Access Journals (Sweden)

    Галина Зуфарівна Гайда

    2015-06-01

Full Text Available In this paper, physical-chemical and enzymatic methods for the quantitative analysis of the basic wine components are reviewed. The results of the authors' own experiments on the development of enzyme- and cell-based amperometric sensors for ethanol, lactate, glucose and arginine are presented.

  18. Factors Affecting Green Residential Building Development: Social Network Analysis

    Directory of Open Access Journals (Sweden)

    Xiaodong Yang

    2018-05-01

Full Text Available Green residential buildings (GRBs) are one of the effective practices of energy saving and emission reduction in the construction industry. However, many real estate developers in China are less willing to develop GRBs because of the factors affecting green residential building development (GRBD). In order to promote the sustainable development of GRBs in China, this paper, from the perspective of real estate developers, identifies the influential and critical factors affecting GRBD, using the method of social network analysis (SNA). Firstly, 14 factors affecting GRBD are determined from 64 preliminary factors of three main elements, and the framework is established. Secondly, the relationships between the 14 factors are analyzed by SNA. Finally, four critical factors for GRBD, namely the local economic development level, the development strategy and innovation orientation, the developer's acknowledgement and positioning for GRBD, and the experience and ability for GRBD, are identified by the social network centrality test. The findings illustrate the key issues that affect the development of GRBs, and provide references for policy making by the government and strategy formulation by real estate developers.
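As a sketch of the centrality screening step, the snippet below builds a small directed influence network with networkx and ranks the nodes by degree and betweenness centrality; the factor labels and links are illustrative placeholders, not the study's actual 14-factor network or its test.

```python
import networkx as nx

# Illustrative influence links between hypothetical factors affecting GRBD;
# an edge (a, b) means factor a is judged to influence factor b.
edges = [
    ("local economic development level", "developer acknowledgement of GRBD"),
    ("local economic development level", "development strategy and innovation orientation"),
    ("government incentive policy", "development strategy and innovation orientation"),
    ("development strategy and innovation orientation", "experience and ability for GRBD"),
    ("developer acknowledgement of GRBD", "experience and ability for GRBD"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Rank factors by degree centrality; betweenness gives a complementary view of which
# factors broker influence between the others.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
for factor in sorted(degree, key=degree.get, reverse=True):
    print(f"{factor}: degree={degree[factor]:.2f}, betweenness={betweenness[factor]:.2f}")
```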

  19. Erikson Psychosocial Stage Inventory: A Factor Analysis

    Science.gov (United States)

    Gray, Mary McPhail; And Others

    1986-01-01

The 72-item Erikson Psychosocial Stage Inventory (EPSI) was factor analyzed for a group of 534 university freshmen and sophomore students. Seven factors emerged, which were labeled Initiative, Industry, Identity, Friendship, Dating, Goal Clarity, and Self-Confidence. Items representing Erikson's factors, Trust and Autonomy, were dispersed across…

  20. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
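As an illustration of the first smoother in the list, the sketch below uses the LOWESS implementation in statsmodels to rank the inputs of a toy model by how much output variance each one explains on its own; this single-predictor ranking is only a simplified stand-in for the stepwise multi-predictor procedures described in the paper.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def smoothing_sensitivity(inputs, output, frac=0.5):
    """Rank model inputs by the fraction of output variance captured by a locally
    weighted regression (LOESS/LOWESS) of the output on each input separately.
    """
    scores = {}
    for name, xi in inputs.items():
        fitted = lowess(output, xi, frac=frac, return_sorted=False)
        scores[name] = 1.0 - np.var(output - fitted) / np.var(output)
    return dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# toy model: strongly nonlinear in x1, weakly nonlinear in x2, no effect of x3
rng = np.random.default_rng(0)
n = 500
inputs = {"x1": rng.uniform(-1, 1, n), "x2": rng.uniform(-1, 1, n), "x3": rng.uniform(-1, 1, n)}
output = np.sin(3 * inputs["x1"]) + 0.3 * inputs["x2"] ** 2 + 0.05 * rng.normal(size=n)
print(smoothing_sensitivity(inputs, output))  # the nonlinear effect of x1 dominates
```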

  1. Economic analysis of alternative LLW disposal methods

    International Nuclear Information System (INIS)

    Foutes, C.E.

    1987-01-01

The Environmental Protection Agency (EPA) has evaluated the costs and benefits of alternative disposal technologies as part of its program to develop generally applicable environmental standards for the land disposal of low-level radioactive waste (LLW). Costs, population health effects and Critical Population Group (CPG) exposures resulting from alternative waste treatment and disposal methods were developed and input into the analysis. The cost-effectiveness analysis took into account a number of waste streams, hydrogeologic and climatic region settings, and waste treatment and disposal methods. Total costs of each level of a standard included costs for packaging, processing, transportation, and burial of waste. Benefits are defined in terms of reductions in the general population health risk (expected fatal cancers and genetic effects) evaluated over 10,000 years. A cost-effectiveness ratio was calculated for each alternative standard. This paper describes the alternatives considered and preliminary results of the cost-effectiveness analysis

  2. Reliability and risk analysis methods research plan

    International Nuclear Information System (INIS)

    1984-10-01

    This document presents a plan for reliability and risk analysis methods research to be performed mainly by the Reactor Risk Branch (RRB), Division of Risk Analysis and Operations (DRAO), Office of Nuclear Regulatory Research. It includes those activities of other DRAO branches which are very closely related to those of the RRB. Related or interfacing programs of other divisions, offices and organizations are merely indicated. The primary use of this document is envisioned as an NRC working document, covering about a 3-year period, to foster better coordination in reliability and risk analysis methods development between the offices of Nuclear Regulatory Research and Nuclear Reactor Regulation. It will also serve as an information source for contractors and others to more clearly understand the objectives, needs, programmatic activities and interfaces together with the overall logical structure of the program

  3. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.

  4. Spectroscopic Chemical Analysis Methods and Apparatus

    Science.gov (United States)

    Hug, William F. (Inventor); Reid, Ray D. (Inventor); Bhartia, Rohit (Inventor); Lane, Arthur L. (Inventor)

    2018-01-01

    Spectroscopic chemical analysis methods and apparatus are disclosed which employ deep ultraviolet (e.g. in the 200 nm to 300 nm spectral range) electron beam pumped wide bandgap semiconductor lasers, incoherent wide bandgap semiconductor light emitting devices, and hollow cathode metal ion lasers to perform non-contact, non-invasive detection of unknown chemical analytes. These deep ultraviolet sources enable dramatic size, weight and power consumption reductions of chemical analysis instruments. In some embodiments, Raman spectroscopic detection methods and apparatus use ultra-narrow-band angle tuning filters, acousto-optic tuning filters, and temperature tuned filters to enable ultra-miniature analyzers for chemical identification. In some embodiments Raman analysis is conducted along with photoluminescence spectroscopy (i.e. fluorescence and/or phosphorescence spectroscopy) to provide high levels of sensitivity and specificity in the same instrument.

  5. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis of the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, and the KENO-IV code shows conservative results when the generalized geometry option is not used. (author)

  6. Multiple timescale analysis and factor analysis of energy ecological footprint growth in China 1953-2006

    International Nuclear Information System (INIS)

    Chen Chengzhong; Lin Zhenshan

    2008-01-01

Scientific analysis of energy consumption and its influencing factors is of great importance for energy strategy and policy planning. The energy consumption in China in 1953-2006 is estimated by applying the energy ecological footprint (EEF) method, and the fluctuation periods of the annual growth rate of China's per capita EEF (EEFcpc) are analyzed with the empirical mode decomposition (EMD) method in this paper. EEF intensity is analyzed to depict energy efficiency in China. The main timescales of the 37 factors that affect the annual growth rate of EEFcpc are also discussed based on the EMD and factor analysis methods. Results show three obvious undulation cycles of the annual growth rate of EEFcpc, i.e., 4.6, 14.4 and 34.2 years over the last 53 years. The analysis findings from the common synthesized factors of the IMF1, IMF2 and IMF3 timescales of the 37 factors suggest that China's energy policy-makers should attach more importance to stabilizing economic growth, optimizing industrial structure, regulating domestic petroleum exploitation and improving transportation efficiency

  7. Towards automatic analysis of dynamic radionuclide studies using principal-components factor analysis

    International Nuclear Information System (INIS)

    Nigran, K.S.; Barber, D.C.

    1985-01-01

A method is proposed for automatic analysis of dynamic radionuclide studies using the mathematical technique of principal-components factor analysis. This method is considered as a possible alternative to the conventional manual regions-of-interest method widely used. The method emphasises the importance of introducing a priori information into the analysis about the physiology of at least one of the functional structures in a study. Information is added by using suitable mathematical models to describe the underlying physiological processes. A single physiological factor is extracted representing the particular dynamic structure of interest. Two spaces, 'study space, S' and 'theory space, T', are defined in the formation of the concept of intersection of spaces. A one-dimensional intersection space is computed. An example from a dynamic Tc-99m DTPA kidney study is used to demonstrate the principle inherent in the method proposed. The method requires no correction for the blood background activity, which is necessary when processing by the manual method. The careful isolation of the kidney by means of a region of interest is not required. The method is therefore less prone to operator influence and can be automated. (author)
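A minimal numerical sketch of the idea with synthetic data: principal components of the frame-by-frame data define the study space, and a model uptake curve plays the role of the theory space; the intersection is approximated by the combination of components closest to the model. The renal model curve, matrix sizes and noise level are all illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np

n_pixels, n_frames = 400, 60
t = np.linspace(0, 20, n_frames)                       # minutes
uptake_model = t * np.exp(-t / 5.0)                    # assumed parenchymal model curve
uptake_model /= np.linalg.norm(uptake_model)

rng = np.random.default_rng(1)
background = np.outer(rng.uniform(0.5, 1.0, n_pixels), np.exp(-t / 30.0))
kidney = np.outer(rng.uniform(0.0, 1.0, n_pixels), uptake_model)
data = background + kidney + 0.02 * rng.normal(size=(n_pixels, n_frames))

# Study space: leading principal components of the temporal behaviour
data_centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(data_centered, full_matrices=False)
study_space = vt[:3]                                    # first three temporal components

# Intersection with the theory space: the combination of components closest to the model
coeffs, *_ = np.linalg.lstsq(study_space.T, uptake_model, rcond=None)
physiological_factor = coeffs @ study_space
print(np.dot(physiological_factor / np.linalg.norm(physiological_factor), uptake_model))
```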

  8. PIXE - a new method for elemental analysis

    International Nuclear Information System (INIS)

    Johansson, S.A.E.

    1983-01-01

With elemental analysis we mean the determination of which chemical elements are present in a sample and of their concentrations. This is an old and important problem in chemistry. The earliest methods were purely chemical, and many such methods are still used. However, various methods based on physical principles have gradually become more and more important. One such method is neutron activation. When the sample is bombarded with neutrons it becomes radioactive, and the various radioactive isotopes produced can be identified by the radiation they emit. From the measured intensity of the radiation one can calculate how much of a certain element is present in the sample. Another possibility is to study the light emitted when the sample is excited in various ways. A spectroscopic investigation of the light can identify the chemical elements and also allows a determination of their concentration in the sample. In the same way, if a sample can be brought to emit X-rays, this radiation is also characteristic of the elements present and can be used to determine the elemental concentrations. One such X-ray method which has been developed recently is PIXE. The name is an acronym for Particle Induced X-ray Emission and indicates the principle of the method. Particles in this context mean heavy charged particles such as protons and alpha particles of rather high energy. Hence, in PIXE analysis the sample is irradiated in the beam of an accelerator and the emitted X-rays are studied. (author)

  9. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
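The multiplication the abstract describes can be sketched in a few lines. The nominal rates, the multiplier values and the adjustment applied when several PSFs are degraded are given here only in their commonly published form and should be treated as illustrative, not as the authoritative SPAR-H worksheet values.

```python
def spar_h_hep(nominal_hep, psf_multipliers):
    """Sketch of SPAR-H style quantification: a nominal error rate scaled by
    performance shaping factor (PSF) multipliers.

    nominal_hep: an assumed nominal rate for a diagnosis or action task.
    psf_multipliers: multipliers assigned to PSFs such as available time, stress,
    complexity, experience/training, procedures, ergonomics, fitness for duty,
    and work processes. The adjustment used when several PSFs are degraded
    (multiplier > 1) follows the commonly published form and is illustrative only.
    """
    composite = 1.0
    degraded_psfs = 0
    for m in psf_multipliers:
        composite *= m
        if m > 1.0:
            degraded_psfs += 1
    if degraded_psfs >= 3:
        hep = (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
    else:
        hep = nominal_hep * composite
    return min(hep, 1.0)

# illustrative only: an assumed nominal rate of 1e-3 for an action task,
# with degraded stress, complexity and procedures multipliers
print(spar_h_hep(1e-3, [2.0, 5.0, 5.0]))
```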

  10. Dependability Analysis Methods For Configurable Software

    International Nuclear Information System (INIS)

    Dahll, Gustav; Pulkkinen, Urho

    1996-01-01

    Configurable software systems are systems which are built up by standard software components in the same way as a hardware system is built up by standard hardware components. Such systems are often used in the control of NPPs, also in safety related applications. A reliability analysis of such systems is therefore necessary. This report discusses what configurable software is, and what is particular with respect to reliability assessment of such software. Two very commonly used techniques in traditional reliability analysis, viz. failure mode, effect and criticality analysis (FMECA) and fault tree analysis are investigated. A real example is used to illustrate the discussed methods. Various aspects relevant to the assessment of the software reliability in such systems are discussed. Finally some models for quantitative software reliability assessment applicable on configurable software systems are described. (author)

  11. Nominal Performance Biosphere Dose Conversion Factor Analysis

    International Nuclear Information System (INIS)

    Wasiolek, M.

    2000-01-01

The purpose of this report was to document the process leading to the development of the Biosphere Dose Conversion Factors (BDCFs) for the postclosure nominal performance of the potential repository at Yucca Mountain. The BDCF calculations concerned twenty-four radionuclides. This selection included sixteen radionuclides that may be significant nominal performance dose contributors during the compliance period of up to 10,000 years, five additional radionuclides of importance for up to 1 million years postclosure, and three relatively short-lived radionuclides important for the human intrusion scenario. Radionuclide buildup in soil caused by previous irrigation with contaminated groundwater was taken into account in the BDCF development. The effect of climate evolution, from the current arid conditions to a wetter and cooler climate, on the BDCF values was evaluated. The analysis included consideration of the different exposure pathways' contributions to the BDCFs. Calculations of nominal performance BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. BDCFs for the nominal performance, when combined with the concentrations of radionuclides in groundwater, allow calculation of potential radiation doses to the receptor of interest. Calculated estimates of radionuclide concentration in groundwater result from the saturated zone modeling. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA) to calculate doses to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain
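The last step the abstract describes, combining BDCFs with groundwater concentrations, is a simple sum over radionuclides; a sketch with placeholder numbers and units (not values from the analysis):

```python
# Dose to the receptor = sum over radionuclides of groundwater concentration times the
# radionuclide-specific BDCF. All values below are illustrative placeholders.

bdcf_sv_per_bq_per_m3 = {"Tc-99": 1.0e-9, "I-129": 5.0e-8, "Np-237": 2.0e-7}   # assumed
groundwater_bq_per_m3 = {"Tc-99": 300.0, "I-129": 10.0, "Np-237": 0.5}          # assumed

annual_dose_sv = sum(
    groundwater_bq_per_m3[n] * bdcf_sv_per_bq_per_m3[n] for n in bdcf_sv_per_bq_per_m3
)
print(f"annual dose to the receptor: {annual_dose_sv:.2e} Sv")
```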

  12. Disruptive Event Biosphere Dose Conversion Factor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    M. Wasiolek

    2000-12-28

The purpose of this report was to document the process leading to, and the results of, development of radionuclide-, exposure scenario-, and ash thickness-specific Biosphere Dose Conversion Factors (BDCFs) for the postulated postclosure extrusive igneous event (volcanic eruption) at Yucca Mountain. BDCF calculations were done for seventeen radionuclides. The selection of radionuclides included those that may be significant dose contributors during the compliance period of up to 10,000 years, as well as radionuclides of importance for up to 1 million years postclosure. The approach documented in this report takes into account human exposure during three different phases at the time of, and after, volcanic eruption. Calculations of disruptive event BDCFs used the GENII-S computer code in a series of probabilistic realizations to propagate the uncertainties of input parameters into the output. The pathway analysis included consideration of the different exposure pathways' contributions to the BDCFs. BDCFs for volcanic eruption, when combined with the concentration of radioactivity deposited by eruption on the soil surface, allow calculation of potential radiation doses to the receptor of interest. Calculation of radioactivity deposition is outside the scope of this report and so is the transport of contaminated ash from the volcano to the location of the receptor. The integration of the biosphere modeling results (BDCFs) with the outcomes of the other component models is accomplished in the Total System Performance Assessment (TSPA), in which doses are calculated to the receptor of interest from radionuclides postulated to be released to the environment from the potential repository at Yucca Mountain.

  13. Absorption correction factor in X-ray fluorescent quantitative analysis

    International Nuclear Information System (INIS)

    Pimjun, S.

    1994-01-01

An experiment on the absorption correction factor in X-ray fluorescence quantitative analysis was carried out. Standard samples were prepared from mixtures of Fe2O3 and tapioca flour at various concentrations of Fe2O3 ranging from 5% to 25%. Unknown samples were kaolin containing 3.5% to 50% of Fe2O3. Kaolin samples were diluted with tapioca flour in order to reduce the absorption of the FeKα line and make them easier to prepare. Pressed samples with 0.150 /cm2 and 2.76 cm in diameter were used in the experiment. The absorption correction factor is related to the total mass absorption coefficient (χ), which varies with sample composition. In a known sample, χ can be calculated conveniently by formula; in an unknown sample, χ can be determined by the emission-transmission method. It was found that the relationship between the corrected FeKα intensity and the Fe2O3 content of these samples was linear. This result indicates that the correction factor can be used to improve the accuracy of the X-ray intensity. Therefore, this correction factor is essential in the quantitative analysis of the elements in any sample by the X-ray fluorescence technique
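A sketch of the textbook form of these two steps (a transmission measurement to obtain χm, then the finite-thickness correction); the formulas and counts are the standard emission-transmission relations, stated here as an illustration rather than the paper's exact procedure:

```python
import math

def chi_m_from_emission_transmission(i_sample, i_sample_plus_target, i_target):
    """Total attenuation (chi * m) from the emission-transmission measurement.

    The analyte line is measured from the specimen alone, from the specimen with a
    thick target behind it, and from the target alone; the specimen transmission
    T = (I_sample+target - I_sample) / I_target gives chi*m = -ln(T).
    """
    transmission = (i_sample_plus_target - i_sample) / i_target
    return -math.log(transmission)

def absorption_correction(chi_m):
    """Correction factor for a sample of finite mass per unit area:
    A = (1 - exp(-chi*m)) / (chi*m); corrected intensity = measured intensity / A."""
    return (1.0 - math.exp(-chi_m)) / chi_m

# illustrative counts only
chi_m = chi_m_from_emission_transmission(i_sample=5200, i_sample_plus_target=12100, i_target=11500)
a = absorption_correction(chi_m)
print(chi_m, a, 5200 / a)   # corrected FeKa intensity
```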

  14. Studies on the instrumental neutron activation analysis by cadmium ratio method and pair comparator method

    Energy Technology Data Exchange (ETDEWEB)

    Chao, H E; Lu, W D; Wu, S C

    1977-12-01

The cadmium ratio method and the pair comparator method provide a solution for the effects on the effective activation factors resulting from the variation of the neutron spectrum at different irradiation positions, as usually encountered in the single comparator method. The relations between the activation factors and the neutron spectrum, in terms of the cadmium ratio of the comparator Au or of the activation factor of the Co-Au pair, have been determined for the elements Sc, Cr, Mn, Co, La, Ce, Sm, and Th. The activation factors of the elements at any irradiation position can then be obtained from the cadmium ratio of the comparator and/or the activation factor of the comparator pair. The relations determined should be applicable to different reactors and/or different positions in a reactor. It is shown that, for the isotopes 46Sc, 51Cr, 56Mn, 60Co, 140La, 141Ce, 153Sm and 233Pa, the thermal neutron activation factors determined by these two methods were generally in agreement with theoretical values. Their I0/σth values also appeared to agree with literature values. The methods were applied to determine the contents of the elements Sc, Cr, Mn, La, Ce, Sm, and Th in U.S.G.S. Standard Rock G-2, and the results were also in agreement with literature values. The cadmium ratio method and the pair comparator method improve the single comparator method and are more suitable for the multi-element analysis of a large number of samples.
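In simplified Høgdahl-type form, the comparator's cadmium ratio fixes the thermal-to-epithermal flux ratio, which in turn fixes the effective activation factor of any other element; the sketch below ignores the cadmium transmission factor and the epithermal shape correction, and the nuclear data values are rounded and illustrative rather than the paper's.

```python
# Sketch of how the comparator's cadmium ratio characterises the spectrum that the
# effective activation factors depend on (simplified, illustrative nuclear data).

Q0_AU = 15.7          # I0/sigma0 for Au-197 (approximate)

def flux_ratio_from_cd_ratio(cd_ratio_au):
    """Thermal-to-epithermal flux ratio f from the Au comparator's cadmium ratio,
    ignoring the cadmium transmission factor: f ~ (R_Cd - 1) * Q0(Au)."""
    return (cd_ratio_au - 1.0) * Q0_AU

def effective_activation_factor(sigma0_barn, q0, f):
    """Reaction rate per atom per unit thermal flux: sigma0 * (1 + Q0 / f).
    This is the position-dependent quantity that is tabulated against the cadmium
    ratio / pair-comparator measurements."""
    return sigma0_barn * (1.0 + q0 / f)

f = flux_ratio_from_cd_ratio(cd_ratio_au=2.7)                          # a typical in-core value
print(f, effective_activation_factor(sigma0_barn=37.2, q0=2.0, f=f))   # Co-59, approximate data
```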

  15. Novel approach in quantitative analysis of shearography method

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2002-01-01

The application of laser interferometry in industrial non-destructive testing and material characterization is becoming more prevalent, since this method provides non-contact, full-field inspection of the test object. However, its application has so far been limited to qualitative analysis; the current trend is to develop the method further by introducing quantitative analysis, which attempts to characterise the examined defect in detail, and this is the design feature for the range of object sizes to be examined. The growing commercial demand for quantitative analysis in NDT and material characterization is determining the quality of optical and analysis instruments. However, very little attention is currently being paid to understanding, quantifying and compensating for the numerous error sources which are a function of the interferometer. This paper presents a comparison of measurement analysis using the established theoretical approach and a new approach that takes into account the divergence of the illumination and other geometrical factors. The difference between the measurement systems can be associated with the error factor. (Author)

  16. Cask crush pad analysis using detailed and simplified analysis methods

    International Nuclear Information System (INIS)

    Uldrich, E.D.; Hawkes, B.D.

    1997-01-01

A crush pad has been designed and analyzed to absorb the kinetic energy of a hypothetically dropped spent nuclear fuel shipping cask into a 44-ft deep cask unloading pool at the Fluorinel and Storage Facility (FAST). This facility, located at the Idaho Chemical Processing Plant (ICPP) at the Idaho National Engineering and Environmental Laboratory (INEEL), is a US Department of Energy site. The basis for this study is an analysis by Uldrich and Hawkes. The purpose of this analysis was to evaluate various hypothetical cask drop orientations to ensure that the crush pad design was adequate and that the cask deceleration at impact was less than 100 g. It is demonstrated herein that a large spent fuel shipping cask, when dropped onto a foam crush pad, can be analyzed either by hand methods or by sophisticated dynamic finite element analysis using computer codes such as ABAQUS. Results from the two methods are compared to evaluate the accuracy of the simplified hand analysis approach

  17. Advanced Analysis Methods in High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Pushpalatha C. Bhat

    2001-10-03

    During the coming decade, high energy physics experiments at the Fermilab Tevatron and around the globe will use very sophisticated equipment to record unprecedented amounts of data in the hope of making major discoveries that may unravel some of Nature's deepest mysteries. The discovery of the Higgs boson and signals of new physics may be around the corner. The use of advanced analysis techniques will be crucial in achieving these goals. The author discusses some of the novel methods of analysis that could prove to be particularly valuable for finding evidence of any new physics, for improving precision measurements and for exploring parameter spaces of theoretical models.

  18. Replica Analysis for Portfolio Optimization with Single-Factor Model

    Science.gov (United States)

    Shinzato, Takashi

    2017-06-01

    In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.

  19. Analysis of the factors influencing coagulation tests and routine blood tests, and of methods for controlling variation

    Institute of Scientific and Technical Information of China (English)

    黄祥丽; 代治国

    2016-01-01

    Objective: To investigate the factors influencing coagulation tests and routine blood analysis and the methods for controlling variation. Methods: Sixty healthy adult volunteers recruited in our hospital from January to May 2015 underwent routine coagulation and blood tests, and the results were statistically analyzed. Results: With the tourniquet applied for 3 minutes, the activated partial thromboplastin time (APTT), thrombin time (TT) and prothrombin time (PT) were (24.60±1.85) s, (16.54±4.18) s and (9.80±2.96) s respectively, all significantly lower than the levels measured immediately after tourniquet application (P<0.05). With the tourniquet applied for 3 minutes, red blood cells (RBC) and hemoglobin (HGB) were (5.28±0.64)×10^12/L and (155.97±6.75) g/L respectively, both significantly higher than the immediate levels, while white blood cells (WBC) and platelets (PLT) were (4.26±0.28)×10^9/L and (172.16±8.95)×10^9/L. After standing for 2 hours, APTT before and after centrifugation was (37.18±2.68) s and (30.05±3.19) s respectively, a significant difference (P<0.05), whereas plasma fibrinogen (FIB), TT and PT showed no significant differences (P>0.05); after standing for 4 hours, APTT, TT and PT before and after centrifugation all differed significantly (P<0.05). After specimens had been stored for 24 hours, RBC and PLT were (4.17±0.30)×10^12/L and (165.29±5.58)×10^9/L, significantly lower than at the other storage time points, while WBC was (5.32±0.26)×10^9/L, higher than at the other time points (P<0.05). Conclusion: Specimen collection and the time and manner of specimen storage affect coagulation tests and routine blood analysis; attention to methods of controlling variation can greatly improve the reliability of test data.

  20. Spatial Analysis Methods of Road Traffic Collisions

    DEFF Research Database (Denmark)

    Loo, Becky P. Y.; Anderson, Tessa Kate

    Spatial Analysis Methods of Road Traffic Collisions centers on the geographical nature of road crashes, and uses spatial methods to provide a greater understanding of the patterns and processes that cause them. Written by internationally known experts in the field of transport geography, the book...... outlines the key issues in identifying hazardous road locations (HRLs), considers current approaches used for reducing and preventing road traffic collisions, and outlines a strategy for improved road safety. The book covers spatial accuracy, validation, and other statistical issues, as well as link...

  1. Simple gas chromatographic method for furfural analysis.

    Science.gov (United States)

    Gaspar, Elvira M S M; Lopes, João F

    2009-04-03

    A new, simple gas chromatographic method was developed for the direct analysis of 5-hydroxymethylfurfural (5-HMF), 2-furfural (2-F) and 5-methylfurfural (5-MF) in liquid and water-soluble foods, using direct immersion SPME coupled to GC-FID and/or GC-TOF-MS. The fiber (DVB/CAR/PDMS) conditions were optimized: pH effect, temperature, adsorption and desorption times. The method is simple and accurate (RSD …); the analysis of furfurals will contribute to characterising and quantifying their presence in the human diet.

  2. Factors influencing the contraceptive method choice: a university hospital experience

    Science.gov (United States)

    Kahraman, Korhan; Göç, Göksu; Taşkın, Salih; Haznedar, Pınar; Karagözlü, Selen; Kale, Burak; Kurtipek, Zeynep; Özmen, Batuhan

    2012-01-01

    Objective To analyze the factors influencing women's choice of contraceptive methods. Material and Methods A total of 4022 women admitted to our clinic over one year were the subjects of this study of contraception choices. The relationship between current contraceptive choice and age, marital status, educational level, gravidity and induced abortions was evaluated. Results Current users of any contraceptive method made up thirty-three percent of the entire study population. The most preferred method of contraception was the intrauterine device (46.4%), followed by condom (19.2%), coitus interruptus (16.4%), tubal sterilization (11%), oral contraceptives (5.7%) and lastly the “other methods” consisting of depot injectables and implants (1.2%). Among the contraceptive methods, the condom was used mostly by the younger age group (OR:0.956, 95% CI:0.936–0.976, p<0.001), while tubal sterilization was preferred mainly by the older population (p<0.001, OR:1.091, 95% CI:1.062–1.122). Women with a higher educational level were found to use OC (76.3%, OR:5.970, 95% CI:3.233–11.022), tubal sterilization (59.6%, OR:4.110, 95% CI:2.694–6.271) and other methods (62.5%, OR:3.279, 95% CI:1.033–10.402) more commonly than the lower educational group (p<0.001). Conclusion These results demonstrate that both the rate of contraception utilization and the usage of more effective methods of contraception need to be increased by providing better family planning systems and counselling opportunities. PMID:24592017
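
    The odds ratios and confidence intervals quoted above are the typical output of a logistic regression. As an illustration of how such estimates are produced, the sketch below fits a logistic model with statsmodels on synthetic data; the variable names and values are invented and are not the study's dataset.

```python
# Sketch: odds ratios with 95% CIs from a logistic regression, as typically
# reported for predictors of contraceptive choice. Data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 50, n),
    "higher_education": rng.integers(0, 2, n),
})
# Synthetic outcome: probability of choosing a given method rises with education
logit_p = -2.0 + 0.03 * (df["age"] - 30) + 1.2 * df["higher_education"]
df["uses_method"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["age", "higher_education"]])
res = sm.Logit(df["uses_method"], X).fit(disp=False)

odds_ratios = np.exp(res.params)
conf_int = np.exp(res.conf_int())            # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```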

  3. Classification analysis of organization factors related to system safety

    International Nuclear Information System (INIS)

    Liu Huizhen; Zhang Li; Zhang Yuling; Guan Shihua

    2009-01-01

    This paper analyzes the different types of organization factors that influence system safety. Organization factors can be divided into interior and exterior organization factors. The latter include political, economic, technical, legal, social-cultural and geographical factors, and the relationships among different interest groups. The former include organization culture, communication, decision making, training, process, supervision and management, and organization structure. This paper focuses on the description of these organization factors; their classification analysis is the preliminary step toward quantitative analysis. (authors)

  4. Neutron activation analysis: principle and methods

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.

    2006-01-01

    Neutron activation analysis (NAA) is a powerful isotope-specific nuclear analytical technique for the simultaneous determination of the elemental composition of major, minor and trace elements in diverse matrices. The technique is capable of yielding high analytical sensitivity and low detection limits (ppm to ppb). Due to the high penetration power of neutrons and gamma rays, NAA experiences negligible matrix effects in samples of different origins. Depending on the sample matrix and the element of interest, the technique is used either non-destructively, known as instrumental neutron activation analysis (INAA), or through chemical NAA methods. The present article describes the principle of NAA and its different methods, and gives an overview of some applications in fields such as environment, biology, geology, material sciences, nuclear technology and forensic sciences. (author)

  5. Digital dream analysis: a revised method.

    Science.gov (United States)

    Bulkeley, Kelly

    2014-10-01

    This article demonstrates the use of a digital word search method designed to provide greater accuracy, objectivity, and speed in the study of dreams. A revised template of 40 word search categories, built into the website of the Sleep and Dream Database (SDDb), is applied to four "classic" sets of dreams: The male and female "Norm" dreams of Hall and Van de Castle (1966), the "Engine Man" dreams discussed by Hobson (1988), and the "Barb Sanders Baseline 250" dreams examined by Domhoff (2003). A word search analysis of these original dream reports shows that a digital approach can accurately identify many of the same distinctive patterns of content found by previous investigators using much more laborious and time-consuming methods. The results of this study emphasize the compatibility of word search technologies with traditional approaches to dream content analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
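
    The word-search idea described above is straightforward to sketch in code: tokenize each report and count how many reports contain at least one word from each category. The category word lists below are invented stand-ins, not the SDDb's 40-category template.

```python
# Sketch of a digital word search over dream reports: the fraction of reports
# containing at least one word from each category. Category lists are invented
# examples, not the SDDb template.
import re
from collections import Counter

categories = {
    "emotion_fear": {"afraid", "scared", "terrified", "fear"},
    "characters_family": {"mother", "father", "sister", "brother"},
    "movement": {"running", "falling", "flying", "walking"},
}

def category_hit(report: str, words: set) -> bool:
    tokens = set(re.findall(r"[a-z']+", report.lower()))
    return bool(tokens & words)

def usage_frequencies(reports: list) -> Counter:
    """Fraction of reports containing at least one word from each category."""
    counts = Counter()
    for report in reports:
        for name, words in categories.items():
            if category_hit(report, words):
                counts[name] += 1
    return Counter({name: counts[name] / len(reports) for name in categories})

reports = [
    "I was running from something and felt terrified.",
    "My mother and I were walking by the sea.",
]
print(usage_frequencies(reports))
```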

  6. Numerical methods and analysis of multiscale problems

    CERN Document Server

    Madureira, Alexandre L

    2017-01-01

    This book is about numerical modeling of multiscale problems, and introduces several asymptotic analysis and numerical techniques which are necessary for a proper approximation of equations that depend on different physical scales. Aimed at advanced undergraduate and graduate students in mathematics, engineering and physics – or researchers seeking a no-nonsense approach –, it discusses examples in their simplest possible settings, removing mathematical hurdles that might hinder a clear understanding of the methods. The problems considered are given by singular perturbed reaction advection diffusion equations in one and two-dimensional domains, partial differential equations in domains with rough boundaries, and equations with oscillatory coefficients. This work shows how asymptotic analysis can be used to develop and analyze models and numerical methods that are robust and work well for a wide range of parameters.

  7. Qualitative data analysis a methods sourcebook

    CERN Document Server

    Miles, Matthew B; Saldana, Johnny

    2014-01-01

    The Third Edition of Miles & Huberman's classic research methods text is updated and streamlined by Johnny Saldaña, author of The Coding Manual for Qualitative Researchers. Several of the data display strategies from previous editions are now presented in re-envisioned and reorganized formats to enhance reader accessibility and comprehension. The Third Edition's presentation of the fundamentals of research design and data management is followed by five distinct methods of analysis: exploring, describing, ordering, explaining, and predicting. Miles and Huberman's original research studies are profiled and accompanied with new examples from Saldaña's recent qualitative work. The book's most celebrated chapter, "Drawing and Verifying Conclusions," is retained and revised, and the chapter on report writing has been greatly expanded and is now called "Writing About Qualitative Research." Comprehensive and authoritative, Qualitative Data Analysis has been elegantly revised for a new generation of qualitative r...

  8. An exergy method for compressor performance analysis

    Energy Technology Data Exchange (ETDEWEB)

    McGovern, J A; Harte, S [Trinity Coll., Dublin (Ireland)]

    1995-07-01

    An exergy method for compressor performance analysis is presented. The purpose of this is to identify and quantify defects in the use of a compressor's shaft power. This information can be used as the basis for compressor design improvements. The defects are attributed to friction, irreversible heat transfer, fluid throttling, and irreversible fluid mixing. They are described, on a common basis, as exergy destruction rates and their locations are identified. The method can be used with any type of positive displacement compressor. It is most readily applied where a detailed computer simulation program is available for the compressor. An analysis of an open reciprocating refrigeration compressor that used R12 refrigerant is given as an example. The results that are presented consist of graphs of the instantaneous rates of exergy destruction according to the mechanisms involved, a pie chart of the breakdown of the average shaft power wastage by mechanism, and a pie chart with a breakdown by location. (author)
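
    As a small illustration of the bookkeeping such an analysis performs, the sketch below evaluates the exergy destruction rate associated with one of the listed mechanisms, irreversible heat transfer across a finite temperature difference, via the Gouy-Stodola relation X_dest = T0·S_gen. The temperatures and heat flow are arbitrary example values, not results from the cited compressor analysis.

```python
# Sketch: exergy destruction rate due to irreversible heat transfer across a
# finite temperature difference, X_dest = T0 * S_gen. Numbers are arbitrary.
T0 = 298.15          # dead-state (ambient) temperature, K
T_hot = 380.0        # gas-side temperature, K
T_cold = 320.0       # cylinder-wall temperature, K
Q_dot = 0.8          # heat transfer rate, kW

S_gen = Q_dot * (1.0 / T_cold - 1.0 / T_hot)   # entropy generation rate, kW/K
X_dest = T0 * S_gen                            # exergy destruction rate, kW
print(f"entropy generation: {S_gen * 1000:.3f} W/K, exergy destroyed: {X_dest * 1000:.1f} W")
```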

  9. Methods for genetic linkage analysis using trisomies.

    OpenAIRE

    Feingold, E; Lamb, N E; Sherman, S L

    1995-01-01

    Certain genetic disorders are rare in the general population, but more common in individuals with specific trisomies. Examples of this include leukemia and duodenal atresia in trisomy 21. This paper presents a linkage analysis method for using trisomic individuals to map genes for such traits. It is based on a very general gene-specific dosage model that posits that the trait is caused by specific effects of different alleles at one or a few loci and that duplicate copies of "susceptibility" ...

  10. Machine Learning Methods for Production Cases Analysis

    Science.gov (United States)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of internal production network data were used for training and testing the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall and accuracy.
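
    A minimal sketch of the kind of pipeline described, k-Nearest Neighbors and Random Forest classifiers evaluated with precision, recall and accuracy, is shown below on synthetic data; the features and labels are generated artificially and are not the production-network descriptors used in the study.

```python
# Sketch: train k-NN and Random Forest classifiers for an event-classification
# task and report precision, recall and accuracy. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                  ("Random forest", RandomForestClassifier(n_estimators=200,
                                                           random_state=0))]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, y_pred):.3f}, "
          f"precision={precision_score(y_te, y_pred):.3f}, "
          f"recall={recall_score(y_te, y_pred):.3f}")
```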

  11. Safety relief valve alternate analysis method

    International Nuclear Information System (INIS)

    Adams, R.H.; Javid, A.; Khatua, T.P.

    1981-01-01

    An experimental test program was started in the United States in 1976 to define and quantify Safety Relief Valve (SRV) phenomena in General Electric Mark I suppression chambers. The testing considered several discharge devices and was used to correlate SRV load prediction models. The program was funded by utilities with Mark I containments and has resulted in a detailed SRV load definition as a portion of the Mark I containment program Load Definition Report (LDR). The US Nuclear Regulatory Commission (USNRC) has reviewed and approved the LDR SRV load definition. In addition, the USNRC has permitted calibration of structural models used for predicting torus response to SRV loads; model calibration is subject to confirmatory in-plant testing. The SRV methodology given in the LDR requires that transient dynamic pressures be applied to a torus structural model that includes a fluid added mass matrix. Preliminary evaluations of torus response have indicated order-of-magnitude conservatisms with respect to test results, which could result in unrealistic containment modifications. In addition, structural response trends observed in full-scale tests between cold pipe, first valve actuation and hot pipe, subsequent valve actuation conditions have not been duplicated using current analysis methods. It was suggested by others that an energy approach using current fluid models be utilized to define loads. An alternate SRV analysis method is defined to correct suppression chamber structural response to a level that permits economical but conservative design. Simple analogs are developed for the purpose of correcting the analytical response obtained from LDR analysis methods. The analogs evaluated considered forced-vibration and free-vibration structural response. The corrected response correlated well with in-plant test response, and the correlation of the analytical model at test conditions permits application of the alternate analysis method at design conditions. (orig./HP)

  12. Analysis and comparison of biometric methods

    OpenAIRE

    Zatloukal, Filip

    2011-01-01

    The thesis deals with biometrics and biometric systems and the possibility of using these systems in the enterprise. The aim of this study is an analysis and description of selected types of biometric identification methods and their advantages and shortcomings. The work is divided into two parts. The first part is theoretical and describes the basic concepts of biometrics, biometric identification criteria, currently used identification systems, the ways biometric systems are used, performance measurem...

  13. Statistical trend analysis methods for temporal phenomena

    International Nuclear Information System (INIS)

    Lehtinen, E.; Pulkkinen, U.; Poern, K.

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods

  14. Statistical trend analysis methods for temporal phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland)]; Poern, K. [Poern Consulting, Nykoeping (Sweden)]

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
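
    One of the non-parametric procedures named above, the Cox-Stuart sign test, is simple enough to sketch directly: pair each observation in the first half of a series with the corresponding observation in the second half, count the signs of the differences, and test the count against a Binomial(n, 1/2) null. The inter-event times below are synthetic and only illustrate the mechanics.

```python
# Sketch of the Cox-Stuart sign test for trend, applied to inter-event times of
# a point process (synthetic data). A surplus of positive differences
# x[i] - x[i + n//2] suggests shrinking intervals, i.e. an increasing event rate.
import numpy as np
from scipy.stats import binomtest

def cox_stuart(x):
    x = np.asarray(x, dtype=float)
    half = len(x) // 2
    diffs = x[:half] - x[-half:]          # pair first half with second half
    diffs = diffs[diffs != 0.0]           # ties are discarded
    n_pos = int(np.sum(diffs > 0))
    return binomtest(n_pos, n=len(diffs), p=0.5, alternative="two-sided").pvalue

rng = np.random.default_rng(1)
inter_event_times = rng.exponential(scale=np.linspace(10.0, 4.0, 60))  # shrinking intervals
print(f"Cox-Stuart p-value: {cox_stuart(inter_event_times):.4f}")
```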

  15. The Columbia Impairment Scale: Factor Analysis Using a Community Mental Health Sample

    Science.gov (United States)

    Singer, Jonathan B.; Eack, Shaun M.; Greeno, Catherine M.

    2011-01-01

    Objective: The objective of this study was to test the factor structure of the parent version of the Columbia Impairment Scale (CIS) in a sample of mothers who brought their children for community mental health (CMH) services (n = 280). Method: Confirmatory factor analysis (CFA) was used to test the fit of the hypothesized four-factor structure…

  16. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

    This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.

  17. Analysis Method for Integrating Components of Product

    International Nuclear Information System (INIS)

    Choi, Jun Ho; Lee, Kun Sang

    2017-01-01

    This paper presents some of the methods used to incorporate the parts constituting a product. A new relation function concept and its structure are introduced to analyze the relationships of component parts. This relation function has three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data. The priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect characteristic feature. This paper presents a design algorithm for component integration. This algorithm was applied to actual products, and the components inside the product were integrated. Therefore, the proposed algorithm was used to conduct research to improve the brake discs for bicycles. As a result, an improved product similar to the related function structure was actually created.

  18. Using BMDP and SPSS for a Q factor analysis.

    Science.gov (United States)

    Tanner, B A; Koning, S M

    1980-12-01

    While Euclidean distances and Q factor analysis may sometimes be preferred to correlation coefficients and cluster analysis for developing a typology, commercially available software does not always facilitate their use. Commands are provided for using BMDP and SPSS in a Q factor analysis with Euclidean distances.
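
    For readers without BMDP or SPSS, the core of a Q-type analysis can be sketched in Python: treat persons as the entities to be factored, build a person-by-person Euclidean distance matrix, double-centre it (as in classical multidimensional scaling) and extract its leading eigenvectors. This is only a bare sketch of the idea on synthetic data, not a replacement for the cited command files.

```python
# Sketch: Q-type analysis of persons from a cases-by-variables matrix using
# Euclidean distances. The squared-distance matrix is double-centred and its
# leading eigenvectors are taken as person "factors". Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 8))              # 12 persons x 8 variables

D = squareform(pdist(X, metric="euclidean"))
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
B = -0.5 * J @ (D ** 2) @ J               # double-centred squared distances

eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(np.clip(eigvals[order[:2]], 0, None))
print("Person loadings on the first two components:\n", np.round(loadings, 2))
```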

  19. Probabilistic methods in fire-risk analysis

    International Nuclear Information System (INIS)

    Brandyberry, M.D.

    1989-01-01

    The first part of this work outlines a method for assessing the frequency of ignition of a consumer product in a building and shows how the method would be used in an example scenario utilizing upholstered furniture as the product and radiant auxiliary heating devices (electric heaters, wood stoves) as the ignition source. Deterministic thermal models of the heat-transport processes are coupled with parameter uncertainty analysis of the models and with a probabilistic analysis of the events involved in a typical scenario. This leads to a distribution for the frequency of ignition for the product. In the second part, fire-risk analysis as currently used in nuclear plants is outlined along with a discussion of the relevant uncertainties. The use of the computer code COMPBRN in the fire-growth analysis is discussed, along with the use of response-surface methodology to quantify uncertainties in the code's use. Generalized response surfaces are developed for temperature versus time for a cable tray, as well as a surface for the hot gas layer temperature and depth for a room of arbitrary geometry within a typical nuclear power plant compartment. These surfaces are then used to simulate the cable tray damage time in a compartment fire experiment

  20. Project-Method Fit: Exploring Factors That Influence Agile Method Use

    Science.gov (United States)

    Young, Diana K.

    2013-01-01

    While the productivity and quality implications of agile software development methods (SDMs) have been demonstrated, research concerning the project contexts where their use is most appropriate has yielded less definitive results. Most experts agree that agile SDMs are not suited for all project contexts. Several project and team factors have been…

  1. Hierarchic Analysis Method to Evaluate Rock Burst Risk

    Directory of Open Access Journals (Sweden)

    Ming Ji

    2015-01-01

    Full Text Available In order to reasonably evaluate the risk of rock bursts in mines, the factors affecting rock bursts and the existing grading criteria for rock burst risk were studied. By building a hierarchic analysis model, the natural, technological, and management factors that influence rock bursts were analyzed, which determined the degree of each factor's influence (i.e., its weight) and a comprehensive index. The grade of rock burst risk was then assessed. The results showed that the assessment level generated by the model accurately reflected the actual degree of rock burst risk in mines. The model improves the operability and practicability of existing evaluation criteria and also enhances the accuracy and scientific basis of rock burst risk assessment.
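
    The weighting step of such a hierarchic analysis is usually performed in the style of the analytic hierarchy process: factor weights are taken from the principal eigenvector of a pairwise comparison matrix and a consistency ratio is checked. The sketch below illustrates that step on an invented comparison matrix, which is not the matrix used in the study.

```python
# Sketch: AHP-style weights for three factor groups (natural, technological,
# management) from a pairwise comparison matrix, plus a consistency ratio.
# The matrix entries are illustrative only.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # reciprocal pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                   # normalised priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # nominal Saaty random index values
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```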

  2. Homotopy analysis method for neutron diffusion calculations

    International Nuclear Information System (INIS)

    Cavdar, S.

    2009-01-01

    The Homotopy Analysis Method (HAM), proposed in 1992 by Shi Jun Liao and developed since then, is based on a fundamental concept in differential geometry and topology, the homotopy. It has proved useful for problems involving algebraic, linear/non-linear, ordinary/partial differential and differential-integral equations, being an analytic, recursive method that provides a series-sum solution. It has the advantage of offering a certain freedom in the choice of its arguments, such as the initial guess, the auxiliary linear operator and the convergence control parameter, and it allows us to effectively control the rate and region of convergence of the series solution. In this work, HAM is applied to the fixed-source neutron diffusion equation. The work is motivated by the question of whether there exist methods for solving the neutron diffusion equation that yield straightforward expressions yet provide solutions of reasonable accuracy, so that one could avoid both analytic methods, which are widely used but either fail to solve the problem or provide solutions through many intricate expressions likely to contain mistakes, and numerical methods, which require powerful computational resources and advanced programming skills due to their very nature or intricate mathematical fundamentals. A Fourier basis is employed for expressing the initial guess, owing to the structure of the problem and its boundary conditions. We present the results in comparison with other widely used methods, Adomian decomposition and separation of variables.

  3. Detector Simulation: Data Treatment and Analysis Methods

    CERN Document Server

    Apostolakis, J

    2011-01-01

    Detector Simulation in 'Data Treatment and Analysis Methods', part of 'Landolt-Börnstein - Group I Elementary Particles, Nuclei and Atoms: Numerical Data and Functional Relationships in Science and Technology, Volume 21B1: Detectors for Particles and Radiation. Part 1: Principles and Methods'. This document is part of Part 1 'Principles and Methods' of Subvolume B 'Detectors for Particles and Radiation' of Volume 21 'Elementary Particles' of Landolt-Börnstein - Group I 'Elementary Particles, Nuclei and Atoms'. It contains the Section '4.1 Detector Simulation' of Chapter '4 Data Treatment and Analysis Methods' with the content: 4.1 Detector Simulation 4.1.1 Overview of simulation 4.1.1.1 Uses of detector simulation 4.1.2 Stages and types of simulation 4.1.2.1 Tools for event generation and detector simulation 4.1.2.2 Level of simulation and computation time 4.1.2.3 Radiation effects and background studies 4.1.3 Components of detector simulation 4.1.3.1 Geometry modeling 4.1.3.2 External fields 4.1.3.3 Intro...

  4. Data Analysis Methods for Library Marketing

    Science.gov (United States)

    Minami, Toshiro; Kim, Eunja

    Our society is rapidly changing into an information society, in which people's needs and requests regarding information access differ widely from person to person. A library's mission is to provide its users, or patrons, with the most appropriate information, and libraries have to know the profiles of their patrons in order to fulfil such a role. The aim of library marketing is to develop methods based on library data, such as circulation records, book catalogs, book-usage data, and others. In this paper we first discuss the methodology and importance of library marketing. We then demonstrate its usefulness through some examples of analysis methods applied to the circulation records of Kyushu University and Guacheon Library, and some implications obtained as the results of these methods. Our research is a first step towards a future in which library marketing is an indispensable tool.

  5. Factor analysis of serogroups botanica and aurisina of Leptospira biflexa.

    Science.gov (United States)

    Cinco, M

    1977-11-01

    Factor analysis is performed on serovars of the Botanica and Aurisina serogroups of Leptospira biflexa. The results show the arrangement of the main serovar- and serogroup-specific factors, as well as the antigens shared with serovars of heterologous serogroups.

  6. Phosphorus analysis in milk samples by neutron activation analysis method

    International Nuclear Information System (INIS)

    Oliveira, R.M. de; Cunha, I.I.L.

    1991-01-01

    The determination of phosphorus in milk samples by instrumental thermal neutron activation analysis is described. The procedure involves a short irradiation in a nuclear reactor and measurement of the beta radiation emitted by phosphorus - 32 after a suitable decay period. The sources of error were studied and the established method was applied to standard reference materials of known phosphorus content. (author)

  7. Nonparametric factor analysis of time series

    OpenAIRE

    Rodríguez-Poo, Juan M.; Linton, Oliver Bruce

    1998-01-01

    We introduce a nonparametric smoothing procedure for nonparametric factor analysis of multivariate time series. The asymptotic properties of the proposed procedures are derived. We present an application based on the residuals from the Fair macromodel.

  8. Analysis of success factors in advertising

    OpenAIRE

    Fedorchak, Oleksiy; Kedebecz, Kristina

    2017-01-01

    The nature of the factors determining the success of advertising campaigns is investigated. The stages of conducting advertising campaigns and of evaluating their effectiveness are determined, and the goals and objectives of advertising campaigns are also defined.

  9. Holographic analysis of diffraction structure factors

    International Nuclear Information System (INIS)

    Marchesini, S.; Bucher, J.J.; Shuh, D.K.; Fabris, L.; Press, M.J.; West, M.W.; Hussain, Z.; Mannella, N.; Fadley, C.S.; Van Hove, M.A.; Stolte, W.C.

    2002-01-01

    We combine the theory of inside-source/inside-detector x-ray fluorescence holography with that of Kossel lines/x-ray standing waves in the kinematic approximation to directly obtain the phases of the diffraction structure factors. The influence of Kossel lines and standing waves on holography is also discussed. We obtain a partial phase determination from experimental data, namely the sign of the real part of the structure factor for several reciprocal lattice vectors of a vanadium crystal.

  10. Identification of noise in linear data sets by factor analysis

    International Nuclear Information System (INIS)

    Roscoe, B.A.; Hopke, Ph.K.

    1982-01-01

    Classical factor analysis is a technique which has the ability to identify bad data points after the data have been generated. Its ability to identify two different types of data errors makes it ideally suited for scanning large data sets. Since the results yielded by factor analysis indicate correlations between parameters, one must know something about the nature of the data set and the analytical techniques used to obtain it in order to confidently isolate errors. (author)

  11. Exploring Technostress: Results of a Large Sample Factor Analysis

    OpenAIRE

    Jonušauskas, Steponas; Raišienė, Agota Giedrė

    2016-01-01

    With reference to the results of a large sample factor analysis, the article aims to propose a framework for examining technostress in a population. A survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work were implemented in the research. Thirteen factors combine 68 questions and explain 59.13 per cent of the answer dispersion. Based on the factor analysis, the questionnaire was reframed and prepared to reasonably analyze the respondents' an...

  12. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor, k-eff, has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank typical of a fuel processing or reprocessing plant, k-eff values between 0.92 and 0.5 were satisfactorily determined using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. Further development of the method will require experiments and the development of theoretical methods to predict the experimental observables.

  13. Slope stability analysis using limit equilibrium method in nonlinear criterion.

    Science.gov (United States)

    Lin, Hang; Zhong, Wenwen; Xiong, Wei; Tang, Wenyu

    2014-01-01

    In slope stability analysis, the limit equilibrium method is usually used to calculate the safety factor of a slope based on the Mohr-Coulomb criterion. However, the Mohr-Coulomb criterion is restricted in its ability to describe rock mass. To overcome this shortcoming, this paper combines the Hoek-Brown criterion with the limit equilibrium method and proposes an equation for calculating the safety factor of a slope under the Hoek-Brown criterion through the equivalent cohesive strength and friction angle. Moreover, the paper investigates the impact of the Hoek-Brown parameters on the safety factor of the slope, revealing a linear relation between the equivalent cohesive strength and the weakening factor D, but nonlinear relations between the equivalent cohesive strength and the Geological Strength Index (GSI), the uniaxial compressive strength of intact rock σci, and the intact-rock parameter mi. The relation between the friction angle and all Hoek-Brown parameters is nonlinear. With the increase of D, the safety factor of the slope F decreases linearly; with the increase of GSI, F increases nonlinearly; when σci is relatively small, the relation between F and σci is nonlinear, but when σci is relatively large, the relation is linear; with the increase of mi, F decreases first and then increases.
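
    One common way to obtain the equivalent cohesive strength and friction angle mentioned above is to fit a Mohr-Coulomb line to the Hoek-Brown strength envelope over a chosen range of confining stress. The sketch below uses the widely published 2002 Hoek-Brown parameter relations for m_b, s and a together with a simple linear fit; it illustrates the general idea rather than the specific equation derived in the paper, and the input values are arbitrary examples.

```python
# Sketch: equivalent Mohr-Coulomb parameters (c, phi) from Hoek-Brown inputs
# (GSI, m_i, sigma_ci, D) by fitting a straight line to the envelope
# sigma_1 = sigma_3 + sigma_ci*(m_b*sigma_3/sigma_ci + s)**a over a chosen
# confining-stress range. Input values are arbitrary examples.
import numpy as np

def hoek_brown_to_mc(gsi, m_i, sigma_ci, D, sigma3_max):
    m_b = m_i * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0

    sigma3 = np.linspace(0.0, sigma3_max, 200)
    sigma1 = sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a

    k, b = np.polyfit(sigma3, sigma1, 1)        # sigma1 ~ k*sigma3 + b
    phi = np.degrees(np.arcsin((k - 1.0) / (k + 1.0)))
    c = b * (1.0 - np.sin(np.radians(phi))) / (2.0 * np.cos(np.radians(phi)))
    return c, phi

c, phi = hoek_brown_to_mc(gsi=45, m_i=10, sigma_ci=30.0, D=0.7, sigma3_max=5.0)
print(f"equivalent cohesion c = {c:.2f} MPa, friction angle phi = {phi:.1f} deg")
```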

  14. Inference algorithms and learning theory for Bayesian sparse factor analysis

    International Nuclear Information System (INIS)

    Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John

    2009-01-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  15. Inference algorithms and learning theory for Bayesian sparse factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rattray, Magnus; Sharp, Kevin [School of Computer Science, University of Manchester, Manchester M13 9PL (United Kingdom)]; Stegle, Oliver [Max-Planck-Institute for Biological Cybernetics, Tuebingen (Germany)]; Winn, John, E-mail: magnus.rattray@manchester.ac.uk [Microsoft Research Cambridge, Roger Needham Building, Cambridge, CB3 0FB (United Kingdom)]

    2009-12-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.
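
    As a small complement to the two records above, the sketch below simulates data from a sparse factor model with a slab-and-spike prior on the loadings (each loading is exactly zero with probability 1 - π and drawn from a Gaussian "slab" otherwise). It illustrates the generative model only; the MCMC, VB and EP inference algorithms discussed in the paper are not implemented here.

```python
# Sketch: generate data from a sparse factor model with a slab-and-spike prior
# on the loading matrix W. Generative model only; no inference is performed.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_factors = 200, 30, 3
pi_slab, slab_sd, noise_sd = 0.2, 1.0, 0.5

mask = rng.random((n_features, n_factors)) < pi_slab        # spike: exact zeros
W = mask * rng.normal(0.0, slab_sd, size=(n_features, n_factors))
Z = rng.normal(size=(n_samples, n_factors))                  # latent factors
X = Z @ W.T + rng.normal(0.0, noise_sd, size=(n_samples, n_features))

print(f"fraction of non-zero loadings: {mask.mean():.2f}")
print(f"data matrix X shape: {X.shape}")
```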

  16. Statistical Analysis Of The Conditioning Factors Of Urban Electric Consumption

    International Nuclear Information System (INIS)

    Segura D'Rouville, Juan Joel; Suárez Carreño, Franyelit María

    2017-01-01

    This research work presents an analysis of the most important factors that condition urban residential electricity consumption, showing the quantitative parameters that condition this consumption. This sector was chosen for analysis because disaggregated information exists on the main social and technological factors that determine its behavior and growth, with the objective of elaborating policies for managing electricity consumption. Electrical demand, considered as the sum of the power of all the equipment in use at each instant of a full day, is related to electricity consumption, which is simply the power demanded by a given consumer multiplied by the time during which that demand is maintained. This report proposes the design of a probabilistic model for predicting electricity consumption, taking into account the most influential social and technological factors. The statistical processing of the database is carried out with the Stat Graphics software, version 4.1, owing to its extensive didactic support for the required calculations and associated methods. Finally, the variables were correlated in order to classify the determinants in a specific way and thus to determine the consumption of the dwellings. (author)

  17. Analysis of risk factors of pulmonary embolism in diabetic patients

    International Nuclear Information System (INIS)

    Xie Changhui; Ma Zhihai; Zhu Lin; Chi Lianxiang

    2012-01-01

    Objective: To study the risk factors related to pulmonary embolism (PE) in diabetic patients. Methods: 58 diabetic cases underwent lower-limb 99mTc-MAA venous imaging (and/or ultrasonography) and pulmonary perfusion imaging. The related laboratory data [fasting blood glucose (FBG), blood cholesterol, blood long-chain triglycerides (LCT)] and clinical information [age, disease course, chest symptoms (chest pain and shortness of breath), lower-limb symptoms (swelling, varicose veins and diabetic foot) and acute complications (diabetic ketoacidosis and hyperosmolar non-ketotic diabetic coma)] were collected simultaneously. SPSS was used for the χ²-test and logistic regression analysis. Results: (1) 28 patients (48.3%) were shown by 99mTc-MAA imaging to have lower-limb deep vein thrombosis (DVT), and 10 cases (17.2%) had PE. The PE ratio in patients with DVT (32.1%) was higher than in those without DVT (3.3%) (χ² = 6.53, P < 0.05). (2) … (χ² ≥ 4.23, P < 0.05) … (χ² ≤ 2.76, P > 0.05), respectively. (3) Multivariate analysis indicated that the risk factors for PE included chest symptoms (Score = 13.316, P = 0.000) and lower-limb symptoms (Score = 7.780, P = 0.005); no significant difference was found for the other factors (Score ≤ 2.494, P > 0.114). Conclusion: Severe DM with chest symptoms, lower-limb symptoms and/or DVT must be controlled as early as possible by all kinds of treatment; this will decrease the PE complication rate. (authors)

  18. Human factors review for Severe Accident Sequence Analysis (SASA)

    International Nuclear Information System (INIS)

    Krois, P.A.; Haas, P.M.; Manning, J.J.; Bovell, C.R.

    1984-01-01

    The paper will discuss work being conducted during this human factors review including: (1) support of the Severe Accident Sequence Analysis (SASA) Program based on an assessment of operator actions, and (2) development of a descriptive model of operator severe accident management. Research by SASA analysts on the Browns Ferry Unit One (BF1) anticipated transient without scram (ATWS) was supported through a concurrent assessment of operator performance to demonstrate contributions to SASA analyses from human factors data and methods. A descriptive model was developed called the Function Oriented Accident Management (FOAM) model, which serves as a structure for bridging human factors, operations, and engineering expertise and which is useful for identifying needs/deficiencies in the area of accident management. The assessment of human factors issues related to ATWS required extensive coordination with SASA analysts. The analysis was consolidated primarily to six operator actions identified in the Emergency Procedure Guidelines (EPGs) as being the most critical to the accident sequence. These actions were assessed through simulator exercises, qualitative reviews, and quantitative human reliability analyses. The FOAM descriptive model assumes as a starting point that multiple operator/system failures exceed the scope of procedures and necessitates a knowledge-based emergency response by the operators. The FOAM model provides a functionally-oriented structure for assembling human factors, operations, and engineering data and expertise into operator guidance for unconventional emergency responses to mitigate severe accident progression and avoid/minimize core degradation. Operators must also respond to potential radiological release beyond plant protective barriers. Research needs in accident management and potential uses of the FOAM model are described. 11 references, 1 figure

  19. Information technology portfolio in supply chain management using factor analysis

    Directory of Open Access Journals (Sweden)

    Ahmad Jaafarnejad

    2013-11-01

    Full Text Available The adoption of information technology (IT) along with supply chain management (SCM) has become increasingly a necessity among most businesses. This enhances supply chain (SC) performance and helps companies achieve organizational competitiveness. IT systems capture and analyze information and enable management to make decisions by considering a global scope across the entire SC. This paper reviews the existing literature on IT in SCM and considers pertinent criteria. Using principal component analysis (PCA) within factor analysis (FA), a number of related criteria are divided into smaller groups. Finally, SC managers can develop an IT portfolio in SCM using the mean values of a few extracted components on the relevance-emergency matrix. A numerical example is provided to explain the details of the proposed method.
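
    A minimal sketch of the PCA grouping step described above is given below: a ratings matrix over IT-in-SCM criteria is reduced to a few principal components, each criterion is assigned to the component on which it loads most heavily, and the mean rating of each group could then be placed on a relevance-emergency style matrix. The criteria names and ratings are invented for illustration.

```python
# Sketch: group IT-in-SCM criteria into a few components with PCA and report
# each component's mean criterion rating. Criteria names and ratings are
# invented for illustration.
import numpy as np
from sklearn.decomposition import PCA

criteria = ["EDI", "ERP integration", "track & trace", "demand forecasting",
            "e-procurement", "warehouse automation"]
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(40, len(criteria))).astype(float)  # 40 experts, 1-5 scale

pca = PCA(n_components=2).fit(ratings)
assignment = np.argmax(np.abs(pca.components_), axis=0)   # criterion -> dominant component

for comp in range(pca.n_components_):
    members = [c for c, a in zip(criteria, assignment) if a == comp]
    if not members:
        continue
    mean_score = ratings[:, assignment == comp].mean()
    print(f"component {comp + 1}: {members}, mean rating = {mean_score:.2f}")
```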

  20. Clinical usefulness of physiological components obtained by factor analysis

    International Nuclear Information System (INIS)

    Ohtake, Eiji; Murata, Hajime; Matsuda, Hirofumi; Yokoyama, Masao; Toyama, Hinako; Satoh, Tomohiko.

    1989-01-01

    The clinical usefulness of physiological components obtained by factor analysis was assessed in 99mTc-DTPA renography. Using the defined physiological components, other dynamic data could be analyzed. In this paper, the dynamic renal function after ESWL (Extracorporeal Shock Wave Lithotripsy) treatment was examined using physiological components from the kidney before ESWL and/or a normal kidney. The change of renal function could easily be evaluated by this method. The usefulness of the new analysis using physiological components is summarized as follows: 1) the change of a dynamic function can be assessed quantitatively as the change of the contribution ratio; 2) the change of a diseased condition can be evaluated morphologically as the change of the functional image. (author)

  1. Methods of charged-particle activation analysis

    International Nuclear Information System (INIS)

    Chaudhri, M. Anwar; Chaudhri, M. Nasir; Jabbar, Q.; Nadeem, Q.

    2006-01-01

    The accuracy of Chaudhri's method for charged-particle activation analysis, published in J. Radioanal. Chem. (1977) v. 37 p. 243, has been further demonstrated by extensive calculations. The nuclear reactions 12C(d,n)13N, 63Cu(3He,p)65Zn, 107Ag(α,n)110In and 208Pb(d,p)209Pb, whose cross sections were readily available, have been examined for the detection of 12C, 63Cu, 107Ag and 208Pb, respectively, in matrices of Cu, Zr and Pb at bombarding energies of 4-22 MeV. The 'standard' is assumed to be in a carbon matrix. It has been clearly demonstrated that Chaudhri's method, which makes charged-particle activation analysis as simple as neutron activation analysis, provides results that are almost identical to, or differ by only about 1-2% from, the results obtained using the full 'Activity Equation' involving the solution of complex integrals. It is valid even when the difference in the average atomic weights of the matrices of the standard and the sample is large. (author)

  2. Analysis of factors affecting the effect of stope leaching

    International Nuclear Information System (INIS)

    Xie Wangnan; Dong Chunming

    2014-01-01

    The industrial test and the industrial trial production of stope leaching were carried out at the Taoshan orefield of the Dabu deposit. The results showed obvious differences in leaching rate and leaching time: compared with the industrial trial production, the industrial test achieved a higher leaching rate and a shorter leaching time. Analysis indicated that the blasting method and the lixiviant distribution were the main factors affecting the leaching rate and leaching time. We therefore put forward the following suggestions: the technique of deep-hole slicing tight-face blasting should be used to reduce the yield of lump ore, effective liquid distribution methods should be adopted so that the lixiviant infiltrates the whole ore heap, and bacterial leaching should be introduced. (authors)

  3. Development and analysis of finite volume methods

    International Nuclear Information System (INIS)

    Omnes, P.

    2010-05-01

    This document is a synthesis of a set of works concerning the development and the analysis of finite volume methods used for the numerical approximation of partial differential equations (PDEs) stemming from physics. In the first part, the document deals with co-localized Godunov type schemes for the Maxwell and wave equations, with a study on the loss of precision of this scheme at low Mach number. In the second part, discrete differential operators are built on fairly general, in particular very distorted or nonconforming, bidimensional meshes. These operators are used to approach the solutions of PDEs modelling diffusion, electro and magneto-statics and electromagnetism by the discrete duality finite volume method (DDFV) on staggered meshes. The third part presents the numerical analysis and some a priori as well as a posteriori error estimations for the discretization of the Laplace equation by the DDFV scheme. The last part is devoted to the order of convergence in the L2 norm of the finite volume approximation of the solution of the Laplace equation in one dimension and on meshes with orthogonality properties in two dimensions. Necessary and sufficient conditions, relatively to the mesh geometry and to the regularity of the data, are provided that ensure the second-order convergence of the method. (author)

  4. Creep analysis by the path function method

    International Nuclear Information System (INIS)

    Akin, J.E.; Pardue, R.M.

    1977-01-01

    The finite element method has become a common analysis procedure for the creep analysis of structures. The most recent programs are designed to handle a general class of material properties and are able to calculate elastic, plastic, and creep components of strain under general loading histories. The constant-stress approach is too crude a model to accurately represent the actual behaviour of the stress over large time steps. The true path of a point in the effective stress-effective creep strain (σe-εc) plane is often one in which the slope changes rapidly; the stress level quickly moves away from the initial stress level and then gradually approaches the final one, so the assumed constant stress level quickly becomes inaccurate. What is required is a better approximation of the true path in the σe-εc space. The method described here is called the path function approach because it employs an assumed function to estimate the motion of points in the σe-εc space. (Auth.)

  5. Earthquake Hazard Analysis Methods: A Review

    Science.gov (United States)

    Sari, A. M.; Fakhrurrozi, A.

    2018-02-01

    One of the natural disasters with a significant impact in terms of risk and damage is the earthquake. Countries such as China, Japan, and Indonesia lie on active plate boundaries and experience earthquakes more frequently than other countries. Several methods of earthquake hazard analysis have been applied, for example analyzing seismic zones and earthquake hazard micro-zonation, using the Neo-Deterministic Seismic Hazard Analysis (N-DSHA) method, and using remote sensing. In application, it is necessary to first review the effectiveness of each technique. Considering time efficiency and data accuracy, remote sensing is used as a reference for assessing earthquake hazard accurately and quickly, since only limited time is available for the right decision-making shortly after a disaster. Exposed areas and areas potentially vulnerable to earthquake hazards can be easily analyzed using remote sensing. Technological developments in remote sensing, such as GeoEye-1, provide added value and excellence in the use of remote sensing as one of the methods for assessing earthquake risk and damage. Furthermore, the use of this technique is expected to be considered in designing disaster management policies and can reduce the risk of natural disasters such as earthquakes in Indonesia.

  6. Analysis on risk factors for post-stroke emotional incontinence

    Directory of Open Access Journals (Sweden)

    Xiao-chun ZHANG

    2018-01-01

    Full Text Available Objective To investigate the occurrence rate and related risk factors of post-stroke emotional incontinence (PSEI). Methods The clinical data [sex, age, body mass index (BMI), education, marital status, medical history (hypertension, heart disease, diabetes, hyperlipemia, smoking and drinking) and family history of stroke] of 162 stroke patients were recorded. The serum homocysteine (Hcy) level was examined. Head CT and/or MRI were used to identify stroke subtype, lesion site and number of lesions. The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-Ⅴ, Chinese version) and the Hamilton Depression Rating Scale-17 Items (HAMD-17) were used to evaluate the degree of depression, and the House diagnostic standard was used to diagnose PSEI. Univariate and multivariate backward logistic regression analysis was used to screen related risk factors for PSEI, and Spearman rank correlation analysis was used to examine the correlation between PSEI and post-stroke depression (PSD). Results Among the 162 stroke patients, 12 cases were diagnosed as PSEI (7.41%). The proportion of patients aged under 60 years was significantly higher in the PSEI group than in the non-PSEI group (P = 0.045), while the proportion of smokers was significantly lower (P = 0.036). Univariate and multivariate backward logistic regression analysis showed that age < 60 years was an independent risk factor for PSEI (OR = 4.000, 95%CI: 1.149-13.924; P = 0.029). Ten of the 12 PSEI patients also had PSD, a co-morbidity rate of 83.33%. Spearman rank correlation analysis showed that PSEI was positively related to PSD (rs = 0.305, P = 0.000). Conclusions PSEI is a common affective disorder in stroke patients, occurring more readily in patients under 60 years of age. DOI: 10.3969/j.issn.1672-6731.2017.12.010

  7. Factorial analysis of the cost of preparing oil

    Energy Technology Data Exchange (ETDEWEB)

    Avdeyeva, L A; Kudoyarov, G Sh; Shmatova, M F

    1979-01-01

    Methods of mathematical statistics (basically correlation and regression analysis) are used to study the factors which form the level of the cost of preparing oil, taking into account the mutual influence of the factors. A group of five a priori justified factors was selected for inclusion in the mathematical model: the water content of the extracted oil (%); the specific consumption of demulsifiers; the volume of oil preparation; the quality of oil preparation (the salt content); and the level of utilization of the installations' capacities (%). To construct an economic-mathematical model of the cost of the technical preparation (SPP) of the oil, all the unions which make up the Ministry of the Oil Industry were divided into two comparable groups: the first group included unions in which the oil SPP was lower than the branch average, and the second, unions in which the SPP was higher than the branch-wide cost. Using the regression coefficients, special elasticity coefficients and fluctuation indicators, the basic factors having the greatest influence on the formation of the oil SPP level were finally identified, separately for the first and second groups of unions.

  8. Factorization method and new potentials from the inverted oscillator

    International Nuclear Information System (INIS)

    Bermudez, David; Fernández C, David J.

    2013-01-01

    In this article we will apply the first- and second-order supersymmetric quantum mechanics to obtain new exactly-solvable real potentials departing from the inverted oscillator potential. This system has some special properties; in particular, only very specific second-order transformations produce non-singular real potentials. It will be shown that these transformations turn out to be the so-called complex ones. Moreover, we will study the factorization method applied to the inverted oscillator and the algebraic structure of the new Hamiltonians. -- Highlights: •We apply supersymmetric quantum mechanics to the inverted oscillator potential. •The complex second-order transformations allow us to build new non-singular potentials. •The algebraic structure of the initial and final potentials is analyzed. •The initial potential is described by a complex-deformed Heisenberg–Weyl algebra. •The final potentials are described by polynomial Heisenberg algebras
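
    For orientation, a minimal sketch of the standard first-order factorization in supersymmetric quantum mechanics (the conventions ħ = m = 1 are chosen here for brevity and may differ from the article's):

        H_0 = -\tfrac{1}{2}\,\frac{d^2}{dx^2} + V_0(x), \qquad
        A = \tfrac{1}{\sqrt{2}}\Big(\frac{d}{dx} + W(x)\Big), \qquad
        A^{\dagger} = \tfrac{1}{\sqrt{2}}\Big(-\frac{d}{dx} + W(x)\Big)

        H_0 = A^{\dagger}A + \epsilon \;\Longleftrightarrow\; W^2(x) - W'(x) = 2\,\big[V_0(x) - \epsilon\big], \qquad
        H_1 = A A^{\dagger} + \epsilon \;\Longrightarrow\; V_1(x) = V_0(x) + W'(x)

    For the inverted oscillator V_0(x) = -x^2/2 the Riccati equation reads W^2 - W' = -x^2 - 2ε. Real solutions have the form W = -u'/u with u a real Schrödinger solution at energy ε; such u oscillate and have nodes, so the resulting real first-order partners are generically singular, whereas complex choices (for instance W(x) = i x, which solves the equation with ε = i/2) remain node-free. This is consistent with the article's point that the non-singular real potentials arise only from the complex (second-order) transformations.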

  9. Warranty claim analysis considering human factors

    International Nuclear Information System (INIS)

    Wu Shaomin

    2011-01-01

    Warranty claims are not always due to product failures. They can also be caused by two types of human factors. On the one hand, consumers might claim warranty due to misuse and/or failures caused by various human factors. Such claims might account for more than 10% of all reported claims. On the other hand, consumers might not be bothered to claim warranty for failed items that are still under warranty, or they may claim warranty after they have experienced several intermittent failures. These two types of human factors can affect warranty claim costs. However, research in this area has received rather little attention. In this paper, we propose three models to estimate the expected warranty cost when the two types of human factors are included. We consider two types of failures: intermittent and fatal failures, which might result in different claim patterns. Consumers might report claims after a fatal failure has occurred, and upon intermittent failures they might report claims after a number of failures have occurred. Numerical examples are given to validate the results derived.

  10. Chiral analysis of baryon form factors

    Energy Technology Data Exchange (ETDEWEB)

    Gail, T.A.

    2007-11-08

    This work presents an extensive theoretical investigation of the structure of the nucleon within the standard model of elementary particle physics. In particular, the long range contributions to a number of various form factors parametrizing the interactions of the nucleon with an electromagnetic probe are calculated. The theoretical framework for those calculations is chiral perturbation theory, the exact low energy limit of Quantum Chromodynamics, which describes such long range contributions in terms of a pion-cloud. In this theory, a nonrelativistic leading one loop order calculation of the form factors parametrizing the vector transition of a nucleon to its lowest lying resonance, the Δ, a covariant calculation of the isovector and isoscalar vector form factors of the nucleon at next to leading one loop order and a covariant calculation of the isoscalar and isovector generalized vector form factors of the nucleon at leading one loop order are performed. In order to perform consistent loop calculations in the covariant formulation of chiral perturbation theory an appropriate renormalization scheme is defined in this work. All theoretical predictions are compared to phenomenology and results from lattice QCD simulations. These comparisons allow for a determination of the low energy constants of the theory. Furthermore, the possibility of chiral extrapolation, i.e. the extrapolation of lattice data from simulations at large pion masses down to the small physical pion mass is studied in detail. Statistical as well as systematic uncertainties are estimated for all results throughout this work. (orig.)

  11. Methods for Engineering Enterprise Management Based on the Inter-factor Productive-Economic Relations

    Directory of Open Access Journals (Sweden)

    O. A. Naydis

    2015-01-01

    Full Text Available The article analyzes the current state of engineering enterprises in the Russian Federation. It reviews and analyzes existing methods of business management that use indicators to characterize enterprise activities by means of scalars; by means of functional dependencies of one factor value on another (a function, wherein the magnitude of one factor value corresponds to a single magnitude of the other, dependent factor); and by means of data tables, for example the balance list and articulation statement used in accounting. The paper states the need to take into account the mutual influences and system interrelations of a diversity of factors and the need for special methods of their identification. The article is aimed at developing methods of business management for engineering enterprises that take into account a variety of factors and their interdependencies. The relevance of the issue stems from the fact that the analysis of existing methods of business management has shown that it is impossible to have the requested information about a considerable number of productive-economic factors in their system-based interrelation. It is proposed that management objects wherein multiple factors and their interactions are taken into consideration be called inter-factor productive-economic relations (IPER). The paper presents the IPER-based structure of the business management system. It describes a method to identify controlled productive-economic factors and provides allocation and justification of the significant ones for IPER control. The described methods of business management are distinguished by a large amount of control information, and the data form rather complex structures; therefore, it is proposed to use them in automatic control systems. The paper describes principles of information support for business management through binding IPER to organizational structures of the enterprise. It offers an

  12. Sensitivity analysis of the Two Geometry Method

    International Nuclear Information System (INIS)

    Wichers, V.A.

    1993-09-01

    The Two Geometry Method (TGM) was designed specifically for the verification of the uranium enrichment of low enriched UF6 gas in the presence of uranium deposits on the pipe walls. Complications can arise if the TGM is applied under extreme conditions, such as deposits larger than several times the gas activity, small pipe diameters of less than 40 mm and low pressures of less than 150 Pa. This report presents a comprehensive sensitivity analysis of the TGM. The impact of the various sources of uncertainty on the performance of the method is discussed. The application to a practical case is based on worst case conditions with regard to the measurement conditions, and on realistic conditions with respect to the false alarm probability and the non-detection probability. Monte Carlo calculations were used to evaluate the sensitivity to sources of uncertainty which are experimentally inaccessible. (orig.)

  13. Blood proteins analysis by Raman spectroscopy method

    Science.gov (United States)

    Artemyev, D. N.; Bratchenko, I. A.; Khristoforova, Yu. A.; Lykina, A. A.; Myakinin, O. O.; Kuzmina, T. P.; Davydkin, I. L.; Zakharov, V. P.

    2016-04-01

    This work is devoted to studying the possibility of measuring plasma protein (albumin, globulin) concentrations using a Raman spectroscopy setup. Blood plasma and whole blood were studied in this research. The obtained Raman spectra showed significant variation in the intensities of certain spectral bands (940, 1005, 1330, 1450 and 1650 cm-1) for different protein fractions. Partial least squares regression analysis was used to determine correlation coefficients. We have shown that the proposed method reflects the structure and biochemical composition of the major blood proteins.
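
    A minimal sketch of the partial least squares step described above, using scikit-learn on synthetic stand-in spectra (the informative channel indices, noise levels and sample counts are assumptions for the example, not the measured data):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # Synthetic stand-in: rows are Raman spectra, the target is a protein
        # concentration; the informative channel indices are invented.
        rng = np.random.default_rng(0)
        n_samples, n_channels = 60, 500
        spectra = rng.normal(size=(n_samples, n_channels))
        concentration = 2.0 * spectra[:, 100] + spectra[:, 250] \
            + rng.normal(scale=0.1, size=n_samples)

        pls = PLSRegression(n_components=5)
        predicted = cross_val_predict(pls, spectra, concentration, cv=5).ravel()

        # Correlation between cross-validated predictions and reference values
        r = np.corrcoef(predicted, concentration)[0, 1]
        print(f"cross-validated r = {r:.3f}")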

  14. Moessbauer lineshape analysis by the DISPA method

    International Nuclear Information System (INIS)

    Miglierini, M.; Sitek, J.

    1986-01-01

    To evaluate the Moessbauer spectral parameters, and hence the structural and magnetic properties, the lineshape should be known. A plot of dispersion versus absorption (DISPA plot) for a pure Lorentzian gives a perfect circle. The directions and magnitudes of DISPA distortions from this reference circle indicate the kind of line-broadening mechanism observed. The possibility of applying the DISPA technique to Moessbauer lineshape analysis is dealt with in this paper. The method is verified on Moessbauer spectra of sodium nitroprusside, natural iron, and stainless steel. The lineshape of the amorphous metallic alloy Fe40Ni40B20 is studied by means of DISPA plots. (author)
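
    A small numerical check of the DISPA reference circle for a pure Lorentzian, with hypothetical line parameters: for a Lorentzian of half-width a, absorption A and dispersion D satisfy (A - 1/(2a))^2 + D^2 = (1/(2a))^2, so deviations from this circle flag additional broadening mechanisms.

        import numpy as np

        a = 0.15                                # half-width at half-maximum (hypothetical)
        x = np.linspace(-5, 5, 2001)            # frequency (or velocity) offset
        absorption = a / (a**2 + x**2)
        dispersion = x / (a**2 + x**2)

        radius = 1.0 / (2.0 * a)
        deviation = np.hypot(absorption - radius, dispersion) - radius
        print(np.max(np.abs(deviation)))        # ~0 for a pure Lorentzian; other
                                                # broadening mechanisms distort the circle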

  15. Electromagnetic compatibility methods, analysis, circuits, and measurement

    CERN Document Server

    Weston, David A

    2016-01-01

    Revised, updated, and expanded, Electromagnetic Compatibility: Methods, Analysis, Circuits, and Measurement, Third Edition provides comprehensive practical coverage of the design, problem solving, and testing of electromagnetic compatibility (EMC) in electrical and electronic equipment and systems. This new edition provides novel information on theory, applications, evaluations, electromagnetic computational programs, and prediction techniques available. With sixty-nine schematics providing examples for circuit level electromagnetic interference (EMI) hardening and cost effective EMI problem solving, this book also includes 1130 illustrations and tables. Including extensive data on components and their correct implementation, the myths, misapplication, misconceptions, and fallacies that are common when discussing EMC/EMI will also be addressed and corrected.

  16. Method and apparatus for simultaneous spectroelectrochemical analysis

    Science.gov (United States)

    Chatterjee, Sayandev; Bryan, Samuel A; Schroll, Cynthia A; Heineman, William R

    2013-11-19

    An apparatus and method of simultaneous spectroelectrochemical analysis is disclosed. A transparent surface is provided. An analyte solution on the transparent surface is contacted with a working electrode and at least one other electrode. Light from a light source is focused on either a surface of the working electrode or the analyte solution. The light reflected from either the surface of the working electrode or the analyte solution is detected. The potential of the working electrode is adjusted, and spectroscopic changes of the analyte solution that occur with changes in thermodynamic potentials are monitored.

  17. Necessary steps in factor analysis : Enhancing validation studies of educational instruments. The PHEEM applied to clerks as an example

    NARCIS (Netherlands)

    Schonrock-Adema, Johanna; Heijne-Penninga, Marjolein; van Hell, Elisabeth A.; Cohen-Schotanus, Janke

    2009-01-01

    Background: The validation of educational instruments, in particular the employment of factor analysis, can be improved in many instances. Aims: To demonstrate the superiority of a sophisticated method of factor analysis, implying an integration of recommendations described in the factor analysis

  18. Simplified piping analysis methods with inelastic supports

    International Nuclear Information System (INIS)

    Lin, C.W.; Romanko, A.D.

    1986-01-01

    Energy absorbing supports (EAS) which contain x-shaped plates or dampers with heavy viscous fluid can absorb a large amount of energy during vibratory motions. The response of piping systems supported by these types of energy absorbing devices can be markedly reduced as compared with ordinary supports using rigid rods, hangers or snubbers. In this paper, a simple multiple support response spectrum technique is presented, which allows the energy dissipation nature of the EAS to be factored into the piping response calculation. At the same time, the effect of lower system frequencies due to the reduced support stiffness from local yielding is also included in the analysis. Numerical results obtained show that this technique is more conservative than the time history solution by an acceptable and realistic margin, and it has less than 10 percent of the computation cost.

  19. A method to identify dependencies between organizational factors using statistical independence test

    International Nuclear Information System (INIS)

    Kim, Y.; Chung, C.H.; Kim, C.; Jae, M.; Jung, J.H.

    2004-01-01

    A considerable number of studies on organizational factors in nuclear power plants have been made, especially in recent years, most of which have assumed the organizational factors to be independent. However, organizational factors characterize the organization in terms of safety, efficiency, etc., and some factors have close relations between them. Therefore, whatever the point of view, if we want to identify the characteristics of an organization, the dependence relationships should be considered to get an accurate result. In this study the organization of a reference nuclear power plant in Korea was analyzed for the trip cases of that plant using the 20 organizational factors that Jacobs and Haber had suggested: 1) coordination of work, 2) formalization, 3) organizational knowledge, 4) roles and responsibilities, 5) external communication, 6) inter-department communications, 7) intra-departmental communications, 8) organizational culture, 9) ownership, 10) safety culture, 11) time urgency, 12) centralization, 13) goal prioritization, 14) organizational learning, 15) problem identification, 16) resource allocation, 17) performance evaluation, 18) personnel selection, 19) technical knowledge, and 20) training. By utilizing the results of the analysis, a method to identify the dependence relationships between organizational factors is presented. A statistical independence test on the analysis results of the trip cases is adopted to reveal dependencies. This method is geared to the need to utilize the many kinds of data that have been obtained as the operating years of nuclear power plants increase, and more reliable dependence relations may be obtained by using these abundant data
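
    A minimal sketch of the kind of statistical independence test the abstract refers to, here a chi-square test of independence on a hypothetical contingency table of factor ratings (the factor pair, rating levels and counts are invented for illustration):

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical contingency table: two organizational factors rated
        # low / medium / high over a set of analyzed trip events (counts invented).
        table = np.array([[12,  5,  3],
                          [ 6, 14,  7],
                          [ 2,  6, 15]])

        chi2, p_value, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
        # A small p-value is evidence against independence, i.e. the dependence
        # between the two factors should be modelled rather than assumed away.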

  20. Multinomial Response Models, for Modeling and Determining Important Factors in Different Contraceptive Methods in Women

    Directory of Open Access Journals (Sweden)

    E Haji Nejad

    2001-06-01

    Full Text Available Different aspects of multinomial statistical modeling and its classifications have been studied so far. In this type of problem Y is a qualitative random variable with T possible states, which are considered as classes. The goal is prediction of Y based on a random vector X ∈ IR^m. Many methods for analyzing these problems have been considered. One modern and general method of classification is Classification and Regression Trees (CART). Another is recursive partitioning, which has a strong relationship with nonparametric regression. Classical discriminant analysis is a standard method for analyzing this type of data. Flexible discriminant analysis combines nonparametric regression and discriminant analysis, and classification using splines includes least squares regression and additive cubic splines. Neural networks are an advanced statistical method for analyzing these types of data. In this paper properties of multinomial logistic regression were investigated, and this method was used for modeling the factors affecting the choice of contraceptive methods in Ghom province for married women aged 15-49. The response variable has a tetranomial distribution. The levels of this variable are: nothing, pills, traditional, and a collection of other contraceptive methods. The significant independent variables were: place, age of woman, education, history of pregnancy and family size. Menstruation age and age at marriage were not statistically significant.
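
    A hedged sketch of a multinomial logit fit of the sort described above, using statsmodels on simulated stand-in data (categories, predictors, coefficients and sample size are assumptions for the example, not the survey data):

        import numpy as np
        import statsmodels.api as sm

        # Simulated stand-in for the survey (illustrative only): choice among four
        # categories (0 = none, 1 = pill, 2 = traditional, 3 = other) driven loosely
        # by the woman's age and years of education.
        rng = np.random.default_rng(0)
        n = 400
        age = rng.uniform(15, 49, n)
        education = rng.integers(0, 16, n)

        utilities = np.column_stack([
            np.zeros(n),                            # baseline category: none
            0.05 * age - 1.0,                       # pill
            0.02 * age + 0.03 * education - 1.5,    # traditional
            0.04 * education - 1.0,                 # other
        ])
        probs = np.exp(utilities) / np.exp(utilities).sum(axis=1, keepdims=True)
        choice = np.array([rng.choice(4, p=p) for p in probs])

        X = sm.add_constant(np.column_stack([age, education]))
        fit = sm.MNLogit(choice, X).fit(disp=False)
        print(fit.summary())                        # one coefficient set per non-baseline category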

  1. Regression analysis of nuclear plant capacity factors

    International Nuclear Information System (INIS)

    Stocks, K.J.; Faulkner, J.I.

    1980-07-01

    Operating data on all commercial nuclear power plants of the PWR, HWR, BWR and GCR types in the Western World are analysed statistically to determine whether the explanatory variables size, year of operation, vintage and reactor supplier are significant in accounting for the variation in capacity factor. The results are compared with a number of previous studies which analysed only United States reactors. The possibility of specification errors affecting the results is also examined. Although, in general, the variables considered are statistically significant, they explain only a small portion of the variation in the capacity factor. The equations thus obtained should certainly not be used to predict the lifetime performance of future large reactors

  2. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use. Vol. 2, Analysis Methods

    DEFF Research Database (Denmark)

    Jensen, Kurt

    This three-volume work presents a coherent description of the theoretical and practical aspects of coloured Petri nets (CP-nets). The second volume contains a detailed presentation of the analysis methods for CP-nets. They allow the modeller to investigate dynamic properties of CP-nets. The main ideas behind the analysis methods are described as well as the mathematics on which they are based and also how the methods are supported by computer tools. Some parts of the volume are theoretical while others are application oriented. The purpose of the volume is to teach the reader how to use the formal analysis methods, which does not require a deep understanding of the underlying mathematical theory.

  3. An Empirical Analysis of Job Satisfaction Factors.

    Science.gov (United States)

    1987-09-01

    have acknowledged the importance of factors which make the Air Force attractive to its members or, conversely, make other employees consider... Maslow's need hierarchy theory attempts to show that man has five basic categories of needs: physiological, safety, belongingness, esteem, and self... attained until lower-level basic needs are attained. This implies a sort of growth process where optimal job environments for given employees are

  4. Application of computer intensive data analysis methods to the analysis of digital images and spatial data

    DEFF Research Database (Denmark)

    Windfeld, Kristian

    1992-01-01

    Computer-intensive methods for data analysis in a traditional setting have developed rapidly in the last decade. The application and adaption of some of these methods to the analysis of multivariate digital images and spatial data are explored, evaluated and compared to well established classical ... into the projection pursuit is presented. Examples from remote sensing are given. The ACE algorithm for computing non-linear transformations for maximizing correlation is extended and applied to obtain a non-linear transformation that maximizes autocorrelation or 'signal' in a multivariate image. This is a generalization of the minimum/maximum autocorrelation factors (MAF's), which is a linear method. The non-linear method is compared to the linear method when analyzing a multivariate TM image from Greenland. The ACE method is shown to give a more detailed decomposition of the image than the MAF-transformation...

  5. Economic analysis of alternative LLW disposal methods

    International Nuclear Information System (INIS)

    Foutes, C.E.; Queenan, C.J. III

    1987-01-01

    The Environmental Protection Agency (EPA) has evaluated the costs and benefits of alternative disposal technologies as part of its program to develop generally applicable environmental standards for the land disposal of low-level radioactive waste (LLW). Costs, population health effects and Critical Population Group (CPG) exposures resulting from alternative waste treatment and disposal methods were evaluated both in absolute terms and also relative to a base case (current practice). Incremental costs of the standard included costs for packaging, processing, transportation, and burial of waste. Benefits are defined in terms of reductions in the general population health risk (expected fatal cancers and genetic effects) evaluated over 10,000 years. A cost-effectiveness ratio, defined as the incremental cost per avoided health effect, was calculated for each alternative standard. The cost-effectiveness analysis took into account a number of waste streams, hydrogeologic and climatic region settings, and waste treatment and disposal methods. This paper describes the alternatives considered and preliminary results of the cost-effectiveness analysis. 15 references, 7 figures, 3 tables

  6. Analysis of methods. [information systems evolution environment

    Science.gov (United States)

    Mayer, Richard J. (Editor); Ackley, Keith A.; Wells, M. Sue; Mayer, Paula S. D.; Blinn, Thomas M.; Decker, Louis P.; Toland, Joel A.; Crump, J. Wesley; Menzel, Christopher P.; Bodenmiller, Charles A.

    1991-01-01

    Information is one of an organization's most important assets. For this reason the development and maintenance of an integrated information system environment is one of the most important functions within a large organization. The Integrated Information Systems Evolution Environment (IISEE) project has as one of its primary goals a computerized solution to the difficulties involved in the development of integrated information systems. To develop such an environment a thorough understanding of the enterprise's information needs and requirements is of paramount importance. This document is the current release of the research performed by the Integrated Development Support Environment (IDSE) Research Team in support of the IISEE project. Research indicates that an integral part of any information system environment would be multiple modeling methods to support the management of the organization's information. Automated tool support for these methods is necessary to facilitate their use in an integrated environment. An integrated environment makes it necessary to maintain an integrated database which contains the different kinds of models developed under the various methodologies. In addition, to speed the process of development of models, a procedure or technique is needed to allow automatic translation from one methodology's representation to another while maintaining the integrity of both. The purpose for the analysis of the modeling methods included in this document is to examine these methods with the goal being to include them in an integrated development support environment. To accomplish this and to develop a method for allowing intra-methodology and inter-methodology model element reuse, a thorough understanding of multiple modeling methodologies is necessary. Currently the IDSE Research Team is investigating the family of Integrated Computer Aided Manufacturing (ICAM) DEFinition (IDEF) languages IDEF(0), IDEF(1), and IDEF(1x), as well as ENALIM, Entity

  7. A Factor Analysis of the BSRI and the PAQ.

    Science.gov (United States)

    Edwards, Teresa A.; And Others

    Factor analysis of the Bem Sex Role Inventory (BSRI) and the Personality Attributes Questionnaire (PAQ) was undertaken to study the independence of the masculine and feminine scales within each instrument. Both instruments were administered to undergraduate education majors. Analysis of primary first and second order factors of the BSRI indicated…

  8. Improvement of human reliability analysis method for PRA

    International Nuclear Information System (INIS)

    Tanji, Junichi; Fujimoto, Haruo

    2013-09-01

    The human reliability analysis (HRA) method needs to be refined by, for example, incorporating consideration of the operator's cognitive process into the evaluation of diagnosis errors and decision-making errors, as part of the development and improvement of methods used in probabilistic risk assessments (PRAs). JNES has developed an HRA method based on ATHENA which is suitable for handling the structured relationship among diagnosis errors, decision-making errors and the operator cognition process. This report summarizes outcomes obtained from the improvement of the HRA method, in which enhancements were made to evaluate how the degraded plant condition affects the operator's cognitive process and to evaluate human error probabilities (HEPs) corresponding to the contents of operator tasks. In addition, this report describes the results of case studies on representative accident sequences to investigate the applicability of the HRA method developed. HEPs of the same accident sequences are also estimated using the THERP method, the most popularly used HRA method, and the results obtained using the two methods are compared to depict their differences and the issues to be solved. Important conclusions obtained are as follows: (1) Improvement of the HRA method using an operator cognitive action model. Clarification of factors to be considered in the evaluation of human errors, incorporation of the degraded plant safety condition into HRA, and investigation of HEPs affected by the contents of operator tasks were made to improve the HRA method, which can integrate the operator cognitive action model into the ATHENA method. In addition, the procedures of the improved method were delineated in detail in the form of a flowchart. (2) Case studies and comparison with the results evaluated by the THERP method. Four operator actions modeled in the PRAs of representative BWR5 and 4-loop PWR plants were selected and evaluated as case studies. These cases were also evaluated using

  9. Analysis and optimization of the TWINKLE factoring device

    NARCIS (Netherlands)

    Lenstra, A.K.; Shamir, A.; Preneel, B.

    2000-01-01

    We describe an enhanced version of the TWINKLE factoring device and analyse to what extent it can be expected to speed up the sieving step of the Quadratic Sieve and Number Field Sieve factoring algorithms. The bottom line of our analysis is that the TWINKLE-assisted factorization of 768-bit

  10. Hierarchical Factoring Based On Image Analysis And Orthoblique Rotations.

    Science.gov (United States)

    Stankov, L

    1979-07-01

    The procedure for hierarchical factoring suggested by Schmid and Leiman (1957) is applied within the framework of image analysis and orthoblique rotational procedures. It is shown that this approach necessarily leads to correlated higher order factors. Also, one can obtain a smaller number of factors than produced by typical hierarchical procedures.

  11. Cleanup standards and pathways analysis methods

    International Nuclear Information System (INIS)

    Devgun, J.S.

    1993-01-01

    Remediation of a radioactively contaminated site requires that certain regulatory criteria be met before the site can be released for unrestricted future use. Since the ultimate objective of remediation is to protect the public health and safety, residual radioactivity levels remaining at a site after cleanup must be below certain preset limits or meet acceptable dose or risk criteria. Release of a decontaminated site requires proof that the radiological data obtained from the site meet the regulatory criteria for such a release. Typically release criteria consist of a composite of acceptance limits that depend on the radionuclides, the media in which they are present, and federal and local regulations. In recent years, the US Department of Energy (DOE) has developed a pathways analysis model to determine site-specific soil activity concentration guidelines for radionuclides that do not have established generic acceptance limits. The DOE pathways analysis computer code (developed by Argonne National Laboratory for the DOE) is called RESRAD (Gilbert et al. 1989). Similar efforts have been initiated by the US Nuclear Regulatory Commission (NRC) to develop and use dose-related criteria based on genetic pathways analyses rather than simplistic numerical limits on residual radioactivity. The focus of this paper is radionuclide contaminated soil. Cleanup standards are reviewed, pathways analysis methods are described, and an example is presented in which RESRAD was used to derive cleanup guidelines

  12. 3D analysis methods - Study and seminar

    International Nuclear Information System (INIS)

    Daaviittila, A.

    2003-10-01

    The first part of the report results from a study that was performed as a Nordic co-operation activity with active participation from Studsvik Scandpower and Westinghouse Atom in Sweden, and VTT in Finland. The purpose of the study was to identify and investigate the effects arising from using the 3D transient computer codes in BWR safety analysis, and their influence on the transient analysis methodology. One of the main questions involves the critical power ratio (CPR) calculation methodology. The present way, where the CPR calculation is performed with a separate hot channel calculation, can be artificially conservative. In the investigated cases, no dramatic minimum CPR effect coming from the 3D calculation is apparent. Some cases show some decrease in the transient change of minimum CPR with the 3D calculation, which confirms the general thinking that the 1D calculation is conservative. On the other hand, the observed effect on neutron flux behaviour is quite large. In a slower transient the 3D effect might be stronger. The second part of the report is a summary of a related seminar that was held on the 3D analysis methods. The seminar was sponsored by the Reactor Safety part (NKS-R) of the Nordic Nuclear Safety Research Programme (NKS). (au)

  13. Generalized Analysis of a Distribution Separation Method

    Directory of Open Access Journals (Sweden)

    Peng Zhang

    2016-04-01

    Full Text Available Separating two probability distributions from a mixture model that is made up of a combination of the two is essential to a wide range of applications. For example, in information retrieval (IR), there often exists a mixture distribution consisting of a relevance distribution that we need to estimate and an irrelevance distribution that we hope to get rid of. Recently, a distribution separation method (DSM) was proposed to approximate the relevance distribution, by separating a seed irrelevance distribution from the mixture distribution. It was successfully applied to an IR task, namely pseudo-relevance feedback (PRF), where the query expansion model is often a mixture term distribution. Although initially developed in the context of IR, DSM is indeed a general mathematical formulation for probability distribution separation. Thus, it is important to further generalize its basic analysis and to explore its connections to other related methods. In this article, we first extend DSM’s theoretical analysis, which was originally based on the Pearson correlation coefficient, to entropy-related measures, including the KL-divergence (Kullback–Leibler divergence), the symmetrized KL-divergence and the JS-divergence (Jensen–Shannon divergence). Second, we investigate the distribution separation idea in a well-known method, namely the mixture model feedback (MMF) approach. We prove that MMF also complies with the linear combination assumption, and then, DSM’s linear separation algorithm can largely simplify the EM algorithm in MMF. These theoretical analyses, as well as further empirical evaluation results, demonstrate the advantages of our DSM approach.
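
    A minimal sketch of the linear-combination idea behind DSM: if the mixture is assumed to satisfy mixture = λ·irrelevance + (1 − λ)·relevance, the relevance distribution can be recovered by subtraction once λ is known (the actual DSM also estimates λ, e.g. via the correlation-based analysis mentioned above; here λ and the toy distributions are assumptions for illustration).

        import numpy as np

        def separate(mixture, seed_irrelevance, lam):
            """Recover the relevance distribution from
            mixture = lam * seed_irrelevance + (1 - lam) * relevance,
            clipping small negative artefacts and renormalising."""
            relevance = (mixture - lam * seed_irrelevance) / (1.0 - lam)
            relevance = np.clip(relevance, 0.0, None)
            return relevance / relevance.sum()

        # Toy term distributions over a five-word vocabulary (illustrative only)
        relevance_true = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
        irrelevance = np.array([0.05, 0.10, 0.15, 0.30, 0.40])
        lam = 0.6
        mixture = lam * irrelevance + (1.0 - lam) * relevance_true

        print(separate(mixture, irrelevance, lam))     # recovers relevance_true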

  14. Method development for trace and ultratrace analysis

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    Method development, that is, selection of a mode of chromatography and the right column and mobile-phase combination for trace and ultratrace analysis, requires several main considerations. The method should be useful for resolving various trace and ultratrace components present in the sample. If the nature of these components is known, the choice of method may be straightforward, that is, a selection can be made from the following modes of HPLC: (1) adsorption chromatography; (2) normal-phase chromatography; (3) reversed-phase chromatography; (4) ion-pair chromatography; (5) ion-exchange chromatography; (6) ion chromatography. Unfortunately, the nature of all of the components is frequently unknown. However, several intelligent judgments can be made on the nature of impurities. This chapter deals with some basic approaches to mobile-phase selection and optimization. More detailed information may be found in basic texts. Techniques for separation of high-molecular-weight compounds (macromolecules) and chiral compounds may be found elsewhere. Mainly compounds with molecular weight lower than 2,000 are discussed here. 123 refs

  15. Phylogenetic analysis using parsimony and likelihood methods.

    Science.gov (United States)

    Yang, Z

    1996-02-01

    The assumptions underlying the maximum-parsimony (MP) method of phylogenetic tree reconstruction were intuitively examined by studying the way the method works. Computer simulations were performed to corroborate the intuitive examination. Parsimony appears to involve very stringent assumptions concerning the process of sequence evolution, such as constancy of substitution rates between nucleotides, constancy of rates across nucleotide sites, and equal branch lengths in the tree. For practical data analysis, the requirement of equal branch lengths means similar substitution rates among lineages (the existence of an approximate molecular clock), relatively long interior branches, and also few species in the data. However, a small amount of evolution is neither a necessary nor a sufficient requirement of the method. The difficulties involved in the application of current statistical estimation theory to tree reconstruction were discussed, and it was suggested that the approach proposed by Felsenstein (1981, J. Mol. Evol. 17: 368-376) for topology estimation, as well as its many variations and extensions, differs fundamentally from the maximum likelihood estimation of a conventional statistical parameter. Evidence was presented showing that the Felsenstein approach does not share the asymptotic efficiency of the maximum likelihood estimator of a statistical parameter. Computer simulations were performed to study the probability that MP recovers the true tree under a hierarchy of models of nucleotide substitution; its performance relative to the likelihood method was especially noted. The results appeared to support the intuitive examination of the assumptions underlying MP. When a simple model of nucleotide substitution was assumed to generate data, the probability that MP recovers the true topology could be as high as, or even higher than, that for the likelihood method. When the assumed model became more complex and realistic, e.g., when substitution rates were

  16. Modification and analysis of engineering hot spot factor of HFETR

    International Nuclear Information System (INIS)

    Hu Yuechun; Deng Caiyu; Li Haitao; Xu Taozhong; Mo Zhengyu

    2014-01-01

    This paper presents the modification and analysis of the engineering hot spot factors of HFETR. The new factors are applied in the fuel temperature analysis and in estimating the safety allowable operating power of HFETR. The results show that the maximum cladding temperature of the fuel is lower when the new factors are used, and the safety allowable operating power of HFETR is higher, thus improving the economic efficiency of HFETR. (authors)

  17. A replication of a factor analysis of motivations for trapping

    Science.gov (United States)

    Schroeder, Susan; Fulton, David C.

    2015-01-01

    Using a 2013 sample of Minnesota trappers, we employed confirmatory factor analysis to replicate an exploratory factor analysis of trapping motivations conducted by Daigle, Muth, Zwick, and Glass (1998). We employed the same 25 items used by Daigle et al. and tested the same five-factor structure using a recent sample of Minnesota trappers. We also compared motivations in our sample to those reported by Daigle et al.

  18. Landslide susceptibility analysis using Probabilistic Certainty Factor ...

    Indian Academy of Sciences (India)

    done using many different methods and techniques. A detailed outline of ... of depressions where water is accumulated, especially when the ... The two decision rules that must be satisfied for a good landslide ... making the susceptibility zonation relative. This is ... tional Conference on Imaging Systems and Techniques.

  19. Lactose intolerance : analysis of underlying factors

    NARCIS (Netherlands)

    Vonk, RJ; Priebe, MG; Koetse, HA; Stellaard, F; Lenoir-Wijnkoop; Antoine, JM; Zhong, Y; Huang, CY

    Background We studied the degree of lactose digestion and orocecal transit time (OCTT) as possible causes for the variability of symptoms of lactose intolerance (LI) in a sample of a population with genetically determined low lactase activity. Methods Lactose digestion index (LDI) was measured by

  20. Analysis of risk assessment methods for goods trucking

    Directory of Open Access Journals (Sweden)

    Yunyazova A.O.

    2018-04-01

    Full Text Available The article considers models of risk assessment that can be applied to cargo transportation for forecasting possible damage in the form of financial and material costs, in order to reduce the probability of their occurrence. The analysis of risk by the «Criterion. Event. Rule» method is presented. This method is based on collecting information in various ways, assigning an assessment to the identified risks, ranking them and formulating a report on the analysis. It can be carried out as a fully manual method of collecting information and performing calculations, or it can be brought to an automated level from data collection to the delivery of finished results (although in that case some nuances that could significantly influence the outcome of the analysis may be ignored). The expert method is of particular importance, since it relies directly on human experience; in this case, a special role is played by the human factor. The collection of information and the assessments assigned to risk groups depend on the extent to which the experts agree on the issue. The smaller the fluctuations in the values of the experts' estimates, the more accurate and optimal the results will be.

  1. External event analysis methods for NUREG-1150

    International Nuclear Information System (INIS)

    Bohn, M.P.; Lambright, J.A.

    1989-01-01

    The US Nuclear Regulatory Commission is sponsoring probabilistic risk assessments of six operating commercial nuclear power plants as part of a major update of the understanding of risk as provided by the original WASH-1400 risk assessments. In contrast to the WASH-1400 studies, at least two of the NUREG-1150 risk assessments will include an analysis of risks due to earthquakes, fires, floods, etc., which are collectively known as external events. This paper summarizes the methods to be used in the external event analysis for NUREG-1150 and the results obtained to date. The two plants for which external events are being considered are Surry and Peach Bottom, a PWR and a BWR respectively. The external event analyses (through core damage frequency calculations) were completed in June 1989, with final documentation available in September. In contrast to most past external event analyses, wherein rudimentary systems models were developed reflecting each external event under consideration, the simplified NUREG-1150 analyses are based on the availability of the full internal event PRA systems models (event trees and fault trees) and make use of extensive computer-aided screening to reduce them to sequence cut sets important to each external event. This provides two major advantages in that consistency and scrutability with respect to the internal event analysis is achieved, and the full gamut of random and test/maintenance unavailabilities are automatically included, while only those probabilistically important survive the screening process. Thus, full benefit of the internal event analysis is obtained by performing the internal and external event analyses sequentially

  2. Human factor analysis and preventive countermeasures in nuclear power plant

    International Nuclear Information System (INIS)

    Li Ye

    2010-01-01

    Based on human error analysis theory and the characteristics of maintenance in a nuclear power plant, the human factors of maintenance in an NPP are divided into three different areas: human, technology, and organization. The human area covers individual factors, including psychological factors, physiological characteristics, health status, level of knowledge and interpersonal skills; the technical factors include technology, equipment, tools, working order, etc.; the organizational factors include management, information exchange, education, working environment, team building, leadership management, etc. The analysis found that organizational factors can directly or indirectly affect the behavior of staff and the technical factors, and are the most basic human error factors. On this basis, countermeasures for reducing human error in the nuclear power plant are proposed. (authors)

  3. Confirmatory Factor Analysis of the ISB - Burnout Syndrome Inventory

    Directory of Open Access Journals (Sweden)

    Ana Maria T. Benevides-Pereira

    2017-05-01

    Full Text Available Aim Burnout is a dysfunctional reaction to chronic occupational stress. The present study analyses the psychometric qualities of the Burnout Syndrome Inventory (ISB) through Confirmatory Factor Analysis (CFA). Method Empirical study in a multi-centre and multi-occupational sample (n = 701) using the ISB. Part I assesses antecedent factors: Positive Organizational Conditions (PC) and Negative Organizational Conditions (NC). Part II assesses the syndrome: Emotional Exhaustion (EE), Dehumanization (DE), Emotional Distancing (ED) and Personal Accomplishment (PA). Results The highest means occurred in the positive scales PC (M = 23.29, SD = 5.89) and PA (M = 14.84, SD = 4.71). Negative conditions showed the greatest variability (SD = 6.03). Reliability indexes were reasonable, with the lowest at .77 for DE and the highest at .91 for PA. The CFA revealed RMSEA = .057 and CFI = .90, with all scale regressions showing significant values (β = .73 to β = .92). Conclusion The ISB proved a plausible instrument to evaluate burnout. The two parts maintained the initial model and confirmed the theoretical presupposition. This instrument makes possible a more comprehensive idea of the labour context, and either part may be used separately according to the needs and aims of the assessor.

  4. Spectral Analysis Methods of Social Networks

    Directory of Open Access Journals (Sweden)

    P. G. Klyucharev

    2017-01-01

    Full Text Available Online social networks (such as Facebook, Twitter, VKontakte, etc.), being an important channel for disseminating information, are often used to influence the social consciousness for various purposes - from advertising products or services to full-scale information war - thereby making them a very relevant object of research. The paper reviews analysis methods for social networks (primarily online ones) based on the spectral theory of graphs. Such methods use the spectrum of the social graph, i.e. the set of eigenvalues of its adjacency matrix, and also the eigenvectors of the adjacency matrix. Measures of centrality are described (in particular, eigenvector centrality and PageRank), which reflect the degree of impact that one or another user of the social network has. The very popular PageRank measure uses, as the centrality of the graph vertices, the final probabilities of a Markov chain whose matrix of transition probabilities is calculated on the basis of the adjacency matrix of the social graph; the vector of final probabilities is an eigenvector of the matrix of transition probabilities. A method of dividing the graph vertices into two groups is presented, based on maximizing the network modularity by computing the leading eigenvector of the modularity matrix. A method for detecting bots is considered, based on a non-randomness measure of the graph computed from the spectral coordinates of the vertices - the sets of eigenvector components of the adjacency matrix of the social graph. In general, there are a number of algorithms for analysing social networks based on the spectral theory of graphs. These algorithms show very good results, but their disadvantage is the relatively high (albeit polynomial) computational complexity for large graphs. At the same time, it is obvious that the practical application capacity of spectral graph theory methods is still underestimated, and it may be used as a basis to develop new methods. The work
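
    A minimal power-iteration sketch of the PageRank computation described above (the toy adjacency matrix and damping factor are illustrative assumptions; dangling nodes are handled only crudely here):

        import numpy as np

        def pagerank(adjacency, damping=0.85, tol=1e-10):
            """PageRank by power iteration; adjacency[i, j] = 1 means an edge j -> i."""
            n = adjacency.shape[0]
            out_degree = adjacency.sum(axis=0)
            out_degree[out_degree == 0] = 1.0          # crude guard against dangling nodes
            transition = adjacency / out_degree        # column-stochastic transition matrix
            rank = np.full(n, 1.0 / n)
            while True:
                new_rank = damping * transition @ rank + (1.0 - damping) / n
                if np.abs(new_rank - rank).sum() < tol:
                    return new_rank
                rank = new_rank

        # Toy directed "follows" graph with four users (illustrative only)
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 0, 1],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        print(pagerank(A))                             # final probabilities = centrality scores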

  5. Investigating product development strategy in beverage industry using factor analysis

    Directory of Open Access Journals (Sweden)

    Naser Azad

    2013-03-01

    Full Text Available Selecting a product development strategy that is associated with the company's current service or product innovation, based on customers' needs and a changing environment, plays an important role in increasing demand, market share, sales and profits. Therefore, it is important to extract effective variables associated with product development to improve the performance measurement of firms. This paper investigates important factors influencing product development strategies using factor analysis. The proposed model investigates 36 factors and, using factor analysis, we extract the six most influential factors: information sharing, intelligence information, exposure strategy, differentiation, research and development strategy and market survey. The first strategy, partnership, includes sub-factors such as product development partnership, partnership with foreign firms, customers' perception of competitors' products, customer involvement in product development, inter-agency coordination, a customer-oriented approach to innovation and transmission of product development change, where inter-agency coordination is considered the most important factor. Internal strengths are the most influential factors impacting the second strategy, intelligence information. The third factor, the introducing strategy, includes four sub-criteria, of which consumer buying behavior is the most influential. Differentiation is the next important factor, with five components, where knowledge and expertise in product innovation is the most important one. The research and development strategy has four sub-criteria, where reducing the product development cycle is the most influential factor; finally, the market survey strategy is the last important factor, with three sub-factors, where finding new markets plays the most important role.

  6. Optimization of cooling tower performance analysis using Taguchi method

    Directory of Open Access Journals (Sweden)

    Ramkumar Ramakrishnan

    2013-01-01

    Full Text Available This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate and water temperature. Signal-to-noise ratio (S/N) analysis, analysis of variance (ANOVA) and regression were carried out in order to determine the effects of the process parameters on cooling tower effectiveness and to identify optimal factor settings. Finally, confirmation tests verified the reliability of the Taguchi method for optimization of counter-flow cooling tower performance with sufficient accuracy.
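
    A small sketch of the Taguchi signal-to-noise calculation implied above, using the common larger-the-better form S/N = -10·log10(mean(1/y^2)) on hypothetical trial data; the factor-level averaging at the end is how optimal settings are typically read off an orthogonal array.

        import numpy as np

        def sn_larger_is_better(responses):
            """Taguchi larger-the-better signal-to-noise ratio, in dB."""
            y = np.asarray(responses, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        # Hypothetical effectiveness values from repeated runs of one L27 trial
        print(sn_larger_is_better([0.71, 0.74, 0.69]))

        # Main effect of one factor: average S/N at each of its levels across trials
        # (the level column and S/N values below are invented for illustration)
        levels = np.array([1, 2, 3, 1, 2, 3, 1, 2, 3])
        sn = np.array([10.1, 9.8, 9.5, 10.3, 9.9, 9.4, 10.2, 9.7, 9.6])
        for lvl in (1, 2, 3):
            print(lvl, sn[levels == lvl].mean())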

  7. Method and tool for network vulnerability analysis

    Science.gov (United States)

    Swiler, Laura Painton [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM

    2006-03-14

    A computer system analysis tool and method that will allow for qualitative and quantitative assessment of security attributes and vulnerabilities in systems including computer networks. The invention is based on generation of attack graphs wherein each node represents a possible attack state and each edge represents a change in state caused by a single action taken by an attacker or unwitting assistant. Edges are weighted using metrics such as attacker effort, likelihood of attack success, or time to succeed. Generation of an attack graph is accomplished by matching information about attack requirements (specified in "attack templates") to information about computer system configuration (contained in a configuration file that can be updated to reflect system changes occurring during the course of an attack) and assumed attacker capabilities (reflected in "attacker profiles"). High risk attack paths, which correspond to those considered suited to application of attack countermeasures given limited resources for applying countermeasures, are identified by finding "epsilon optimal paths."

  8. Application of the AHP method to analyze the significance of the factors affecting road traffic safety

    Directory of Open Access Journals (Sweden)

    Justyna SORDYL

    2015-06-01

    Full Text Available Over the past twenty years, the number of vehicles registered in Poland has grown rapidly, while only a relatively small increase in the length of the road network has been observed. The limited capacity of the available infrastructure leads to significant congestion and to an increased probability of road accidents. The overall level of road safety depends on many factors - the behavior of road users, infrastructure solutions and the development of automotive technology - so a detailed assessment of the importance of the individual elements determining road safety is difficult. The starting point is to organize the factors by grouping them into categories which are components of the DVE system (driver - vehicle - environment). In this work, the analytic hierarchy process (AHP) method was proposed to analyze the importance of individual factors affecting road safety. It is one of the multi-criteria methods which allows a hierarchical analysis of the decision process by means of experts' opinions. Use of the AHP method enabled us to evaluate and rank the factors affecting road safety. This work attempts to link statistical data and surveys in the significance analysis of the elements determining road safety.
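
    A brief sketch of the AHP weighting step on a hypothetical pairwise comparison matrix (the matrix entries and the grouping into driver/vehicle/environment are assumptions for the example); weights come from the principal eigenvector and the consistency ratio checks the judgments.

        import numpy as np

        # Hypothetical pairwise comparisons for the three DVE factor groups
        # (driver, vehicle, environment) on Saaty's 1-9 scale.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                     # priority vector (factor weights)

        n = A.shape[0]
        lambda_max = eigvals.real[k]
        ci = (lambda_max - n) / (n - 1)              # consistency index
        ri = 0.58                                    # Saaty's random index for n = 3
        print(weights, ci / ri)                      # consistency ratio should be < 0.1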

  9. Multivariate factor analysis of Girgentana goat milk composition

    Directory of Open Access Journals (Sweden)

    Pietro Giaccone

    2010-01-01

    Full Text Available The interpretation of the several variables that contribute to defining milk quality is difficult due to the high degree of correlation among them. In this case, one of the best methods of statistical processing is factor analysis, which belongs to the multivariate group; for our study this particular statistical approach was employed. A total of 1485 individual goat milk samples from 117 Girgentana goats were collected fortnightly from January to July, and analysed for physical and chemical composition, and clotting properties. Milk pH and titratable acidity were within the normal range for fresh goat milk. Morning milk yield was 704 ± 323 g, with 3.93 ± 1.23% and 3.48 ± 0.38% for fat and protein percentages, respectively. The milk urea content was 43.70 ± 8.28 mg/dl. The clotting ability of Girgentana milk was quite good, with a renneting time equal to 16.96 ± 3.08 minutes, a rate of curd formation of 2.01 ± 1.63 minutes and a curd firmness of 25.08 ± 7.67 millimetres. Factor analysis was performed by applying orthogonal axis rotation (rotation type VARIMAX); the analysis grouped the milk components into three latent or common factors. The first, which explained 51.2% of the total covariance, was defined as “slow milks”, because it was linked to r and pH. The second latent factor, which explained 36.2% of the total covariance, was defined as “milk yield”, because it is positively correlated to the morning milk yield and to the urea content, whilst negatively correlated to the fat percentage. The third latent factor, which explained 12.6% of the total covariance, was defined as “curd firmness”, because it is linked to protein percentage, a30 and titratable acidity. With the aim of evaluating the influence of environmental effects (stage of kidding, parity and type of kidding), factor scores were analysed with the mixed linear model. Results showed significant effects of the season of

  10. Housing price forecastability: A factor analysis

    DEFF Research Database (Denmark)

    Møller, Stig Vinther; Bork, Lasse

    2017-01-01

    We examine U.S. housing price forecastability using principal component analysis (PCA), partial least squares (PLS), and sparse PLS (SPLS). We incorporate information from a large panel of 128 economic time series and show that macroeconomic fundamentals have strong predictive power for future movements in housing prices. We find that (S)PLS models systematically dominate PCA models. (S)PLS models also generate significant out-of-sample predictive power over and above the predictive power contained by the price-rent ratio, autoregressive benchmarks, and regression models based on small datasets.
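
    A hedged sketch of a PCA diffusion-index forecast of the kind compared in the paper, on synthetic stand-in data (the panel size, factor structure and horizon are assumptions for the example; a PLS variant could substitute sklearn.cross_decomposition.PLSRegression for the PCA-plus-regression step).

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        # Synthetic stand-in for a large macro panel: 128 series over 200 quarters;
        # the target mimics next-quarter housing price growth (all values invented).
        rng = np.random.default_rng(0)
        T, N = 200, 128
        common = rng.normal(size=(T, 3))                  # latent macro factors
        panel = common @ rng.normal(size=(3, N)) + rng.normal(scale=0.5, size=(T, N))
        target = 0.8 * common[:, 0] - 0.5 * common[:, 2] + rng.normal(scale=0.3, size=T)

        # Extract principal-component factors, then regress the one-step-ahead
        # target on them (a simple diffusion-index forecasting scheme).
        factors = PCA(n_components=3).fit_transform(panel)
        model = LinearRegression().fit(factors[:-1], target[1:])
        print(model.predict(factors[-1:]))                # forecast for the next quarter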

  11. Human factors estimation method in nuclear power plants

    International Nuclear Information System (INIS)

    Takano, Kenichi; Yoshino, Kenji; Nagasaka, Akihiko

    1987-01-01

    To improve NPS reliability it is necessary to prevent human errors by operators in the control room. In particular, time errors and omission errors are often caused by excessive mental workload. Therefore, in order to decrease such human errors, not only must equipment and consoles be planned with a proper level of mental workload in mind, but excessive mental workload must also be brought down by training, etc. This paper presents measurement techniques for mental workload based on physiological information, and the relation between error rate and mental workload, on the basis of experiments with various modeled tasks. The following results are obtained. (1) The TSF, an indicator of mental workload, is well correlated to the subsidiary task reaction time; therefore the TSF can be estimated by subsidiary tasks if the task is loaded instantaneously together with the main task. (2) The relation between the TSF and the GSR pulse rate has a correlation factor of 0.81, except in the case of a parallel processing task, so mental workload can be evaluated by measuring the GSR pulse rate if the task is processed by a single channel. When using GSR, however, the atmospheric conditions must be kept constant and the arousal level must be adequate. (3) Human error increases greatly when the TSF exceeds about 60%; these values almost agree with the tolerance limit of the TTS methods. (author)

  12. Preliminary hazard analysis using sequence tree method

    International Nuclear Information System (INIS)

    Huang Huiwen; Shih Chunkuan; Hung Hungchih; Chen Minghuei; Yih Swu; Lin Jiinming

    2007-01-01

    A system-level PHA using the sequence tree method was developed to perform the SSA of safety-related digital I and C systems. The conventional PHA is a brainstorming session among experts on various portions of the system to identify hazards through discussions. However, the conventional PHA is not a systematic technique: the analysis results strongly depend on the experts' subjective opinions and the analysis quality cannot be appropriately controlled. Therefore, this research developed a system-level sequence-tree-based PHA, which can clarify the relationships among the major digital I and C systems. Two major phases are included in this sequence-tree-based technique. The first phase uses a table to analyze each event in SAR Chapter 15 for a specific safety-related I and C system, such as the RPS. The second phase uses a sequence tree to recognize which I and C systems are involved in the event, how the safety-related systems work, and how the backup systems can be activated to mitigate the consequences if the primary safety systems fail. In the sequence tree, the defense-in-depth echelons, including the Control echelon, Reactor trip echelon, ESFAS echelon, and Indication and display echelon, are arranged to construct the sequence tree structure. All the related I and C systems, including the digital systems and the analog back-up systems, are allocated to their specific echelons. By this system-centric sequence-tree-based analysis, not only can preliminary hazards be identified systematically, but the vulnerability of the nuclear power plant can also be recognized. Therefore, an effective simplified D3 evaluation can be performed as well. (author)

  13. CARBON SEQUESTRATION: A METHODS COMPARATIVE ANALYSIS

    International Nuclear Information System (INIS)

    Christopher J. Koroneos; Dimitrios C. Rovas

    2008-01-01

    All human activities are related to energy consumption. Energy requirements will continue to rise, due to modern lifestyles and the growth of developing countries. Most of the energy demand is met by fossil fuels. Fossil fuel combustion has negative environmental impacts, with CO 2 production being the dominant one. Fulfillment of the Kyoto protocol criteria requires the minimization of CO 2 emissions, so the management of CO 2 emissions is an urgent matter. The use of appliances with low energy consumption and the adoption of an energy policy that prevents unnecessary energy use can lead to a reduction of carbon emissions. A different route is the introduction of 'clean' energy sources, such as renewable energy sources. Last but not least, the development of carbon sequestration methods is a promising technique with large future potential. The objective of this work is the analysis and comparison of different carbon sequestration and deposit methods. Ocean deposits, land ecosystem deposits, geological formation deposits, and radical biological and chemical approaches are analyzed.

  14. Efficiency limit factor analysis for the Francis-99 hydraulic turbine

    Science.gov (United States)

    Zeng, Y.; Zhang, L. X.; Guo, J. P.; Guo, Y. K.; Pan, Q. L.; Qian, J.

    2017-01-01

    Energy loss in a hydraulic turbine is the most direct factor affecting the efficiency of the turbine. Based on the theory of inner energy loss analysis of hydraulic turbines, combined with the measurement data of the Francis-99, this paper calculates the characteristic parameters of the inner energy loss of the hydraulic turbine and establishes a calculation model of the hydraulic turbine power. Taking the start-up test conditions given by Francis-99 as a case, the transient characteristics of the inner energy of the hydraulic turbine and its transformation law are investigated. Further, by analyzing mechanical friction in the hydraulic turbine, we consider that the main component of the mechanical friction loss is the rotational friction loss between the rotating runner and the water body, defined here as the inner mechanical friction loss. An approximate calculation method for the inner mechanical friction loss is given. Our purpose is to explore methods and ways of increasing the transformation efficiency of the water flow by means of analyzing the energy losses in the hydraulic turbine.

  15. Investigation and analysis of aircrew ametropia and related factors

    Directory of Open Access Journals (Sweden)

    Li-Juan Zheng

    2014-10-01

    Full Text Available AIM: To investigate the refractive distribution and analyze risk factors for aircrew ametropia. METHODS: 49 cases of ametropia among 1031 aircrew members examined between May 2013 and May 2014 were reviewed. Refraction composition, age, aircraft type, position, flight time, and the subjective assessment of the aircrew were analyzed and compared. RESULTS: Of the 49 cases, 43 (88%) were myopia and 6 (12%) were hypermetropia. Detection rates were higher in aircrew aged over 50 years and with more than 3000 h of flight time. Detection rates were lower in aircrew with marked subjective symptoms, in fighter aircrew, and in those with good eye-use habits. CONCLUSION: The incidence of myopia is higher in aircrew over 50 years of age and with long flight time than in fighter pilots and those with good eye-use habits. Attention should be paid to the increasing late-onset myopia of aviators, and to aircrew eye-use habits, work intensity, and time spent using the eyes.

  16. Confirmatory Factor Analysis Alternative: Free, Accessible CBID Software.

    Science.gov (United States)

    Bott, Marjorie; Karanevich, Alex G; Garrard, Lili; Price, Larry R; Mudaranthakam, Dinesh Pal; Gajewski, Byron

    2018-02-01

    New software that performs Classical and Bayesian Instrument Development (CBID) is reported that seamlessly integrates expert (content validity) and participant data (construct validity) to produce entire reliability estimates with smaller sample requirements. The free CBID software can be accessed through a website and used by clinical investigators in new instrument development. Demonstrations are presented of the three approaches using the CBID software: (a) traditional confirmatory factor analysis (CFA), (b) Bayesian CFA using flat uninformative prior, and (c) Bayesian CFA using content expert data (informative prior). Outcomes of usability testing demonstrate the need to make the user-friendly, free CBID software available to interdisciplinary researchers. CBID has the potential to be a new and expeditious method for instrument development, adding to our current measurement toolbox. This allows for the development of new instruments for measuring determinants of health in smaller diverse populations or populations of rare diseases.

  17. Recent improvement of the resonance analysis methods

    International Nuclear Information System (INIS)

    Sirakov, I.; Lukyanov, A.

    2000-01-01

    By the use of a two-step method called Combined, the R-matrix Wigner-Eisenbud representation in the resonance reaction theory has been converted into other equivalent representations (parameterizations) of the collision matrix with poles in the E domain. Two of them, called the Capture Elimination (CE) and Reaction Elimination (RE) representations respectively, have energy-independent parameters and are both rigorous and applicable. The CE representation is essentially a generalization of the Reich-Moore (RM) formalism. The RE representation, in turn, offers some distinct advantages when analyzing fissile nuclei. The latter does not require any approximation for the capture channels and does not need any assumption about the number of fission channels, in contrast to the RM representation. Unlike the RM parameters, the RE ones are uniquely determined for applications in the resonance analysis. When given in the RE representation, neutron cross sections of fissile nuclei in the resolved resonance region are presented through simple scalar expressions without the need for matrix inversion. Various computer codes have been developed to demonstrate the viability of the new method. The RM parameters of the fissile nuclei have been converted into equivalent RE parameters implying the RM assumptions (REFINE code). Conversely, the RE parameters have been converted into corresponding RM parameters when one fission channel is present and the RM parameter set is unique, e.g. Pu-239, J = 1 (REVERSE code). To further enhance the flexibility of the proposed method, the obtained RE parameters have been converted into equivalent Generalized Pole parameters (REFILE code), which are parameters of the rigorous pole expansion of the collision matrix in the √E domain. Equivalent sets of RM, RE and GP parameters of 239 Pu are given as an example. It has been pointed out that all the advantages of the newly proposed representation can be implemented through an independent evaluation of the RE resonance

  18. Factoring handedness data: I. Item analysis.

    Science.gov (United States)

    Messinger, H B; Messinger, M I

    1995-12-01

    Recently in this journal Peters and Murphy challenged the validity of factor analyses done on bimodal handedness data, suggesting instead that right- and left-handers be studied separately. But bimodality may be avoidable if attention is paid to Oldfield's questionnaire format and instructions for the subjects. Two characteristics appear crucial: a two-column LEFT-RIGHT format for the body of the instrument and what we call Oldfield's Admonition: not to indicate strong preference for a handedness item, such as writing, unless "... the preference is so strong that you would never try to use the other hand unless absolutely forced to...". Attaining unimodality of an item distribution would seem to overcome the objections of Peters and Murphy. In a 1984 survey in Boston we used Oldfield's ten-item questionnaire exactly as published. This produced unimodal item distributions. With reflection of the five-point item scale and a logarithmic transformation, we achieved a degree of normalization for the items. Two surveys elsewhere, based on Oldfield's 20-item list but with changes in the questionnaire format and the instructions, yielded markedly different item distributions with peaks at each extreme and sometimes in the middle as well.

  19. Sparse multivariate factor analysis regression models and its applications to integrative genomics analysis.

    Science.gov (United States)

    Zhou, Yan; Wang, Pei; Wang, Xianlong; Zhu, Ji; Song, Peter X-K

    2017-01-01

    The multivariate regression model is a useful tool to explore complex associations between two kinds of molecular markers, which enables the understanding of the biological pathways underlying disease etiology. For a set of correlated response variables, accounting for such dependency can increase statistical power. Motivated by integrative genomic data analyses, we propose a new methodology-sparse multivariate factor analysis regression model (smFARM), in which correlations of response variables are assumed to follow a factor analysis model with latent factors. This proposed method not only allows us to address the challenge that the number of association parameters is larger than the sample size, but also to adjust for unobserved genetic and/or nongenetic factors that potentially conceal the underlying response-predictor associations. The proposed smFARM is implemented by the EM algorithm and the blockwise coordinate descent algorithm. The proposed methodology is evaluated and compared to the existing methods through extensive simulation studies. Our results show that accounting for latent factors through the proposed smFARM can improve sensitivity of signal detection and accuracy of sparse association map estimation. We illustrate smFARM by two integrative genomics analysis examples, a breast cancer dataset, and an ovarian cancer dataset, to assess the relationship between DNA copy numbers and gene expression arrays to understand genetic regulatory patterns relevant to the disease. We identify two trans-hub regions: one in cytoband 17q12 whose amplification influences the RNA expression levels of important breast cancer genes, and the other in cytoband 9q21.32-33, which is associated with chemoresistance in ovarian cancer. © 2016 WILEY PERIODICALS, INC.

  20. Dancoff factors with partial absorption in cluster geometry by the direct method

    International Nuclear Information System (INIS)

    Rodrigues, Leticia Jenisch; Leite, Sergio de Queiroz Bogado; Vilhena, Marco Tullio de; Bodmann, Bardo Ernest Josef

    2007-01-01

    Accurate analysis of resonance absorption in heterogeneous systems is essential in problems like criticality, breeding ratios and fuel depletion calculations. In compact arrays of fuel rods, resonance absorption is strongly affected by the Dancoff factor, defined in this study as the probability that a neutron emitted from the surface of a fuel element enters another fuel element without any collision in the moderator or cladding. In the original WIMS code, Black Dancoff factors were computed in cluster geometry by the collision probability method, for each one of the symmetrically distinct fuel pin positions in the cell. Recent improvements to the code include a new routine (PIJM) that was created to incorporate a more efficient scheme for computing the collision matrices. In that routine, each system region is considered individually, minimizing convergence problems and reducing the number of neutron track lines required in the in-plane integrations of the Bickley functions for any given accuracy. In the present work, PIJM is extended to compute Grey Dancoff factors for two-dimensional cylindrical cells in cluster geometry. The effectiveness of the method is assessed by comparing Grey Dancoff factors as calculated by PIJM with those available in the literature from the Monte Carlo method, for the irregular geometry of the Canadian CANDU37 assembly. Dancoff factors at five symmetrically distinct fuel pin positions are found to be in very good agreement with the literature results (author)

  1. Meta-Analysis of Comparing Personal and Environmental Factors Effective in Addiction Relapse (Iran, 2004 -2012

    Directory of Open Access Journals (Sweden)

    s Safari

    2014-12-01

    Full Text Available Objective: As a meta-analysis, this study aimed to integrate different studies and investigate the impact of individual and environmental factors on the relapse of addiction in people who had quit. Method: This meta-analysis used the Hunter and Schmidt approach. For this purpose, 28 out of 42 studies with acceptable methodologies were selected, and the meta-analysis was conducted on them. A meta-analysis checklist was the research instrument. Using the summaries of the study results, the researchers manually calculated the effect sizes and interpreted them based on the meta-analysis approach and Cohen's table. Findings: Results revealed that the effect size of environmental factors on addiction relapse was 0.64, while that of individual factors was 0.41. Conclusion: According to Cohen's table, the effect sizes of individual and environmental factors on addiction relapse are evaluated as moderate and high, respectively.

  2. The use of human factors methods to identify and mitigate safety issues in radiation therapy

    International Nuclear Information System (INIS)

    Chan, Alvita J.; Islam, Mohammad K.; Rosewall, Tara; Jaffray, David A.; Easty, Anthony C.; Cafazzo, Joseph A.

    2010-01-01

    Background and purpose: New radiation therapy technologies can enhance the quality of treatment and reduce error. However, the treatment process has become more complex, and radiation dose is not always delivered as intended. Using human factors methods, a radiotherapy treatment delivery process was evaluated, and a redesign was undertaken to determine the effect on system safety. Material and methods: An ethnographic field study and workflow analysis was conducted to identify human factors issues of the treatment delivery process. To address specific issues, components of the user interface were redesigned through a user-centered approach. Sixteen radiation therapy students then evaluated the redesigned system experimentally through a usability test to determine its effectiveness in mitigating use errors. Results: According to findings from the usability test, the redesigned system successfully reduced the error rates of two common errors (p < .04 and p < .01). It also improved the mean task completion time by 5.5% (p < .02) and achieved a higher level of user satisfaction. Conclusions: These findings demonstrated the importance and benefits of applying human factors methods in the design of radiation therapy systems. Many other opportunities still exist to improve patient safety in this area using human factors methods.

  3. Dynamic factor analysis in the frequency domain: causal modeling of multivariate psychophysiological time series

    NARCIS (Netherlands)

    Molenaar, P.C.M.

    1987-01-01

    Outlines a frequency domain analysis of the dynamic factor model and proposes a solution to the problem of constructing a causal filter of lagged factor loadings. The method is illustrated with applications to simulated and real multivariate time series. The latter applications involve topographic

  4. Using exploratory factor analysis in personality research: Best-practice recommendations

    Directory of Open Access Journals (Sweden)

    Sumaya Laher

    2010-11-01

    Research purpose: This article presents more objective methods to determine the number of factors, most notably parallel analysis and Velicer’s minimum average partial (MAP. The benefits of rotation are also discussed. The article argues for more consistent use of Procrustes rotation and congruence coefficients in factor analytic studies. Motivation for the study: Exploratory factor analysis is often criticised for not being rigorous and objective enough in terms of the methods used to determine the number of factors, the rotations to be used and ultimately the validity of the factor structure. Research design, approach and method: The article adopts a theoretical stance to discuss the best-practice recommendations for factor analytic research in the field of psychology. Following this, an example located within personality assessment and using the NEO-PI-R specifically is presented. A total of 425 students at the University of the Witwatersrand completed the NEO-PI-R. These responses were subjected to a principal components analysis using varimax rotation. The rotated solution was subjected to a Procrustes rotation with Costa and McCrae’s (1992 matrix as the target matrix. Congruence coefficients were also computed. Main findings: The example indicates the use of the methods recommended in the article and demonstrates an objective way of determining the number of factors. It also provides an example of Procrustes rotation with coefficients of agreement as an indication of how factor analytic results may be presented more rigorously in local research. Practical/managerial implications: It is hoped that the recommendations in this article will have best-practice implications for both researchers and practitioners in the field who employ factor analysis regularly. Contribution/value-add: This article will prove useful to all researchers employing factor analysis and has the potential to set the trend for better use of factor analysis in the South African context.
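
    The parallel analysis recommended above has a small computational core: retain only those components whose observed eigenvalues exceed the eigenvalues obtained from random data of the same dimensions. The sketch below is a minimal NumPy illustration under that description; the synthetic data, the number of replicates, and the use of the mean (rather than the 95th percentile) threshold are assumptions for demonstration.

      # Minimal Horn's parallel analysis sketch: compare observed eigenvalues with
      # eigenvalues of uncorrelated random data of the same size.
      import numpy as np

      def parallel_analysis(X, n_reps=100, seed=0):
          rng = np.random.default_rng(seed)
          n, p = X.shape
          obs_eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
          rand_eig = np.zeros((n_reps, p))
          for r in range(n_reps):
              R = rng.standard_normal((n, p))
              rand_eig[r] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
          threshold = rand_eig.mean(axis=0)      # the 95th percentile is also common
          return int(np.sum(obs_eig > threshold)), obs_eig, threshold

      # Synthetic check: two correlated item blocks should suggest about two factors.
      rng = np.random.default_rng(1)
      f = rng.standard_normal((425, 2))          # 425 respondents, as in the example study
      X = np.hstack([f[:, [0]] + 0.5 * rng.standard_normal((425, 4)),
                     f[:, [1]] + 0.5 * rng.standard_normal((425, 4))])
      print(parallel_analysis(X)[0])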

  5. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  6. A comparison of cosegregation analysis methods for the clinical setting.

    Science.gov (United States)

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

    Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform as counting meioses is unable to generate evidence for benign variants. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website ( http://www.analyze.myvariant.org ) which implements the CSLR, FLB, and Counting Meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
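
    To see why the simple meiosis-counting approach lags the likelihood-based methods, it helps to write it down: under strong simplifying assumptions (full penetrance, no phenocopies), each informative meiosis in which the variant tracks with the phenotype is only half as likely under the benign hypothesis, so perfect cosegregation across m meioses gives a likelihood ratio of roughly 2 to the power m, and the method can never generate evidence for a benign classification. The sketch below is that back-of-the-envelope calculation with invented meiosis counts, not the FLB or CSLR computation.

      # Hedged "counting meioses" sketch: likelihood ratio for perfect cosegregation
      # across m informative meioses, assuming full penetrance and no phenocopies.
      def counting_meioses_lr(informative_meioses: int) -> float:
          return 2.0 ** informative_meioses

      for m in (3, 5, 7):
          print(f"{m} informative meioses -> LR ~ {counting_meioses_lr(m):.0f}")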

  7. The mathematical pathogenetic factors analysis of acute inflammatory diseases development of bronchopulmonary system among infants

    Directory of Open Access Journals (Sweden)

    G. O. Lezhenko

    2017-10-01

    Full Text Available The purpose. To study the factor structure and to establish the associative interaction of the pathogenetic links in the development of acute diseases of the bronchopulmonary system in infants. Materials and methods. The examination group consisted of 59 infants (average age 13.8 ± 1.4 months) with acute inflammatory bronchopulmonary diseases. We also tested the serum levels of 25-hydroxyvitamin D (25(OH)D), vitamin D-binding protein, hBPI, cathelicidin LL-37, ß1-defensins, and lactoferrin by enzyme immunoassay. Selection of prognostically important pathogenetic factors of acute bronchopulmonary disease among infants was conducted using ROC analysis. Classification of the objects was carried out using hierarchical cluster analysis with the centroid-based clustering method. Results. Based on the results of the ROC analysis, 15 potential predictors of the development of acute inflammatory diseases of the bronchopulmonary system among infants were selected. The factor analysis made it possible to determine 6 main components. The greatest influence on the development of the disease was exerted by "the anemia factor", "the factor of inflammation", "the maternal factor", "the vitamin D supply factor", "the immune factor" and "the phosphorus-calcium exchange factor", each with a factor loading of more than 0.6. The hierarchical cluster analysis confirmed the initial role of the immuno-inflammatory components. Conclusions. The highlighted factors define a group of parameters that must be influenced to achieve a maximum effect from preventive and therapeutic measures. First of all, it is necessary to influence "the anemia factor" and "the calcium exchange factor", as well as "the vitamin D supply factor"; in other words, to correct vitamin D deficiency and carry out measures aimed at preventing the development of anemia. The prevention and treatment of the pathological course of

  8. Factor analysis for imperfect maintenance planning at nuclear power plants by cognitive task analysis

    International Nuclear Information System (INIS)

    Takagawa, Kenichi; Iida, Hiroyasu

    2011-01-01

    Imperfect maintenance planning has frequently been identified in domestic nuclear power plants. To prevent such events, we analyzed causal factors in the maintenance planning stages and indicated the direction of countermeasures in this study. There is a practical limit to finding causal factors from items based on report descriptions. Therefore, the idea of the systemic accident model, which is used to monitor performance variability in normal circumstances, was adopted as a new concept instead of investigating negative factors. As an actual method for analyzing usual activities, cognitive task analysis (CTA) was applied. Persons who had experienced various maintenance activities at one electric power company were interviewed about the sources related to decision making during maintenance planning, and the usual factors affecting planning were then extracted as performance variability factors. The tendency of domestic events was analyzed using the classification items of those factors, and the direction of countermeasures was shown. The following are critical for preventing imperfect maintenance planning: the persons in charge should fully understand the situation of the equipment for which they are responsible in the work planning and maintenance evaluation stages, and they should clearly understand, for example, the maintenance bases of that equipment. (author)

  9. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

    Full Text Available Considering that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the human factors analysis and classification system model. With data from accidents that happened worldwide between 2008 and 2011, the correlation between human-error factors can be analyzed quantitatively using the method of grey relational analysis. Research results show that the main factors affecting pilot human error are, in order, preconditions for unsafe acts, unsafe supervision, organization, and unsafe acts. The factor related most closely with the second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy between second-level indexes, can also be analyzed quantitatively.
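
    A hedged sketch of the grey relational analysis step described above: indicator sequences are normalized, grey relational coefficients are computed against an ideal reference sequence using the customary distinguishing coefficient of 0.5, and the coefficients are averaged into grey relational grades used for ranking. The score matrix below is invented for illustration and is not the accident data of the study.

      # Grey relational analysis sketch: rank comparison sequences by their grey
      # relational grade against an ideal reference sequence.
      import numpy as np

      def grey_relational_grades(data, rho=0.5):
          # Normalize each indicator to [0, 1]; larger-is-better is assumed here.
          norm = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
          delta = np.abs(norm - norm.max(axis=0))          # distance to the reference
          coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
          return coeff.mean(axis=1)                        # one grade per sequence

      # Invented example: four human-error factors scored on three indicators.
      scores = np.array([[0.6, 0.7, 0.5],
                         [0.9, 0.8, 0.7],
                         [0.4, 0.5, 0.6],
                         [0.8, 0.6, 0.9]])
      print(np.round(grey_relational_grades(scores), 3))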

  10. DORIAN, Bayes Method Plant Age Risk Analysis

    International Nuclear Information System (INIS)

    Atwood, C.L.

    2002-01-01

    1 - Description of program or function: DORIAN is an integrated package for performing Bayesian aging analysis of reliability data, e.g. for identifying trends in component failure rates and/or outage durations as a function of time. The user must specify several alternative hypothesized 'aging models' (i.e., possible trends) along with prior probabilities indicating the subjective probability that each trend is actually the correct one. DORIAN then uses component failure and/or repair data over time to update these prior probabilities and develop a posterior probability for each aging model, representing the probability that each model is the correct one in light of the observed data rather than a priori. Mean, median, and 5th and 95th percentile trends are also compiled from the posterior probabilities. 2 - Method of solution: DORIAN carries out a Bayesian analysis of the failure data and a prior distribution on a time-dependent failure rate to obtain a posterior distribution on the failure rate. The form of the time-dependent failure rate is arbitrary, because DORIAN approximates the form by a step function, constant within specified time intervals. Similarly, the parameters may have any prior distribution, because DORIAN uses a discrete distribution to approximate it. Likewise, the database file produced by DORIAN approximates the entire range of possible failure rates or outage durations by means of a discrete probability distribution containing no more than 20 distinct values with their probabilities. 3 - Restrictions on the complexity of the problem: The prior distribution is discrete with up to 25 values. Up to 60 times are accommodated in the discrete time history
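
    The Bayesian updating that DORIAN performs over hypothesized aging models can be pictured with a small discrete example: a few candidate failure-rate trends, a prior over them, and a Poisson likelihood for observed failure counts. Everything below (the candidate trends, exposures, and failure counts) is invented for illustration and is not the DORIAN implementation.

      # Discrete Bayesian model comparison sketch: posterior(model) is proportional
      # to prior(model) times the Poisson likelihood of the observed failures.
      import numpy as np
      from scipy.stats import poisson

      years = np.arange(10)
      exposure = np.full(10, 1.0)                            # component-years per year
      failures = np.array([0, 1, 0, 1, 1, 2, 1, 2, 3, 2])    # invented failure counts

      models = {                                             # candidate aging trends (failures/year)
          "constant": np.full(10, 1.2),
          "linear":   0.4 + 0.2 * years,
          "step":     np.where(years < 5, 0.6, 1.8),
      }
      prior = {name: 1.0 / len(models) for name in models}

      posterior = {name: prior[name] * np.exp(poisson.logpmf(failures, rate * exposure).sum())
                   for name, rate in models.items()}
      total = sum(posterior.values())
      for name, p in posterior.items():
          print(name, round(p / total, 3))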

  11. Session-RPE Method for Training Load Monitoring: Validity, Ecological Usefulness, and Influencing Factors

    Directory of Open Access Journals (Sweden)

    Monoem Haddad

    2017-11-01

    Full Text Available Purpose: The aim of this review is to (1) retrieve all data validating the session rating of perceived exertion (RPE) method using various criteria, (2) highlight the rationale of this method and its ecological usefulness, and (3) describe factors that can alter RPE and that users of this method should take into consideration. Method: The SPORTDiscus, PubMed, and Google Scholar databases were consulted for English-language studies published between 2001 and 2016 on the validity and usefulness of the session-RPE method. Studies were considered for further analysis when they used the session-RPE method proposed by Foster et al. in 2001. Participants were athletes of any gender, age, or level of competition. Studies in languages other than English were excluded from the analysis of the validity and reliability of the session-RPE method; other studies were examined to explain the rationale of the session-RPE method and the origin of RPE. Results: A total of 950 studies cited the Foster et al. study that proposed the session-RPE method, and 36 of them have examined the validity and reliability of the method using the modified CR-10 scale. Conclusion: These studies confirmed the validity, good reliability, and internal consistency of the session-RPE method in several sports and physical activities, with men and women of different age categories (children, adolescents, and adults) and various expertise levels. This method can be used as a stand-alone method for training load (TL) monitoring purposes, though some recommend combining it with other physiological parameters such as heart rate.
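
    Part of the ecological appeal noted above is that the session-RPE load is simple arithmetic: the CR-10 rating multiplied by the session duration in minutes. The sketch below computes daily loads for an invented training week, together with the weekly monotony and strain indices often reported with it (following Foster's definitions); the session values are illustrative assumptions, not data from the review.

      # Session-RPE sketch: load = CR-10 RPE x duration (minutes); weekly monotony
      # and strain follow Foster's definitions.
      import statistics

      def session_load(rpe_cr10: float, duration_min: float) -> float:
          return rpe_cr10 * duration_min

      week = [(6, 60), (4, 45), (0, 0), (7, 75), (5, 50), (8, 90), (0, 0)]   # (RPE, minutes)
      daily_loads = [session_load(rpe, mins) for rpe, mins in week]

      weekly_load = sum(daily_loads)
      monotony = statistics.mean(daily_loads) / statistics.stdev(daily_loads)
      strain = weekly_load * monotony

      print("daily loads:", daily_loads)
      print(f"weekly load: {weekly_load}, monotony: {monotony:.2f}, strain: {strain:.0f}")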

  12. Parametric study on single shot peening by dimensional analysis method incorporated with finite element method

    Science.gov (United States)

    Wu, Xian-Qian; Wang, Xi; Wei, Yan-Peng; Song, Hong-Wei; Huang, Chen-Guang

    2012-06-01

    Shot peening is a widely used surface treatment method that generates compressive residual stress near the surface of metallic materials to increase fatigue life and resistance to corrosion fatigue, cracking, etc. Compressive residual stress and dent profile are important factors in evaluating the effectiveness of the shot peening process. In this paper, the influence of dimensionless parameters on the maximum compressive residual stress and the maximum depth of the dent was investigated. Firstly, dimensionless relations of the processing parameters that affect the maximum compressive residual stress and the maximum depth of the dent were deduced by the dimensional analysis method. Secondly, the influence of each dimensionless parameter on the dimensionless variables was investigated by the finite element method. Furthermore, related empirical formulas were given for each dimensionless parameter based on the simulation results. Finally, a comparison was made and good agreement was found between the simulation results and the empirical formulas, which shows that a useful approach is provided in this paper for analyzing the influence of each individual parameter.

  13. Exploring Technostress: Results of a Large Sample Factor Analysis

    Directory of Open Access Journals (Sweden)

    Steponas Jonušauskas

    2016-06-01

    Full Text Available With reference to the results of a large-sample factor analysis, the article aims to propose a framework for examining technostress in a population. The research implemented a survey and principal component analysis of a sample consisting of 1013 individuals who use ICT in their everyday work. Thirteen factors combine 68 questions and explain 59.13 per cent of the dispersion of the answers. Based on the factor analysis, the questionnaire was reframed and prepared to analyze the respondents' answers in a reasoned way, revealing technostress causes and consequences as well as technostress prevalence in the population in a statistically validated pattern. The key elements of technostress identified by the factor analysis can serve for the construction of technostress measurement scales in further research.

  14. Economic Analysis of Factors Affecting Technical Efficiency of ...

    African Journals Online (AJOL)

    Economic Analysis of Factors Affecting Technical Efficiency of Smallholders ... socio-economic characteristics which influence technical efficiency in maize production. ... Ministry of Agriculture and livestock, records, books, reports and internet.

  15. A new cyber security risk evaluation method for oil and gas SCADA based on factor state space

    International Nuclear Information System (INIS)

    Yang, Li; Cao, Xiedong; Li, Jie

    2016-01-01

    Based on a comprehensive analysis of the structure and the potential safety problems of oil and gas SCADA (supervisory control and data acquisition) networks, and aiming at the shortcomings of traditional evaluation methods, a new network security risk evaluation method for oil and gas SCADA combining factor state space and fuzzy comprehensive evaluation is proposed. First of all, a formal description of the factor state space and its complete mathematical definition are presented; secondly, the factor fuzzy evaluation steps are discussed; then, using the analytic hierarchy process, an evaluation index system for the oil and gas SCADA system is established, the index weights of all factors are determined by pairwise comparisons, and the structural design of the three layers of the reasoning machine is completed. Experiments and tests show that the proposed method is accurate, reliable and practical. The research results provide a template and a new method for other industries.
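
    The pairwise-comparison weighting step mentioned above is the standard analytic hierarchy process calculation: factor weights are read from the principal eigenvector of the comparison matrix, and a consistency ratio checks the judgments. The three-factor comparison matrix below is an invented example, not the index system of the cited work.

      # AHP weighting sketch: weights from the principal eigenvector of a pairwise
      # comparison matrix, plus the usual consistency ratio (CR < 0.1 is acceptable).
      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],       # invented judgments for three risk factors
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)                   # consistency index
      ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                    # Saaty's random index
      print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))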

  16. An Analysis of the SURF Method

    Directory of Open Access Journals (Sweden)

    Edouard Oyallon

    2015-07-01

    Full Text Available The SURF method (Speeded Up Robust Features is a fast and robust algorithm for local, similarity invariant representation and comparison of images. Similarly to many other local descriptor-based approaches, interest points of a given image are defined as salient features from a scale-invariant representation. Such a multiple-scale analysis is provided by the convolution of the initial image with discrete kernels at several scales (box filters. The second step consists in building orientation invariant descriptors, by using local gradient statistics (intensity and orientation. The main interest of the SURF approach lies in its fast computation of operators using box filters, thus enabling real-time applications such as tracking and object recognition. The SURF framework described in this paper is based on the PhD thesis of H. Bay [ETH Zurich, 2009], and more specifically on the paper co-written by H. Bay, A. Ess, T. Tuytelaars and L. Van Gool [Computer Vision and Image Understanding, 110 (2008, pp. 346–359]. An implementation is proposed and used to illustrate the approach for image matching. A short comparison with a state-of-the-art approach is also presented, the SIFT algorithm of D. Lowe [International Journal of Computer Vision, 60 (2004, pp. 91–110], with which SURF shares a lot in common.
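
    The speed claim above rests on the integral image: once it is computed, the sum over any axis-aligned box can be read off with at most four lookups, so box-filter responses cost the same at every scale. The sketch below is a minimal illustration of that trick on an invented array; it is not the SURF implementation of the paper.

      # Integral image sketch: constant-time box sums, the trick behind SURF's box filters.
      import numpy as np

      def integral_image(img):
          return img.cumsum(axis=0).cumsum(axis=1)

      def box_sum(ii, top, left, bottom, right):
          """Sum of img[top:bottom+1, left:right+1] from four integral-image lookups."""
          total = ii[bottom, right]
          if top > 0:
              total -= ii[top - 1, right]
          if left > 0:
              total -= ii[bottom, left - 1]
          if top > 0 and left > 0:
              total += ii[top - 1, left - 1]
          return total

      img = np.arange(25, dtype=float).reshape(5, 5)
      ii = integral_image(img)
      print(box_sum(ii, 1, 1, 3, 3), img[1:4, 1:4].sum())    # both print the same value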

  17. Monte Carlo methods for the reliability analysis of Markov systems

    International Nuclear Information System (INIS)

    Buslik, A.J.

    1985-01-01

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
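
    As a toy illustration of the forward simulation idea discussed above (without the adjoint-based weighting), the sketch below simulates a two-state repairable component as a continuous-time Markov process and estimates its mean unavailability over a mission time. The failure and repair rates are invented, and the analytic steady-state value is printed as a rough check.

      # Forward Monte Carlo sketch: mean unavailability of a repairable component
      # modelled as a two-state continuous-time Markov process.
      import random

      def mean_unavailability(lam, mu, mission_time, trials=20000, seed=1):
          rng = random.Random(seed)
          total = 0.0
          for _ in range(trials):
              t, up, downtime = 0.0, True, 0.0
              while t < mission_time:
                  dwell = rng.expovariate(lam if up else mu)
                  dwell = min(dwell, mission_time - t)
                  if not up:
                      downtime += dwell
                  t += dwell
                  up = not up
              total += downtime / mission_time
          return total / trials

      lam, mu = 1e-3, 0.1        # invented failure and repair rates (per hour)
      print("simulated:", mean_unavailability(lam, mu, mission_time=1000.0))
      print("steady-state check:", lam / (lam + mu))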

  18. Comparison of the Effects of the Different Methods for Computing the Slope Length Factor at a Watershed Scale

    Directory of Open Access Journals (Sweden)

    Fu Suhua

    2013-09-01

    Full Text Available The slope length factor is one of the parameters of the Universal Soil Loss Equation (USLE) and the Revised Universal Soil Loss Equation (RUSLE) and is sometimes calculated from a digital elevation model (DEM). The methods for calculating the slope length factor are important because the values obtained may depend on the method used for the calculation. The purpose of this study was to compare the difference in the spatial distribution of the slope length factor between different methods at a watershed scale. One method used the uniform slope length factor equation (USLFE), in which the effects of slope irregularities (such as changes in slope gradient) on soil erosion by water are not considered. The other method used the segmented slope length factor equation (SSLFE), which considers the effects of slope irregularities on soil erosion by water. The Arc Macro Language (AML) Version 4 program for the Revised Universal Soil Loss Equation (RUSLE), which uses the USLFE, was chosen to calculate the slope length factor. In a parallel analysis, the AML code of RUSLE Version 4 was modified according to the SSLFE to calculate the slope length factor. Two watersheds with different slopes and gully densities were chosen. The results show that the slope length factor and soil loss computed with the USLFE method were lower than those computed with the SSLFE method, especially on downslopes and in the watershed with more frequent steep slopes and higher gully densities. In addition, the slope length factor and soil loss calculated by the USLFE showed less spatial variation.
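
    For readers unfamiliar with the quantity being compared, the RUSLE slope length factor is commonly written as L = (lambda / 22.13)^m, with lambda the slope length in metres and an exponent m that grows with slope steepness. The sketch below implements that commonly cited form (the exponent expression often attributed to McCool et al.) as an assumption for illustration; it shows the factor itself, not the USLFE/SSLFE segmentation compared in the study.

      # Hedged sketch of the RUSLE slope length factor L = (lambda / 22.13) ** m.
      import math

      def slope_length_factor(slope_length_m: float, slope_deg: float) -> float:
          s = math.sin(math.radians(slope_deg))
          beta = (s / 0.0896) / (3.0 * s ** 0.8 + 0.56)   # rill/interrill ratio (assumed form)
          m = beta / (1.0 + beta)
          return (slope_length_m / 22.13) ** m

      for length, slope in [(22.13, 5.0), (50.0, 5.0), (50.0, 15.0)]:
          print(f"lambda={length} m, slope={slope} deg -> L={slope_length_factor(length, slope):.2f}")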

  19. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.

    2016-10-21

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. - Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.

  20. Sustainable Manufacturing Practices in Malaysian Automotive Industry: Confirmatory Factor Analysis

    OpenAIRE

    Habidin, Nurul Fadly; Zubir, Anis Fadzlin Mohd; Fuz, Nursyazwani Mohd; Latip, Nor Azrin Md; Azman, Mohamed Nor Azhari

    2015-01-01

    Sustainable manufacturing practices (SMPs) have received enormous attention in current years as an effective solution to support the continuous growth and expansion of the automotive manufacturing industry. This reported study was conducted to examine confirmatory factor analysis for SMP such as manufacturing process, supply chain management, social responsibility, and environmental management based on automotive manufacturing industry. The results of confirmatory factor analysis show that fo...

  1. The Recoverability of P-Technique Factor Analysis

    Science.gov (United States)

    Molenaar, Peter C. M.; Nesselroade, John R.

    2009-01-01

    It seems that just when we are about to lay P-technique factor analysis finally to rest as obsolete because of newer, more sophisticated multivariate time-series models using latent variables--dynamic factor models--it rears its head to inform us that an obituary may be premature. We present the results of some simulations demonstrating that even…

  2. Likelihood-based Dynamic Factor Analysis for Measurement and Forecasting

    NARCIS (Netherlands)

    Jungbacker, B.M.J.P.; Koopman, S.J.

    2015-01-01

    We present new results for the likelihood-based analysis of the dynamic factor model. The latent factors are modelled by linear dynamic stochastic processes. The idiosyncratic disturbance series are specified as autoregressive processes with mutually correlated innovations. The new results lead to

  3. Strength Analysis on Ship Ladder Using Finite Element Method

    Science.gov (United States)

    Budianto; Wahyudi, M. T.; Dinata, U.; Ruddianto; Eko P., M. M.

    2018-01-01

    In designing a ship's structure, the designer should refer to the rules of the applicable classification standards. In this case, the design of a ladder (staircase) installed on a ferry ship must be reviewed against the loads occurring during ship operations, both while sailing and during port operations. The classification rules for ship design prescribe the calculation of the structural components, and these components can also be analysed using the finite element method. The classification rules used in the design of the ferry ship were those of BKI (Bureau of Classification Indonesia), so the material composition and the mechanical properties of the materials refer to the classification of the vessel in question. The structural analysis was performed with a structural analysis software package based on the finite element method. The structural analysis of the ladder shows that the structure can withstand a load of 140 kg under static, dynamic and impact conditions, and the resulting safety factors indicate that the structure is safe without being excessively strong.

  4. Analysis methods used by the geochemistry department

    International Nuclear Information System (INIS)

    Berthollet, P.

    1958-06-01

    This note presents various analytical techniques which are used, respectively, for the determination of uranium in soils (fluorescence method, chromatographic method), for the determination of uranium in natural waters (ion exchange method, evaporation method), and for the determination of uranium in plants. For each of these methods, the principles, equipment and products, reagent preparation, operating mode, sample preparation and measurements, and expression of results and calculations are indicated.

  5. Influencing Factors of Catering and Food Service Industry Based on Principal Component Analysis

    OpenAIRE

    Zi Tang

    2014-01-01

    Scientific analysis of influencing factors is of great importance for the healthy development of catering and food service industry. This study attempts to present a set of critical indicators for evaluating the contribution of influencing factors to catering and food service industry in the particular context of Harbin City, Northeast China. Ten indicators that correlate closely with catering and food service industry were identified and performed by the principal component analysis method u...

  6. Survival analysis and classification methods for forest fire size.

    Science.gov (United States)

    Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances.

  7. Survival analysis and classification methods for forest fire size

    Science.gov (United States)

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at “being held” (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at “being held” exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on a-priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances. PMID:29320497

  8. Exploring factors that influence work analysis data: A meta-analysis of design choices, purposes, and organizational context.

    Science.gov (United States)

    DuVernet, Amy M; Dierdorff, Erich C; Wilson, Mark A

    2015-09-01

    Work analysis is fundamental to designing effective human resource systems. The current investigation extends previous research by identifying the differential effects of common design decisions, purposes, and organizational contexts on the data generated by work analyses. The effects of 19 distinct factors that span choices of descriptor, collection method, rating scale, and data source, as well as project purpose and organizational features, are explored. Meta-analytic results cumulated from 205 articles indicate that many of these variables hold significant consequences for work analysis data. Factors pertaining to descriptor choice, collection method, rating scale, and the purpose for conducting the work analysis each showed strong associations with work analysis data. The source of the work analysis information and organizational context in which it was conducted displayed fewer relationships. Findings can be used to inform choices work analysts make about methodology and postcollection evaluations of work analysis information. (c) 2015 APA, all rights reserved.

  9. Methods in carbon K-edge NEXAFS: Experiment and analysis

    International Nuclear Information System (INIS)

    Watts, B.; Thomsen, L.; Dastoor, P.C.

    2006-01-01

    Near-edge X-ray absorption spectroscopy (NEXAFS) is widely used to probe the chemistry and structure of surface layers. Moreover, using ultra-high brilliance polarised synchrotron light sources, it is possible to determine the molecular alignment of ultra-thin surface films. However, the quantitative analysis of NEXAFS data is complicated by many experimental factors and, historically, the essential methods of calibration, normalisation and artefact removal are presented in the literature in a somewhat fragmented manner, thus hindering their integrated implementation as well as their further development. This paper outlines a unified, systematic approach to the collection and quantitative analysis of NEXAFS data with a particular focus upon carbon K-edge spectra. As a consequence, we show that current methods neglect several important aspects of the data analysis process, which we address with a combination of novel and adapted techniques. We discuss multiple approaches in solving the issues commonly encountered in the analysis of NEXAFS data, revealing the inherent assumptions of each approach and providing guidelines for assessing their appropriateness in a broad range of experimental situations

  10. Application of texture analysis method for mammogram density classification

    Science.gov (United States)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
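
    Among the texture descriptors listed above, the GLCM is the most compact to illustrate: it counts how often pairs of grey levels co-occur at a fixed pixel offset, and statistics such as contrast and homogeneity are then read from the normalized matrix. The sketch below builds a small horizontal-offset GLCM directly in NumPy on an invented patch; it is a hedged illustration, not the study's feature pipeline.

      # Minimal GLCM sketch: co-occurrence counts for a horizontal offset of one pixel,
      # followed by two common texture statistics.
      import numpy as np

      def glcm_horizontal(img, levels):
          glcm = np.zeros((levels, levels), dtype=float)
          for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
              glcm[a, b] += 1
          return glcm / glcm.sum()                       # joint probabilities

      patch = np.array([[0, 0, 1, 1],
                        [0, 0, 1, 1],
                        [0, 2, 2, 2],
                        [2, 2, 3, 3]])
      P = glcm_horizontal(patch, levels=4)
      i, j = np.indices(P.shape)
      print("contrast:", np.sum(P * (i - j) ** 2))
      print("homogeneity:", np.sum(P / (1.0 + np.abs(i - j))))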

  11. LISA data analysis using Markov chain Monte Carlo methods

    International Nuclear Information System (INIS)

    Cornish, Neil J.; Crowder, Jeff

    2005-01-01

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different to the one encountered in ground based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov Chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions
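
    The basic move inside any MCMC exploration of a large parameter space, including the annealed search described above, is the Metropolis-Hastings accept/reject step. The sketch below samples a toy two-parameter Gaussian posterior with a random-walk proposal; it is a generic illustration, not the LISA analysis code, and the target and step size are invented.

      # Random-walk Metropolis-Hastings sketch on a toy two-parameter posterior.
      import numpy as np

      rng = np.random.default_rng(42)

      def log_posterior(theta):
          # Toy target: independent Gaussians centred at (1.0, -2.0).
          return -0.5 * ((theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2)

      def metropolis(n_steps=5000, step=0.5):
          theta = np.zeros(2)
          logp = log_posterior(theta)
          chain = np.empty((n_steps, 2))
          for i in range(n_steps):
              proposal = theta + step * rng.standard_normal(2)
              logp_new = log_posterior(proposal)
              if np.log(rng.random()) < logp_new - logp:   # accept with prob min(1, ratio)
                  theta, logp = proposal, logp_new
              chain[i] = theta
          return chain

      chain = metropolis()
      print("posterior means ~", chain[2000:].mean(axis=0))   # after discarding burn-in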

  12. Comparison of Different Numerical Methods for Quality Factor Calculation of Nano and Micro Photonic Cavities

    DEFF Research Database (Denmark)

    Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug

    2014-01-01

    Four different numerical methods for calculating the quality factor and resonance wavelength of a nano or micro photonic cavity are compared. Good agreement was found for a wide range of quality factors. Advantages and limitations of the different methods are discussed.

  13. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Science.gov (United States)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.

  14. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    Science.gov (United States)

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in the identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation, and to demonstrate the variability between estimated stature and actual stature, using the multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side of each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for the estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The stature estimated from the multiplication factors and from the regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in stature estimation from the regression analysis method is smaller than that of the multiplication factor method, thus confirming that the regression analysis method is better than multiplication factor analysis in stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
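
    The two estimators compared in this study differ only in how the measurement-to-stature relationship is summarized: a multiplication factor applies the mean ratio of stature to the measurement, whereas regression fits an intercept and slope by least squares. The sketch below contrasts the two on invented data; the values are not from the study sample.

      # Sketch: stature estimation by multiplication factor versus linear regression.
      import numpy as np

      rng = np.random.default_rng(7)
      hand_length = rng.normal(18.5, 1.0, 200)                  # invented measurements (cm)
      stature = 60.0 + 5.6 * hand_length + rng.normal(0, 3.0, 200)

      # Multiplication factor: mean(stature / measurement), applied multiplicatively.
      mf = np.mean(stature / hand_length)
      mf_estimate = mf * hand_length

      # Regression: least-squares slope and intercept.
      slope, intercept = np.polyfit(hand_length, stature, 1)
      reg_estimate = intercept + slope * hand_length

      for name, est in [("multiplication factor", mf_estimate), ("regression", reg_estimate)]:
          print(name, "mean absolute error:", round(float(np.mean(np.abs(stature - est))), 2))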

  15. The concept of key success factors: Theory and method

    DEFF Research Database (Denmark)

    Grunert, Klaus G.; Ellegaard, Charlotte

    1992-01-01

    Executive summary: 1. The term key success factors can be used in four different ways: a) as a necessary ingredient in a management information system, b) as a unique characteristic of a company, c) as a heuristic tool for managers to sharpen their thinking, d) as a description of the major skills...... and resources required to be successful in a given market. We adopt the last view. 2. The actual key success factors on a market, and those key success factors perceived by decision-makers in companies operating in the market, will be different. A number of psychological mechanisms result in misperceptions...... or resource that a business can invest in, which, on the market the business is operating on, explains a major part of the observable differences in perceived value and/or relative costs. 4. Key success factors differ from core skills and resources, which are prerequisites for being on a market, but do not explain...

  16. Instrumental neutron activation analysis - a routine method

    International Nuclear Information System (INIS)

    Bruin, M. de.

    1983-01-01

    This thesis describes the way in which at IRI instrumental neutron activation analysis (INAA) has been developed into an automated system for routine analysis. The basis of this work are 20 publications describing the development of INAA since 1968. (Auth.)

  17. Basic methods of linear functional analysis

    CERN Document Server

    Pryce, John D

    2011-01-01

    Introduction to the themes of mathematical analysis, geared toward advanced undergraduate and graduate students. Topics include operators, function spaces, Hilbert spaces, and elementary Fourier analysis. Numerous exercises and worked examples.1973 edition.

  18. Area specific stripping factors for AGS. A method for extracting stripping factors from survey data

    Energy Technology Data Exchange (ETDEWEB)

    Aage, H.K.; Korsbech, U. [Technical Univ. of Denmark (Denmark)

    2006-04-15

    In order to use Airborne Gamma-ray Spectrometry (AGS) for contamination mapping, for source search etc. one must be able to eliminate the contribution to the spectra from natural radioactivity. This is in general done by a stripping technique. The parameters for performing a stripping have until recently been measured by recording gamma spectra at special calibration sites (pads). This may be cumbersome, and the parameters may not be correct when used at low gamma energies for environmental spectra. During 2000-2001 DTU tested with success a new technique for Carborne Gamma-ray Spectrometry (CGS) where the spectra from the surveyed area (or from a similar area) were used for calculating the stripping parameters. It was possible to calculate usable stripping ratios for a number of low energy windows - and weak source signals not detectable by other means were discovered with the ASS technique. In this report it is shown that the ASS technique also works for AGS data, and it has been used for recent Danish AGS tests with point sources (check of calibration of AGS parameters). By using the ASS technique with the Boden data (Barents Rescue), an exercise source was detected that had not been detected by any of the teams during the exercise. The ASS technique therefore seems to be better for searching for radiation anomalies than any other method presently known. Experience also shows that although the stripping can be performed correctly at any altitude, there is a variation of the stripping parameters with altitude that has not yet been fully understood. However, even with these odd variations the stripping worked as expected. It was also observed that one might calculate a single common set of usable stripping factors for all altitudes from the entire data set, i.e. some average a, b and c values. When those stripping factors were used, the stripping technique still worked well. (au)
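
    The core of the stripping step can be sketched in a few lines: counts in the natural-radioactivity windows are scaled by stripping ratios and subtracted from the low-energy window of interest. The numbers below are purely illustrative and are not the area-specific factors derived in the report.

```python
import numpy as np

# Counts per second in four spectral windows for three survey points
# (columns: low-energy man-made window, K, U, Th). Values are illustrative.
counts = np.array([
    [820.0, 110.0, 35.0, 22.0],
    [760.0, 105.0, 33.0, 21.0],
    [990.0, 112.0, 36.0, 22.5],
])

# Stripping ratios: contribution of the natural windows to the low-energy window
# (hypothetical K, U, Th ratios for a given detector and altitude).
a, b, c = 3.1, 6.4, 9.8

low, k, u, th = counts.T
stripped = low - (a * k + b * u + c * th)   # residual attributable to man-made sources
print(stripped)                              # the third point stands out as an anomaly
```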

  19. Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments

    NARCIS (Netherlands)

    Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke

    Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode.
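
    As a hedged illustration of a basic (non-functional) PARAFAC decomposition, the sketch below uses the open-source tensorly package on a random three-way array; it assumes a recent tensorly release in which parafac returns a CP tensor (weights plus one factor matrix per mode).

```python
# Minimal sketch of a three-way PARAFAC decomposition with tensorly.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)
X = tl.tensor(rng.random((20, 15, 10)))   # e.g. subjects x variables x occasions

cp = parafac(X, rank=3)                   # trilinear components, one matrix per mode
weights, factors = cp
for mode, F in enumerate(factors):
    print(f"mode {mode}: factor matrix shape {F.shape}")

X_hat = tl.cp_to_tensor(cp)
print("relative reconstruction error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```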

  20. Vulnerability analysis methods for road networks

    Science.gov (United States)

    Bíl, Michal; Vodák, Rostislav; Kubeček, Jan; Rebok, Tomáš; Svoboda, Tomáš

    2014-05-01

    steps can be taken in order to make it more resilient. Performing such an analysis of network break-ups requires consideration of the network as a whole, ideally identifying all the cases generated by simultaneous closure of multiple links and evaluating them using various criteria. The spatial distribution of settlements, important companies and the overall population in the nodes of the network are several factors, apart from the topology of the network, which could be taken into account when computing vulnerability indices and identifying the weakest links and/or weakest link combinations. However, even for small networks (i.e., hundreds of nodes and links), the problem of break-up identification becomes extremely difficult to resolve. Naive brute-force examination consequently fails and more elaborate algorithms have to be applied. We address the problem of evaluating the vulnerability of road networks in our work by simulating the impacts of the simultaneous closure of multiple roads/links. We present ongoing work on a sophisticated algorithm focused on the identification of network break-ups and evaluating them by various criteria.
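
    A toy version of the single-link screening step can be written with the networkx library: each link is closed in turn and the loss in global network efficiency is recorded. The graph and the criterion below are simplified stand-ins chosen for illustration, not the authors' algorithm or data.

```python
# Single-link vulnerability screening on a toy road graph with networkx.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 4), ("B", "C", 3), ("C", "D", 5),
    ("A", "D", 9), ("B", "D", 7), ("C", "E", 2),
])

baseline = nx.global_efficiency(G)
ranking = []
for u, v, _ in G.edges(data=True):
    H = G.copy()
    H.remove_edge(u, v)                      # simulate closing this road link
    ranking.append(((u, v), baseline - nx.global_efficiency(H)))

for link, impact in sorted(ranking, key=lambda x: -x[1]):
    print(link, f"efficiency loss: {impact:.3f}")
```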

  1. Evaluation of piping fracture analysis method by benchmark study, 1

    International Nuclear Information System (INIS)

    Takahashi, Yukio; Kashima, Koichi; Kuwabara, Kazuo

    1987-01-01

    Importance of strength evaluation methods for cracked piping is growing with the progress of the rationalization of the nuclear piping system based on the leak-before-break concept. As an analytical tool, finite element method is principally used. To obtain the reliable solutions by the finite element programs, it is important to grasp the influences of various factors on the solutions. In this study, benchmark analysis is carried out for a stainless steel pipe with a circumferential through-wall crack subjected to four-point bending loading. Eight solutions obtained by using five finite element programs are compared with each other. Good agreement is obtained between the solutions on the deformation characteristics as well as fracture mechanics parameters. It is found through this study that the influence of the difference in the solution technique is generally small. (author)

  2. Using Module Analysis for Multiple Choice Responses: A New Method Applied to Force Concept Inventory Data

    Science.gov (United States)

    Brewe, Eric; Bruun, Jesper; Bearden, Ian G.

    2016-01-01

    We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…

  3. Towards factor analysis exploration applied to positron emission tomography functional imaging for breast cancer characterization

    International Nuclear Information System (INIS)

    Rekik, W.; Ketata, I.; Sellami, L.; Ben slima, M.; Ben Hamida, A.; Chtourou, K.; Ruan, S.

    2011-01-01

    This paper aims to explore factor analysis applied to a dynamic sequence of medical images obtained using a nuclear imaging modality, Positron Emission Tomography (PET). This modality allows obtaining information on physiological phenomena through the examination of radiotracer evolution over time. Factor analysis of dynamic medical image sequences (FADMIS) estimates the underlying fundamental spatial distributions by factor images and the associated so-called fundamental functions (describing the signal variations) by factors. This method is based on an orthogonal analysis followed by an oblique analysis. The results of the FADMIS are physiological curves showing the evolution over time of the radiotracer within homogeneous tissue distributions. This functional analysis of dynamic nuclear medical images is considered to be very efficient for cancer diagnostics. In fact, it could be applied to cancer characterization and vascularization assessment, as well as possible evaluation of response to therapy.

  4. Applying homotopy analysis method for solving differential-difference equation

    International Nuclear Information System (INIS)

    Wang Zhen; Zou Li; Zhang Hongqing

    2007-01-01

    In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example is used to illustrate the validity and the great potential of the generalized homotopy analysis method in solving differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions. The results show that the homotopy analysis method is an attractive method for solving differential-difference equations.

  5. Nonstationary Hydrological Frequency Analysis: Theoretical Methods and Application Challenges

    Science.gov (United States)

    Xiong, L.

    2014-12-01

    Because of its great implications for the design and operation of hydraulic structures under changing environments (either climate change or anthropogenic changes), nonstationary hydrological frequency analysis has become increasingly important and essential. Two important achievements have been made in methods. Without adhering to the consistency assumption of traditional hydrological frequency analysis, the time-varying probability distribution of any hydrological variable can be established by linking the distribution parameters to covariates such as time or physical variables with the help of powerful tools like the Generalized Additive Model of Location, Scale and Shape (GAMLSS). With the help of copulas, multivariate nonstationary hydrological frequency analysis has also become feasible. However, the application of nonstationary hydrological frequency formulae to the design and operation of hydraulic structures for coping with the impacts of changing environments in practice is still faced with many challenges. First, nonstationary hydrological frequency formulae with time as covariate can only be extrapolated for a very short period beyond the latest observation time, because such formulae are not physically constrained and the extrapolated outcomes could be unrealistic. There are two physically reasonable methods that can be used for changing environments: one is to directly link the quantiles or the distribution parameters to some measurable physical factors, and the other is to use derived probability distributions based on hydrological processes. However, both methods involve a certain degree of uncertainty. For the design and operation of hydraulic structures under changing environments, it is recommended that design results of both stationary and nonstationary methods be presented together and compared with each other, to help us understand the potential risks of each method.
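
    A heavily simplified sketch of the time-varying-parameter idea is given below: annual maxima are modelled with a Gumbel distribution whose location parameter drifts linearly with time, and the parameters are fitted by maximum likelihood with scipy. This is only a stand-in for the GAMLSS framework mentioned above, run on synthetic data with hypothetical parameter values.

```python
# Nonstationary Gumbel fit: location parameter linear in time, scale constant.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
years = np.arange(60)
ann_max = stats.gumbel_r.rvs(loc=100 + 0.5 * years, scale=15, random_state=rng)

def neg_log_lik(theta):
    a, b, log_scale = theta
    return -np.sum(stats.gumbel_r.logpdf(ann_max, loc=a + b * years,
                                         scale=np.exp(log_scale)))

res = optimize.minimize(neg_log_lik,
                        x0=[np.mean(ann_max), 0.0, np.log(np.std(ann_max))],
                        method="Nelder-Mead")
a_hat, b_hat, log_s = res.x
print(f"location trend: {b_hat:.2f} per year, scale: {np.exp(log_s):.1f}")
```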

  6. Identifying Critical Factors in the Eco-Efficiency of Remanufacturing Based on the Fuzzy DEMATEL Method

    Directory of Open Access Journals (Sweden)

    Qianwang Deng

    2015-11-01

    Full Text Available Remanufacturing can bring considerable economic and environmental benefits such as cost saving, conservation of energy and resources, and reduction of emissions. With the increasing awareness of sustainable manufacturing, remanufacturing gradually becomes the research priority. Most studies concentrate on the analysis of influencing factors, or the evaluation of the economic and environmental performance in remanufacturing, while little effort has been devoted to investigating the critical factors influencing the eco-efficiency of remanufacturing. Considering the current development of the remanufacturing industry in China, this paper proposes a set of factors influencing the eco-efficiency of remanufacturing and then utilizes a fuzzy Decision Making Trial and Evaluation Laboratory (DEMATEL method to establish relation matrixes reflecting the interdependent relationships among these factors. Finally, the contributions of each factor to eco-efficiency and mutual influence values among them are obtained, and critical factors in eco-efficiency of remanufacturing are identified. The results of the present work can provide theoretical supports for the government to make appropriate policies to improve the eco-efficiency of remanufacturing.
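
    A crisp (non-fuzzy) DEMATEL computation can be sketched in a few lines of numpy. The fuzzy aggregation step used in the paper is omitted here, the direct relation matrix is assumed to be already defuzzified, and its values are purely illustrative.

```python
# Crisp DEMATEL sketch: normalize the direct relation matrix and compute the
# total relation matrix T = D (I - D)^-1, then prominence and relation indices.
import numpy as np

# Direct relation matrix A: A[i, j] = influence of factor i on factor j (0-4 scale)
A = np.array([
    [0, 3, 2, 1],
    [1, 0, 3, 2],
    [2, 1, 0, 3],
    [1, 2, 1, 0],
], dtype=float)

# Normalize so that the largest row/column sum becomes 1
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
D = A / s

T = D @ np.linalg.inv(np.eye(len(D)) - D)

influence = T.sum(axis=1)          # D_i: how much factor i affects the others
dependence = T.sum(axis=0)         # R_i: how much factor i is affected
print("prominence (D+R):", influence + dependence)
print("relation   (D-R):", influence - dependence)
```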

  7. New method in obtaining correction factor of power confirming

    International Nuclear Information System (INIS)

    Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei

    2010-01-01

    Westcott theory is the most widely used method in reactor power calibration, and is particularly suited to research reactors. But this method is laborious because many correction parameters, which rely on empirical formulae specific to the reactor type, are needed. The incidence coefficient between foil activity and reactor power was obtained by Monte-Carlo calculation, carried out with a precise description of the reactor core and the foil arrangement position in the MCNP input card. Thus the reactor power was determined from the core neutron fluence profile and the activity of the foil placed at the normalization position. This new method is simpler, more flexible and more accurate than the Westcott theory. In this paper, the theoretical results for SPRR-300 obtained by the new method are compared with the experimental results, which verified the feasibility of this new method. (authors)

  8. 2. Methods of elemental analysis of materials

    International Nuclear Information System (INIS)

    Musilek, L.

    1992-01-01

    The principles of activation analysis are outlined including the preparation of samples and reference materials, the choice of suitable activation sources, interfering effects, detection of radiation emitted and analysis of mixtures of emitters, and the potential of activation analysis in various fields of science and technology. The principles of X-ray fluorescence analysis and the associated instrumentation are also dealt with, and examples of applications are given. Described are also the physical nature of the Moessbauer effect, Moessbauer sources and spectrometers, and the applicability of this effect in physical research and in the investigation of iron-containing materials. (Z.S.). 1 tab., 20 figs., 90 refs

  9. Extension and Validation of UNDEX Analysis Methods

    National Research Council Canada - National Science Library

    Donahue, L

    2004-01-01

    .... Adaptive grid schemes for underwater shock and bubble analysis, hydrostatic pressure and airwater/seafloor boundaries, underwater explosion profiles, and fluid-backed shapes were also implemented...

  10. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor k_eff has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k_eff values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments oriented toward particular applications including dynamic experiments and the development of theoretical methods to predict the experimental observables

  11. Response matrix method for large LMFBR analysis

    International Nuclear Information System (INIS)

    King, M.J.

    1977-06-01

    The feasibility of using response matrix techniques for computational models of large LMFBRs is examined. Since finite-difference methods based on diffusion theory have generally found a place in fast-reactor codes, a brief review of their general matrix foundation is given first in order to contrast it to the general strategy of response matrix methods. Then, in order to present the general method of response matrix technique, two illustrative examples are given. Matrix algorithms arising in the application to large LMFBRs are discussed, and the potential of the response matrix method is explored for a variety of computational problems. Principal properties of the matrices involved are derived with a view to application of numerical methods of solution. The Jacobi iterative method as applied to the current-balance eigenvalue problem is discussed

  12. Factors determining the use of botanical insect pest control methods ...

    African Journals Online (AJOL)

    A farm survey was conducted in three representative administrative districts of the Lake Victoria Basin (LVB), Kenya to document farmers' indigenous knowledge and the factors that influence the use of botanicals instead of synthetic insecticides in insect pest management. A total of 65 farm households were randomly ...

  13. Development of three-dimensional ENRICHED FREE MESH METHOD and its application to crack analysis

    International Nuclear Information System (INIS)

    Suzuki, Hayato; Matsubara, Hitoshi; Ezawa, Yoshitaka; Yagawa, Genki

    2010-01-01

    In this paper, we describe a method for highly accurate three-dimensional analysis of a crack included in a large-scale structure. The Enriched Free Mesh Method (EFMM) is a method for improving the accuracy of the Free Mesh Method (FMM), which is a kind of meshless method. First, we developed an algorithm for the three-dimensional EFMM. An elastic problem was analyzed using the EFMM, and its accuracy was found to compare advantageously with the FMM, with a smaller number of CG iterations. Next, we developed a method for calculating the stress intensity factor by employing the EFMM. A structure with a crack was analyzed using the EFMM, and the stress intensity factor was calculated by the developed method. The analysis results agreed very well with the reference solution. It was shown that the proposed method is very effective in the analysis of a crack included in a large-scale structure. (author)

  14. Evaluation of the reliability concerning the identification of human factors as contributing factors by a computer supported event analysis (CEA)

    International Nuclear Information System (INIS)

    Wilpert, B.; Maimer, H.; Loroff, C.

    2000-01-01

    The project's objectives are the evaluation of the reliability of the identification of Human Factors as contributing factors by a computer supported event analysis (CEA). CEA is a computer version of SOL (Safety through Organizational Learning). The first step included interviews with experts from the nuclear power industry and the evaluation of existing computer supported event analysis methods. This information was combined into a requirement profile for the CEA software. The next step contained the implementation of the software in an iterative process of evaluation. The project was completed with the testing of the CEA software. The testing demonstrated that contributing factors can be identified validly with CEA. In addition, CEA received very positive feedback from the experts. (orig.) [de

  15. Factor Analysis on the Factors that Influencing Rural Environmental Pollution in the Hilly Area of Sichuan Province,China

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    By using the factor analysis method and establishing an analysis indicator system covering four aspects (crop production, poultry farming, rural life and township enterprises), the differences, features and types of factors influencing rural environmental pollution in the hilly area of Sichuan Province, China, are examined. Results show that the major factor influencing rural environmental pollution in the study area is livestock and poultry breeding, followed by crop planting, rural life and township enterprises. Future pollution prevention and control should therefore start with livestock and poultry breeding. Meanwhile, attention should be paid to the prevention and control of rural environmental pollution caused by rural life and township enterprise production.
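
    As a generic illustration of the factor analysis step (on synthetic indicator data, not the study's survey data), scikit-learn's FactorAnalysis can extract a small number of latent pollution drivers from a set of observed indicators; a reasonably recent scikit-learn is assumed for the varimax rotation option.

```python
# Exploratory factor analysis on synthetic indicator data with scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 300
latent = rng.normal(size=(n, 2))                        # two hidden pollution drivers
loadings = rng.normal(size=(2, 8))
X = latent @ loadings + 0.3 * rng.normal(size=(n, 8))   # 8 observed indicators

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(X)
print("estimated loadings (factors x indicators):")
print(np.round(fa.components_, 2))
```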

  16. An overview on applied methods in the FRG to investigate human factors in control rooms of nuclear power plants

    International Nuclear Information System (INIS)

    Thomas, D.B.

    1985-01-01

    In the first half of 1984 a feasibility study was carried out with respect to the CSNI of the OECD/NEA inventory of methods for the analysis and evaluation of human factors in the control room of nuclear power plants. In order to enable an analysis of the methods to be made, an elementary categorization of the methods under field studies, laboratory studies and theoretical studies was performed. A further differentiation of these categories was used as the basis for a critical analysis and interpretation of the methods employed in the research plan. In the following sections, an explanation is given of the method categories used and the plans included in the investigation. A short representation is given of the breakdown of the applied methods into categories and an analysis is made of the results. Implications for research programs are discussed. (orig./GL) [de

  17. Climate Action Benefits: Methods of Analysis

    Science.gov (United States)

    This page provides detailed information on the methods used in the CIRA analyses, including the overall framework, temperature projections, precipitation projections, sea level rise projections, uncertainty, and limitations.

  18. Numerical analysis in electromagnetics the TLM method

    CERN Document Server

    Saguet, Pierre

    2013-01-01

    The aim of this book is to give a broad overview of the TLM (Transmission Line Matrix) method, which is one of the "time-domain numerical methods". These methods are reputed for their significant reliance on computer resources. However, they have the advantage of being highly general.The TLM method has acquired a reputation for being a powerful and effective tool by numerous teams and still benefits today from significant theoretical developments. In particular, in recent years, its ability to simulate various situations with excellent precision, including complex materials, has been

  19. Methods for Mediation Analysis with Missing Data

    Science.gov (United States)

    Zhang, Zhiyong; Wang, Lijuan

    2013-01-01

    Despite wide applications of both mediation models and missing data techniques, formal discussion of mediation analysis with missing data is still rare. We introduce and compare four approaches to dealing with missing data in mediation analysis including listwise deletion, pairwise deletion, multiple imputation (MI), and a two-stage maximum…

  20. Development of analysis methods for seismically isolated nuclear structures

    International Nuclear Information System (INIS)

    Yoo, Bong; Lee, Jae-Han; Koo, Gyeng-Hoi

    2002-01-01

    KAERI's contributions to the project entitled Development of Analysis Methods for Seismically Isolated Nuclear Structures, carried out during 1996-1999 under the IAEA CRP on the intercomparison of analysis methods for predicting the behaviour of seismically isolated nuclear structures, are briefly described. The effort aimed to develop numerical analysis methods and to compare the analysis results with the benchmark test results for seismic isolation bearings and isolated nuclear structures provided by participating countries. Certain progress in the analysis procedures for isolation bearings and isolated nuclear structures has been made throughout the IAEA CRPs, and the analysis methods developed can be improved for future nuclear facility applications. (author)

  1. Quantifying human and organizational factors in accident management using decision trees: the HORAAM method

    International Nuclear Information System (INIS)

    Baumont, G.; Menage, F.; Schneiter, J.R.; Spurgin, A.; Vogel, A.

    2000-01-01

    In the framework of the level 2 Probabilistic Safety Study (PSA 2) project, the Institute for Nuclear Safety and Protection (IPSN) has developed a method for taking into account Human and Organizational Reliability Aspects during accident management. Actions are taken during very degraded installation operations by teams of experts in the French framework of Crisis Organization (ONC). After describing the background of the framework of the Level 2 PSA, the French specific Crisis Organization and the characteristics of human actions in the Accident Progression Event Tree, this paper describes the method developed to introduce in PSA the Human and Organizational Reliability Analysis in Accident Management (HORAAM). This method is based on the Decision Tree method and has gone through a number of steps in its development. The first one was the observation of crisis center exercises, in order to identify the main influence factors (IFs) which affect human and organizational reliability. These IFs were used as headings in the Decision Tree method. Expert judgment was used in order to verify the IFs, to rank them, and to estimate the value of the aggregated factors to simplify the quantification of the tree. A tool based on Mathematica was developed to increase the flexibility and the efficiency of the study

  2. A structured elicitation method to identify key direct risk factors for the management of natural resources

    Directory of Open Access Journals (Sweden)

    Michael Smith

    2015-11-01

    Full Text Available The high level of uncertainty inherent in natural resource management requires planners to apply comprehensive risk analyses, often in situations where there are few resources. In this paper, we demonstrate a broadly applicable, novel and structured elicitation approach to identify important direct risk factors. This new approach combines expert calibration and fuzzy based mathematics to capture and aggregate subjective expert estimates of the likelihood that a set of direct risk factors will cause management failure. A specific case study is used to demonstrate the approach; however, the described methods are widely applicable in risk analysis. For the case study, the management target was to retain all species that characterise a set of natural biological elements. The analysis was bounded by the spatial distribution of the biological elements under consideration and a 20-year time frame. Fourteen biological elements were expected to be at risk. Eleven important direct risk factors were identified that related to surrounding land use practices, climate change, problem species (e.g., feral predators), fire and hydrological change. In terms of their overall influence, the two most important risk factors were salinisation and a lack of water, which together pose a considerable threat to the survival of nine biological elements. The described approach successfully overcame two concerns arising from previous risk analysis work: (1) the lack of an intuitive, yet comprehensive scoring method enabling the detection and clarification of expert agreement and associated levels of uncertainty; and (2) the ease with which results can be interpreted and communicated while preserving a rich level of detail essential for informed decision making.

  3. [The estimation nourishment methods of newborns and infants hospitalized in the Department of Pediatric Propedeutics and Bone Metabolism Diseases and analysis of factors which determinate the way of alimentation among these children].

    Science.gov (United States)

    Ligenza, Iwona; Jakubowska-Pietkiewicz, Elzbieta; Łupińska, Anna; Jastrzebska, Anna; Chlebna-Sokół, Danuta

    2009-06-01

    Despite the many advantages of natural feeding, according to research conducted in Poland between 2000 and 2005, in the sixth month of life only 8% of infants were exclusively breast-fed. The aim of the study was to analyze the factors which influence the choice of feeding method for children hospitalized in the Department of Pediatric Propedeutics and Bone Metabolism Diseases. A questionnaire survey was carried out among parents of newborns and infants up to 1 year old, hospitalized in the Department of Pediatric Propedeutics and Bone Metabolism Diseases between January and May 2008. The research was conducted on a group of 93 children (39 newborns and 54 infants). The questionnaire consisted of questions about the cause and duration of hospitalization, the perinatal history, the method of nourishment and the parents' personal data. At the time of the survey 27 children (29%) were fed exclusively naturally, 36 (38.7%) were bottle-fed, 23 (24.73%) were fed in a mixed way, 6 (6.5%) were fed by stomach tube and 1 child (1.1%) was fed parenterally. 44.1% of parents obtained information about breast-feeding from the media, whereas only 3 (3.2%) got it from medical staff. The most common reason for giving up breast-feeding was the lack (or too small an amount) of mother's milk. The doctor appeared to be the main person who decided to introduce formula feeding. Among the children fed naturally, 21 (77.8%) had been given formula in the first twenty-four hours after delivery. Several factors appeared to have a statistically significant influence on the choice of the method of alimentation. The health care system (perinatal, labour and basic care) concerning mother and child does not promote natural feeding.

  4. Factor analysis of processes of corporate culture formation at industrial enterprises of Ukraine

    Directory of Open Access Journals (Sweden)

    Illiashenko Sergii

    2016-06-01

    Full Text Available The authors have analyzed and synthesized the features of the formation and development of corporate culture at industrial enterprises of Ukraine and, on this basis, developed recommendations for application in the management of strategic development. During the research the authors used the following general scientific methods: the logical generalization method to study the patterns of interaction between national culture, corporate culture and the culture of the individual; factor analysis to determine the factors influencing corporate culture formation by their level of occurrence; and the comparative method for trend analysis of corporate culture development at the appropriate levels. The results of the analysis showed that macro- and microfactors are external, while mesofactors (adaptability of business and corporate governance, corporate ethics, corporate social responsibility and personnel policies, corporate finance) are internal for an enterprise. The authors have identified areas for each of the factors, itemized the obstacles to the establishment and development of corporate culture at Ukrainian industrial enterprises, and proposed recommendations for the management of these processes.

  5. A new detrended semipartial cross-correlation analysis: Assessing the important meteorological factors affecting API

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Chen-Hua, E-mail: shenandchen01@163.com [College of Geographical Science, Nanjing Normal University, Nanjing 210046 (China); Jiangsu Center for Collaborative Innovation in Geographical Information Resource, Nanjing 210046 (China); Key Laboratory of Virtual Geographic Environment of Ministry of Education, Nanjing 210046 (China)

    2015-12-04

    To analyze the unique contribution of meteorological factors to the air pollution index (API), a new method, the detrended semipartial cross-correlation analysis (DSPCCA), is proposed. Based on both a detrended cross-correlation analysis and a DFA-based multivariate-linear-regression (DMLR), this method is improved by including a semipartial correlation technique, which is used to indicate the unique contribution of an explanatory variable to multiple correlation coefficients. The advantages of this method in handling nonstationary time series are illustrated by numerical tests. To further demonstrate the utility of this method in environmental systems, new evidence of the primary contribution of meteorological factors to API is provided through DMLR. Results show that the most important meteorological factors affecting API are wind speed and diurnal temperature range, and the explanatory ability of meteorological factors to API gradually strengthens with increasing time scales. The results suggest that DSPCCA is a useful method for addressing environmental systems. - Highlights: • A detrended multiple linear regression is shown. • A detrended semipartial cross correlation analysis is proposed. • The important meteorological factors affecting API are assessed. • The explanatory ability of meteorological factors to API gradually strengthens with increasing time scales.

  6. A new detrended semipartial cross-correlation analysis: Assessing the important meteorological factors affecting API

    International Nuclear Information System (INIS)

    Shen, Chen-Hua

    2015-01-01

    To analyze the unique contribution of meteorological factors to the air pollution index (API), a new method, the detrended semipartial cross-correlation analysis (DSPCCA), is proposed. Based on both a detrended cross-correlation analysis and a DFA-based multivariate-linear-regression (DMLR), this method is improved by including a semipartial correlation technique, which is used to indicate the unique contribution of an explanatory variable to multiple correlation coefficients. The advantages of this method in handling nonstationary time series are illustrated by numerical tests. To further demonstrate the utility of this method in environmental systems, new evidence of the primary contribution of meteorological factors to API is provided through DMLR. Results show that the most important meteorological factors affecting API are wind speed and diurnal temperature range, and the explanatory ability of meteorological factors to API gradually strengthens with increasing time scales. The results suggest that DSPCCA is a useful method for addressing environmental systems. - Highlights: • A detrended multiple linear regression is shown. • A detrended semipartial cross correlation analysis is proposed. • The important meteorological factors affecting API are assessed. • The explanatory ability of meteorological factors to API gradually strengthens with increasing time scales.
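
    The semipartial step at the heart of the method can be illustrated without the detrending machinery: the unique contribution of one covariate is obtained by correlating the response with the part of that covariate that is orthogonal to the others. The sketch below uses synthetic data and ordinary least-squares residualization as a simplified stand-in for the DFA-based regression used in the paper.

```python
# Simplified semipartial correlation sketch (the detrending of nonstationary
# series applied in DSPCCA is omitted here).
import numpy as np

rng = np.random.default_rng(4)
n = 500
x2 = rng.normal(size=n)                       # e.g. diurnal temperature range
x1 = 0.6 * x2 + rng.normal(size=n)            # e.g. wind speed, partly shared with x2
y = 0.8 * x1 + 0.4 * x2 + rng.normal(size=n)  # e.g. air pollution index

# Residualize x1 on x2 (least squares), then correlate the residual with y
beta = np.polyfit(x2, x1, 1)
x1_resid = x1 - np.polyval(beta, x2)

semipartial_r = np.corrcoef(y, x1_resid)[0, 1]
print(f"semipartial correlation of x1 with y: {semipartial_r:.2f}")
```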

  7. Methods of stability analysis in nonlinear mechanics

    International Nuclear Information System (INIS)

    Warnock, R.L.; Ruth, R.D.; Gabella, W.; Ecklund, K.

    1989-01-01

    We review our recent work on methods to study stability in nonlinear mechanics, especially for the problems of particle accelerators, and compare our ideas to those of other authors. We emphasize methods that (1) show promise as practical design tools, (2) are effective when the nonlinearity is large, and (3) have a strong theoretical basis. 24 refs., 2 figs., 2 tabs

  8. Comparative analysis among several methods used to solve the point kinetic equations

    International Nuclear Information System (INIS)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da

    2007-01-01

    The main objective of this work is the development of a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which method consumes the smallest computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)

  9. Comparative analysis among several methods used to solve the point kinetic equations

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Anderson L.; Goncalves, Alessandro da C.; Martinez, Aquilino S.; Silva, Fernando Carvalho da [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; E-mails: alupo@if.ufrj.br; agoncalves@con.ufrj.br; aquilino@lmp.ufrj.br; fernando@con.ufrj.br

    2007-07-01

    The main objective of this work is the development of a methodology for comparing several methods for solving the point kinetics equations. The evaluated methods are: the finite differences method, the stiffness confinement method, the improved stiffness confinement method and the piecewise constant approximations method. These methods were implemented and compared through a systematic analysis that consists basically of determining which method consumes the smallest computational time with the highest precision. A relative performance factor, whose function is to combine both criteria, was calculated in order to reach this goal. Through the analysis of the performance factor it is possible to choose the best method for the solution of the point kinetics equations. (author)
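
    For readers who want a concrete baseline, the sketch below integrates the point kinetics equations with a single delayed-neutron group using a stiff ODE solver from scipy. This is not one of the four schemes compared in the records above, merely a convenient reference solution; the kinetic parameters and the reactivity step are illustrative.

```python
# Point kinetics with one delayed-neutron group, solved with a stiff integrator.
import numpy as np
from scipy.integrate import solve_ivp

beta, lam, Lambda = 0.0065, 0.08, 1.0e-4   # delayed fraction, decay const, gen. time
rho = 0.003                                 # step reactivity insertion (illustrative)

def kinetics(t, y):
    n, c = y
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

y0 = [1.0, beta / (lam * Lambda)]           # steady-state precursor concentration
sol = solve_ivp(kinetics, (0.0, 1.0), y0, method="LSODA", rtol=1e-8, atol=1e-10)
print("relative power at t = 1 s:", sol.y[0, -1])
```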

  10. Confirmatory factor analysis of the female sexual function index.

    Science.gov (United States)

    Opperman, Emily A; Benson, Lindsay E; Milhausen, Robin R

    2013-01-01

    The Female Sexual Functioning Index (Rosen et al., 2000) was designed to assess the key dimensions of female sexual functioning using six domains: desire, arousal, lubrication, orgasm, satisfaction, and pain. A full-scale score was proposed to represent women's overall sexual function. The fifth revision to the Diagnostic and Statistical Manual (DSM) is currently underway and includes a proposal to combine desire and arousal problems. The objective of this article was to evaluate and compare four models of the Female Sexual Functioning Index: (a) a single-factor model, (b) a six-factor model, (c) a second-order factor model, and (d) a five-factor model combining the desire and arousal subscales. Cross-sectional and observational data from 85 women were used to conduct a confirmatory factor analysis on the Female Sexual Functioning Index. Local and global goodness-of-fit measures, the chi-square test of differences, squared multiple correlations, and regression weights were used. The single-factor model fit was not acceptable. The original six-factor model was confirmed, and good model fit was found for the second-order and five-factor models. Delta chi-square tests of differences supported best fit for the six-factor model, validating usage of the six domains. However, when revisions are made to the DSM-5, the Female Sexual Functioning Index can adapt to reflect these changes and remain a valid assessment tool for women's sexual functioning, as the five-factor structure was also supported.

  11. Spatial epidemiology of cancer: a review of data sources, methods and risk factors

    Directory of Open Access Journals (Sweden)

    Rita Roquette

    2017-05-01

    Full Text Available Cancer is a major concern among chronic diseases today. Spatial epidemiology plays a relevant role in this matter and we present here a review of this subject, including a discussion of the literature in terms of the level of geographic data aggregation, risk factors and methods used to analyse the spatial distribution of patterns and spatial clusters. For this purpose, we performed a web search in the PubMed and Web of Science databases including studies published between 1979 and 2015. We found 180 papers from 63 journals and noted that the spatial epidemiology of cancer has been addressed with more emphasis during the last decade, with research based on data mostly extracted from cancer registries and official mortality statistics. In general, the research questions present in the reviewed papers can be classified into three different sets: (i) analysis of the spatial distribution of cancer and/or its temporal evolution; (ii) risk factors; (iii) development of data analysis methods and/or evaluation of results obtained from the application of existing methods. This review is expected to help promote research in this area through the identification of relevant knowledge gaps. The spatial epidemiology of cancer represents an important concern, mainly for the design of public health policies aimed at minimising the impact of chronic disease in specific populations.

  12. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    Science.gov (United States)

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
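
    A small synthetic example of the normalization question is sketched below with scikit-learn's NMF: the columns of the sample-by-metagene matrix are rescaled by their maxima (the maximum norm highlighted above) before cluster assignment, with the gene matrix rescaled inversely so the factorization is unchanged. This is an illustration, not the authors' code or data.

```python
# NMF-based class discovery with an explicit maximum-norm normalization step.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
X = np.abs(rng.normal(size=(40, 200)))      # samples x genes, nonnegative (synthetic)

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)                  # samples x metagenes
H = model.components_                       # metagenes x genes

# Normalize each metagene (column of W) by its maximum, rescaling H accordingly
# so that the product W @ H is unchanged.
scale = W.max(axis=0)
W_norm = W / scale
H_norm = H * scale[:, None]

clusters = np.argmax(W_norm, axis=1)        # assign each sample to a metagene
print(clusters)
```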

  13. Nuclear analysis methods in monitoring occupational health

    International Nuclear Information System (INIS)

    Clayton, E.

    1985-01-01

    With the increasing industrialisation of the world has come an increase in exposure to hazardous chemicals. Their effect on the body depends upon the concentration of the element in the work environment; its chemical form; the possible different routes of intake; and the individual's biological response to the chemical. Nuclear techniques of analysis such as neutron activation analysis (NAA) and proton induced X-ray emission analysis (PIXE), have played an important role in understanding the effects hazardous chemicals can have on occupationally exposed workers. In this review, examples of their application, mainly in monitoring exposure to heavy metals is discussed

  14. Novel method for on-road emission factor measurements using a plume capture trailer.

    Science.gov (United States)

    Morawska, L; Ristovski, Z D; Johnson, G R; Jayaratne, E R; Mengersen, K

    2007-01-15

    The method outlined provides for emission factor measurements to be made for unmodified vehicles driving under real world conditions at minimal cost. The method consists of a plume capture trailer towed behind a test vehicle. The trailer collects a sample of the naturally diluted plume in a 200 L conductive bag and this is delivered immediately to a mobile laboratory for subsequent analysis of particulate and gaseous emissions. The method offers low test turnaround times with the potential to complete much larger numbers of emission factor measurements than have been possible using dynamometer testing. Samples can be collected at distances up to 3 m from the exhaust pipe allowing investigation of early dilution processes. Particle size distribution measurements, as well as particle number and mass emission factor measurements, based on naturally diluted plumes are presented. A dilution profile relating the plume dilution ratio to distance from the vehicle tail pipe for a diesel passenger vehicle is also presented. Such profiles are an essential input for new mechanistic roadway air quality models.

  15. METHOD FOR DETERMINING THE MAXIMUM ARRANGEMENT FACTOR OF FOOTWEAR PARTS

    Directory of Open Access Journals (Sweden)

    DRIŞCU Mariana

    2014-05-01

    Full Text Available With the classic methodology, designing footwear is a very complex and laborious activity, because the classic methodology requires many graphic executions by manual means, which consume a lot of the producer's time. Moreover, the results of this classical methodology may contain many inaccuracies with the most unpleasant consequences for the footwear producer. Thus, the customer who buys a footwear product on the basis of the characteristics written on the product (size, width) may notice after a period that the product has flaws because of the inadequate design. In order to avoid this kind of situation, the strictest scientific criteria must be followed when designing a footwear product. The decisive step in this direction was made some time ago, as a result of powerful technical development and the massive implementation of electronic computing systems and informatics. This paper presents a software product for determining all possible arrangements of a footwear product's reference points, in order to automatically obtain the maximum arrangement factor. The user multiplies the pattern in order to find the economical arrangement of the reference points. For this purpose, the user must try a few arrangement variants, in the translation and rotation-translation systems. The same process is used in establishing the arrangement factor for the two reference points of the designed footwear product. After trying several variants of arrangement in the translation and rotation-translation systems, the maximum arrangement factors are chosen. This allows the user to estimate the material waste.

  16. Comparative analysis of selected hydromorphological assessment methods

    Czech Academy of Sciences Publication Activity Database

    Šípek, Václav; Matoušková, M.; Dvořák, M.

    2010-01-01

    Roč. 169, 1-4 (2010), s. 309-319 ISSN 0167-6369 Institutional support: RVO:67985874 Keywords : Hydromorphology * Ecohydromorphological river habitat assessment: EcoRivHab * Rapid Bioassessment Protocol * LAWA Field and Overview Survey * Libechovka River * Bilina River * Czech Republic Subject RIV: DA - Hydrology ; Limnology Impact factor: 1.436, year: 2010

  17. Decision making model design for antivirus software selection using Factor Analysis and Analytical Hierarchy Process

    Directory of Open Access Journals (Sweden)

    Nurhayati Ai

    2018-01-01

    Full Text Available Virus spread increased significantly through the internet in 2017. One of the protection methods is using antivirus software. The wide variety of antivirus software on the market tends to create confusion among consumers, and selecting the right antivirus according to their needs has become difficult. This is the reason we conducted our research. We formulate a decision making model for antivirus software consumers. The model is constructed by using factor analysis and the AHP method. First we distributed questionnaires to consumers; from those questionnaires we identified 16 variables that need to be considered when selecting antivirus software. These 16 variables were then grouped into 5 factors by using the factor analysis method in SPSS software. These five factors are security, performance, internal, time and capacity. To rank those factors we distributed questionnaires to 6 IT experts and the data were analyzed using the AHP method. The result is that the performance factor gained the highest rank of all the factors. Thus, consumers can select antivirus software by judging the variables in the performance factor. Those variables are software loading speed, user friendliness, no excessive memory use, thorough scanning, and fast and accurate virus scanning.
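
    The AHP ranking step can be sketched with a few lines of numpy: a pairwise comparison matrix is reduced to a priority vector via its principal eigenvector and checked for consistency. The comparison values below are purely illustrative, not the expert judgments collected in the study.

```python
# Minimal AHP sketch: priority weights and consistency ratio from a pairwise matrix.
import numpy as np

# Pairwise comparisons of (security, performance, internal, time, capacity)
A = np.array([
    [1,   1/2, 3,   2,   2  ],
    [2,   1,   4,   3,   3  ],
    [1/3, 1/4, 1,   1/2, 1/2],
    [1/2, 1/3, 2,   1,   1  ],
    [1/2, 1/3, 2,   1,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = len(A)
ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
ri = 1.12                                    # Saaty's random index for n = 5
print("weights:", np.round(weights, 3), " CR:", round(ci / ri, 3))
```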

  18. Implementation of statistical analysis methods for medical physics data

    International Nuclear Information System (INIS)

    Teixeira, Marilia S.; Pinto, Nivia G.P.; Barroso, Regina C.; Oliveira, Luis F.

    2009-01-01

    The objective of biomedical research with different radiation natures is to contribute to the understanding of the basic physics and biochemistry of biological systems, disease diagnostics and the development of therapeutic techniques. The main benefits are: the cure of tumors through therapy, the early detection of diseases through diagnostics, the use as a prophylactic means for blood transfusion, etc. Therefore, for a better understanding of the biological interactions occurring after exposure to radiation, it is necessary to optimize therapeutic procedures and strategies for the reduction of radioinduced effects. The applied physics group of the Physics Institute of UERJ has been working on the characterization of biological samples (human tissues, teeth, saliva, soil, plants, sediments, air, water, organic matrixes, ceramics, fossil material, among others) using X-ray diffraction and X-ray fluorescence. The application of these techniques for the measurement, analysis and interpretation of the characteristics of biological tissues is attracting considerable interest in Medical and Environmental Physics. All quantitative data analysis must be initiated with the calculation of descriptive statistics (means and standard deviations) in order to obtain a previous notion of what the analysis will reveal. It is well known that the high values of standard deviation found in experimental measurements of biological samples can be attributed to biological factors, due to the specific characteristics of each individual (age, gender, environment, alimentary habits, etc.). The main objective of this work is the development of a program for the use of specific statistical methods for the optimization of experimental data analysis. The specialized programs for this analysis are proprietary; another objective of this work is the implementation of a code which is free and can be shared with other research groups. As the program developed since the

  19. Computational methods for corpus annotation and analysis

    CERN Document Server

    Lu, Xiaofei

    2014-01-01

    This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.

  20. Progress in spatial analysis methods and applications

    CERN Document Server

    Páez, Antonio; Buliung, Ron N; Dall'erba, Sandy

    2010-01-01

    This book brings together developments in spatial analysis techniques, including spatial statistics, econometrics, and spatial visualization, and applications to fields such as regional studies, transportation and land use, population and health.

  1. Search Strategy of Detector Position For Neutron Source Multiplication Method by Using Detected-Neutron Multiplication Factor

    International Nuclear Information System (INIS)

    Endo, Tomohiro

    2011-01-01

    In this paper, an alternative definition of a neutron multiplication factor, the detected-neutron multiplication factor kdet, is introduced for the neutron source multiplication method (NSM). By using kdet, a search strategy for an appropriate detector position for NSM is also proposed. The NSM is one of the practical subcritical measurement techniques, i.e., the NSM does not require any special equipment other than a stationary external neutron source and an ordinary neutron detector. Additionally, the NSM method is based on steady-state analysis, so this technique is very suitable for quasi real-time measurement. It is noted that the correction factors play important roles in accurately estimating subcriticality from the measured neutron count rates. The present paper aims to clarify how to correct the subcriticality measured by the NSM method, the physical meaning of the correction factors, and how to reduce the impact of the correction factors by setting a neutron detector at an appropriate position
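
    A hedged sketch of the basic NSM relation is given below: with a fixed external source, the detected count rate scales roughly as 1/(1 - k), so an unknown subcritical state can be referenced to a state of known multiplication. The correction factors discussed above are lumped into a single placeholder f and set to 1 for illustration; all numbers are hypothetical.

```python
# Basic neutron source multiplication relation: count rate C ~ S / (1 - k),
# so (1 - k) = f * (1 - k_ref) * C_ref / C for a referenced subcritical state.
def k_from_count_rates(k_ref, rate_ref, rate, f=1.0):
    """Estimate k of an unknown subcritical state from count-rate ratios.

    f lumps together the correction factors discussed in the record (set to 1 here).
    """
    return 1.0 - f * (1.0 - k_ref) * rate_ref / rate

# Example: reference state k = 0.95 at 1200 cps; unknown state measured at 3000 cps
print(round(k_from_count_rates(0.95, 1200.0, 3000.0), 4))   # -> 0.98
```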

  2. Analysis of queues methods and applications

    CERN Document Server

    Gautam, Natarajan

    2012-01-01

    Introduction; Analysis of Queues: Where, What, and How?; Systems Analysis: Key Results; Queueing Fundamentals and Notations; Psychology in Queueing; Reference Notes; Exercises; Exponential Interarrival and Service Times: Closed-Form Expressions; Solving Balance Equations via Arc Cuts; Solving Balance Equations Using Generating Functions; Solving Balance Equations Using Reversibility; Reference Notes; Exercises; Exponential Interarrival and Service Times: Numerical Techniques and Approximations; Multidimensional Birth and Death Chains; Multidimensional Markov Chains; Finite-State Markov Chains; Reference Notes; Exercises

  3. Analysis of Non Local Image Denoising Methods

    Science.gov (United States)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
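
    For readers who want to experiment, one widely available implementation of the Non Local Means filter is in scikit-image; the snippet below denoises a built-in test image. It illustrates only the baseline method, not the improvements discussed in the record, and the filter parameters are just reasonable starting values.

```python
# Non-local means denoising of a noisy test image with scikit-image.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma
from skimage.util import random_noise

image = img_as_float(data.camera())
noisy = random_noise(image, var=0.01)

sigma = np.mean(estimate_sigma(noisy))
denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
print("residual noise estimate:", float(np.mean(estimate_sigma(denoised))))
```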

  4. Analysis of the sweeped actuator line method

    OpenAIRE

    Nathan Jörn; Masson Christian; Dufresne Louis; Churchfield Matthew

    2015-01-01

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the tip blades, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But with the actuator line come also temporal and spatial constraints, such as the need for a much smaller time step than with actuator disk. While the latter one only has to obey the Cour...

  5. Contributions to robust methods of creep analysis

    International Nuclear Information System (INIS)

    Penny, B.K.

    1991-01-01

    Robust methods for the predictions of deformations and lifetimes of components operating in the creep range are presented. The ingredients used for this are well-tried numerical techniques combined with the concepts of continuum damage and so-called reference stresses. The methods described are derived in order to obtain the maximum benefit during the early stages of design where broad assessments of the influences of material choice, loadings and geometry need to be made quickly and with economical use of computers. It is also intended that the same methods will be of value during operation if estimates of damage or if exercises in life extension or inspection timing are required. (orig.)

  6. Clinicopathological Analysis of Factors Related to Colorectal Tumor Perforation

    OpenAIRE

    Medina-Arana, Vicente; Martínez-Riera, Antonio; Delgado-Plasencia, Luciano; Rodríguez-González, Diana; Bravo-Gutiérrez, Alberto; Álvarez-Argüelles, Hugo; Alarcó-Hernández, Antonio; Salido-Ruiz, Eduardo; Fernández-Peralta, Antonia M.; González-Aguilera, Juan J.

    2015-01-01

    Abstract Colorectal tumor perforation is a life-threatening complication of this disease. However, little is known about the anatomopathological factors or pathophysiologic mechanisms involved. Pathological and immunohistochemical analysis of factors related with tumoral neo-angiogenesis, which could influence tumor perforation are assessed in this study. A retrospective study of patients with perforated colon tumors (Group P) and T4a nonperforated (controls) was conducted between 2001 and 20...

  7. Analysis of Key Factors Driving Japan’s Military Normalization

    Science.gov (United States)

    2017-09-01

    no change to our policy of not giving in to terrorism.” Though the prime minister was democratically supported, Koizumi’s leadership style took... of the key driving factors of Japan’s normalization. The areas of prime ministerial leadership, regional security threats, alliance issues, and...

  8. Capital Cost Optimization for Prefabrication: A Factor Analysis Evaluation Model

    Directory of Open Access Journals (Sweden)

    Hong Xue

    2018-01-01

    Full Text Available High capital cost is a significant hindrance to the promotion of prefabrication. In order to optimize cost management and reduce capital cost, this study aims to explore the latent factors and a factor analysis evaluation model. Semi-structured interviews were conducted to explore potential variables, and a questionnaire survey was then employed to collect professionals’ views on their effects. After data collection, exploratory factor analysis was adopted to explore the latent factors. Seven latent factors were identified, including “Management Index”, “Construction Dissipation Index”, “Productivity Index”, “Design Efficiency Index”, “Transport Dissipation Index”, “Material Increment Index” and “Depreciation Amortization Index”. With these latent factors, a factor analysis evaluation model (FAEM), divided into a factor analysis model (FAM) and a comprehensive evaluation model (CEM), was established. The FAM was used to explore the effect of observed variables on the high capital cost of prefabrication, while the CEM was used to evaluate the comprehensive cost management level of prefabrication projects. Case studies were conducted to verify the models. The results revealed that collaborative management had a positive effect on the capital cost of prefabrication. Material increment costs and labor costs had significant impacts on production cost. This study demonstrated the potential of on-site management and standardization design to reduce capital cost. Hence, collaborative management is necessary for cost management of prefabrication. Innovation and detailed design are needed to improve cost performance. New forms of precast component factories can be explored to reduce transportation cost. Meanwhile, targeted strategies can be adopted for different prefabrication projects. The findings optimized the capital cost and improved the cost performance through providing an evaluation and optimization model, which helps managers to
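    A minimal sketch (not the study's actual code) of how the exploratory factor analysis step above could look in Python. The file survey.csv and its column names are hypothetical; extracting 7 factors simply mirrors the number reported in the abstract.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

responses = pd.read_csv("survey.csv")              # hypothetical Likert-scale data
X = StandardScaler().fit_transform(responses)

# rotation="varimax" is available in recent scikit-learn releases;
# drop the argument if your version predates it.
fa = FactorAnalysis(n_components=7, rotation="varimax", random_state=0)
fa.fit(X)

loadings = pd.DataFrame(fa.components_.T,
                        index=responses.columns,
                        columns=[f"Factor{i + 1}" for i in range(7)])
print(loadings.round(2))                           # inspect which items load where
```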

  9. A method of uranium isotopes concentration analysis

    International Nuclear Information System (INIS)

    Lin Yuangen; Jiang Meng; Wu Changli; Duan Zhanyuan; Guo Chunying

    2010-01-01

    A basic method of uranium isotope concentration analysis is described in this paper. The iteration method is used to calculate the relative efficiency curve by analyzing the characteristic γ energy spectrum of 235U, 232U and the daughter nuclides of 238U; the relative activity can then be calculated, and finally the uranium isotope concentrations can be worked out. The result is validated by experiment. (authors)
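    As a back-of-the-envelope illustration of the last step described above (not the authors' algorithm): once relative activities of 235U and 238U have been extracted from the gamma spectrum, atom ratios follow from the half-lives via N = A · T_half / ln 2. The activity values in the sketch are made-up example numbers.

```python
import math

T_HALF_U235 = 7.04e8    # years
T_HALF_U238 = 4.468e9   # years

def atom_ratio_from_activities(a235, a238):
    """Convert an activity ratio into an atom (abundance) ratio."""
    n235 = a235 * T_HALF_U235 / math.log(2)
    n238 = a238 * T_HALF_U238 / math.log(2)
    return n235 / n238

a235, a238 = 0.3, 1.0                      # hypothetical relative activities
r = atom_ratio_from_activities(a235, a238)
enrichment = r / (1.0 + r)                 # 235U atom fraction, ignoring 234U/236U
print(f"estimated 235U abundance: {enrichment:.2%}")
```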

  10. Comparative analysis of accelerogram processing methods

    International Nuclear Information System (INIS)

    Goula, X.; Mohammadioun, B.

    1986-01-01

    The work described hereinafter is a short summary of an on-going research project concerning high-quality processing of strong-motion recordings of earthquakes. Several processing procedures have been tested, applied to synthetic signals simulating ground motion designed for this purpose. The correction methods operating in the time domain are seen to be strongly dependent upon the sampling rate. Two methods of low-frequency filtering followed by an integration of accelerations yielded satisfactory results
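    A simplified sketch of the kind of processing compared in the record above: high-pass (low-frequency) filtering of an accelerogram followed by integration to velocity and displacement. The cutoff frequency, sampling rate and synthetic signal are illustrative assumptions, not the study's test signals.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.integrate import cumulative_trapezoid

fs = 200.0                                              # sampling rate [Hz]
t = np.arange(0.0, 20.0, 1.0 / fs)
acc = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.2 * t)    # synthetic accelerogram

# 4th-order Butterworth high-pass at 0.1 Hz to suppress low-frequency drift
sos = butter(4, 0.1, btype="highpass", fs=fs, output="sos")
acc_corr = sosfiltfilt(sos, acc)

vel = cumulative_trapezoid(acc_corr, t, initial=0.0)    # acceleration -> velocity
disp = cumulative_trapezoid(vel, t, initial=0.0)        # velocity -> displacement
print(f"residual displacement: {disp[-1]:.4e}")
```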

  11. General method of quantitative spectrographic analysis

    International Nuclear Information System (INIS)

    Capdevila, C.; Roca, M.

    1966-01-01

    A spectrographic method was developed to determine 23 elements over a wide range of concentrations; the method can be applied to metallic or refractory samples. A preliminary fusion with lithium tetraborate and germanium oxide is performed in order to avoid the influence of matrix composition and crystalline structure. Germanium oxide is also employed as the internal standard. The resulting beads are mixed with graphite powder (1:1) and excited in a 10 ampere direct-current arc. (Author) 12 refs

  12. AVIS: analysis method for document coherence

    International Nuclear Information System (INIS)

    Henry, J.Y.; Elsensohn, O.

    1994-06-01

    The present document intends to give a short insight into AVIS, a method for verifying the quality of technical documents. The paper includes a presentation of the approach, which is based on the K.O.D. method, a definition of the quality criteria of a technical document, and a description of the means of evaluating these criteria. (authors). 9 refs., 2 figs

  13. Analysis of methods for quantitative renography

    International Nuclear Information System (INIS)

    Archambaud, F.; Maksud, P.; Prigent, A.; Perrin-Fayolle, O.

    1995-01-01

    This article reviews the main methods that use renography to estimate renal perfusion indices and to quantify differential and global renal function. The review addresses the pathophysiological significance of the estimated parameters according to the underlying models and the choice of the radiopharmaceutical. The dependence of these parameters on the characteristics of the region of interest and on the methods of background and attenuation correction is surveyed. Some current recommendations are proposed. (authors). 66 refs., 8 figs

  14. Periodic tests: a human factors analysis of documentary aspects

    International Nuclear Information System (INIS)

    Perinet, Romuald; Rousseau, Jean-Marie

    2007-01-01

    Periodic tests are technical inspections aimed at verifying the availability of the safety-related systems during operation. The French licensee, Electricite de France (EDF), manages periodic tests according to procedures, methods of examination and a frequency, which were defined when the systems were designed. These requirements are defined by national authorities of EDF in a reference document composed of rules of testing and tables containing the reference values to be respected. This reference document is analyzed and transformed by each 'Centre Nucleaire de Production d'Electricite' (CNPE) into station-specific operating ranges of periodic tests. In 2003, the IRSN noted that significant events for safety (ESS) involving periodic tests represented more than 20% of ESS between 2000 and 2002. Thus, 340 ESS were related to non-compliance with the conditions of the test and errors in the implementation of the procedures. A first analysis showed that almost 26% of all ESS from 2000 to 2002 were related to periodic tests. For many of them, the national reference document and the operating ranges of tests were involved. In this context, the 'Direction Generale de la Surete Nucleaire' (DGSNR) requested the 'Institut de Radioprotection et de Surete Nucleaire' (IRSN) to examine the process of definition and implementation of the periodic tests. The IRSN analyzed about thirty French licensee event reports occurring during the considered period (2000-2002). The IRSN also interviewed the main persons responsible for the processes and observed the performance of 3 periodic tests. The results of this analysis were presented to a group of experts ('Groupe Permanent') charged with delivering advice to the DGSNR about the origin of the problems identified and the improvements to be implemented. The main conclusions of the IRSN addressed the quality of the prescriptive documents. In this context, EDF decided to carry out a thorough analysis of the whole process. The first

  15. Using the method of judgement analysis to address variations in diagnostic decision making

    OpenAIRE

    Hancock, Helen C; Mason, James M; Murphy, Jerry J

    2012-01-01

    Background: Heart failure is not a clear-cut diagnosis but a complex clinical syndrome with consequent diagnostic uncertainty. Judgment analysis is a method to help clinical teams understand how they make complex decisions. The method of judgment analysis was used to determine the factors that influence clinicians' diagnostic decisions about heart failure. Methods: Three consultants, three middle-grade doctors, and two junior doctors each evaluated 45 patient scenarios. The main out...

  16. Convergence analysis of CMADR acceleration for the method of characteristics

    International Nuclear Information System (INIS)

    Park, Young Ryong; Cho, Nam Zin

    2005-01-01

    As the nuclear reactor core becomes more complex, heterogeneous, and geometrically irregular, the method of characteristics (MOC) is gaining wide use in neutron transport calculations. However, the long computer times require good acceleration methods. In our previous paper, the concept of coarse-mesh angular dependent rebalance (CMADR) acceleration was described and applied to the MOC calculations. The method is based on angular dependent rebalance factors defined on the coarse-mesh boundaries; a coarse mesh consists of several fine meshes that may be (1) heterogeneous and (2) of mixed geometries with irregular or unstructured mesh shapes. In addition, (3) the coarse-mesh boundaries may not coincide with the structural interfaces of the problem and can be chosen artificially for convenience. The CMADR acceleration method on the MOC scheme that enables the very desirable features (1), (2), and (3) above is new in the neutron transport literature to the best of the authors' knowledge. In this paper, we analyze the convergence of CMADR acceleration for MOC calculation in x-y-z (infinite) geometry by using Fourier analysis

  17. Factor analysis of the contextual fine motor questionnaire in children.

    Science.gov (United States)

    Lin, Chin-Kai; Meng, Ling-Fu; Yu, Ya-Wen; Chen, Che-Kuo; Li, Kuan-Hua

    2014-02-01

    Most studies treat fine motor skills as one subscale in a developmental test; hence, further factor analysis of fine motor skills has not been conducted. In fact, fine motor skills have been treated as a multi-dimensional domain from both clinical and theoretical perspectives, and therefore knowing their factors would be valuable. The aim of this study is to analyze the internal consistency and factor validity of the Contextual Fine Motor Questionnaire (CFMQ). Based on ecological observation and the literature, the Contextual Fine Motor Questionnaire (CFMQ) was developed and includes 5 subscales: Pen Control, Tool Use During Handicraft Activities, the Use of Dining Utensils, Connecting and Separating during Dressing and Undressing, and Opening Containers. The main purpose of this study is to establish the factorial validity of the CFMQ by conducting this factor analysis. Among 1208 questionnaires, 904 were successfully completed. Data from the children's CFMQ submitted by primary care providers were analyzed, including 485 females (53.6%) and 419 males (46.4%) from grades 1 to 5, ranging in age from 82 to 167 months (M=113.9, SD=16.3). Cronbach's alpha was used to measure internal consistency and exploratory factor analysis was applied to test the five-factor structure of the CFMQ. Results showed that Cronbach's alpha coefficients of the 5 CFMQ subscales ranged from .77 to .92, and all item-total correlations with the corresponding subscales were larger than .4 except for one item. The factor loadings of almost all items on their assigned factor were larger than .5, except for 3 items. There were five factors, explaining a total of 62.59% of the variance in the CFMQ. In conclusion, the remaining 24 items in the 5 subscales of the CFMQ had appropriate internal consistency, test-retest reliability and construct validity.
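    A small sketch of the internal-consistency statistic used above. The input is a respondents-by-items score matrix for one subscale; the data here are random numbers purely for illustration, so the resulting value is not meaningful.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_vars = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(100, 8))          # 100 respondents, 8 Likert items
print(round(cronbach_alpha(demo), 3))
```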

  18. Adjusting for multiple prognostic factors in the analysis of randomised trials

    Science.gov (United States)

    2013-01-01

    Background: When multiple prognostic factors are adjusted for in the analysis of a randomised trial, it is unclear (1) whether it is necessary to account for each of the strata, formed by all combinations of the prognostic factors (stratified analysis), when randomisation has been balanced within each stratum (stratified randomisation), or whether adjusting for the main effects alone will suffice, and (2) the best method of adjustment in terms of type I error rate and power, irrespective of the randomisation method. Methods: We used simulation to (1) determine if a stratified analysis is necessary after stratified randomisation, and (2) to compare different methods of adjustment in terms of power and type I error rate. We considered the following methods of analysis: adjusting for covariates in a regression model, adjusting for each stratum using either fixed or random effects, and Mantel-Haenszel or a stratified Cox model depending on outcome. Results: Stratified analysis is required after stratified randomisation to maintain correct type I error rates when (a) there are strong interactions between prognostic factors, and (b) there are approximately equal number of patients in each stratum. However, simulations based on real trial data found that type I error rates were unaffected by the method of analysis (stratified vs unstratified), indicating these conditions were not met in real datasets. Comparison of different analysis methods found that with small sample sizes and a binary or time-to-event outcome, most analysis methods lead to either inflated type I error rates or a reduction in power; the lone exception was a stratified analysis using random effects for strata, which gave nominal type I error rates and adequate power. Conclusions: It is unlikely that a stratified analysis is necessary after stratified randomisation except in extreme scenarios. Therefore, the method of analysis (accounting for the strata, or adjusting only for the covariates) will not
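    A minimal sketch of the "adjust for the covariates in a regression model" option discussed above, for a binary outcome. The simulated data frame and its column names (treatment, centre, baseline_severity, outcome) are hypothetical and stand in for real trial data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "centre": rng.integers(0, 4, n),
    "baseline_severity": rng.normal(size=n),
})
# Simulate a binary outcome that depends on treatment and baseline severity.
logit_p = -0.5 + 0.6 * df.treatment + 0.8 * df.baseline_severity
df["outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

unadjusted = smf.logit("outcome ~ treatment", data=df).fit(disp=0)
adjusted = smf.logit("outcome ~ treatment + C(centre) + baseline_severity",
                     data=df).fit(disp=0)
print(unadjusted.params["treatment"], adjusted.params["treatment"])
```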

  19. Microalbuminuria: Its Significance, risk factors and methods of ...

    African Journals Online (AJOL)

    Alasia Datonye

    Male gender, Hypertension, High salt (and protein) ... gender and high salt intake are also to be associated with a ... method has advantages and disadvantages, and the choice depends ... single voided urine samples to estimate quantitative proteinuria ... in an Urban and Periurban School, Port Harcourt, Rivers State.

  20. Electromagnetic modeling method for eddy current signal analysis

    International Nuclear Information System (INIS)

    Lee, D. H.; Jung, H. K.; Cheong, Y. M.; Lee, Y. S.; Huh, H.; Yang, D. J.

    2004-10-01

    An electromagnetic modeling method for eddy current signal analysis is necessary before an experiment is performed. Electromagnetic modeling methods consist of analytical methods and numerical methods. The numerical methods can in turn be divided into the Finite Element Method (FEM), the Boundary Element Method (BEM) and the Volume Integral Method (VIM). Each modeling method has merits and demerits, so a suitable modeling method can be chosen by considering the characteristics of each approach. This report explains the principle and application of each modeling method and compares the modeling programs