WorldWideScience

Sample records for test analysis methods

  1. A well test analysis method accounting for pre-test operations

    International Nuclear Information System (INIS)

    Silin, D.B.; Tsang, C.-F.

    2003-01-01

    We propose to use regular monitoring data from a production or injection well to estimate the formation hydraulic properties in the vicinity of the wellbore without interrupting operations. In our approach, we select a portion of the pumping data over a certain time interval and then derive our conclusions from analysis of these data. A distinctive feature of the proposed approach, distinguishing it from conventional methods, is the introduction of an additional parameter: an effective pre-test pumping rate. This parameter is derived from a rigorous asymptotic analysis of the flow model. It accounts for the non-uniform pressure distribution at the beginning of the testing interval caused by pre-test operations at the well. With synthetic and field examples, we demonstrate that deviation of the matching curve from the data, usually attributed to skin and wellbore storage effects, can also be interpreted through this new parameter. Moreover, with our method the data curve is matched equally well and the results of the analysis remain stable when the analyzed data interval is perturbed, whereas traditional methods are sensitive to the choice of the data interval. A special efficient minimization procedure has been developed for searching for the best-fitting parameters. We enhanced the analysis with a procedure for estimating ambient reservoir pressure and dimensionless wellbore radius. The methods reported here have been implemented in the code ODA (Operations Data Analysis). A beta version of the code is available for free testing and evaluation to interested parties.
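
    The essence of the approach, fitting observed drawdown with a model that carries one extra parameter for an effective pre-test pumping rate, can be sketched in a few lines. This is only an illustrative stand-in: the line-source log approximation, the rates, and the brute-force grid search below are assumptions, not the paper's model or its efficient minimization procedure.

```python
import math

def drawdown(t, T, q_eff, t_pre, q=1.0e-3, c=1.0):
    # Cooper-Jacob style log approximation of drawdown at the well,
    # plus one superposed term for a hypothetical effective pre-test
    # rate q_eff acting over an interval t_pre before the test.
    base = q / (4 * math.pi * T) * (math.log(t) + c)
    pre = q_eff / (4 * math.pi * T) * math.log((t + t_pre) / t)
    return base + pre

# Synthetic "monitoring data" generated from known parameters.
T_true, q_eff_true, t_pre = 5.0e-4, 2.0e-4, 3600.0
times = [60.0 * k for k in range(1, 60)]
data = [drawdown(t, T_true, q_eff_true, t_pre) for t in times]

# Brute-force grid search standing in for the efficient minimization:
# jointly fit transmissivity T and the effective pre-test rate q_eff.
grid_T = [1.0e-4 * k for k in range(1, 11)]
grid_q = [5.0e-5 * k for k in range(0, 9)]
def sse(T, q):
    return sum((drawdown(t, T, q, t_pre) - d) ** 2
               for t, d in zip(times, data))
best_T, best_q = min(((T, q) for T in grid_T for q in grid_q),
                     key=lambda p: sse(*p))
```

    With noiseless synthetic data the grid recovers the generating parameters; the paper's point is that carrying q_eff keeps such fits stable as the analyzed data interval is perturbed.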

  2. Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-03-25

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test, and the reasons for these changes are documented in this report. The most significant modifications to standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.

  3. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    Energy Technology Data Exchange (ETDEWEB)

    Ekstroem, P.A.; Broed, R. [Facilia AB, Stockholm, (Sweden)

    2006-05-15

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of those variables. Sensitivity analysis is used to apportion the relative importance of each uncertain input parameter to the output variation, and is therefore an essential tool in simulation modelling and risk assessment. Simple sensitivity analysis techniques based on fitting the output to a linear equation, such as correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are inaccurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, and non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product-moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), Sobol's method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several
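
    The gap between linear measures and rank-based ones that motivates EIKOS can be seen in a minimal pure-Python sketch of two of the listed methods, the Pearson CC and the Spearman RCC. This is illustrative only (EIKOS itself is Matlab code), and the tie-free ranking below is a simplifying assumption.

```python
import math

def pearson_cc(x, y):
    # Pearson product-moment correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def ranks(v):
    # 1-based ranks; assumes no ties, which the sample data satisfies.
    order = sorted(range(len(v)), key=v.__getitem__)
    r = [0.0] * len(v)
    for rank, idx in enumerate(order, start=1):
        r[idx] = float(rank)
    return r

def spearman_rcc(x, y):
    # Rank-transform first, then Pearson: robust to monotone
    # non-linearity, which the plain CC understates.
    return pearson_cc(ranks(x), ranks(y))

x = [float(i) for i in range(1, 11)]
y = [xi ** 3 for xi in x]   # non-linear but monotone response
```

    For the monotone but non-linear response y = x^3, the rank-based RCC is exactly 1 while the linear CC understates the dependence (about 0.93 here).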

  4. Sensitivity analysis methods and a biosphere test case implemented in EIKOS

    International Nuclear Information System (INIS)

    Ekstroem, P.A.; Broed, R.

    2006-05-01

    Computer-based models can be used to approximate real-life processes. These models are usually based on mathematical equations that depend on several variables, so the predictive capability of a model is limited by the uncertainty in the values of those variables. Sensitivity analysis is used to apportion the relative importance of each uncertain input parameter to the output variation, and is therefore an essential tool in simulation modelling and risk assessment. Simple sensitivity analysis techniques based on fitting the output to a linear equation, such as correlation or linear regression coefficients, are often used. These methods work well for linear models, but for non-linear models their sensitivity estimates are inaccurate, and models of complex natural systems are usually non-linear. Within the scope of this work, various sensitivity analysis methods that can cope with linear, non-linear, and non-monotone problems have been implemented in a software package, EIKOS, written in the Matlab language. The following sensitivity analysis methods are supported by EIKOS: the Pearson product-moment correlation coefficient (CC), the Spearman rank correlation coefficient (RCC), partial (rank) correlation coefficients (PCC), standardized (rank) regression coefficients (SRC), Sobol's method, Jansen's alternative, the extended Fourier amplitude sensitivity test (EFAST) as well as the classical FAST method, and the Smirnov and Cramer-von Mises tests. A graphical user interface has also been developed, from which the user can easily load or call the model and perform a sensitivity analysis as well as an uncertainty analysis. The implemented sensitivity analysis methods have been benchmarked with well-known test functions and compared with other sensitivity analysis software, with successful results. An illustration of the applicability of EIKOS is added to the report. The test case used is a landscape model consisting of several linked

  5. Preliminary Tests For Development Of A Non-Pertechnetate Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Diprete, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); McCabe, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-09-28

    The objective of this task was to develop a non-pertechnetate analysis method that the 222-S laboratory could easily implement. The initial scope involved working with 222-S laboratory personnel to adapt the existing Tc analytical method to fractionate the non-pertechnetate and pertechnetate. SRNL then developed and tested a method using commercial sorbents containing Aliquat® 336 to extract the pertechnetate (thereby separating it from non-pertechnetate), followed by oxidation, extraction, and stripping steps, and finally analysis by beta counting and mass spectrometry. Several additional items were partially investigated, including the impact of a 137Cs removal step. The method was initially tested on SRS tank waste samples to determine its viability. Although SRS tank waste does not contain non-pertechnetate, testing with it was useful for investigating the compatibility, separation efficiency, interference removal efficacy, and sensitivity of the method.

  6. Characteristic Value Method of Well Test Analysis for Horizontal Gas Well

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Li

    2014-01-01

    This paper presents a study of a characteristic value method of well test analysis for horizontal gas wells. Owing to the complicated seepage-flow mechanism in a horizontal gas well and the difficulty of analyzing transient pressure test data, this paper establishes mathematical models of well test analysis for a horizontal gas well with different inner and outer boundary conditions. On the basis of the solutions of the mathematical models, several type curves are plotted with the Stehfest inversion algorithm. For a gas reservoir with a closed outer boundary in the vertical direction and an infinite outer boundary in the horizontal direction, while considering wellbore storage and skin effects, the pseudopressure behavior of the horizontal gas well manifests four characteristic periods: a pure wellbore storage period, an early vertical radial flow period, an early linear flow period, and a late horizontal pseudoradial flow period. For a gas reservoir with closed outer boundaries in both the vertical and horizontal directions, the pseudopressure behavior adds a pseudosteady-state flow period that appears after the boundary response. For a gas reservoir with a closed outer boundary in the vertical direction and a constant-pressure outer boundary in the horizontal direction, the pseudopressure behavior adds a steady-state flow period that appears after the boundary response. According to the characteristic lines manifested by the pseudopressure derivative curve of each flow period, formulas are developed to obtain horizontal permeability, vertical permeability, skin factor, reservoir pressure, and pore volume of the gas reservoir, and thus the characteristic value method of well test analysis for horizontal gas wells is established. Finally, an example study verifies that the new method is reliable. The characteristic value method of well test analysis for horizontal gas wells makes the well test analysis
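
    The Stehfest inversion algorithm used to plot the type curves is a standard numerical Laplace-transform inversion; a generic sketch (not the authors' code, and with an illustrative test transform rather than a reservoir model) is:

```python
import math

def stehfest_weights(N=12):
    # Stehfest coefficients V_i for an even number of terms N.
    half = N // 2
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, half) + 1):
            s += (k ** half * math.factorial(2 * k)) / (
                math.factorial(half - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (i + half) * s)
    return V

def stehfest_invert(F, t, N=12):
    # Approximate f(t) from its Laplace transform F(s) by sampling F
    # at s = i * ln(2) / t and summing with the Stehfest weights.
    a = math.log(2.0) / t
    return a * sum(V * F(i * a)
                   for i, V in enumerate(stehfest_weights(N), start=1))
```

    Inverting F(s) = 1/(s + 1) at t = 1, for instance, recovers e^(-1) to several digits; a Laplace-space pseudopressure solution would be inverted the same way to draw a type curve.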

  7. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages such as: 1. allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form, 2. eliminating errors associated with the interpretation and programing of analysis procedures from the text of test standards, 3. lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling, to achieve sound results, 4. and providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty.Other existing ASTM standards that cover complicated phenomena use standard computer programs: 1. ASTM C1340/C1340M-10- Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program 2. ASTM F 2815 - Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program 3. ASTM E2807 - Standard Specification for 3D Imaging Data Exchange, Version 1.0 The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  8. An analytic data analysis method for oscillatory slug tests.

    Science.gov (United States)

    Chen, Chia-Shyun

    2006-01-01

    An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. The method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
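
    The idea of reading oscillation parameters off the extreme points can be sketched as follows. The damped-cosine response model and all numbers are illustrative assumptions, and the step that converts these estimates into hydraulic conductivity (the van der Kamp relations) is omitted.

```python
import math

def fit_oscillation(times, disps):
    # Successive extrema of a damped cosine are pi/omega_d apart,
    # so the mean spacing gives the damped angular frequency...
    dts = [b - a for a, b in zip(times, times[1:])]
    omega_d = math.pi * len(dts) / sum(dts)
    # ...and the mean log decrement between successive extreme
    # displacements gives the damping coefficient beta.
    decs = [math.log(abs(a) / abs(b)) for a, b in zip(disps, disps[1:])]
    beta = omega_d / math.pi * (sum(decs) / len(decs))
    return omega_d, beta

# Synthetic extrema of w(t) ~ exp(-beta*t) * cos(omega_d*t):
# alternating in sign, decaying in amplitude, evenly spaced.
beta_true, omega_true, t0 = 0.4, 5.0, 0.3
ts = [t0 + n * math.pi / omega_true for n in range(6)]
ws = [(-1.0) ** n * math.exp(-beta_true * t) for n, t in enumerate(ts)]
omega_est, beta_est = fit_oscillation(ts, ws)
```

    On these synthetic extrema the estimator returns the generating frequency and damping exactly; with field data, the comparison against tabulated theoretical counterparts then yields the conductivity.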

  9. Quantitative Evaluation of gamma-Spectrum Analysis Methods using IAEA Test Spectra

    DEFF Research Database (Denmark)

    Nielsen, Sven Poul

    1982-01-01

    A description is given of a γ-spectrum analysis method based on nonlinear least-squares fitting. The quality of the method is investigated by using statistical tests on the results from analyses of IAEA test spectra. By applying an empirical correction factor of 0.75 to the calculated peak-area u...

  10. Hydrocoin level 3 - Testing methods for sensitivity/uncertainty analysis

    International Nuclear Information System (INIS)

    Grundfelt, B.; Lindbom, B.; Larsson, A.; Andersson, K.

    1991-01-01

    The HYDROCOIN study is an international cooperative project for testing groundwater hydrology modelling strategies for performance assessment of nuclear waste disposal. The study was initiated in 1984 by the Swedish Nuclear Power Inspectorate, and the technical work was finalised in 1987. The participating organisations are regulatory authorities as well as implementing organisations in 10 countries. The study has been performed at three levels aimed at studying computer code verification, model validation, and sensitivity/uncertainty analysis, respectively. The results from the first two levels, code verification and model validation, were published in reports in 1988 and 1990, respectively. This paper focuses on some aspects of the results from Level 3, sensitivity/uncertainty analysis, for which a final report is planned to be published during 1990. For Level 3, seven test cases were defined. Some of these aimed to explore the uncertainty associated with the modelling results by simply varying parameter values and conceptual assumptions. In other test cases, statistical sampling methods were applied. One of the test cases dealt with particle tracking and the uncertainty introduced by this type of post-processing. The amount of results available is substantial, although unevenly spread over the test cases. It has not been possible to cover all aspects of the results in this paper. Instead, the different methods applied will be illustrated by some typical analyses. 4 figs., 9 refs

  11. A new modification of summary-based analysis method for large software system testing

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    As automated testing tools become common practice, thorough computer-aided testing of large software systems, including system inter-component interfaces, is required. To achieve good coverage, one must overcome the scalability problems of the various methods of analysis, which arise from the impossibility of analyzing all execution paths. The objective of this research is to build a method for inter-procedural analysis whose efficiency enables us to analyze large software systems (such as the Android OS codebase) as a whole in a reasonable time (no more than 4 hours). This article reviews existing methods of software analysis for detecting potential defects. It focuses on the symbolic execution method, since it is widely used both in static analysis of source code and in hybrid analysis of object files and intermediate representations (concolic testing). The method of symbolic execution involves separating the set of input data values into equivalence classes while choosing an execution path. The paper also considers the advantages of this method and its shortcomings. One of the main scalability problems is related to inter-procedural analysis: analysis time grows rapidly if an inlining method is used. This work therefore proposes a summary-based analysis method to solve the scalability problems. Clang Static Analyzer, an open-source static analyzer (part of the LLVM project), has been chosen as the target system; it allows us to compare the performance of inlining and summary-based inter-procedural analysis. A mathematical model for preliminary estimates is described in order to identify possible factors of performance improvement.
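
    The scalability argument can be made concrete with a toy count of explored paths (the numbers are purely illustrative, and real analyzers such as Clang Static Analyzer also bound path counts by other means):

```python
def inlined_paths(branches_per_callee, chain_length):
    # Inlining re-explores every callee path in every calling context,
    # so path counts multiply along the call chain.
    return (2 ** branches_per_callee) ** chain_length

def summary_paths(branches_per_callee, chain_length):
    # A summary-based analysis explores each callee once, caches its
    # per-path summaries, and reuses them at every call site.
    return chain_length * (2 ** branches_per_callee)
```

    A chain of five calls, each callee containing two branches, costs 4^5 = 1024 paths when inlined but only 20 callee explorations with summaries; this exponential-versus-linear growth is the scalability problem the summary-based method targets.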

  12. Standard test methods for analysis of sintered gadolinium oxide-uranium dioxide pellets

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 These test methods cover procedures for the analysis of sintered gadolinium oxide-uranium dioxide pellets to determine compliance with specifications. 1.2 The analytical procedures appear in the following order: Carbon (Total) by Direct Combustion-Thermal Conductivity Method: C1408 Test Method for Carbon (Total) in Uranium Oxide Powders and Pellets by Direct Combustion-Infrared Detection Method; Chlorine and Fluorine by Pyrohydrolysis Ion-Selective Electrode Method: C1502 Test Method for Determination of Total Chlorine and Fluorine in Uranium Dioxide and Gadolinium Oxide; Gadolinia Content by Energy-Dispersive X-Ray Spectrometry: C1456 Test Method for Determination of Uranium or Gadolinium, or Both, in Gadolinium Oxide-Uranium Oxide Pellets by X-Ray Fluorescence (XRF); Hydrogen by Inert Gas Fusion: C1457 Test Method for Determination of Total Hydrogen Content of Uranium Oxide Powders and Pellets by Carrier Gas Extraction; Isotopic Uranium Composition by Multiple-Filament Surface-Ioni...

  13. MCC-15: waste/canister accident testing and analysis method

    International Nuclear Information System (INIS)

    Slate, S.C.; Pulsipher, B.A.; Scott, P.A.

    1985-02-01

    The Materials Characterization Center (MCC) at the Pacific Northwest Laboratory (PNL) is developing standard tests to characterize the performance of nuclear waste forms under normal and accident conditions. As part of this effort, the MCC is developing MCC-15, Waste/Canister Accident Testing and Analysis. MCC-15 is used to test canisters containing simulated waste forms to provide data on the effects of accidental impacts on the waste form particle size and on canister integrity. The data is used to support the design of transportation and handling equipment and to demonstrate compliance with repository waste acceptance specifications. This paper reviews the requirements that led to the development of MCC-15, describes the test method itself, and presents some early results from tests on canisters representative of those proposed for the Defense Waste Processing Facility (DWPF). 13 references, 6 figures

  14. Application of wavelet analysis to signal processing methods for eddy-current test

    International Nuclear Information System (INIS)

    Chen, G.; Yoneyama, H.; Yamaguchi, A.; Uesugi, N.

    1998-01-01

    This study deals with the application of wavelet analysis to the detection and characterization of defects from eddy-current and ultrasonic testing signals with a low signal-to-noise ratio. Presented in this paper are methods for processing eddy-current testing signals from heat exchanger tubes of a steam generator in a nuclear power plant. The results of processing eddy-current testing signals of tube test pieces with artificial flaws show that flaw signals corrupted by noise and/or non-defect signals can be effectively detected and characterized by using the wavelet methods. (author)

  15. LOGICAL CONDITIONS ANALYSIS METHOD FOR DIAGNOSTIC TEST RESULTS DECODING APPLIED TO COMPETENCE ELEMENTS PROFICIENCY

    Directory of Open Access Journals (Sweden)

    V. I. Freyman

    2015-11-01

    Subject of Research. Representation features of education results for competence-based educational programs are analyzed. The importance of decoding and proficiency estimation for elements and components of discipline parts of competences is shown, and the purpose and objectives of the research are formulated. Methods. The paper applies methods of mathematical logic, Boolean algebra, and parametric analysis of complex diagnostic test results that control the proficiency of discipline competence elements. Results. A method of logical conditions analysis is created. It makes it possible to formulate a logical condition for the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-crossing zones, and for each zone a logical condition about the proficiency of the controlled elements is formulated. Summarized characteristics for the test result zones are given. An example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical conditions analysis is applied in the decoding algorithm of proficiency test diagnosis for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.
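
    The zone-decoding step can be sketched as a lookup over non-crossing zones of the normalized result. The zone boundaries and condition labels below are hypothetical, chosen only to illustrate the mechanism; the actual method derives them from the structure of the diagnostic test.

```python
from bisect import bisect_right

# Hypothetical zones of a normalized result r in [0, 1): each zone's
# lower bound is paired with the logical condition it implies about
# the proficiency of the controlled competence elements E1, E2.
ZONES = [
    (0.0, "no controlled element proficient"),
    (0.4, "element E1 proficient"),
    (0.7, "elements E1 and E2 proficient"),
]

def decode(r):
    # Find the zone containing r and return its logical condition.
    bounds = [lo for lo, _ in ZONES]
    return ZONES[bisect_right(bounds, r) - 1][1]
```

    Because the zones are non-crossing and ordered, a single binary search decodes any normalized result, which is what makes the search for insufficiently proficient elements easy to automate.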

  16. Non-destructive Testing of Wood Defects Based on Discriminant Analysis Method

    Directory of Open Access Journals (Sweden)

    Wenshu LIN

    2015-09-01

    Defects in wood samples were tested using stress-wave and ultrasonic techniques, and the testing results were comparatively analyzed using Fisher discriminant analysis in the statistical software SPSS. The differences in defect detection sensitivity and accuracy between stress-wave and ultrasonic testing under different wood properties and defects were determined. In practical applications, therefore, the appropriate non-destructive testing method should be chosen according to the situation, or the two detection methods should be applied together so that their shortcomings compensate for each other and the ability to distinguish timber defects improves. The results can provide a reference for further improving the reliability of timber defect detection.

  17. Standard test methods for chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of uranium hexafluoride

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 These test methods cover procedures for subsampling and for chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of uranium hexafluoride (UF6). Most of these test methods are in routine use to determine conformance to UF6 specifications in enrichment and conversion facilities. 1.2 The analytical procedures in this document appear in the following order: Note 1—Subcommittee C26.05 will confer with C26.02 concerning the renumbered sections in Test Methods C761 to determine how concerns with renumbering these sections, as analytical methods are replaced with stand-alone analytical methods, are best addressed in subsequent publications. Subsampling of Uranium Hexafluoride: Sections 7-10; Gravimetric Determination of Uranium: Sections 11-19; Titrimetric Determination of Uranium: Section 20; Preparation of High-Purity U3O8: Section 21; Isotopic Analysis: Section 22; Isotopic Analysis by Double-Standard Mass-Spectrometer Method: Sections 23-29; Determination of Hydrocarbons, Chlorocarbons, and Partially Substitut...

  18. Standard test method for creep-fatigue testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers the determination of mechanical properties pertaining to creep-fatigue deformation or crack formation, or both, in nominally homogeneous materials by the use of test specimens subjected to uniaxial forces under isothermal conditions. It concerns fatigue testing at strain rates, or with cycles involving sufficiently long hold times, such that the cyclic deformation response and the cycles to crack formation are affected by creep (and oxidation). It is intended as a test method for fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. The cyclic conditions responsible for creep-fatigue deformation and cracking vary with material and, for a given material, with temperature. 1.2 The use of this test method is limited to specimens and does not cover testing of full-scale components, structures, or consumer products. 1.3 This test method is primarily ...

  19. Standard test method for isotopic analysis of uranium hexafluoride by double standard single-collector gas mass spectrometer method

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This is a quantitative test method applicable to determining the mass percent of uranium isotopes in uranium hexafluoride (UF6) samples with 235U concentrations between 0.1 and 5.0 mass %. 1.2 This test method may be applicable for the entire range of 235U concentrations for which adequate standards are available. 1.3 This test method is for analysis by a gas magnetic sector mass spectrometer with a single collector using interpolation to determine the isotopic concentration of an unknown sample between two characterized UF6 standards. 1.4 This test method is to replace the existing test method currently published in Test Methods C761 and is used in the nuclear fuel cycle for UF6 isotopic analyses. 1.5 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appro...
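
    The interpolation between two characterized standards reduces, in its simplest form, to a linear map of the measured ratio. This sketch omits the corrections and calibration checks the standard prescribes, and the numbers in the example are illustrative, not certified values.

```python
def interpolated_assay(r_unknown, r_low, r_high, c_low, c_high):
    # Place the unknown sample's measured ion-current ratio r_unknown
    # on the line through the two bracketing UF6 standards, given as
    # (measured ratio, certified 235U mass %) pairs.
    frac = (r_unknown - r_low) / (r_high - r_low)
    return c_low + frac * (c_high - c_low)
```

    A sample measuring midway between standards certified at 3.0 and 5.0 mass % 235U would thus be reported as 4.0 mass %; the adequacy of the result depends on the standards bracketing the unknown, which is why the method's range is tied to the standards available.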

  20. A test case of the deformation rate analysis (DRA) stress measurement method

    Energy Technology Data Exchange (ETDEWEB)

    Dight, P.; Hsieh, A. [Australian Centre for Geomechanics, Univ. of WA, Crawley (Australia); Johansson, E. [Saanio and Riekkola Oy, Helsinki (Finland); Hudson, J.A. [Rock Engineering Consultants (United Kingdom); Kemppainen, K.

    2012-01-15

    As part of Posiva's site and ONKALO investigations, the in situ rock stress has been measured by a variety of techniques, including hydraulic fracturing, overcoring, and convergence measurements. All these techniques involve direct measurements in a drillhole or at the rock surface. An alternative method is to test drillhole core in a way that enables estimation of the magnitudes and orientations of the in situ rock stress. The Kaiser Effect (KE) and Deformation Rate Analysis (DRA) are two ways to do this. In the work reported here, a 'blind' DRA test was conducted on core obtained from the POSE (Posiva's Olkiluoto Spalling Experiment) niche in the ONKALO. The term 'blind' means that the two first authors of this report, who conducted the tests at the Australian Centre for Geomechanics, did not know the depths below surface at which the cores had been obtained. The results of this DRA Test Case are presented, together with an explanation of the DRA procedure. Also, additional information that would help in such DRA testing and associated analysis is explained. One of the problems in comparing the DRA results with the known Olkiluoto stress field is that the latter is highly variable across the site, as experienced by the previous in situ stress measurements and as predicted by numerical analysis. The variability is mainly caused by the presence of the large brittle deformation zones which perturb the local stress state. However, this variability reduces with depth and the stress field becomes more stable at the approximately 350 m depth at which the drillhole cores were obtained. Another compounding difficulty is that the stress quantity, being a second order tensor, requires six independent components for its specification. In other words, comparison of the DRA results and the known stress field requires comparison of six different quantities. In terms of the major principal stress orientation, the DRA results predict an orientation completely

  1. A test case of the deformation rate analysis (DRA) stress measurement method

    International Nuclear Information System (INIS)

    Dight, P.; Hsieh, A.; Johansson, E.; Hudson, J.A.; Kemppainen, K.

    2012-01-01

    As part of Posiva's site and ONKALO investigations, the in situ rock stress has been measured by a variety of techniques, including hydraulic fracturing, overcoring, and convergence measurements. All these techniques involve direct measurements in a drillhole or at the rock surface. An alternative method is to test drillhole core in a way that enables estimation of the magnitudes and orientations of the in situ rock stress. The Kaiser Effect (KE) and Deformation Rate Analysis (DRA) are two ways to do this. In the work reported here, a 'blind' DRA test was conducted on core obtained from the POSE (Posiva's Olkiluoto Spalling Experiment) niche in the ONKALO. The term 'blind' means that the first two authors of this report, who conducted the tests at the Australian Centre for Geomechanics, did not know the depths below surface at which the cores had been obtained. The results of this DRA Test Case are presented, together with an explanation of the DRA procedure. Additional information that would help in such DRA testing and associated analysis is also explained. One of the problems in comparing the DRA results with the known Olkiluoto stress field is that the latter is highly variable across the site, as shown by the previous in situ stress measurements and as predicted by numerical analysis. The variability is mainly caused by the presence of the large brittle deformation zones which perturb the local stress state. However, this variability reduces with depth, and the stress field becomes more stable at the ∼350 m depth from which the drillhole cores were obtained. Another compounding difficulty is that the stress quantity, being a second-order tensor, requires six independent components for its specification. In other words, comparison of the DRA results and the known stress field requires comparison of six different quantities. In terms of the major principal stress orientation, the DRA results predict an orientation completely different to the NW-SE regional

  2. E-learning platform for automated testing of electronic circuits using signature analysis method

    Science.gov (United States)

    Gherghina, Cǎtǎlina; Bacivarov, Angelica; Bacivarov, Ioan C.; Petricǎ, Gabriel

    2016-12-01

    Dependability of electronic circuits can be ensured only through testing of circuit modules. This is done by generating test vectors and applying them to the circuit. Testability should be viewed as a concerted effort to ensure maximum efficiency throughout the product life cycle, from the conception and design stage, through production, to repairs during the product's operation. This paper presents the platform developed by the authors for training in testability in electronics in general, and in the signature analysis method in particular. The platform highlights the two approaches in the field, namely the analog and digital signatures of circuits. As part of this e-learning platform, a database of signatures of different electronic components has been developed, meant to put the spotlight on different fault-detection techniques and, building on these, on self-repairing techniques for systems containing such components. An approach to realizing self-testing circuits based on the MATLAB environment and the signature analysis method is proposed. The paper also analyses the benefits of the signature analysis method and simulates signature-analyzer performance based on pseudo-random sequences.
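The pseudo-random-sequence signature analyzer the abstract mentions is conventionally built from a linear-feedback shift register (LFSR) that compresses a circuit's output stream into a short signature. A minimal sketch (the 16-bit width and CRC-16-CCITT-like tap positions are illustrative assumptions, not the paper's actual design):

```python
def lfsr_signature(bitstream, taps=(16, 14, 13, 11), width=16):
    """Compress a bit stream into a compact signature with a Fibonacci LFSR.

    The width and tap set are illustrative choices, not the analyzer
    described in the paper.
    """
    reg = 0
    mask = (1 << width) - 1
    for bit in bitstream:
        # feedback = input bit XOR the tapped register bits
        fb = bit
        for t in taps:
            fb ^= (reg >> (t - 1)) & 1
        reg = ((reg << 1) | fb) & mask
    return reg
```

Because the register update is linear and invertible over GF(2), any single-bit error in the tested circuit's output stream produces a different signature than the fault-free ("golden") one.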

  3. Analysis of Within-Test Variability of Non-Destructive Test Methods to Evaluate Compressive Strength of Normal Vibrated and Self-Compacting Concretes

    Science.gov (United States)

    Nepomuceno, Miguel C. S.; Lopes, Sérgio M. R.

    2017-10-01

    Non-destructive tests (NDT) have been used in the last decades for the assessment of in-situ quality and integrity of concrete elements. An important step in the application of NDT methods concerns the interpretation and validation of the test results. In general, interpretation of NDT results should involve three distinct phases leading to the development of conclusions: processing of collected data, analysis of within-test variability and quantitative evaluation of the property under investigation. The analysis of within-test variability can provide valuable information, since it can be compared with the within-test variability associated with the NDT method in use, either to provide a measure of quality control or to detect the presence of abnormal circumstances during the in-situ application. This paper reports the analysis of the experimental results of within-test variability of NDT obtained for normal vibrated concrete and self-compacting concrete. The NDT methods reported include the surface hardness test, ultrasonic pulse velocity test, penetration resistance test, pull-off test, pull-out test and maturity test. The obtained results are discussed and conclusions are presented.
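The within-test variability compared here is typically expressed as a coefficient of variation of replicate readings taken at one test location; a small sketch (the function name is mine):

```python
def within_test_cov(replicates):
    """Within-test coefficient of variation (%) of replicate NDT readings
    at a single location: sample standard deviation over the mean.

    This is the usual dispersion measure compared against a method's
    published repeatability; the function name is illustrative.
    """
    n = len(replicates)
    mean = sum(replicates) / n
    var = sum((r - mean) ** 2 for r in replicates) / (n - 1)
    return 100.0 * var ** 0.5 / mean
```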

  4. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations in small samples. We used a pooled resampling method in the nonparametric bootstrap test that may overcome the problem related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test while maintaining the type I error probability under all conditions except Cauchy and extremely variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
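The pooled-resampling idea can be sketched as follows: both samples are pooled to enforce the null hypothesis, groups of the original sizes are redrawn from the pool, and the observed Welch-type t statistic is referred to the resulting bootstrap distribution. This is a simplified illustration of the general approach, not the authors' exact algorithm:

```python
import random


def pooled_bootstrap_t_test(x, y, n_boot=2000, seed=1):
    """Two-sample bootstrap t-test with pooled resampling (a sketch)."""
    def mean(a):
        return sum(a) / len(a)

    def t_stat(a, b):
        # Welch-type t statistic (unequal variances)
        ma, mb = mean(a), mean(b)
        va = sum((v - ma) ** 2 for v in a) / (len(a) - 1)
        vb = sum((v - mb) ** 2 for v in b) / (len(b) - 1)
        return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

    rng = random.Random(seed)
    t_obs = t_stat(x, y)
    pooled = list(x) + list(y)  # pooling enforces the null hypothesis
    hits = 0
    for _ in range(n_boot):
        xs = [rng.choice(pooled) for _ in x]
        ys = [rng.choice(pooled) for _ in y]
        try:
            if abs(t_stat(xs, ys)) >= abs(t_obs):
                hits += 1
        except ZeroDivisionError:  # degenerate resample: count as extreme
            hits += 1
    return (hits + 1) / (n_boot + 1)
```

With clearly separated samples the p-value is small; resampling both groups from the combined data approximates the distribution of t under equal means without any normality assumption.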

  5. Statistical methods for the analysis of a screening test for chronic beryllium disease

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.; Neubert, R.L. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Smith, M.H.; Littlefield, L.G.; Colyer, S.P. [Oak Ridge Inst. for Science and Education, TN (United States). Medical Sciences Div.

    1994-10-01

    The lymphocyte proliferation test (LPT) is a noninvasive screening procedure used to identify persons who may have chronic beryllium disease. A practical problem in the analysis of LPT well counts is the occurrence of outlying data values (approximately 7% of the time). A log-linear regression model is used to describe the expected well counts for each set of test conditions. The variance of the well counts is proportional to the square of the expected counts, and two resistant regression methods are used to estimate the parameters of interest. The first approach uses least absolute values (LAV) on the log of the well counts to estimate beryllium stimulation indices (SIs) and the coefficient of variation. The second approach uses a resistant regression version of maximum quasi-likelihood estimation. A major advantage of the resistant regression methods is that it is not necessary to identify and delete outliers. These two new methods for the statistical analysis of the LPT data and the outlier rejection method that is currently being used are applied to 173 LPT assays. The authors strongly recommend the LAV method for routine analysis of the LPT.
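A least-absolute-values line fit of the kind used on the log well counts can be computed by iteratively reweighted least squares; a sketch (the IRLS scheme and function name are illustrative, not the paper's implementation):

```python
def lav_line_fit(x, y, n_iter=100, eps=1e-8):
    """Fit y = a*x + b by least absolute values (L1) via iteratively
    reweighted least squares -- one standard way to compute a resistant
    regression fit. Illustrative sketch, not the authors' code."""
    w = [1.0] * len(x)
    a = b = 0.0
    for _ in range(n_iter):
        sw = sum(w)
        sx = sum(wi * xi for wi, xi in zip(w, x))
        sy = sum(wi * yi for wi, yi in zip(w, y))
        sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        denom = sw * sxx - sx * sx
        a = (sw * sxy - sx * sy) / denom
        b = (sy - a * sx) / sw
        # downweight points with large residuals (resistance to outliers)
        w = [1.0 / max(abs(yi - (a * xi + b)), eps) for xi, yi in zip(x, y)]
    return a, b
```

Unlike ordinary least squares, the fit is barely moved by a gross outlier, which is why no separate outlier identification and deletion step is needed.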

  6. Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods

    Science.gov (United States)

    Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan

    2016-01-01

    The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.

  7. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    Science.gov (United States)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation in which calibrating all model parameters, or estimating all of their uncertainties, is often computationally infeasible. Hence, techniques that determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state of the art, the convergence of the sensitivity methods is usually not checked. If anything, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may itself become computationally expensive for large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the convergence-testing method's independence of the SA method, we applied it to two widely used global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
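For reference, the Morris elementary-effects screening mentioned above can be sketched in a few lines (radial one-at-a-time sampling; the step size, trajectory count, and function name are illustrative choices, and this is not the MVA method the abstract introduces):

```python
import random


def elementary_effects(f, bounds, n_traj=20, seed=0):
    """Morris screening: mean absolute elementary effect per parameter.

    For each random base point, each parameter is perturbed one at a time
    and the normalized output change |EE| is averaged over trajectories.
    """
    rng = random.Random(seed)
    k = len(bounds)
    mu_star = [0.0] * k
    for _ in range(n_traj):
        base = [lo + rng.random() * (hi - lo) for lo, hi in bounds]
        f0 = f(base)
        for i, (lo, hi) in enumerate(bounds):
            step = 0.1 * (hi - lo)
            pert = list(base)
            # step up unless that would leave the parameter's range
            pert[i] = pert[i] + step if pert[i] + step <= hi else pert[i] - step
            mu_star[i] += abs((f(pert) - f0) / step)
    return [m / n_traj for m in mu_star]
```

For a linear model the mean |EE| recovers each coefficient exactly, which makes the ranking of "most important parameters" easy to verify.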

  8. Statistical methods in epidemiology. VII. An overview of the chi2 test for 2 x 2 contingency table analysis.

    Science.gov (United States)

    Rigby, A S

    2001-11-10

    The odds ratio is an appropriate method of analysis for data in 2 x 2 contingency tables. However, other methods of analysis exist. One such method is based on the chi2 test of goodness-of-fit. Key players in the development of statistical theory include Pearson, Fisher and Yates. Data are presented in the form of 2 x 2 contingency tables and a method of analysis based on the chi2 test is introduced. There are many variations of the basic test statistic, one of which is the chi2 test with Yates' continuity correction. The usefulness (or not) of Yates' continuity correction is discussed. Problems of interpretation when the method is applied to k x m tables are highlighted. Some properties of the chi2 test are illustrated by taking examples from the author's teaching experiences. Journal editors should be encouraged to give both observed and expected cell frequencies so that better information comes out of the chi2 test statistic.
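The basic statistic, with and without Yates' continuity correction, can be computed directly from the four cell counts using the standard 2 x 2 shortcut formula:

```python
def chi2_2x2(a, b, c, d, yates=False):
    """Pearson chi-squared statistic for a 2 x 2 table [[a, b], [c, d]].

    chi2 = n (ad - bc)^2 / [(a+b)(c+d)(a+c)(b+d)]; with Yates'
    continuity correction, |ad - bc| is reduced by n/2 before squaring.
    """
    n = a + b + c + d
    num = abs(a * d - b * c)
    if yates:
        num = max(num - n / 2, 0)
    return n * num ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

For the table [[10, 20], [30, 40]] this gives about 0.794 uncorrected; Yates' correction always reduces the statistic, which is the source of the "too conservative" criticism the paper discusses.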

  9. Simplified method of ''push-pull'' test data analysis for determining in situ reaction rate coefficients

    International Nuclear Information System (INIS)

    Haggerty, R.; Schroth, M.H.; Istok, J.D.

    1998-01-01

    The single-well, ''push-pull'' test method is useful for obtaining information on a wide variety of aquifer physical, chemical, and microbiological characteristics. A push-pull test consists of the pulse-type injection of a prepared test solution into a single monitoring well followed by the extraction of the test solution/ground water mixture from the same well. The test solution contains a conservative tracer and one or more reactants selected to investigate a particular process. During the extraction phase, the concentrations of tracer, reactants, and possible reaction products are measured to obtain breakthrough curves for all solutes. This paper presents a simplified method of data analysis that can be used to estimate a first-order reaction rate coefficient from these breakthrough curves. Rate coefficients are obtained by fitting a regression line to a plot of normalized concentrations versus elapsed time, requiring no knowledge of aquifer porosity, dispersivity, or hydraulic conductivity. A semi-analytical solution to the advective-dispersion equation is derived and used in a sensitivity analysis to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a homogeneous, confined aquifer with a fully-penetrating injection/extraction well and varying porosity, dispersivity, test duration, and reaction rate. A numerical flow and transport code (SUTRA) is used to evaluate the ability of the simplified method to estimate reaction rate coefficients in simulated push-pull tests in a heterogeneous, unconfined aquifer with a partially penetrating well. In all cases the simplified method provides accurate estimates of reaction rate coefficients; estimation errors ranged from 0.1 to 8.9% with most errors less than 5%.
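The simplified analysis reduces to a straight-line regression: for a first-order reaction, the log of the reactant concentration normalized by the conservative tracer declines linearly with elapsed time, and the slope gives -k. A sketch (variable names are mine):

```python
import math


def first_order_rate(times, reactant_over_tracer):
    """Estimate a first-order rate coefficient k from push-pull
    breakthrough data by least squares on ln(C_reactant/C_tracer)
    versus elapsed time -- the simplified regression idea, sketched."""
    ys = [math.log(r) for r in reactant_over_tracer]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = sum((t - mx) * (y - my) for t, y in zip(times, ys)) / \
        sum((t - mx) ** 2 for t in times)
    return -slope  # ln ratio declines linearly; k is the negative slope
```

Normalizing by the tracer cancels dilution and dispersion to first order, which is why no porosity, dispersivity, or conductivity values are needed.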

  10. Standard test methods for chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade uranium dioxide powders and pellets

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 These test methods cover procedures for the chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade uranium dioxide powders and pellets to determine compliance with specifications. 1.2 This test method covers the determination of uranium and the oxygen to uranium atomic ratio in nuclear-grade uranium dioxide powder and pellets. 1.4 This test method covers the determination of chlorine and fluorine in nuclear-grade uranium dioxide. With a 1 to 10-g sample, concentrations of 5 to 200 μg/g of chlorine and 1 to 200 μg/g of fluorine are determined without interference. 1.5 This test method covers the determination of moisture in uranium dioxide samples. Detection limits are as low as 10 μg. 1.6 This test method covers the determination of nitride nitrogen in uranium dioxide in the range from 10 to 250 μg. 1.7 This test method covers the spectrographic analysis of nuclear-grade UO2 for the 26 elements in the ranges indicated in Table 2. 1.8 For simultaneous determination of trace ele...

  11. Mobile Image Ratiometry: A New Method for Instantaneous Analysis of Rapid Test Strips

    OpenAIRE

    Donald C. Cooper; Bryan Callahan; Phil Callahan; Lee Burnett

    2012-01-01

    Here we describe Mobile Image Ratiometry (MIR), a new method for the automated quantification of standardized rapid immunoassay strips using consumer-based mobile smartphone and tablet cameras. To demonstrate MIR we developed a standardized method using rapid immunotest strips directed against cocaine (COC) and its major metabolite, benzoylecgonine (BE). We performed image analysis of three brands of commercially available dye-conjugated anti-COC/BE antibody test strips in response to three d...

  12. Methods of data analysis for the micro-scale abrasion test on coated substrates

    DEFF Research Database (Denmark)

    Kusano, Y.; Acker, K. Van; Hutchings, I.M.

    2004-01-01

    ...available for data analysis in this test and proposes some new approaches. The wear volumes of the coating and the substrate can be described by two parameters chosen from among the inner and outer crater diameters, the coating thickness, and the penetration depth. The inner crater diameter can usually... A method is proposed for plotting the experimental results, termed the double intercept method, which provides a clear graphical representation of the data and usually gives reliable values for kappa(c) and kappa(s). However, for the analysis of typical experimental data to obtain values for the specific wear rates... another method, termed the KVH plot, is shown to be somewhat more consistently accurate. Detailed guidelines are proposed for analysing the data by this method. (C) 2003 Elsevier B.V. All rights reserved.

  13. A Teaching Method on Basic Chemistry for Freshman (II) : Teaching Method with Pre-test and Post-test

    OpenAIRE

    立木, 次郎; 武井, 庚二

    2004-01-01

    This report reviews a teaching method for basic chemistry for freshmen in the first semester. We reviewed this teaching method, with its pre-test and post-test, by means of official and private questionnaires. Several hints and thoughts on teaching skills are obtained from this analysis.

  14. Standard test methods for chemical and mass spectrometric analysis of nuclear-grade gadolinium oxide (Gd2O3) powder

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2006-01-01

    1.1 These test methods cover procedures for the chemical and mass spectrometric analysis of nuclear-grade gadolinium oxide powders to determine compliance with specifications. 1.2 The analytical procedures appear in the following order (with related ASTM standards): Carbon by Direct Combustion-Thermal Conductivity (C1408, Test Method for Carbon (Total) in Uranium Oxide Powders and Pellets by Direct Combustion-Infrared Detection Method); Total Chlorine and Fluorine by Pyrohydrolysis-Ion Selective Electrode (C1502, Test Method for Determination of Total Chlorine and Fluorine in Uranium Dioxide and Gadolinium Oxide); Loss of Weight on Ignition (Sections 7-13); Sulfur by Combustion-Iodometric Titration; Impurity Elements by Spark-Source Mass Spectrography (C761, Test Methods for Chemical, Mass Spectrometric, Spectrochemical, Nuclear, and Radiochemical Analysis of Uranium Hexafluoride; C1287, Test Method for Determination of Impurities in Uranium Dioxide by Inductively Coupled Plasma Mass Spectrometry); Gadolinium Content in Gadolinium Oxid...

  15. Standard test methods for chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of nuclear-grade plutonium nitrate solutions

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 These test methods cover procedures for the chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of nuclear-grade plutonium nitrate solutions to determine compliance with specifications. 1.2 The analytical procedures appear in the following order (section numbers in parentheses): Plutonium by Controlled-Potential Coulometry; Plutonium by Amperometric Titration with Iron(II); Plutonium by Diode Array Spectrophotometry; Free Acid by Titration in an Oxalate Solution (8 to 15); Free Acid by Iodate Precipitation-Potentiometric Titration Test Method (16 to 22); Uranium by Arsenazo I Spectrophotometric Test Method (23 to 33); Thorium by Thorin Spectrophotometric Test Method (34 to 42); Iron by 1,10-Phenanthroline Spectrophotometric Test Method (43 to 50); Impurities by ICP-AES; Chloride by Thiocyanate Spectrophotometric Test Method (51 to 58); Fluoride by Distillation-Spectrophotometric Test Method (59 to 66); Sulfate by Barium Sulfate Turbidimetric Test Method (67 to 74); Isotopic Composition by Mass Spectrom...

  16. Three-beam interferogram analysis method for surface flatness testing of glass plates and wedges

    Science.gov (United States)

    Sunderland, Zofia; Patorski, Krzysztof

    2015-09-01

    When testing transparent plates with high quality flat surfaces and a small angle between them, the three-beam interference phenomenon is observed. Since the reference beam and the object beams reflected from both the front and back surface of a sample are detected, the recorded intensity distribution may be regarded as a sum of three fringe patterns. Images of that type cannot be successfully analyzed with standard interferogram analysis methods. They contain, however, useful information on the tested plate's surface flatness and its optical thickness variations. Several methods have been elaborated to decode the plate parameters. Our technique represents a competitive solution which allows for retrieval of the phase components of the three-beam interferogram. It requires recording two images: a three-beam interferogram and the two-beam one with the reference beam blocked. Mutually subtracting these images leads to an intensity distribution which, under some assumptions, provides access to the two component fringe sets which encode surface flatness. At various stages of processing we take advantage of nonlinear operations as well as single-frame interferogram analysis methods. The two-dimensional continuous wavelet transform (2D CWT) is used to separate a particular fringe family from the overall interferogram intensity distribution as well as to estimate the phase distribution from a pattern. We distinguish two processing paths depending on the relative density of the fringe sets, which is connected with the geometry of the sample and the optical setup. The proposed method is tested on simulated data.

  17. Development of Test-Analysis Models (TAM) for correlation of dynamic test and analysis results

    Science.gov (United States)

    Angelucci, Filippo; Javeed, Mehzad; Mcgowan, Paul

    1992-01-01

    The primary objective of structural analysis of aerospace applications is to obtain a verified finite element model (FEM). The verified FEM can be used for loads analysis, evaluate structural modifications, or design control systems. Verification of the FEM is generally obtained as the result of correlating test and FEM models. A test analysis model (TAM) is very useful in the correlation process. A TAM is essentially a FEM reduced to the size of the test model, which attempts to preserve the dynamic characteristics of the original FEM in the analysis range of interest. Numerous methods for generating TAMs have been developed in the literature. The major emphasis of this paper is a description of the procedures necessary for creation of the TAM and the correlation of the reduced models with the FEM or the test results. Herein, three methods are discussed, namely Guyan, Improved Reduced System (IRS), and Hybrid. Also included are the procedures for performing these analyses using MSC/NASTRAN. Finally, application of the TAM process is demonstrated with an experimental test configuration of a ten bay cantilevered truss structure.
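Of the reduction methods discussed, Guyan (static) condensation is the simplest: the omitted DOFs are slaved to the retained set through the stiffness partition, and the same transformation reduces the mass matrix. A sketch, assuming numpy is available (the function name and the small test matrices are illustrative):

```python
import numpy as np


def guyan_reduce(K, M, keep):
    """Guyan (static) condensation of stiffness K and mass M onto the
    retained DOF set `keep`. T maps retained DOFs to the full set via
    x_omit = -Koo^{-1} Koa x_keep; Kred = T'KT, Mred = T'MT."""
    keep = list(keep)
    omit = [i for i in range(K.shape[0]) if i not in keep]
    Koa = K[np.ix_(omit, keep)]
    Koo = K[np.ix_(omit, omit)]
    T = np.zeros((K.shape[0], len(keep)))
    T[keep, :] = np.eye(len(keep))
    T[omit, :] = -np.linalg.solve(Koo, Koa)  # static condensation
    return T.T @ K @ T, T.T @ M @ T, T
```

Guyan reduction is exact for static response but only approximate dynamically, which is why the IRS and Hybrid TAMs discussed above were developed as refinements.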

  18. [Inappropriate test methods in allergy].

    Science.gov (United States)

    Kleine-Tebbe, J; Herold, D A

    2010-11-01

    Inappropriate test methods are increasingly utilized to diagnose allergy. They fall into two categories: I. Tests with an obscure theoretical basis, missing validity and lacking reproducibility, such as bioresonance, electroacupuncture, applied kinesiology and the ALCAT test. These methods lack both the technical and clinical validation needed to justify their use. II. Tests with real data, but misleading interpretation: detection of IgG or IgG4 antibodies or lymphocyte proliferation tests to foods do not make it possible to separate healthy from diseased subjects, whether for food intolerance, allergy or other diagnoses. The absence of diagnostic specificity induces many false positive findings in healthy subjects. As a result, unjustified diets may limit quality of life and lead to malnutrition. Proliferation of lymphocytes in response to foods can show elevated rates in patients with allergies, but these values do not allow individual diagnosis of hypersensitivity due to their broad variation. Successful internet marketing, infiltration of academic programs and superficial reporting by the media promote the popularity of unqualified diagnostic tests, in allergy as elsewhere. Therefore, critical observation, quick analysis of, and clear comments on unqualified methods by the scientific medical societies are more important than ever.

  19. Testing of the derivative method and Kruskal-Wallis technique for sensitivity analysis of SYVAC

    International Nuclear Information System (INIS)

    Prust, J.O.; Edwards, H.H.

    1985-04-01

    The Kruskal-Wallis method of one-way analysis of variance by ranks has proved successful in identifying input parameters which have an important influence on dose. This technique was extended to test for first order interactions between parameters. In view of a number of practical difficulties and the computing resources required to carry out a large number of runs, this test is not recommended for detecting interactions between parameters. The derivative method of sensitivity analysis examines the partial derivative values of each input parameter with dose at various points across the parameter range. Important input parameters are associated with high derivatives and the results agreed well with previous sensitivity studies. The derivative values also provided information on the data generation distributions to be used for the input parameters in order to concentrate sampling in the high dose region of the parameter space to improve the sampling efficiency. Furthermore, the derivative values provided information on parameter interactions, the feasibility of developing a high dose algorithm and formed the basis for developing a regression equation. (author)
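The Kruskal-Wallis H statistic used for this screening is straightforward to compute from pooled ranks; a sketch with average ranks for ties (the tie-correction factor on H is omitted for brevity):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (one-way analysis of variance by ranks).

    H = 12 / (n(n+1)) * sum(R_g^2 / n_g) - 3(n+1), where R_g is the rank
    sum of group g over the pooled sample; ties get average ranks.
    """
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + j + 1) / 2  # average of ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg
        i = j
    return 12 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)) - 3 * (n + 1)
```

Large H (compared with a chi-squared reference with groups-1 degrees of freedom) flags input parameters whose groupings shift the dose distribution, which is how important parameters are identified.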

  20. Nondestructive testing method

    International Nuclear Information System (INIS)

    Porter, J.F.

    1996-01-01

    Nondestructive testing (NDT) is the use of physical and chemical methods for evaluating material integrity without impairing its intended usefulness or continuing service. Nondestructive tests are used by manufacturers for the following reasons: 1) to ensure product reliability; 2) to prevent accidents and save human lives; 3) to aid in better product design; 4) to control manufacturing processes; and 5) to maintain a uniform quality level. Nondestructive testing is used extensively on power plants, oil and chemical refineries, offshore oil rigs and pipelines (NDT can even be conducted underwater), and on welds on tanks, boilers, pressure vessels and heat exchangers. NDT is now also being used for testing concrete and composite materials. Because of the criticality of its application, NDT should be performed and the results evaluated by qualified personnel. There are five basic nondestructive examination methods: 1) liquid penetrant testing - used for detecting surface flaws in metallic and nonmetallic materials; portable and relatively inexpensive; 2) magnetic particle testing - used to detect surface and subsurface flaws in ferromagnetic materials; 3) radiographic testing - used to detect internal flaws and significant variations in material composition and thickness; 4) ultrasonic testing - used to detect internal and external flaws, employing ultrasound to measure the thickness of a material or to examine the internal structure for discontinuities; 5) eddy current testing - used to detect surface and subsurface flaws in conductive materials. No single nondestructive examination method can find all discontinuities in all of the materials capable of being tested. The most important consideration is for the specifier of the test to be familiar with the test method and its applicability to the type and geometry of the material and the flaws to be detected.

  1. Selected hydraulic test analysis techniques for constant-rate discharge tests

    International Nuclear Information System (INIS)

    Spane, F.A. Jr.

    1993-03-01

    The constant-rate discharge test is the principal field method used in hydrogeologic investigations for characterizing the hydraulic properties of aquifers. To implement this test, the aquifer is stressed by withdrawing ground water from a well, using a downhole pump. Discharge during the withdrawal period is regulated and maintained at a constant rate. Water-level response within the well is monitored during the active pumping phase (i.e., drawdown) and during the subsequent recovery phase following termination of pumping. The analysis of drawdown and recovery response within the stressed well (and any nearby monitored observation wells) provides a means for estimating the hydraulic properties of the tested aquifer, as well as for discerning formational and nonformational flow conditions (e.g., wellbore storage, wellbore damage, presence of boundaries, etc.). Standard analytical methods used for constant-rate pumping tests include both log-log type-curve matching and semi-log straight-line methods. This report presents a current ''state of the art'' review of selected transient analysis procedures for constant-rate discharge tests. Specific topics examined include: analytical methods for constant-rate discharge tests conducted within confined and unconfined aquifers; effects of various nonideal formation factors (e.g., anisotropy, hydrologic boundaries) and well construction conditions (e.g., partial penetration, wellbore storage) on constant-rate test response; and the use of pressure derivatives in diagnostic analysis for the identification of specific formation, well construction, and boundary conditions.
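Of the semi-log straight-line methods mentioned, the Cooper-Jacob approximation is the standard example: late-time drawdown plots linearly against log10 of time, the slope gives transmissivity and the zero-drawdown intercept gives storativity. A sketch (variable names are mine; consistent SI units assumed):

```python
import math


def cooper_jacob(times, drawdowns, Q, r):
    """Cooper-Jacob semi-log straight-line analysis of a constant-rate
    test: s = (2.303 Q / 4 pi T) log10(2.25 T t / r^2 S). Fits the
    late-time line s vs log10(t) and inverts for T and S (a sketch)."""
    xs = [math.log10(t) for t in times]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(drawdowns) / n
    slope = sum((x - mx) * (s - my) for x, s in zip(xs, drawdowns)) / \
        sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    t0 = 10 ** (-intercept / slope)        # time where the line hits s = 0
    T = 2.303 * Q / (4 * math.pi * slope)  # transmissivity from the slope
    S = 2.25 * T * t0 / r ** 2             # storativity from the intercept
    return T, S
```

In practice only late-time data (after wellbore storage has dissipated) should enter the fit, which is one reason the diagnostic pressure-derivative analysis discussed above is useful.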

  2. Analysis of Power Transfer Efficiency of Standard Integrated Circuit Immunity Test Methods

    Directory of Open Access Journals (Sweden)

    Hai Au Huynh

    2015-01-01

    Direct power injection (DPI) and bulk current injection (BCI) methods are defined in IEC 62132-3 and IEC 62132-4 as electromagnetic immunity test methods for integrated circuits (ICs). The forward power measured at the RF noise generator when the IC malfunctions is used as the measure of the immunity level of the IC. However, the actual power that causes failure in ICs differs from the forward power measured at the noise source. Power transfer efficiency is used as a measure of the power loss of the noise injection path. In this paper, the power transfer efficiencies of the DPI and BCI methods are derived and validated experimentally with an immunity test setup of a clock-divider IC. Power transfer efficiency varies significantly over the frequency range as a function of the test method used and the IC input impedance. For the frequency range of 15 kHz to 1 GHz, the power transfer efficiency of the BCI test was consistently higher than that of the DPI test. In the DPI test, power transfer efficiency is particularly low in the lower test frequency range, up to 10 MHz. When performing IC immunity tests following the standards, these characteristics of the test methods need to be considered.
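Power transfer efficiency in such a setup is often modeled, to first order, as the mismatch loss between the 50-ohm injection path and the IC input impedance. This textbook reflection model is my simplification for illustration, not the paper's derivation:

```python
def power_transfer_efficiency(z_ic, z0=50.0):
    """Fraction of forward power delivered to a pin with input impedance
    z_ic (possibly complex) fed from a z0-matched path:
    eta = 1 - |Gamma|^2, with Gamma = (z_ic - z0) / (z_ic + z0)."""
    gamma = (z_ic - z0) / (z_ic + z0)
    return 1 - abs(gamma) ** 2
```

A matched 50-ohm input absorbs all forward power, while a high-impedance digital input reflects most of it, which is consistent with the observation that forward power alone overstates the stress actually reaching the IC.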

  3. [Seed quality test methods of Paeonia suffruticosa].

    Science.gov (United States)

    Cao, Ya-Yue; Zhu, Zai-Biao; Guo, Qiao-Sheng; Liu, Li; Wang, Chang-Lin

    2014-11-01

    In order to optimize the testing methods for Paeonia suffruticosa seed quality and provide a basis for establishing seed testing rules and a seed quality standard for P. suffruticosa, the seed quality of P. suffruticosa from different producing areas was measured based on the related seed testing regulations, and seed testing methods for the quality attributes of P. suffruticosa were established preliminarily. The sample weight of P. suffruticosa was at least 7 000 g for purity analysis and at least 700 g for the other tests. Phenotypic observation and size measurement were used for authenticity testing. The 1 000-seed weight was determined by the 100-seed method, and the water content was determined by the low-temperature drying method (10 hours). After soaking in distilled water for 24 h, the seeds were treated with a day/night temperature stratification (25 °C/20 °C, day/night) in the dark for 60 d. After soaking in a solution of GA3 at 300 mg·L(-1) for 24 h, the P. suffruticosa seeds were cultured in wet sand at 15 °C for 12-60 days for germination testing. Seed viability was tested by the TTC method.

  4. Standard test method for creep-fatigue crack growth testing

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers the determination of creep-fatigue crack growth properties of nominally homogeneous materials by use of pre-cracked compact type, C(T), test specimens subjected to uniaxial cyclic forces. It concerns fatigue cycling with sufficiently long loading/unloading rates or hold-times, or both, to cause creep deformation at the crack tip, with that creep deformation responsible for enhanced crack growth per loading cycle. It is intended as a guide for creep-fatigue testing performed in support of such activities as materials research and development, mechanical design, process and quality control, product performance, and failure analysis. Therefore, this method requires testing of at least two specimens that yield overlapping crack growth rate data. The cyclic conditions responsible for creep-fatigue deformation and enhanced crack growth vary with material and with temperature for a given material. The effects of environment such as time-dependent oxidation in enhancing the crack growth ra...
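    A common simplification used when discussing creep-fatigue interaction (not the procedure prescribed by the standard itself) is linear summation of the cycle-dependent and time-dependent growth contributions. All parameter values below are hypothetical:

```python
def creep_fatigue_growth_rate(C, m, dK, creep_rate, t_hold):
    """Linear-summation sketch of creep-fatigue crack growth per cycle:
        (da/dN)_total = C * dK**m + (da/dt)_creep * t_hold
    C, m: Paris-law constants; dK: stress-intensity range [MPa*sqrt(m)];
    creep_rate: time-dependent growth rate [m/s]; t_hold: hold time [s]."""
    return C * dK ** m + creep_rate * t_hold

# hypothetical numbers: the hold-time term dominates the per-cycle growth here
print(creep_fatigue_growth_rate(1e-11, 3.0, 20.0, 1e-8, 60.0))
```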

  5. Standard test methods for chemical and spectrochemical analysis of nuclear-Grade silver-indium-cadmium alloys

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1990-01-01

    1.1 These test methods cover procedures for the chemical and spectrochemical analysis of nuclear grade silver-indium-cadmium (Ag-In-Cd) alloys to determine compliance with specifications. 1.2 The analytical procedures appear in the following order: Silver, Indium, and Cadmium by a Titration Method (Sections 7-15); Trace Impurities by the Carrier-Distillation Spectrochemical Method (Sections 16-22). 1.3 The values stated in SI units are to be regarded as the standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. For specific hazard and precautionary statements, see Section 5 and Practices E50. 7.1 This test method is applicable to the determination of silver, indium, and cadmium in alloys of approximately 80 % silver, 15 % indium, and 5 % cadmium used in nuclear reactor control r...

  6. Standard test methods for chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade boron carbide

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2004-01-01

    1.1 These test methods cover procedures for the chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade boron carbide powder and pellets to determine compliance with specifications. 1.2 The analytical procedures appear in the following order: Total Carbon by Combustion and Gravimetry (Sections 7-17); Total Boron by Titrimetry (Sections 18-28); Isotopic Composition by Mass Spectrometry (Sections 29-38); Chloride and Fluoride Separation by Pyrohydrolysis (Sections 39-45); Chloride by Constant-Current Coulometry (Sections 46-54); Fluoride by Ion-Selective Electrode (Sections 55-63); Water by Constant-Voltage Coulometry (Sections 64-72); Impurities by Spectrochemical Analysis (Sections 73-81); Soluble Boron by Titrimetry (Sections 82-95); Soluble Carbon by a Manometric Measurement (Sections 96-105); Metallic Impurities by a Direct Reader Spectrometric Method (Sections 106-114)

  7. The comparative performance of PMI estimation in skeletal remains by three methods (C-14, luminol test and OHI): analysis of 20 cases.

    Science.gov (United States)

    Cappella, Annalisa; Gibelli, Daniele; Muccino, Enrico; Scarpulla, Valentina; Cerutti, Elisa; Caruso, Valentina; Sguazza, Emanuela; Mazzarelli, Debora; Cattaneo, Cristina

    2015-01-27

    When estimating the post-mortem interval (PMI) of skeletal remains in forensic anthropology, the only method able to give an unambiguous result is C-14 analysis, although the procedure is expensive. Other methods, such as the luminol test and histological analysis, can be performed as preliminary investigations and may give the operators a preliminary indication of PMI, but they lack scientific verification, although luminol testing has become somewhat more accredited in the past few years. Such methods may nonetheless provide some help, as they are inexpensive and give a fast response, especially in the phase of preliminary investigations. In this study, 20 court cases of human skeletonized remains were dated by the C-14 method. For two cases, the results were chronologically set after the 1950s; for one case, the analysis was technically not possible. The remaining 17 cases showed an archaeological or historical placement. The same bone samples were also screened with histological examination and with the luminol test. Results showed that only four cases were positive to luminol and at the same time showed a high Oxford Histology Index (OHI) score: among these, two cases were dated as recent by the radiocarbon analysis. Thus, only two false-positive results were given by the combination of these methods, and no false negatives. The combination of two qualitative methods (luminol test and microscopic analysis) may therefore represent a promising solution to cases where many fragments need to be quickly tested.

  8. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1993-01-01

    This report documents progress to date under a three-year contract for developing ''Methods for Testing Transport Models.'' The work described includes (1) choice of best methods for producing ''code emulators'' for analysis of very large global energy confinement databases, (2) recent applications of stratified regressions for treating individual measurement errors as well as calibration/modeling errors randomly distributed across various tokamaks, (3) Bayesian methods for utilizing prior information due to previous empirical and/or theoretical analyses, (4) extension of code emulator methodology to profile data, (5) application of nonlinear least squares estimators to simulation of profile data, (6) development of more sophisticated statistical methods for handling profile data, (7) acquisition of a much larger experimental database, and (8) extensive exploratory simulation work on a large variety of discharges using recently improved models for transport theories and boundary conditions. From all of this work, it has been possible to define a complete methodology for testing new sets of reference transport models against much larger multi-institutional databases

  9. Automated analysis of pumping tests; Analise automatizada de testes de bombeamento

    Energy Technology Data Exchange (ETDEWEB)

    Sugahara, Luiz Alberto Nozaki

    1996-01-01

    An automated procedure for the analysis of pumping test data from groundwater wells is described. Computer software was developed for use under the Windows operating system. The software allows the choice of three mathematical models for representing the aquifer behavior: confined aquifer (Theis model); leaky aquifer (Hantush model); and unconfined aquifer (Boulton model). The analysis of pumping test data using the proper aquifer model allows for the determination of model parameters such as transmissivity, storage coefficient, leakage coefficient and delay index. The computer program can be used for the analysis of data obtained both from pumping tests, with one or more pumping rates, and from recovery tests. In the multiple-rate case, a desuperposition procedure has been implemented in order to obtain the equivalent aquifer response for the first flow rate, which is used to obtain an initial estimate of the model parameters. Such an initial estimate is required by the non-linear regression analysis method. The solutions to the partial differential equations describing the aquifer behavior were obtained in Laplace space, followed by numerical inversion of the transformed solution using the Stehfest algorithm. The data analysis procedure is based on a non-linear regression method, matching the field data to the theoretical response of a selected aquifer model for a given type of test. A least-squares regression analysis method was implemented using either Gauss-Newton or Levenberg-Marquardt procedures for minimization of an objective function. The computer software can also be applied to multiple-rate test data in order to determine the non-linear well coefficient, allowing for the computation of the well inflow performance curve. (author)
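    The Stehfest inversion used above is compact enough to sketch. The check against a known transform pair, F(s) = 1/(s+1) with f(t) = exp(-t), is an illustration and is not taken from the report:

```python
import math

def stehfest_invert(F, t, N=12):
    """Numerically invert a Laplace-space function F(s) at time t with the
    Stehfest algorithm (N even): f(t) ~ (ln2/t) * sum_i V_i * F(i*ln2/t)."""
    ln2 = math.log(2.0)
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            v += (k ** (N // 2) * math.factorial(2 * k)
                  / (math.factorial(N // 2 - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        v *= (-1) ** (N // 2 + i)
        total += v * F(i * ln2 / t)
    return total * ln2 / t

# sanity check against a known pair: L{exp(-t)} = 1/(s+1)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0))  # close to exp(-1) ≈ 0.3679
```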

  10. Analysis of Precision of Activation Analysis Method

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Nørgaard, K.

    1973-01-01

    The precision of an activation-analysis method prescribes the estimation of the precision of a single analytical result. The adequacy of these estimates to account for the observed variation between duplicate results from the analysis of different samples and materials is tested by the statistic T...

  11. Standard Test Method for Application and Analysis of Solid State Track Recorder (SSTR) Monitors for Reactor Surveillance, E706(IIIB)

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2003-01-01

    1.1 This test method describes the use of solid-state track recorders (SSTRs) for neutron dosimetry in light-water reactor (LWR) applications. These applications extend from low neutron fluence to high neutron fluence, including high power pressure vessel surveillance and test reactor irradiations as well as low power benchmark field measurements. (1) This test method replaces Method E 418. This test method is more detailed, and special attention is given to the use of state-of-the-art manual and automated track counting methods to attain high absolute accuracies. In-situ dosimetry in actual high-fluence, high-temperature LWR applications is emphasized. 1.2 This test method includes SSTR analysis by both manual and automated methods. To attain a desired accuracy, the track scanning method selected places limits on the allowable track density. Typically, good results are obtained in the range of 5 to 800 000 tracks/cm2, and accurate results at higher track densities have been demonstrated for some cases. (2) Trac...

  12. An Evaluation of the Bouwer and Rice Method of Slug Test Analysis

    Science.gov (United States)

    Brown, David L.; Narasimhan, T. N.; Demir, Z.

    1995-05-01

    The method of Bouwer and Rice (1976) for analyzing slug test data is widely used to estimate hydraulic conductivity (K). Based on steady state flow assumptions, this method is specifically intended to be applicable to unconfined aquifers. Therefore it is of practical value to investigate the limits of accuracy of the K estimates obtained with this method. Accordingly, using a numerical model for transient flow, we evaluate the method from two perspectives. First, we apply the method to synthetic slug test data and study the error in estimated values of K. Second, we analyze the logical basis of the method. Parametric studies helped assess the role of the effective radius parameter, specific storage, screen length, and well radius on the estimated values of K. The difference between unconfined and confined systems was studied via conditions on the upper boundary of the flow domain. For the cases studied, the Bouwer and Rice analysis was found to give good estimates of K, with errors ranging from 10% to 100%. We found that the estimates of K were consistently superior to those obtained with Hvorslev's (1951) basic time lag method. In general, the Bouwer and Rice method tends to underestimate K, the greatest errors occurring in the presence of a damaged zone around the well or when the top of the screen is close to the water table. When the top of the screen is far removed from the upper boundary of the system, no difference is manifest between confined and unconfined conditions. It is reasonable to infer from the simulated results that when the screen is close to the upper boundary, the results of the Bouwer and Rice method agree more closely with a "confined" idealization than an "unconfined" idealization. In effect, this method treats the aquifer system as an equivalent radial flow permeameter with an effective radius, Re, which is a function of the flow geometry. Our transient simulations suggest that Re varies with time and specific storage. Thus the effective
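    The core of the Bouwer and Rice computation is a single formula. In the sketch below, the ln(Re/rw) term, normally read from the method's empirical curves, is passed in directly, and all numbers are illustrative:

```python
import math

def bouwer_rice_K(rc, ln_Re_rw, Le, t, H0, Ht):
    """Hydraulic conductivity from slug-test recovery data via
        K = rc^2 * ln(Re/rw) / (2*Le) * (1/t) * ln(H0/Ht)
    rc: casing radius [m]; Le: screen length [m]; H0, Ht: head
    displacement at time zero and at time t [m]."""
    return rc ** 2 * ln_Re_rw / (2.0 * Le) * math.log(H0 / Ht) / t

# illustrative numbers only: head recovers from 1.0 m to 0.5 m in 100 s
print(bouwer_rice_K(rc=0.05, ln_Re_rw=2.0, Le=1.0, t=100.0, H0=1.0, Ht=0.5))
```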

  13. Tracers and Tracer Testing: Design, Implementation, Tracer Selection, and Interpretation Methods

    Energy Technology Data Exchange (ETDEWEB)

    G. Michael Shook; Shannon L.; Allan Wylie

    2004-01-01

    Conducting a successful tracer test requires adhering to a set of steps. The steps include identifying appropriate and achievable test goals, identifying tracers with the appropriate properties, and implementing the test as designed. When these steps are taken correctly, a host of tracer test analysis methods are available to the practitioner. This report discusses the individual steps required for a successful tracer test and presents methods for analysis. The report is an overview of tracer technology; the Suggested Reading section offers references to the specifics of test design and interpretation.
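    One standard interpretation technique for tracer tests is temporal moment analysis of the breakthrough curve; the first normalized moment gives the mean residence time. A minimal sketch with trapezoidal integration and synthetic data (not taken from the report):

```python
def mean_residence_time(times, conc):
    """First normalized temporal moment of a tracer breakthrough curve:
    t_mean = integral(t*C dt) / integral(C dt), by the trapezoidal rule."""
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) * (times[i + 1] - times[i]) / 2.0
                   for i in range(len(times) - 1))
    return trapz([t * c for t, c in zip(times, conc)]) / trapz(conc)

# symmetric synthetic pulse centered at t = 5 hours
print(mean_residence_time([0.0, 5.0, 10.0], [0.0, 1.0, 0.0]))  # → 5.0
```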

  14. Standard test methods for chemical analysis of ceramic whiteware materials using wavelength dispersive X-Ray fluorescence spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2004-01-01

    1.1 These test methods cover the determination of ten major oxides (SiO2, Al2O3, Fe2O3, MgO, CaO, Na2O, K2O, TiO2, P2O5, and MnO) plus loss on ignition (LOI) in ceramic whiteware clays and minerals using wavelength dispersive X-ray fluorescence spectrometry (WDXRF). The sample is first ignited, then fused with lithium tetraborate, and the resultant glass disc is introduced into a wavelength dispersive X-ray spectrometer. The disc is irradiated with X-rays from an X-ray tube. X-ray photons emitted by the elements in the samples are counted and concentrations determined using previously prepared calibration standards. (1) In addition to the 10 major oxides, the method provides a gravimetric loss-on-ignition. Note 1—Much of the text of this test method is derived directly from Major element analysis by wavelength dispersive X-ray fluorescence spectrometry, included in Ref (1). 1.2 Interferences, with analysis by WDXRF, may result from mineralogical or other structural effects, line overlaps, and matrix effects. The structure of the...

  15. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Full Text Available Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results across experiments, projects and laboratories. Methods which allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to data mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets, related by known biological function or as designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
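    The flavor of group-wise testing described here can be illustrated with a simplified parametric z-score of a gene set against the whole distribution of expression changes. This is an illustration only, not GSMA's exact statistic, and the data are hypothetical:

```python
import math

def gene_set_z(dataset, gene_set):
    """Z-score of a gene set's mean expression change against the whole
    dataset distribution (simplified parametric enrichment statistic).
    dataset: {gene: log fold change}; gene_set: iterable of gene names."""
    vals = list(dataset.values())
    mu = sum(vals) / len(vals)
    sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / (len(vals) - 1))
    hits = [dataset[g] for g in gene_set if g in dataset]
    return (sum(hits) / len(hits) - mu) * math.sqrt(len(hits)) / sd

# hypothetical data: the 4-gene set is clearly upregulated
data = {f"g{i}": 0.0 for i in range(20)}
data.update({f"s{i}": 2.0 for i in range(4)})
print(gene_set_z(data, ["s0", "s1", "s2", "s3"]) > 3.0)  # → True
```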

  16. Method and data analysis example of fatigue tests

    International Nuclear Information System (INIS)

    Nogami, Shuhei

    2015-01-01

    In the design and operation of a nuclear fusion reactor, it is important to accurately assess the fatigue life. Fatigue life is evaluated by preparing a database on the relationship between the applied stress/strain amplitude and the number of cycles to failure, based on fatigue tests on standard specimens, and by comparing this relationship with the stresses/strains generated in the actual structures. This paper focuses mainly on low-cycle fatigue and explains standard test methods, the fatigue limit, life prediction formulas and the like. Using the reduced-activation ferritic steel F82H as a material, strain-controlled low-cycle fatigue tests were performed in a room-temperature atmosphere. From these results, the relationship between strain and the number of cycles to failure was analyzed. It was found that the relationship is asymptotic to the Coffin-Manson law under high strain (low-cycle conditions) and to the Basquin law under low strain (high-cycle conditions). For F82H to be used for the blanket of a nuclear fusion prototype reactor, the compilation of fatigue life data up to about 700°C and the establishment of optimal fatigue design curves are urgent tasks. As for fusion reactor structural materials, the evaluation of the neutron irradiation effect on fatigue damage behavior and life is indispensable. For this purpose, it is necessary to establish standardized testing techniques applicable to small specimens. (A.O.)
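    The two asymptotic laws named above combine into the familiar strain-life relation eps_a = (sigma_f'/E)*(2N)^b + eps_f'*(2N)^c, with the Basquin term dominating at low strain and the Coffin-Manson term at high strain. The sketch below inverts it numerically for cycles to failure; all parameter values are generic illustrations, not F82H data:

```python
def strain_life_cycles(eps_a, sigma_f=1000e6, E=200e9, b=-0.09,
                       eps_f=0.5, c=-0.6):
    """Cycles to failure N for a given strain amplitude eps_a from the
    combined Basquin / Coffin-Manson relation
        eps_a = (sigma_f'/E)*(2N)**b + eps_f'*(2N)**c,
    solved by bisection in log(2N). Parameter values are illustrative."""
    def eps(two_n):
        return (sigma_f / E) * two_n ** b + eps_f * two_n ** c
    lo, hi = 1.0, 1e12  # bracket for 2N; eps decreases monotonically
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if eps(mid) > eps_a:
            lo = mid   # predicted strain too high -> life is longer
        else:
            hi = mid
    return lo / 2.0    # N = (2N) / 2

print(strain_life_cycles(0.01))  # cycles to failure at 1 % strain amplitude
```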

  17. Safety relief valve alternate analysis method

    International Nuclear Information System (INIS)

    Adams, R.H.; Javid, A.; Khatua, T.P.

    1981-01-01

    An experimental test program was started in the United States in 1976 to define and quantify Safety Relief Valve (SRV) phenomena in General Electric Mark I suppression chambers. The testing considered several discharge devices and was used to correlate SRV load prediction models. The program was funded by utilities with Mark I containments and has resulted in a detailed SRV load definition as a portion of the Mark I containment program Load Definition Report (LDR). The US Nuclear Regulatory Commission (USNRC) has reviewed and approved the LDR SRV load definition. In addition, the USNRC has permitted calibration of structural models used for predicting torus response to SRV loads. Model calibration is subject to confirmatory in-plant testing. The SRV methodology given in the LDR requires that transient dynamic pressures be applied to a torus structural model that includes a fluid added-mass matrix. Preliminary evaluations of torus response have indicated order-of-magnitude conservatisms, with respect to test results, which could result in unrealistic containment modifications. In addition, structural response trends observed in full-scale tests between cold-pipe, first valve actuation and hot-pipe, subsequent valve actuation conditions have not been duplicated using current analysis methods. It was suggested by others that an energy approach using current fluid models be utilized to define loads. An alternate SRV analysis method is defined to correct suppression chamber structural response to a level that permits economical but conservative design. Simple analogs are developed for the purpose of correcting the analytical response obtained from LDR analysis methods. The analogs evaluated considered forced-vibration and free-vibration structural response. The corrected response correlated well with in-plant test response. The correlation of the analytical model at test conditions permits application of the alternate analysis method at design conditions. (orig./HP)

  18. Comments on Thermal Physical Properties Testing Methods of Phase Change Materials

    Directory of Open Access Journals (Sweden)

    Jingchao Xie

    2013-01-01

    Full Text Available There is no standard method for testing the thermophysical properties of phase change materials (PCM). This paper reviews advancements in this field. Developments and achievements in thermophysical property testing methods for PCM are discussed, including differential scanning calorimetry, T-history measurement, the water bath method, and differential thermal analysis. The testing principles, advantages and disadvantages, and important points of attention for each method are discussed, laying a foundation for standardized testing methods for PCM.

  19. CONSTOR® V/TC drop tests. Pre-test analysis by finite element method

    International Nuclear Information System (INIS)

    Voelzer, W.; Koenig, S.; Klein, K.; Tso, C.F.; Owen, S.; Monk, C.

    2004-01-01

    The CONSTOR® family of steel-concrete-steel sandwich cask designs has been developed to fulfil both the internationally valid IAEA criteria for transportation and the requirements for long-term intermediate storage in the US and various European countries. A comprehensive drop testing programme using a full-scale prototype test cask (CONSTOR® V/TC) has been developed as part of the application for a transport license in both Germany and the US. The drop tests using the full-scale cask will be performed by BAM at test facilities in Horstwalde. The tests will include five different 9m drops onto flat unyielding targets and seven different 1m drops onto a punch. The first drop test, a 9m side drop, will be performed during PATRAM 2004. The other drop tests will take place during the following year. The development of the cask design and the formulation of the drop test programme have been supported by an extensive series of finite element analyses. The objectives of the finite element analyses were: to provide an intermediate step in demonstrating the performance of the CONSTOR® in fulfilling the requirements of 10 CFR 71 and the IAEA transport regulations; to justify the selection of drop tests; to predict the performance of V/TC during the drop tests; to estimate the strain and acceleration time histories at measuring points on the test cask and to aid in the setting up of the test instrumentation; and to develop an analysis model that can be used in future safety analyses for transport and storage license applications and which can confidently be used to demonstrate the performance of the package. This paper presents an overview of the analyses performed, including a summary of all the different drop orientations that were considered. The major assumptions employed during the analyses are also discussed, as are the specifics of the modelling techniques that were employed. At the end of the paper, the key results obtained from the analyses are summarized.

  20. Development of multi-dimensional analysis method for porous blockage in fuel subassembly. Numerical simulation for 4 subchannel geometry water test

    International Nuclear Information System (INIS)

    Tanaka, Masa-aki; Kamide, Hideki

    2001-02-01

    This investigation deals with porous blockages in a wire-spacer-type fuel subassembly of fast breeder reactors (FBRs). A multi-dimensional analysis method for a porous blockage in a fuel subassembly is developed using the standard k-ε turbulence model together with typical handbook correlations. The purpose of this analysis method is to evaluate the position and the magnitude of the maximum temperature, and to investigate the thermo-hydraulic phenomena in the porous blockage. Verification of this analysis method was conducted based on the results of a 4-subchannel geometry water test. It was revealed that the evaluation of the porosity distribution and the particle diameter in a porous blockage is important for predicting the temperature distribution. The analysis method could simulate the spatial characteristics of the velocity and temperature distributions in the blockage and evaluate the pin surface temperature inside the porous blockage. Through this verification, it is shown that the multi-dimensional analysis method is useful for predicting the thermo-hydraulic field and the highest temperature in a porous blockage. (author)
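    A typical handbook correlation for flow through a particle bed of the kind formed by such a blockage is the Ergun equation, sketched below as a generic illustration; the report's actual correlations and fluid properties may differ (defaults here are water near 25 °C):

```python
def ergun_dp(L, U, eps, dp, mu=8.9e-4, rho=997.0):
    """Pressure drop [Pa] across a porous bed of length L [m] from the
    Ergun correlation:
        dP/L = 150*mu*(1-eps)^2/(eps^3*dp^2)*U + 1.75*rho*(1-eps)/(eps^3*dp)*U^2
    eps: porosity [-]; dp: particle diameter [m]; U: superficial velocity [m/s]."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * dp ** 2) * U
    inertial = 1.75 * rho * (1.0 - eps) / (eps ** 3 * dp) * U ** 2
    return (viscous + inertial) * L

# illustrative: 10 cm blockage, 1 mm particles, porosity 0.4
print(ergun_dp(L=0.1, U=0.01, eps=0.4, dp=1e-3))
```

A lower porosity gives a steeper pressure drop, which is why the text stresses evaluating the porosity distribution inside the blockage.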

  1. Direct methods of soil-structure interaction analysis for earthquake loadings

    International Nuclear Information System (INIS)

    Yun, J. B.; Kim, J. M.; Kim, Y. S. and others

    1993-07-01

    The objectives of this study are to review the methods of soil-structure interaction system analysis, particularly the direct method, and to carry out a blind prediction analysis of the Forced Vibration Test (FVT) before backfill in the course of the Hualien LSST project. The scope and contents of this study are as follows: theoretical review of soil-structure interaction analysis methods; free-field response analysis methods; modelling methods for the unbounded exterior region; and Hualien LSST FVT blind prediction analysis before backfill. The analysis results are found to compare very well with the field test results

  2. Statistical trend analysis methods for temporal phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
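    Of the non-parametric tests listed, the Cox-Stuart test is simple enough to sketch in full: the series is split in half, each early value is paired with its late counterpart, and the count of positive differences is referred to a Binomial(m, 0.5) distribution. This is an illustrative implementation, not the report's exact variant:

```python
import math

def cox_stuart_p(series):
    """Two-sided Cox-Stuart trend test p-value. For odd-length series the
    middle observation is dropped; tied pairs are discarded."""
    n = len(series)
    half = n // 2
    pairs = [(series[i], series[i + (n + 1) // 2]) for i in range(half)]
    diffs = [b - a for a, b in pairs if b != a]
    m = len(diffs)
    plus = sum(d > 0 for d in diffs)
    k = max(plus, m - plus)
    # exact binomial tail probability, doubled for a two-sided test
    tail = sum(math.comb(m, j) for j in range(k, m + 1)) * 0.5 ** m
    return min(1.0, 2.0 * tail)

print(cox_stuart_p(list(range(20))))  # strictly increasing → 0.001953125
```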

  3. Statistical trend analysis methods for temporal phenomena

    International Nuclear Information System (INIS)

    Lehtinen, E.; Pulkkinen, U.; Poern, K.

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods

  4. Comparative study of fracture mechanical test methods for concrete

    DEFF Research Database (Denmark)

    Østergaard, Lennart; Olesen, John Forbes

    2004-01-01

    This paper describes and compares three different fracture mechanical test methods: the uniaxial tension test (UTT), the three point bending test (TPBT) and the wedge splitting test (WST). Potentials and problems with the test methods are described with regard to the experiment and the interpretation, i.e. the analysis needed to extract the stress-crack opening relationship, the fracture energy etc. Experiments are carried out with each test configuration using mature, high performance concrete. The results show that the UTT is a highly complicated test, which only under very well controlled circumstances will yield the true fracture mechanical properties. It is also shown that both the three point bending test and the WST are well-suited substitutes for the uniaxial tension test.

  5. Diagnosis of cystic fibrosis with chloride meter (Sherwood M926S chloride analyzer®) and sweat test analysis system (CFΔ collection system®) compared to the Gibson Cooke method.

    Science.gov (United States)

    Emiralioğlu, Nagehan; Özçelik, Uğur; Yalçın, Ebru; Doğru, Deniz; Kiper, Nural

    2016-01-01

    The sweat test with the Gibson Cooke (GC) method is the diagnostic gold standard for cystic fibrosis (CF). Recently, alternative methods have been introduced to simplify both the collection and analysis of sweat samples. Our aim was to compare sweat chloride values obtained by the GC method with those from other sweat test methods in patients diagnosed with CF and in patients in whom CF diagnosis had been ruled out. We wanted to determine whether the other sweat test methods could reliably identify patients with CF and differentiate them from healthy subjects. Chloride concentration was measured with the GC method, a chloride meter and a sweat test analysis system; conductivity was also determined with the sweat test analysis system. Forty-eight patients with CF and 82 patients without CF underwent the sweat test, showing median sweat chloride values of 98.9 mEq/L with the GC method, 101 mmol/L with the chloride meter, and 87.8 mmol/L with the sweat test analysis system. In the non-CF group, the median sweat chloride values were 16.8 mEq/L with the GC method, 10.5 mmol/L with the chloride meter, and 15.6 mmol/L with the sweat test analysis system. The median conductivity value was 107.3 mmol/L in the CF group and 32.1 mmol/L in the non-CF group. There was a statistically significant, strong positive correlation between the GC method and the other sweat test methods (r=0.85) across all subjects. Sweat chloride concentration and conductivity measured by the other sweat test methods correlate highly with the GC method. We think that the other sweat test equipment can be used as reliably as the classic GC method to diagnose or exclude CF.

  6. Uncertainty Analysis of In-leakage Test for Pressurized Control Room Envelope

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. B. [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    In-leakage tests for the control room envelopes (CREs) of newly constructed nuclear power plants are required to prove control room habitability. The results of in-leakage tests should be analyzed using an uncertainty analysis, and test uncertainty can be an issue if the test results for pressurized CREs show low in-leakage. To better characterize the test uncertainty, a statistical model for the uncertainty analysis is described here and a representative uncertainty analysis of a sample in-leakage test is presented. Using this statistical method, the test result can be evaluated at a given level of significance. The method is particularly helpful when the difference between the two mean values of the test results is small.
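The abstract stresses that the method matters most when the difference between two mean values is small. As an illustrative sketch, not the report's actual statistical model, a Welch two-sample t statistic quantifies such a difference; the flow-rate samples below are invented.

```python
# Welch's t statistic for comparing two sample means (illustrative data).
import math

def welch_t(sample_a, sample_b):
    """Welch two-sample t statistic (unequal variances allowed)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical repeated in-leakage measurements (cfm) from two test runs
baseline = [10.1, 9.8, 10.3, 10.0, 9.9]
retest   = [10.4, 10.2, 10.6, 10.3, 10.5]
t = welch_t(baseline, retest)
```

A large |t| relative to the t distribution's critical value at the chosen significance level indicates the two mean in-leakage rates genuinely differ.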

  7. Uncertainty Analysis of In-leakage Test for Pressurized Control Room Envelope

    International Nuclear Information System (INIS)

    Lee, J. B.

    2013-01-01

    In-leakage tests for the control room envelopes (CREs) of newly constructed nuclear power plants are required to prove control room habitability. The results of in-leakage tests should be analyzed using an uncertainty analysis, and test uncertainty can be an issue if the test results for pressurized CREs show low in-leakage. To better characterize the test uncertainty, a statistical model for the uncertainty analysis is described here and a representative uncertainty analysis of a sample in-leakage test is presented. Using this statistical method, the test result can be evaluated at a given level of significance. The method is particularly helpful when the difference between the two mean values of the test results is small.

  8. Drop Test Using Finite Element Method for Transport Package of Radioactive Material

    International Nuclear Information System (INIS)

    Xu Xiaoxiao; Zhao Bing; Zhang Jiangang; Li Gouqiang; Wang Xuexin; Tang Rongyao

    2010-01-01

    Mechanical testing of transport packages for radioactive material is one of the important tests for demonstrating the package structure design. The drop test of a package is a destructive test, and it is common practice to use pre-analysis to determine the drop orientation. The mechanical test of a sealed-source package was calculated with finite element method (FEM) software. Based on analysis of the calculation results, values were obtained for the stress, strain, acceleration and the drop orientation causing the most severe damage, and the calculated results were compared with the test results. (authors)

  9. Standard Test Method for Application and Analysis of Helium Accumulation Fluence Monitors for Reactor Vessel Surveillance, E706 (IIIC)

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This test method describes the concept and use of helium accumulation for neutron fluence dosimetry for reactor vessel surveillance. Although this test method is directed toward applications in vessel surveillance, the concepts and techniques are equally applicable to the general field of neutron dosimetry. The various applications of this test method for reactor vessel surveillance are as follows: 1.1.1 Helium accumulation fluence monitor (HAFM) capsules, 1.1.2 Unencapsulated, or cadmium or gadolinium covered, radiometric monitors (RM) and HAFM wires for helium analysis, 1.1.3 Charpy test block samples for helium accumulation, and 1.1.4 Reactor vessel (RV) wall samples for helium accumulation. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  10. Evaluation of clinical methods for peroneal muscle testing.

    Science.gov (United States)

    Sarig-Bahat, Hilla; Krasovsky, Andrei; Sprecher, Elliot

    2013-03-01

    Manual muscle testing of the peroneal muscles is a well-accepted testing method in musculoskeletal physiotherapy for the assessment of the foot and ankle. The peroneus longus and brevis are primary evertors and secondary plantar flexors of the ankle joint. However, some international textbooks describe them as dorsiflexors when instructing peroneal muscle testing. This variability raised the question of whether these educational texts are reflected in the clinical field. The purposes of this study were to identify the methods commonly used in the clinical field for peroneal muscle testing and to evaluate their compatibility with functional anatomy. A cross-sectional study was conducted using an electronic questionnaire sent to 143 Israeli physiotherapists in the musculoskeletal field. The survey asked about the anatomical location of manual resistance and the combination of motions resisted. Ninety-seven responses were received. The majority (69%) of respondents correctly described the peronei as evertors, but asserted that resistance should be located over the dorsal aspect of the fifth metatarsus, thereby disregarding the peroneus longus. Moreover, 38% of the respondents described the peronei as dorsiflexors rather than plantar flexors. Only 2% selected the correct method of resisting plantarflexion and eversion at the base of the first metatarsus. We consider this technique to be the most compatible with the anatomy of the peroneus longus and brevis. The Fisher-Freeman-Halton test indicated a significant relationship between responses to the questions (P = 0.0253, 95% CI 0.0249-0.0257), justifying further correspondence analysis. The correspondence analysis found no clustering of answers that were compatible with anatomical evidence and applied the correct technique, but it did demonstrate a common error, resisting dorsiflexion rather than plantarflexion, which was in agreement with the described

  11. Agreement between gastrointestinal panel testing and standard microbiology methods for detecting pathogens in suspected infectious gastroenteritis: Test evaluation and meta-analysis in the absence of a reference standard.

    Science.gov (United States)

    Freeman, Karoline; Tsertsvadze, Alexander; Taylor-Phillips, Sian; McCarthy, Noel; Mistry, Hema; Manuel, Rohini; Mason, James

    2017-01-01

    Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify bacterial, viral and parasitic pathogens from the stool samples of patients with suspected infectious gastroenteritis presenting in hospital or the community. We undertook a systematic review to compare the accuracy of GPP tests with standard microbiology techniques. Searches in Medline, Embase, Web of Science and the Cochrane library were undertaken from inception to January 2016. Eligible studies compared GPP tests with standard microbiology techniques in patients with suspected gastroenteritis. Quality assessment of included studies used tailored QUADAS-2. In the absence of a reference standard we analysed test performance taking GPP tests and standard microbiology techniques in turn as the benchmark test, using random effects meta-analysis of proportions. No study provided an adequate reference standard with which to compare the test accuracy of GPP and conventional tests. Ten studies informed a meta-analysis of positive and negative agreement. Positive agreement across all pathogens was 0.93 (95% CI 0.90 to 0.96) when conventional methods were the benchmark and 0.68 (95% CI: 0.58 to 0.77) when GPP provided the benchmark. Negative agreement was high in both instances due to the high proportion of negative cases. GPP testing produced a greater number of pathogen-positive findings than conventional testing. It is unclear whether these additional 'positives' are clinically important. GPP testing has the potential to simplify testing and accelerate reporting when compared to conventional microbiology methods. However the impact of GPP testing upon the management, treatment and outcome of patients is poorly understood and further studies are needed to evaluate the health economic impact of GPP testing compared with standard methods. The review protocol is registered with PROSPERO as CRD42016033320.
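Positive and negative agreement, the summary measures meta-analyzed above, can be computed from a simple 2x2 cross-classification of the two tests. The counts below are hypothetical, chosen only to illustrate the calculation (they happen to give a positive agreement of 0.93, the pooled value quoted above).

```python
# Positive/negative agreement of test B against benchmark test A,
# from a 2x2 table of joint results (counts are invented).
def agreement(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
    """Return (positive agreement, negative agreement) of B relative to A."""
    positive = both_pos / (both_pos + a_pos_b_neg)   # B+ among A+
    negative = both_neg / (both_neg + a_neg_b_pos)   # B- among A-
    return positive, negative

# Hypothetical GPP (B) vs conventional microbiology (A) counts
pos_agree, neg_agree = agreement(both_pos=93, a_pos_b_neg=7,
                                 a_neg_b_pos=32, both_neg=868)
```

Swapping which test is the benchmark simply transposes the off-diagonal counts, which is why the review reports different positive agreement depending on the benchmark chosen.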

  12. A Review of Classical Methods of Item Analysis.

    Science.gov (United States)

    French, Christine L.

    Item analysis is a very important consideration in the test development process. It is a statistical procedure to analyze test items that combines methods used to evaluate the important characteristics of test items, such as difficulty, discrimination, and distractibility of the items in a test. This paper reviews some of the classical methods for…

  13. Test-methods of chemical analysis with visual and scanner indication in ecoanalytical monitoring of nature reservoirs of Kirovograd region

    Directory of Open Access Journals (Sweden)

    Y. V. Bokhan

    2007-03-01

    Full Text Available The features of test analysis with visual and scanner indication for the detection and semiquantitative determination of common pollutants and water-quality indices of water bodies are considered. An evaluation of selected metrological characteristics of the known test methods for determining pH and the concentrations of dissolved oxygen, nitrate and phosphate ions, and iron ions, using visual and computer scanner technologies, is offered.

  14. Well test imaging - a new method for determination of boundaries from well test data

    Energy Technology Data Exchange (ETDEWEB)

    Slevinsky, B.A.

    1997-08-01

    A new method has been developed for the analysis of well test data, which allows the direct calculation of the location of arbitrary reservoir boundaries detected during a well test. The method is based on elements of ray tracing and information theory, and is centered on the calculation of an instantaneous 'angle of view' of the reservoir boundaries. In the absence of other information, the relative reservoir shape and boundary distances are retrievable in the form of a Diagnostic Image. If other reservoir information, such as 3-D seismic, is available, the full shape and orientation of arbitrary (non-straight-line or circular-arc) boundaries can be determined in the form of a Reservoir Image. The well test imaging method can be used to greatly enhance the information available from well tests and other geological data, and provides a way to integrate data from multiple disciplines to improve reservoir characterization. This paper covers the derivation of the analytical technique of well test imaging and shows examples of its application to a number of reservoirs.

  15. An assessment of the validity of inelastic design analysis methods by comparisons of predictions with test results

    International Nuclear Information System (INIS)

    Corum, J.M.; Clinard, J.A.; Sartory, W.K.

    1976-01-01

    The use of computer programs that employ relatively complex constitutive theories and analysis procedures to perform inelastic design calculations on fast reactor system components raises questions of validation and acceptance of the analysis results. We may ask ourselves, "How valid are the answers?" These questions, in turn, involve the concepts of verification of computer programs as well as qualification of the computer programs and of the underlying constitutive theories and analysis procedures. This paper addresses the latter: the qualification of the analysis methods for inelastic design calculations. Some of the work underway in the United States to provide the information needed to evaluate inelastic analysis methods and computer programs is described, and typical comparisons of analysis predictions with inelastic structural test results are presented. It is emphasized throughout that rather than asking how valid, or correct, the analytical predictions are, we might more properly question whether or not the combination of the predictions and the associated high-temperature design criteria leads to an acceptable level of structural integrity. It is believed that in this context the analysis predictions are generally valid, even though exact correlations between predictions and actual behavior are not obtained and cannot be expected. Final judgment, however, must be reserved for the design analyst in each specific case. (author)

  16. Analysis of the forced vibration test of the Hualien large scale soil-structure interaction model using a flexible volume substructuring method

    International Nuclear Information System (INIS)

    Tang, H.T.; Nakamura, N.

    1995-01-01

    A 1/4-scale cylindrical reactor containment model was constructed in Hualien, Taiwan, for soil-structure interaction (SSI) effect evaluation and SSI analysis procedure verification. Forced vibration tests were executed before backfill (FVT-1) and after backfill (FVT-2) to characterize soil-structure system characteristics under low excitations. A number of organizations participated in the pre-test blind prediction and post-test correlation analyses of the forced vibration test using various industry-familiar methods. In the current study, correlation analyses were performed using a three-dimensional flexible volume substructuring method. The results are reported and soil property sensitivities are evaluated in the paper. (J.P.N.)

  17. Assessing SOC labile fractions through respiration test, density-size fractionation and thermal analysis - A comparison of methods

    Science.gov (United States)

    Soucemarianadin, Laure; Cécillon, Lauric; Chenu, Claire; Baudin, François; Nicolas, Manuel; Savignac, Florence; Barré, Pierre

    2017-04-01

    Soil organic matter (SOM) is the largest terrestrial carbon reservoir, storing 3 to 4 times more carbon than the atmosphere. However, despite its major importance for climate regulation, SOM dynamics remains insufficiently understood. For instance, there is still no widely accepted method to assess SOM lability. Soil respiration tests and particulate organic matter (POM) obtained by different fractionation schemes have been used for decades and are now considered classical estimates of very labile and labile soil organic carbon (SOC), respectively. But the pertinence of these methods for characterizing SOM turnover can be questioned. Moreover, they are very time-consuming and their reproducibility might be an issue. Alternative ways of determining the labile SOC component are thus much needed. Thermal analyses have been used to characterize SOM, among which Rock-Eval 6 (RE6) analysis of soil has shown promising results in the determination of SOM biogeochemical stability (Gregorich et al., 2015; Barré et al., 2016). Using a large set of samples of French forest soils representing contrasting pedoclimatic conditions, including deep samples (up to 1 m depth), we compared different techniques used for SOM lability assessment. We explored whether results from soil respiration tests (10-week laboratory incubations), SOM size-density fractionation and RE6 thermal analysis were comparable and how they were correlated. Sets of 222 (respiration test and RE6), 103 (SOM fractionation and RE6) and 93 (respiration test, SOM fractionation and RE6) forest soil samples were respectively analyzed and compared. The comparison of the three methods (n = 93) using a principal component analysis separated samples from the surface (0-10 cm) and deep (40-80 cm) layers, highlighting a clear effect of depth on the short-term persistence of SOC. A correlation analysis demonstrated that, for these samples, the two classical methods of labile SOC determination (respiration and SOM fractionation
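A principal component analysis like the one used above to compare the lability estimates can be sketched with a singular value decomposition. The sample matrix below is synthetic (a shared surface-vs-deep signal plus independent noise, one column per hypothetical method), not the study's data.

```python
# PCA via SVD on a synthetic samples-by-methods matrix.
import numpy as np

rng = np.random.default_rng(1)
depth_effect = np.repeat([1.0, -1.0], 10)   # 10 "surface" and 10 "deep" samples
# Three methods all measuring the same depth signal plus their own noise
X = np.column_stack([depth_effect + rng.normal(0, 0.2, 20) for _ in range(3)])

Xc = X - X.mean(axis=0)                     # center each method (column)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)             # variance fraction per component
```

A first component carrying most of the variance, as here, is what a clear surface-vs-deep separation in the PCA plot corresponds to.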

  18. Advances in isotopic analysis for food authenticity testing

    DEFF Research Database (Denmark)

    Laursen, Kristian Holst; Bontempo, L.; Camin, Federica

    2016-01-01

    Stable isotope analysis has been used for food authenticity testing for more than 30 years and is today utilized on a routine basis for a wide variety of food commodities. During the past decade, major analytical method developments have been made and the fundamental understanding of stable isotope authenticity testing is currently developing even further. In this chapter, we aim to provide an overview of the latest developments in stable isotope analysis for food authenticity testing. As several review articles and book chapters have recently addressed this topic, we will primarily focus on relevant literature from the past 5 years. We will focus on well-established methods for food authenticity testing using stable isotopes but will also include recent methodological developments, new applications, and current and future challenges.

  19. Method of measurement on materials shielding effectiveness test in time domain

    International Nuclear Information System (INIS)

    Liu Shunkun; Han Jun; Chen Xiangyue

    2009-01-01

    The windows method is in essence a measurement of a slot-coupling effect when used to measure a material's shielding effectiveness. The measurement error becomes serious when the method is used in the low-frequency band, and it is difficult to measure a material's shielding effectiveness against electromagnetic pulses with the windows method. The device-under-test (DUT) method is presented in this paper to overcome these limitations of the windows method. Through the design and testing of the DUT, the method can be used to measure any material's shielding effectiveness. The method was used to measure the shielding effectiveness of a special cement. Compared with theoretical analysis, the measurement results prove the DUT method to be very efficient for material shielding effectiveness testing. (authors)

  20. Standard test method for analysis of isotopic composition of uranium in nuclear-grade fuel material by quadrupole inductively coupled plasma-mass spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2000-01-01

    1.1 This test method is applicable to the determination of the isotopic composition of uranium (U) in nuclear-grade fuel material. The following isotopic weight percentages are determined using a quadrupole inductively coupled plasma-mass spectrometer (Q-ICP-MS): 233U, 234U, 235U, 236U, and 238U. The analysis can be performed on various material matrices after acid dissolution and sample dilution into water or dilute nitric (HNO3) acid. These materials include: fuel product, uranium oxide, uranium oxide alloys, uranyl nitrate (UNH) crystals, and solutions. The sample preparation discussed in this test method focuses on fuel product material but may be used for uranium oxide or a uranium oxide alloy. Other preparation techniques may be used and some references are given. Purification of the uranium by anion-exchange extraction is not required for this test method, as it is required by other test methods such as radiochemistry and thermal ionization mass spectroscopy (TIMS). This test method is also described i...

  1. Studies on Hepa filter test methods

    International Nuclear Information System (INIS)

    Lee, S.H.; Jon, K.S.; Park, W.J.; Ryoo, R.

    1981-01-01

    The purpose of this study is to compare the HEPA filter testing methods adopted in different countries and to design and construct a test duct system for establishing testing methods. The American D.O.P. test method, the British NaCl test method and several other independently developed methods are compared. The D.O.P. method is considered the most suitable for in-plant and leak tests.

  2. Stored energy analysis in the scaled-down test facilities

    International Nuclear Information System (INIS)

    Deng, Chengcheng; Chang, Huajian; Qin, Benke; Wu, Qiao

    2016-01-01

    Highlights: • Three methods are developed to evaluate stored energy in the scaled-down test facilities. • The mechanism behind stored energy distortion in the test facilities is revealed. • The application of stored energy analysis is demonstrated for the ACME facility of China. - Abstract: In the scaled-down test facilities that simulate the accident transient process of the prototype nuclear power plant, the stored energy release in the metal structures has an important influence on the accuracy and effectiveness of the experimental data. Three methods of stored energy analysis are developed, and the mechanism behind stored energy distortion in the test facilities is revealed. Moreover, the application of stored energy analysis is demonstrated for the ACME test facility newly built in China. The results show that the similarity requirements of the three methods analyzing the stored energy release decrease gradually. The physical mechanism of the stored energy release process can be characterized by dimensionless numbers including the Stanton number, Fourier number and Biot number. Under the premise of satisfying the overall similarity of natural circulation, the stored energy release process in the scaled-down test facilities cannot maintain exact similarity. The results of the application of stored energy analysis illustrate that both the transient release process and the integral total stored energy of the reactor pressure vessel wall of the CAP1400 power plant can be well reproduced in the ACME test facility.
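The Fourier and Biot numbers mentioned above are simple dimensionless ratios; a sketch for a vessel wall follows, with illustrative property values that are assumptions, not the CAP1400 or ACME data.

```python
# Dimensionless groups governing transient conduction in a wall
# (all property values below are illustrative, SI units).
def biot(h, L, k):
    """Biot number: convective transfer vs conduction within the wall."""
    return h * L / k

def fourier(alpha, t, L):
    """Fourier number: dimensionless time for transient conduction."""
    return alpha * t / L**2

h = 500.0        # W/(m^2 K), assumed coolant film coefficient
L = 0.2          # m, assumed wall thickness
k = 40.0         # W/(m K), assumed steel conductivity
alpha = 1.1e-5   # m^2/s, assumed steel thermal diffusivity

Bi = biot(h, L, k)             # Bi >> 0.1: internal gradients matter
Fo = fourier(alpha, 600.0, L)  # after 600 s of transient
```

Matching these groups between prototype and scaled facility is what "similarity" of the stored energy release process requires; distortion arises when they cannot all be preserved at once.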

  3. New significance test methods for Fourier analysis of geophysical time series

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2011-09-01

    Full Text Available When one applies the discrete Fourier transform to analyze finite-length time series, discontinuities at the data boundaries will distort its Fourier power spectrum. In this paper, based on a rigorous statistical framework, we present a new significance test method which can extract the intrinsic features of a geophysical time series very well. We show the difference in significance level compared with traditional Fourier tests by analyzing the Arctic Oscillation (AO) and the Nino3.4 time series. In tests against red noise, we find significant peaks in the AO at periods of about 2.8, 4.3, and 5.7 yr, and in Nino3.4 at a period of about 12 yr. These peaks are not significant in traditional tests.
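A minimal sketch of the underlying idea, comparing periodogram peaks with a theoretical AR(1) "red noise" background, is given below. This is not the authors' new significance test; the series, seed and white-noise null (alpha = 0) are illustrative.

```python
# Periodogram peak vs a theoretical AR(1) background spectrum.
import numpy as np

def red_noise_spectrum(alpha, n_freq):
    """Unnormalized AR(1) power spectrum at n_freq frequencies up to Nyquist."""
    f = np.arange(n_freq) / (2.0 * n_freq)
    return (1 - alpha**2) / (1 + alpha**2 - 2 * alpha * np.cos(2 * np.pi * f))

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
series = np.sin(2 * np.pi * t / 64) + rng.normal(0, 1, n)  # period-64 signal + noise
power = np.abs(np.fft.rfft(series - series.mean()))**2 / n

alpha = 0.0  # white-noise null for this illustrative series
background = red_noise_spectrum(alpha, len(power))
background *= power[1:].mean() / background[1:].mean()     # scale to the data

peak = int(np.argmax(power[1:])) + 1          # skip the zero-frequency bin
significant = power[peak] > 5 * background[peak]
```

A real red-noise test would estimate alpha from the series' lag-1 autocorrelation and use a chi-squared threshold on the scaled background rather than the crude factor of 5 used here.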

  4. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
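As a minimal sketch of Bayesian "estimation with quantified uncertainty", the example below computes a posterior mean and 95% credible interval for a proportion under a uniform Beta prior; the counts are invented, and this is far simpler than the article's full treatment.

```python
# Beta-Binomial posterior and credible interval for a proportion.
from scipy import stats

successes, trials = 58, 100            # invented data
prior_a, prior_b = 1, 1                # uniform Beta(1, 1) prior

# Conjugate update: Beta(a + successes, b + failures)
posterior = stats.beta(prior_a + successes, prior_b + trials - successes)

point_estimate = posterior.mean()
ci_low, ci_high = posterior.ppf([0.025, 0.975])   # 95% credible interval
```

Unlike a frequentist confidence interval, the credible interval may be read directly as "the parameter lies in this range with 95% probability, given the data and prior", which is the interpretation the article argues practitioners usually want.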

  5. Standard test method for analysis of uranium and thorium in soils by energy dispersive X-Ray fluorescence spectroscopy

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 This test method covers the energy dispersive X-ray fluorescence (EDXRF) spectrochemical analysis of trace levels of uranium and thorium in soils. Any sample matrix that differs from the general ground soil composition used for calibration (that is, fertilizer or a sample of mostly rock) would have to be calibrated separately to determine the effect of the different matrix composition. 1.2 The analysis is performed after an initial drying and grinding of the sample, and the results are reported on a dry basis. The sample preparation technique used incorporates into the sample any rocks and organic material present in the soil. This test method of sample preparation differs from other techniques that involve tumbling and sieving the sample. 1.3 Linear calibration is performed over a concentration range from 20 to 1000 μg per gram for uranium and thorium. 1.4 The values stated in SI units are to be regarded as the standard. The inch-pound units in parentheses are for information only. 1.5 This standard...
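The linear calibration described in 1.3 amounts to fitting a straight line of instrument response against standard concentrations and inverting it for unknowns. The sketch below uses synthetic, noise-free intensities; the slope and intercept are made-up values, not from the test method.

```python
# Linear calibration over a 20-1000 ug/g range (synthetic data).
import numpy as np

conc = np.array([20, 50, 100, 250, 500, 1000], dtype=float)  # ug/g standards
counts = 0.85 * conc + 12.0       # assumed detector response, noise-free

slope, intercept = np.polyfit(conc, counts, 1)  # least-squares line

def to_concentration(measured_counts):
    """Invert the calibration line to report ug/g (dry basis)."""
    return (measured_counts - intercept) / slope

u_ppm = to_concentration(437.0)   # an unknown sample's net counts
```

With real EDXRF data the fit would carry noise, so one would also check the residuals and report the calibration's standard error alongside the concentration.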

  6. [Testing methods for seed quality of Bletilla striata].

    Science.gov (United States)

    Zhang, Zhi-Hui; Liu, Da-Hui; Zhu, Xin-Yan; Ji, Peng-Zhang; Wang, Li; Shi, Ya-Na; Ma, Cong-Ji

    2016-06-01

    In order to provide a basis for establishing seed testing rules and a seed quality standard for Bletilla striata, the seed quality of B. striata from different producing areas was measured with reference to the Rules for Agricultural Seed Testing (GB/T 3543-1995). The results showed that the seeds of B. striata should be passed through a 20-mesh sieve for purity analysis. Seed weight was measured by the 1000-seed method and water content was measured at the higher temperature of (133 ± 2) °C for 3 hours. The seeds were cultured on wet filter paper at 30 °C in light for 4-20 days for germination testing. For viability testing, seeds were soaked in 1% TTC solution at 40 °C for 7 hours. Copyright© by the Chinese Pharmaceutical Association.

  7. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-01-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results are presented: the radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance

  8. Standardized waste form test methods

    International Nuclear Information System (INIS)

    Slate, S.C.

    1984-11-01

    The Materials Characterization Center (MCC) is developing standard tests to characterize nuclear waste forms. Development of the first thirteen tests was originally initiated to provide data to compare different high-level waste (HLW) forms and to characterize their basic performance. The current status of the first thirteen MCC tests and some sample test results is presented: The radiation stability tests (MCC-6 and 12) and the tensile-strength test (MCC-11) are approved; the static leach tests (MCC-1, 2, and 3) are being reviewed for full approval; the thermal stability (MCC-7) and microstructure evaluation (MCC-13) methods are being considered for the first time; and the flowing leach test methods (MCC-4 and 5), the gas generation methods (MCC-8 and 9), and the brittle fracture method (MCC-10) are indefinitely delayed. Sample static leach test data on the ARM-1 approved reference material are presented. Established tests and proposed new tests will be used to meet new testing needs. For waste form production, tests on stability and composition measurement are needed to provide data to ensure waste form quality. In transportation, data are needed to evaluate the effects of accidents on canisterized waste forms. The new MCC-15 accident test method and some data are presented. Compliance testing needs required by the recent draft repository waste acceptance specifications are described. These specifications will control waste form contents, processing, and performance. 2 references, 2 figures

  9. Probabilistic methods for rotordynamics analysis

    Science.gov (United States)

    Wu, Y.-T.; Torng, T. Y.; Millwater, H. R.; Fossum, A. F.; Rheinfurth, M. H.

    1991-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of dynamic systems that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the eigenvalues or Routh-Hurwitz test functions are investigated. Computational methods based on a fast probability integration concept and an efficient adaptive importance sampling method are proposed to perform efficient probabilistic analysis. A numerical example is provided to demonstrate the methods.
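The eigenvalue instability criterion for a second-order system can be checked by plain Monte Carlo sampling, a simple stand-in here for the paper's fast probability integration and adaptive importance sampling; the parameter distributions below are invented.

```python
# Monte Carlo estimate of the probability of dynamic instability
# for m*x'' + c*x' + k*x = 0 with uncertain c and k (invented distributions).
import numpy as np

def is_unstable(m, c, k):
    """Unstable if any eigenvalue of the state matrix has positive real part."""
    A = np.array([[0.0, 1.0],
                  [-k / m, -c / m]])
    return np.max(np.linalg.eigvals(A).real) > 0.0

rng = np.random.default_rng(42)
m = 1.0
c_samples = rng.normal(0.05, 0.1, 10_000)   # damping may go negative
k_samples = rng.normal(4.0, 0.2, 10_000)    # stiffness, safely positive

p_instability = np.mean([is_unstable(m, c, k)
                         for c, k in zip(c_samples, k_samples)])
```

For this system, instability occurs essentially when the damping sample is negative, so the estimate tracks P(c < 0); importance sampling methods like those in the paper concentrate samples near that failure boundary to cut the cost.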

  10. "Comparison of some Structural Analyses Methods used for the Test Pavement in the Danish Road Testing Machine

    DEFF Research Database (Denmark)

    Baltzer, S.; Zhang, W.; Macdonald, R.

    1998-01-01

    A flexible test pavement, instrumented to measure stresses and strains in the three primary axes within the upper 400 mm of the subgrade, has been constructed and load tested in the Danish Road Testing Machine (RTM). One objective of this research, which is part of the International Pavement Subgrade... The instrumentation includes Pressure Cells, Thermistors and Pore Pressure Sensors. Routine monitoring of instrument responses and surface profiles with a Profilometer, together with FWD/LWD structural testing, was undertaken at regular intervals during the construction and load testing programmes. This paper compares various structural analysis methods used for the RTM test pavement with data from FWD testing undertaken after the construction and loading programmes. Multilayer linear elastic forward and backcalculation methods, a finite element program and MS Excel spreadsheet-based methods are compared.

  11. Rationality analysis of field test method for evaluation of geological condition of deposit

    International Nuclear Information System (INIS)

    Huo Jiandang; Wang Ping

    2008-01-01

    The systematical analysis of data obtained is made combined with practical hydrogeology test. The data are treated with computer software for special hydrogeological parameters to obtain hydrogeological parameters at the test zone. Hydrogeological parameters obtained are discussed, the rationality of well pattern employed for hydrogeological test is analyzed, and suggestions are proposed. (authors)

  12. Application of the WST-method for fracture testing of fibre-reinforced concrete

    DEFF Research Database (Denmark)

    Löfgren, Ingemar; Olesen, John Forbes; Flansbjer, Mathias

    To evaluate the reproducibility of the wedge-splitting test method and to provide guidelines, a round robin study was conducted in which three labs participated. The participating labs were: § DTU – the Technical University of Denmark, Department of Civil Engineering; § CTH – Chalmers University...... The conclusions that can be drawn from this study are that: § the wedge-splitting test method is a suitable test method for assessment of fracture properties of steel fibre-reinforced concrete; § the test method is easy to handle and relatively fast to execute; § the test can be run with CMOD-control or without, in a machine...... more than three times the fibre length; § using inverse analysis, the tensile fracture properties can be interpreted from the test result as a bi-linear stress-crack opening relationship.......

  13. Semiconductor testing method

    International Nuclear Information System (INIS)

    Brown, Stephen.

    1992-01-01

    In a method of avoiding use of nuclear radiation, eg gamma rays, X-rays, electron beams, for testing semiconductor components for resistance to hard radiation, which hard radiation causes data corruption in some memory devices and 'latch-up' in others, similar fault effects can be achieved using a xenon or other 'light' flash gun even though the penetration of light is significantly less than that of gamma rays. The method involves treating a device with gamma radiation, measuring a particular fault current at the onset of a fault event, repeating the test with light to confirm the occurrence of the fault event at the same measured fault current, and using the fault current value as a reference for future tests using light on similar devices. (author)

  14. Repeatability study of replicate crash tests: A signal analysis approach.

    Science.gov (United States)

    Seppi, Jeremy; Toczyski, Jacek; Crandall, Jeff R; Kerrigan, Jason

    2017-10-03

    To provide an objective basis on which to evaluate the repeatability of vehicle crash test methods, a recently developed signal analysis method was used to evaluate correlation of sensor time history data between replicate vehicle crash tests. The goal of this study was to evaluate the repeatability of rollover crash tests performed with the Dynamic Rollover Test System (DRoTS) relative to other vehicle crash test methods. Test data from DRoTS tests, deceleration rollover sled (DRS) tests, frontal crash tests, frontal offset crash tests, small overlap crash tests, small overlap impact (SOI) crash tests, and oblique crash tests were obtained from the literature and publicly available databases (the NHTSA vehicle database and the Insurance Institute for Highway Safety TechData) to examine crash test repeatability. Signal analysis of the DRoTS tests showed that force and deformation time histories had good to excellent repeatability, whereas vehicle kinematics showed only fair repeatability due to the vehicle mounting method for one pair of tests and slightly dissimilar mass properties (2.2%) in a second pair of tests. Relative to the DRS, the DRoTS tests showed very similar or higher levels of repeatability in nearly all vehicle kinematic data signals with the exception of global X' (road direction of travel) velocity and displacement due to the functionality of the DRoTS fixture. Based on the average overall scoring metric of the dominant acceleration, DRoTS was found to be as repeatable as all other crash tests analyzed. Vertical force measures showed good repeatability and were on par with frontal crash barrier forces. Dynamic deformation measures showed good to excellent repeatability as opposed to poor repeatability seen in SOI and oblique deformation measures. Using the signal analysis method as outlined in this article, the DRoTS was shown to have the same or better repeatability of crash test methods used in government regulatory and consumer evaluation test
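The idea of scoring correlation between replicate sensor time histories can be illustrated with a generic normalized cross-correlation metric. This is only a stand-in for the specialized signal analysis rating actually used in such studies (e.g. CORA-style methods), and the signals below are invented:

```python
# Illustrative repeatability score for paired sensor time histories:
# the peak of the normalized cross-correlation, which is 1.0 for
# identical signals and decreases with noise or distortion. This is a
# generic stand-in, not the specific metric used in the study above.
import numpy as np

def repeatability_score(a, b):
    """Peak normalized cross-correlation of two equal-length signals."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

t = np.linspace(0, 1, 500)
ref = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # synthetic reference crash pulse
rep = ref + 0.05 * np.sin(2 * np.pi * 50 * t)      # replicate with added vibration noise

good = repeatability_score(ref, rep)
```

A score near 1.0 indicates the replicate reproduces the reference pulse closely; real rating schemes additionally weight phase, magnitude, and shape separately.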

  15. Standard test methods for chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of nuclear-grade uranyl nitrate solutions

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 These test methods cover procedures for the chemical, mass spectrometric, spectrochemical, nuclear, and radiochemical analysis of nuclear-grade uranyl nitrate solution to determine compliance with specifications. 1.2 The analytical procedures appear in the following order: Sections Determination of Uranium 7 Specific Gravity by Pycnometry 15-20 Free Acid by Oxalate Complexation 21-27 Determination of Thorium 28 Determination of Chromium 29 Determination of Molybdenum 30 Halogens Separation by Steam Distillation 31-35 Fluoride by Specific Ion Electrode 36-42 Halogen Distillate Analysis: Chloride, Bromide, and Iodide by Amperometric Microtitrimetry 43 Determination of Chloride and Bromide 44 Determination of Sulfur by X-Ray Fluorescence 45 Sulfate Sulfur by (Photometric) Turbidimetry 46 Phosphorus by the Molybdenum Blue (Photometric) Method 54-61 Silicon by the Molybdenum Blue (Photometric) Method 62-69 Carbon by Persulfate Oxidation-Acid Titrimetry 70 Conversion to U3O8 71-74 Boron by ...

  16. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied to models with parameters represented by the uniform probability distribution function only, has been modified to be applied to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters have been investigated. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results because it destroys the orthogonality of the trigonometric functions, which is required for Fourier analysis.
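The search-curve construction at the heart of FAST can be sketched for uniform inputs as follows. The frequency set, harmonic count, and toy model are illustrative choices, not those of the paper; each input is driven along a periodic curve at its own frequency, and the first-order index is read off from the Fourier coefficients of the output at that frequency and its harmonics:

```python
# Minimal first-order FAST sketch for uniform inputs on [0, 1].
# Frequencies and harmonic count are illustrative assumptions.
import numpy as np

def fast_first_order(model, freqs, n_samples=1025, harmonics=4):
    s = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    # Search-curve transformation: each input oscillates at its own frequency
    X = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi
    y = model(X)
    total = np.var(y)
    indices = []
    for w in freqs:
        partial = 0.0
        for h in range(1, harmonics + 1):
            # Fourier coefficients of the output at harmonic h of frequency w
            A = 2 * np.mean(y * np.cos(h * w * s))
            B = 2 * np.mean(y * np.sin(h * w * s))
            partial += 0.5 * (A ** 2 + B ** 2)
        indices.append(partial / total)
    return np.array(indices)

# Toy linear model: the first input dominates, the third is inert.
model = lambda X: 5.0 * X[0] + 1.0 * X[1] + 0.0 * X[2]
S = fast_first_order(model, freqs=np.array([11, 21, 27]))
```

For the toy model the first input should capture most of the output variance and the inert input essentially none, which is what the indices recover.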

  17. Standard test method for analysis of total and isotopic uranium and total thorium in soils by inductively coupled plasma-mass spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers the measurement of total uranium (U) and thorium (Th) concentrations in soils, as well as the determination of the isotopic weight percentages of 234U, 235U, 236U, and 238U, thereby allowing for the calculation of individual isotopic uranium activity or total uranium activity. This inductively coupled plasma-mass spectroscopy (ICP-MS) method is intended as an alternative analysis to methods such as alpha spectroscopy or thermal ionization mass spectroscopy (TIMS). Also, while this test method covers only those isotopes listed above, the instrumental technique may be expanded to cover other long-lived radioisotopes since the preparation technique includes the preconcentration of the actinide series of elements. The resultant sample volume can be further reduced for introduction into the ICP-MS via an electrothermal vaporization (ETV) unit or other sample introduction device, even though the standard peristaltic pump introduction is applied for this test method. The sample preparatio...

  18. Reliability Verification of DBE Environment Simulation Test Facility by using Statistics Method

    International Nuclear Information System (INIS)

    Jang, Kyung Nam; Kim, Jong Soeg; Jeong, Sun Chul; Kyung Heum

    2011-01-01

    In nuclear power plants, all safety-related equipment, including cables, operating under harsh environments must undergo equipment qualification (EQ) according to IEEE Std 323. There are three types of qualification methods: type testing, operating experience and analysis. In order to environmentally qualify safety-related equipment using the type testing method, rather than the analysis or operating experience methods, a representative sample of the equipment, including interfaces, should be subjected to a series of tests. Among these tests, the Design Basis Events (DBE) environment simulation test is the most important. The DBE simulation test is performed in a DBE simulation test chamber according to the postulated DBE conditions, including specified high-energy line break (HELB), loss of coolant accident (LOCA) and main steam line break (MSLB) conditions, after thermal and radiation aging. Because most DBE conditions involve 100% humidity, high temperature steam should be used in order to trace the temperature and pressure of the DBE condition. During the DBE simulation test, if high temperature steam under high pressure is injected into the DBE test chamber, the temperature and pressure in the chamber rapidly increase beyond the target temperature. The temperature and pressure in the test chamber therefore fluctuate continuously during the DBE simulation test while being held at the target values. The fairness and accuracy of test results should be ensured by confirming the performance of the DBE environment simulation test facility. In this paper, a statistical method is used to verify the reliability of the DBE environment simulation test facility

  19. Moisture distribution in sludges based on different testing methods

    Institute of Scientific and Technical Information of China (English)

    Wenyi Deng; Xiaodong Li; Jianhua Yan; Fei Wang; Yong Chi; Kefa Cen

    2011-01-01

    Moisture distributions in municipal sewage sludge, printing and dyeing sludge and paper mill sludge were experimentally studied based on four different methods, i.e., a drying test, a thermogravimetric-differential thermal analysis (TG-DTA) test, a thermogravimetric-differential scanning calorimetry (TG-DSC) test and a water activity test. The results indicated that the moisture in the mechanically dewatered sludges comprised interstitial water, surface water and bound water. The interstitial water accounted for more than 50% wet basis (wb) of the total moisture content. The bond strength of sludge moisture increased with decreasing moisture content, especially when the moisture content was lower than 50% wb. Furthermore, a comparison among the four testing methods was presented. The advantage of the drying test was its ability to quantify free water, interstitial water, surface water and bound water, while the TG-DSC, TG-DTA and water activity tests were capable of determining the bond strength of moisture in sludge. It was found that the results from the TG-DSC and TG-DTA tests are more persuasive than those from the water activity test.

  20. Analysis using formal method and testing technique for the processor module for safety-critical application

    International Nuclear Information System (INIS)

    Choi, J. Y.; Choi, B. J.; Song, H. J.; Hwang, D. Y.; Song, G. H.; Lee, H.

    2008-06-01

    This research helps develop a nuclear power plant control system through the development of requirement specification and verification methods. As a result of applying the test method, a test standard was obtained through test documentation writing support, together with a test document reflecting the standard test activities based on that standard. The specification and verification of the pCOS system and the unified testing documentation and execution help the entire project to progress and enable us to obtain the documents and technology necessary to develop a safety-critical system

  1. Analysis using formal method and testing technique for the processor module for safety-critical application

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J. Y.; Choi, B. J.; Song, H. J.; Hwang, D. Y.; Song, G. H.; Lee, H. [Korea University, Seoul (Korea, Republic of)

    2008-06-15

    This research helps develop a nuclear power plant control system through the development of requirement specification and verification methods. As a result of applying the test method, a test standard was obtained through test documentation writing support, together with a test document reflecting the standard test activities based on that standard. The specification and verification of the pCOS system and the unified testing documentation and execution help the entire project to progress and enable us to obtain the documents and technology necessary to develop a safety-critical system.

  2. Exploration of analysis methods for diagnostic imaging tests: problems with ROC AUC and confidence scores in CT colonography.

    Science.gov (United States)

    Mallett, Susan; Halligan, Steve; Collins, Gary S; Altman, Doug G

    2014-01-01

    Different methods of evaluating diagnostic performance when comparing diagnostic tests may lead to different results. We compared two such approaches, sensitivity and specificity with area under the Receiver Operating Characteristic curve (ROC AUC), for the evaluation of CT colonography for the detection of polyps, either with or without computer-assisted detection. In a multireader multicase study of 10 readers and 107 cases we compared sensitivity and specificity, using radiological reporting of the presence or absence of polyps, to ROC AUC calculated from confidence scores concerning the presence of polyps. Both methods were assessed against a reference standard. Here we focus on five readers, selected to illustrate issues in design and analysis. We compared diagnostic measures within readers, showing that differences in results are due to statistical methods. Reader performance varied widely depending on whether sensitivity and specificity or ROC AUC was used. There were problems using confidence scores: in assigning scores to all cases; in the use of zero scores when no polyps were identified; in the bimodal non-normal distribution of scores; in fitting ROC curves due to extrapolation beyond the study data; and in the undue influence of a few false positive results. Variation due to use of different ROC methods exceeded differences between test results for ROC AUC. The confidence scores recorded in our study violated many assumptions of ROC AUC methods, rendering these methods inappropriate. The problems we identified will apply to other detection studies using confidence scores. We found sensitivity and specificity to be a more reliable and clinically appropriate method to compare diagnostic tests.
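The contrast between the two evaluation approaches can be made concrete with a small worked example: per-reader sensitivity and specificity computed from binary calls at a reporting threshold, versus ROC AUC computed from confidence scores in its Mann-Whitney form. The data below are invented and do not come from the study:

```python
# Sketch contrasting the two diagnostic evaluation approaches:
# sensitivity/specificity from binary polyp calls vs ROC AUC from
# reader confidence scores. All case data are invented.
import numpy as np

def sens_spec(calls, truth):
    tp = np.sum((calls == 1) & (truth == 1))
    tn = np.sum((calls == 0) & (truth == 0))
    return tp / np.sum(truth == 1), tn / np.sum(truth == 0)

def roc_auc(scores, truth):
    """AUC = probability a positive case outscores a negative one (ties count 1/2)."""
    pos, neg = scores[truth == 1], scores[truth == 0]
    wins = sum(np.sum(p > neg) + 0.5 * np.sum(p == neg) for p in pos)
    return wins / (len(pos) * len(neg))

truth  = np.array([1, 1, 1, 1, 0, 0, 0, 0])           # reference standard
scores = np.array([90, 80, 40, 70, 30, 20, 60, 10])    # reader confidence 0-100
calls  = (scores >= 50).astype(int)                    # binary report at one threshold

se, sp = sens_spec(calls, truth)
auc = roc_auc(scores, truth)
```

Note how the two summaries can disagree: the thresholded calls give sensitivity and specificity of 0.75 each, while the same scores yield an AUC of 0.9375, since AUC integrates over all thresholds.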

  3. Multigroup Moderation Test in Generalized Structured Component Analysis

    Directory of Open Access Journals (Sweden)

    Angga Dwi Mulyanto

    2016-05-01

    Generalized Structured Component Analysis (GSCA) is an alternative method in structural modeling using alternating least squares. GSCA can be used for complex analyses, including multigroup analysis. GSCA can be run with a free software package called GeSCA, but GeSCA provides no multigroup moderation test to compare effects between groups. In this research we propose using the T test from PLS for testing multigroup moderation in GSCA. The T test requires only the sample size, estimated path coefficient, and standard error of each group, all of which are already available in the GeSCA output, and the formula is simple, so the analysis does not take the user long.
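One common parametric form of the multigroup t test in PLS (after Keil et al.) takes exactly the quantities the abstract lists: per-group sample size, path coefficient estimate, and standard error. Whether this is the precise formula proposed for GSCA is an assumption; the sketch below shows the pooled-variance form:

```python
# Hedged sketch of a parametric multigroup t test for comparing a path
# coefficient between two groups (Keil et al. pooled form). Inputs are
# the per-group sample size, coefficient estimate, and standard error;
# the exact formula used with GeSCA output may differ.
import math

def multigroup_t(n1, b1, se1, n2, b2, se2):
    """t statistic and degrees of freedom for H0: the path coefficients are equal."""
    df = n1 + n2 - 2
    pooled = math.sqrt(((n1 - 1) ** 2 * se1 ** 2 + (n2 - 1) ** 2 * se2 ** 2) / df)
    t = (b1 - b2) / (pooled * math.sqrt(1.0 / n1 + 1.0 / n2))
    return t, df

# Hypothetical two-group comparison of one path coefficient
t, df = multigroup_t(n1=100, b1=0.45, se1=0.08, n2=120, b2=0.20, se2=0.09)
```

The resulting statistic is referred to a t distribution with n1 + n2 - 2 degrees of freedom to decide whether the group difference is significant.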

  4. Review of the Air-Coupled Impact-Echo Method for Non-Destructive Testing

    Science.gov (United States)

    Nowotarski, Piotr; Dubas, Sebastian; Milwicz, Roman

    2017-10-01

    The article presents the general idea of the Air-Coupled Impact-Echo (ACIE) method, one of the non-destructive testing (NDT) techniques used in the construction industry. One of the main advantages of the general Impact-Echo (IE) method is that access to only one side of the structure is sufficient, which greatly facilitates research on road facilities or in places that are difficult to access and diagnose. The main purpose of the article is to present the state of the art of the ACIE method based on the publications available in the Thomson Reuters Web of Science Core Collection database (WOS), with further analysis of the mentioned methods. A deeper analysis was also performed for the newest publications, published within the last 3 years, related to ACIE, to investigate the main focus of researchers and scientists and to define possible regions where additional examination and work is necessary. One of the main conclusions from the analysis is that ACIE methods can be widely used for performing NDT of concrete structures and can be performed faster than the standard IE method thanks to air-coupled sensors. What is more, 92.3% of the analysed recent research described in publications connected with ACIE was performed in laboratories, and only 23.1% in situ on real structures. This indicates that the method requires further research to prepare a test stand ready to perform analysis on real objects outside laboratory conditions. Moreover, the algorithms used for data processing and later presentation in the ACIE method are still being developed, and there is no universal solution available for all kinds of existing and potential defects, which indicates a possible research area for further work. The authors are of the opinion that the emerging ACIE method could be a good opportunity for ND testing, especially of concrete structures. 
Development and refinement of test stands that will allow in-situ tests to be performed could

  5. Methodology for developing new test methods

    Directory of Open Access Journals (Sweden)

    A. I. Korobko

    2017-06-01

    The paper describes a methodology for developing new test methods and for forming solutions during their development. The basis of the methodology is individual elements of the system and process approaches, which contribute to an effective research strategy for the object, the study of interrelations, and the synthesis of an adequate model of the test method. The effectiveness of the developed test method is determined by the correct choice of the set of concepts and of their interrelations and mutual influence, which allows the assigned tasks to be solved and the goal to be achieved. The methodology is based on the use of fuzzy cognitive maps, and the choice of the method on which the solution-formation model is based is considered. The methodology provides for recording a model of a new test method as a finite set of objects that are significant for the characteristics of the test method. A causal relationship is then established between the objects, and the values of the fitness indicators, the observability of the method, and the metrological tolerance for each indicator are established. The work is aimed at the overall goal of ensuring the quality of tests by improving the methodology for developing test methods.

  6. 40 CFR 63.1365 - Test methods and initial compliance procedures.

    Science.gov (United States)

    2010-07-01

    ... permit limit applicable to the process vent. (D) Design analysis based on accepted chemical engineering... concentration, temperature, and the reaction kinetics of the constituents with the scrubbing liquid. The design... procedures specified in Method 8260 or 8270 in “Test Methods for Evaluating Solid Waste, Physical/Chemical...

  7. Winston-Lutz Test: A quantitative analysis

    International Nuclear Information System (INIS)

    Pereira, Aline Garcia; Nandi, Dorival Menegaz; Saraiva, Crystian Wilian Chagas

    2017-01-01

    Objective: Describe a method of quantitative analysis for the Winston-Lutz test. Materials and methods: The research is a qualitative exploratory study. The materials used were: portal film; Winston-Lutz test tools; and Omni Pro software. Sixteen portal films were used as samples and were analyzed by five different technicians to measure the deviation between the radiation and mechanical isocenters. Results: Among the results, two combinations were identified with offset values greater than 1 mm. In addition, when the developed method was compared with the one previously studied, it was observed that the data obtained are very close, with a maximum percentage deviation of 32.5%, which demonstrates its efficacy in reducing dependence on the performer. Conclusion: The results show that the method is reproducible and practical, which constitutes one of the fundamental factors for its implementation. (author)
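The core quantity behind a quantitative Winston-Lutz analysis is the distance on each portal image between the radiation field centre and the ball-bearing (mechanical) centre. The sketch below illustrates that computation only; the coordinates are invented and the study's Omni Pro workflow is not reproduced:

```python
# Illustrative Winston-Lutz deviation computation: 2D distance in mm
# between the radiation field centre and the mechanical (ball-bearing)
# centre on each portal image. All coordinates are invented.
import math

def isocenter_deviation(field_center, bb_center):
    """Euclidean distance in mm between radiation and mechanical isocentres."""
    dx = field_center[0] - bb_center[0]
    dy = field_center[1] - bb_center[1]
    return math.hypot(dx, dy)

# One portal image per gantry angle: (field centre, ball-bearing centre) in mm
films = {
      0: ((0.30, -0.20), (0.00, 0.00)),
     90: ((-0.45, 0.10), (0.00, 0.00)),
    180: ((0.20, 0.55), (0.00, 0.00)),
    270: ((0.80, -0.70), (0.00, 0.00)),   # this combination exceeds a 1 mm tolerance
}
deviations = {ang: isocenter_deviation(f, b) for ang, (f, b) in films.items()}
worst = max(deviations.values())
```

Flagging combinations whose deviation exceeds a fixed tolerance (e.g. 1 mm) mirrors the pass/fail judgement described in the abstract.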

  8. Towards Automatic Testing of Reference Point Based Interactive Methods

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2016-01-01

    In order to understand strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, there do not exist tools for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating ...

  9. Study on color difference estimation method of medicine biochemical analysis

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Sun, Jiashi; Zhou, Fengkun

    2006-01-01

    Biochemical analysis is an important inspection and diagnosis method in hospital clinics, and the biochemical analysis of urine is one important item. The urine test paper shows a corresponding color for each detection project or degree of illness. The color difference between the standard threshold and the color of the urine test paper can be used to judge the degree of illness, so that further analysis and diagnosis of the urine can be made. Color is a three-dimensional physical variable concerning psychology, while reflectance is a one-dimensional variable; therefore, the color-difference estimation method in urine testing can achieve better precision and convenience than the conventional test method based on one-dimensional reflectance, and it can support an accurate diagnosis. A digital camera can easily take an image of the urine test paper and is used to carry out the urine biochemical analysis conveniently. In the experiment, the color image of the urine test paper was taken by a popular color digital camera and saved in a computer on which a simple color space conversion (RGB -> XYZ -> L*a*b*) and the calculation software were installed. The test sample is graded according to intelligent detection of quantitative color. The images taken each time are saved in the computer, so the whole illness process can be monitored. This method can also be used in other medical biochemical analyses related to color. Experimental results show that this test method is quick and accurate; it can be used in hospitals, calibration organizations and homes, so its application prospects are extensive.
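The colour pipeline named above (RGB -> XYZ -> L*a*b*) with a colour-difference metric can be sketched as follows. sRGB primaries, a D65 white point, and the CIE76 ΔE formula are assumptions here, and camera calibration is ignored; the reference pad colours are invented:

```python
# Hedged sketch of the RGB -> XYZ -> L*a*b* conversion and a CIE76
# colour difference, as one way to grade a urine test-pad photo against
# standard colours. sRGB/D65 conventions are assumed; the reference
# colours below are invented.
import math

def srgb_to_lab(r, g, b):
    def lin(c):                       # undo sRGB gamma, input 0-255
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # sRGB (D65) to CIE XYZ
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    def f(t):                         # CIELAB nonlinearity
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e(c1, c2):
    """CIE76 colour difference between two RGB colours, computed in L*a*b*."""
    return math.dist(srgb_to_lab(*c1), srgb_to_lab(*c2))

# Grade a measured pad colour against two invented reference colours
measured = (182, 125, 45)
negative = (210, 200, 120)
positive = (180, 120, 40)
grade = min((negative, positive), key=lambda ref: delta_e(measured, ref))
```

Grading then reduces to picking the reference colour with the smallest ΔE to the photographed pad, which is what the `min` call does.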

  10. Solubility tests and the peripheral blood film method for screening ...

    African Journals Online (AJOL)

    Objective. To determine the cost benefit of screening for sickle-cell disease among infants at district health centres in Uganda using sickling, solubility tests and the peripheral blood film method. Methods. Pilot screening services were established at district health centres. Cost benefit analysis (CBA) was performed in four ...

  11. 49 CFR 383.133 - Testing methods.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Testing methods. 383.133 Section 383.133... STANDARDS; REQUIREMENTS AND PENALTIES Tests § 383.133 Testing methods. (a) All tests shall be constructed in... must be at least as stringent as the Federal standards. (c) States shall determine specific methods for...

  12. Standard test method for liquid impingement erosion using rotating apparatus

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method covers tests in which solid specimens are eroded or otherwise damaged by repeated discrete impacts of liquid drops or jets. Among the collateral forms of damage considered are degradation of optical properties of window materials, and penetration, separation, or destruction of coatings. The objective of the tests may be to determine the resistance to erosion or other damage of the materials or coatings under test, or to investigate the damage mechanisms and the effect of test variables. Because of the specialized nature of these tests and the desire in many cases to simulate to some degree the expected service environment, the specification of a standard apparatus is not deemed practicable. This test method gives guidance in setting up a test, and specifies test and analysis procedures and reporting requirements that can be followed even with quite widely differing materials, test facilities, and test conditions. It also provides a standardized scale of erosion resistance numbers applicab...

  13. Seismic design and analysis methods

    International Nuclear Information System (INIS)

    Varpasuo, P.

    1993-01-01

    Seismic load is, in many areas of the world, the most important loading situation from the point of view of structural strength. Taking this into account, it is understandable that there has been a strong allocation of resources to seismic analysis during the past ten years. This study has three centres of gravity: (1) random vibrations; (2) soil-structure interaction; and (3) methods for determining the structural response. The solution of random vibration problems is clarified with the aid of applications in this study; from the point of view of mathematical treatment and formulation, it is deemed sufficient to give the relevant sources. In the soil-structure interaction analysis the focus has been the significance of frequency-dependent impedance functions. As a result it was found that describing the soil with frequency-dependent impedance functions decreases the structural response, and it is thus always the preferred method when compared to more conservative analysis types. Of the methods for determining the structural response, the following four were tested: (1) the time-history method; (2) the complex frequency-response method; (3) the response spectrum method; and (4) the equivalent static force method. The time-history method appeared to be the most accurate, and the complex frequency-response method had the widest area of application. (orig.). (14 refs., 35 figs.)
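Two of the four response methods compared above can be illustrated together: a time-history integration of a damped single-degree-of-freedom oscillator, whose peak response at a given period is exactly the ordinate a response spectrum would record. The ground motion below is a synthetic pulse, not a real record, and the explicit central-difference scheme is an illustrative choice:

```python
# Minimal illustration of the time-history method for a damped SDOF
# oscillator under base acceleration, integrated by explicit central
# differences; the peak displacement at a given period is one ordinate
# of the displacement response spectrum. Ground motion is synthetic.
import math

def sdof_peak_response(ag, dt, period, zeta=0.05):
    """Peak relative displacement of an SDOF oscillator: u'' + 2*zeta*w*u' + w^2*u = -ag."""
    w = 2 * math.pi / period
    u_prev, u = 0.0, 0.0
    peak = 0.0
    for a in ag:
        acc = -a - 2 * zeta * w * (u - u_prev) / dt - w * w * u
        u_next = 2 * u - u_prev + acc * dt * dt
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

dt = 0.001
ag = [3.0 * math.sin(2 * math.pi * 2.0 * k * dt) for k in range(2000)]  # 2 s pulse at 2 Hz

sd_resonant = sdof_peak_response(ag, dt, period=0.5)   # oscillator tuned to the excitation
sd_stiff = sdof_peak_response(ag, dt, period=0.05)     # stiff oscillator, far from resonance
```

Sweeping `period` over a range of values and collecting the peaks would trace out the full response spectrum used by method (3).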

  14. Application of software technology to automatic test data analysis

    Science.gov (United States)

    Stagner, J. R.

    1991-01-01

    The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.

  15. Analysis of the optimized H type grid spring by a characterization test and the finite element method under the in-grid boundary

    International Nuclear Information System (INIS)

    Yoon, Kyung Ho; Lee, Kang Hee; Kang, Heung Seok; Song, Kee Nam

    2006-01-01

    Characterization tests (load vs. displacement curve) are conducted for the springs of Zirconium alloy spacer grids for an advanced LWR fuel assembly. Twofold testing is employed: strap-based and assembly-based tests. The assembly-based test satisfies the in situ boundary conditions of the spring within the grid assembly. The aim of the characterization test via the aforementioned two methods is to establish an appropriate assembly-based test method that fulfills the actual boundary conditions. A characterization test under the spacer grid assembly boundary condition is also conducted to investigate the actual behavior of the spring in the core. The stiffness of the characteristic curve is smaller than that under the strap-based boundary condition; this phenomenon may be attributed to the strap slit condition. A spacer grid consists of horizontal and vertical straps, whose strap slit positions differ from each other; this affords examination of the variation of the external load distribution in the grid spring. Localized regions of high stress and their values are analyzed, as they may be affected by the spring shape. Through a comparison of the results of the test and FE analysis, it is concluded that the present assembly-based analysis model and procedure are reasonably well conducted and can be used for spring characterization in the core. Guidelines for improving the mechanical integrity of the spring are also discussed

  16. Development of laboratory acceleration test method for service life prediction of concrete structures

    International Nuclear Information System (INIS)

    Cho, M. S.; Song, Y. C.; Bang, K. S.; Lee, J. S.; Kim, D. K.

    1999-01-01

    Service life prediction of nuclear power plants depends on the operating history of structures, field inspection and testing, and the development of laboratory acceleration tests, their analysis methods and predictive models. In this study, a laboratory acceleration test method for service life prediction of concrete structures and the application of experimental test results are introduced. The study is concerned with the environmental conditions of concrete structures and aims to develop acceleration test methods for the durability factors of concrete structures, e.g., carbonation, sulfate attack, freeze-thaw cycles and shrinkage-expansion.

  17. 40 CFR 63.547 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Test methods. 63.547 Section 63.547... Hazardous Air Pollutants from Secondary Lead Smelting § 63.547 Test methods. (a) The following test methods...), and 63.545(e): (1) Method 1 shall be used to select the sampling port location and the number of...

  18. Materials and test methods

    International Nuclear Information System (INIS)

    Kase, M.B.

    1985-01-01

    The objectives of this study were to provide, in cooperation with ORNL and LANL, the specimens required for studies to develop organic insulators having the cryogenic neutron irradiation resistance required for MFE systems utilizing superconducting magnetic confinement; to develop test methods and analytical procedures for assessing radiation damage; and to stimulate and participate in international cooperation directed toward accomplishing these objectives. The system for producing uniaxially reinforced, 3-4 mm (0.125 in) diameter rod specimens has been refined and validated by production of excellent-quality specimens using liquid-mix epoxy resin systems. The methodology is undergoing further modification to permit use of hot-melt epoxy and polyimide resin systems, as will be required for the experimental program to be conducted in the NLTNIF reactor at ORNL. Preliminary studies indicate that short-beam and torsional shear test methods will be useful in evaluating radiation degradation. Development of these and other applicable test methods is continuing. A cooperative program established with laboratories in Japan and in England has resulted in the production and testing of specimens having an identical configuration

  19. Radon barrier: Method of testing airtightness

    DEFF Research Database (Denmark)

    Rasmussen, Torben Valdbjørn; Buch-Hansen, Thomas Cornelius

    2017-01-01

    The test method NBI 167/02 Radon membrane: Test of airtightness can be used for determining the airtightness of a radon barrier as a system solution. The test determines the air infiltration through the radon barrier for a number of levels of air pressure differences. The airflow through versus...... of the barrier with the low air pressure, through a well-defined opening, as a modification of the test method in general. Results, obtained using the improved test method, are shown for a number of radon barriers tested....

  20. Development of analysis methods for seismically isolated nuclear structures

    International Nuclear Information System (INIS)

    Yoo, Bong; Lee, Jae-Han; Koo, Gyeng-Hoi

    2002-01-01

    KAERI's contributions to the project entitled Development of Analysis Methods for Seismically Isolated Nuclear Structures, carried out under the IAEA CRP on the intercomparison of analysis methods for predicting the behaviour of seismically isolated nuclear structures during 1996-1999, are briefly described. The effort aimed to develop numerical analysis methods and to compare the analysis results with the benchmark test results for seismic isolation bearings and isolated nuclear structures provided by the participating countries. Certain progress in the analysis procedures for isolation bearings and isolated nuclear structures was made throughout the IAEA CRPs, and the analysis methods developed can be improved for future nuclear facility applications. (author)

  1. Quantitative impact characterization of aeronautical CFRP materials with non-destructive testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Kiefel, Denis, E-mail: Denis.Kiefel@airbus.com, E-mail: Rainer.Stoessel@airbus.com; Stoessel, Rainer, E-mail: Denis.Kiefel@airbus.com, E-mail: Rainer.Stoessel@airbus.com [Airbus Group Innovations, Munich (Germany); Grosse, Christian, E-mail: Grosse@tum.de [Technical University Munich (Germany)

    2015-03-31

    In recent years, an increasing number of safety-relevant structures have been designed and manufactured from carbon fiber reinforced polymers (CFRP) in order to reduce the weight of airplanes by taking advantage of their high specific strength. Non-destructive testing (NDT) methods for quantitative defect analysis of damages are liquid- or air-coupled ultrasonic testing (UT), phased array ultrasonic techniques, and active thermography (IR). The advantage of these testing methods is their applicability to large areas. However, their quantitative information is often limited to impact localization and size. In addition to these techniques, Airbus Group Innovations operates a micro x-ray computed tomography (μ-XCT) system, which was developed for CFRP characterization. It is an open system which allows different kinds of acquisition, reconstruction, and data evaluation. One main advantage of this μ-XCT system is its high resolution combined with 3-dimensional analysis and visualization capabilities, which makes it possible to gain important quantitative information for composite part design and stress analysis. Within this study, different NDT methods are compared on CFRP samples with specified artificial impact damages. The results can be used to select the most suitable NDT method for specific application cases. Furthermore, novel evaluation and visualization methods for impact analyses are developed and presented.

  2. Quantitative impact characterization of aeronautical CFRP materials with non-destructive testing methods

    International Nuclear Information System (INIS)

    Kiefel, Denis; Stoessel, Rainer; Grosse, Christian

    2015-01-01

    In recent years, an increasing number of safety-relevant structures have been designed and manufactured from carbon fiber reinforced polymers (CFRP) in order to reduce the weight of airplanes by taking advantage of their high specific strength. Non-destructive testing (NDT) methods for quantitative defect analysis of damages are liquid- or air-coupled ultrasonic testing (UT), phased array ultrasonic techniques, and active thermography (IR). The advantage of these testing methods is their applicability to large areas. However, their quantitative information is often limited to impact localization and size. In addition to these techniques, Airbus Group Innovations operates a micro x-ray computed tomography (μ-XCT) system, which was developed for CFRP characterization. It is an open system which allows different kinds of acquisition, reconstruction, and data evaluation. One main advantage of this μ-XCT system is its high resolution combined with 3-dimensional analysis and visualization capabilities, which makes it possible to gain important quantitative information for composite part design and stress analysis. Within this study, different NDT methods are compared on CFRP samples with specified artificial impact damages. The results can be used to select the most suitable NDT method for specific application cases. Furthermore, novel evaluation and visualization methods for impact analyses are developed and presented

  3. Path-Wise Test Data Generation Based on Heuristic Look-Ahead Methods

    Directory of Open Access Journals (Sweden)

    Ying Xing

    2014-01-01

    Full Text Available Path-wise test data generation is generally considered an important problem in the automation of software testing. In essence, it is a constraint optimization problem, which is often solved by search methods such as backtracking algorithms. In this paper, the backtracking algorithm branch and bound and state space search from artificial intelligence are introduced to tackle the problem of path-wise test data generation. The former is utilized to explore the space of potential solutions and the latter is adopted to construct the search tree dynamically. Heuristics are employed in the look-ahead stage of the search. Dynamic variable ordering is presented with a heuristic rule to break ties, the values of a variable are determined by monotonicity analysis of the branching conditions, and path consistency is maintained through analysis of the results of interval arithmetic. An optimization method is also proposed to reduce the search space. The results of empirical experiments show that the search is conducted in a basically backtrack-free manner, which ensures both test data generation with promising performance and better coverage than some existing static and dynamic methods. The results also demonstrate that the proposed method is applicable in engineering.
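
    The search scheme described above can be sketched roughly as follows: a backtracking search over candidate inputs in which a look-ahead step rejects partial assignments that already violate a branching condition on the target path. The path constraints and domains here are hypothetical, and the paper's interval-arithmetic and monotonicity machinery is simplified to plain predicate checks.

```python
# Minimal sketch (hypothetical constraints): backtracking with look-ahead pruning.
def generate_path_data(domains, constraints):
    """domains: {var: iterable of candidate values}.
    constraints: list of (vars, predicate) pairs along the target path."""
    order = sorted(domains, key=lambda v: len(domains[v]))  # simple variable ordering
    assignment = {}

    def consistent():
        # look-ahead: check every constraint whose variables are all assigned
        for vars_, pred in constraints:
            if all(v in assignment for v in vars_):
                if not pred(*(assignment[v] for v in vars_)):
                    return False
        return True

    def search(i):
        if i == len(order):
            return dict(assignment)
        var = order[i]
        for val in domains[var]:
            assignment[var] = val
            if consistent():            # prune before descending
                found = search(i + 1)
                if found:
                    return found
            del assignment[var]
        return None

    return search(0)

# Hypothetical target path: x > 5, x + y == 10, y < 4
data = generate_path_data(
    {"x": range(0, 11), "y": range(0, 11)},
    [(("x",), lambda x: x > 5),
     (("x", "y"), lambda x, y: x + y == 10),
     (("y",), lambda y: y < 4)],
)
print(data)
```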

  4. Survey of sampling-based methods for uncertainty and sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; Sallaberry, C.J.; Storlie, C.B.

    2006-01-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (i) definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (ii) generation of samples from uncertain analysis inputs, (iii) propagation of sampled inputs through an analysis, (iv) presentation of uncertainty analysis results, and (v) determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two-dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition
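
    Steps (ii)-(v) can be illustrated with a minimal sketch: sample two uncertain inputs, propagate them through a toy model, and use rank (Spearman) correlation as the sensitivity measure. The model, distributions, and sample size are arbitrary choices for illustration.

```python
import random

def rank(xs):
    """Ranks of a sequence (no tie handling; fine for continuous samples)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rk, i in enumerate(order):
        r[i] = float(rk)
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

random.seed(0)
# (i)-(ii) uncertain inputs and their samples
a = [random.uniform(0, 1) for _ in range(200)]
b = [random.uniform(0, 1) for _ in range(200)]
# (iii) propagation through a toy model in which input 'a' dominates
y = [3 * ai + 0.1 * bi for ai, bi in zip(a, b)]
# (v) sensitivity: rank correlation of each input with the output
print(spearman(a, y), spearman(b, y))
```

    The dominant input shows a rank correlation near 1, the weak input a correlation near 0, which is exactly the pattern a scatterplot examination would reveal.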

  5. Survey of sampling-based methods for uncertainty and sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD. (.; .); Storlie, Curt B. (Colorado State University, Fort Collins, CO)

    2006-06-01

    Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.

  6. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Science.gov (United States)

    Anggraeni, Novia Antika

    2015-04-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially on volcanoes with inhabited slope areas, such as Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as the degree of seismicity, deformation, and SO2 gas emission. One method that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or an acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy of Merapi Volcano from 1990 - 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, following the FFM graphical technique, using simple linear regression. To improve the precision of the predicted time, data quality was controlled using the correlation coefficient of the inverse seismic energy rate versus time. From the results of the graph analysis, the predicted times deviate from the actual eruption times by between -2.86 and 5.49 days.
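
    The core of the FFM graphical technique, fitting a straight line to the inverse rate and extrapolating it to zero, can be sketched as follows. The synthetic data assume an idealised hyperbolic rate increase toward a failure time of t = 100; they are not actual Merapi data.

```python
# Least-squares line through (t, 1/rate); failure is predicted where the line
# crosses zero, since 1/rate -> 0 as the precursor rate accelerates.
def ffm_predict(times, rates):
    inv = [1.0 / r for r in rates]
    n = len(times)
    mt, mi = sum(times) / n, sum(inv) / n
    slope = (sum((t - mt) * (i - mi) for t, i in zip(times, inv))
             / sum((t - mt) ** 2 for t in times))
    intercept = mi - slope * mt
    return -intercept / slope   # time at which the fitted 1/rate reaches zero

# synthetic accelerating precursor: rate = 1/(T - t), so 1/rate = T - t, T = 100
times = [60, 70, 80, 90]
rates = [1.0 / (100 - t) for t in times]
print(ffm_predict(times, rates))   # -> 100.0 (up to float rounding)
```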

  7. A novel method for feasibility testing urban sustainable development policies

    Directory of Open Access Journals (Sweden)

    O’Doherty Travis

    2013-01-01

    Full Text Available Policy making to promote more sustainable development is a complex task due in part to the large number of both stakeholders and potential policies. Policy feasibility testing provides a guide to the viability and practicality of policy implementation and forms an important part of an evidence based policy making process. An extensive literature review has identified no standardized approach to feasibility testing. This paper addresses this knowledge gap by describing a novel method using Multi-Criteria Decision Analysis (MCDA for feasibility testing of policies aimed at increasing the sustainability of towns and villages in Ireland. Feasibility results are provided for 40 frequently cited policy interventions tested for 18 settlements in Ireland. Policies were selected in the arenas of transport, food, housing and urban form, energy, waste and water. Policies are feasibility tested through analysis of operational evidence from both quantitative and qualitative data sources. Following testing, policies are ranked in terms of feasibility. This research examines the effectiveness of local and national level policies and the importance of both local community involvement and central government regulation in policy success. The inter-settlement variation in feasibility testing scores prioritises policy selection and aims to reduce cherry-picking of policies to support the viewpoints of the decision maker. Although developed for an Irish urban context the methods described here may have applicability elsewhere.
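
    A minimal sketch of the MCDA scoring and ranking step, assuming a simple weighted-sum aggregation (the abstract does not specify the exact MCDA model used); the criteria, weights, and per-policy scores below are invented for illustration.

```python
# Hypothetical criteria weights and per-policy scores (0-1, higher = more feasible).
weights = {"cost": 0.4, "acceptance": 0.35, "regulation": 0.25}
policies = {
    "bike_lanes":    {"cost": 0.7, "acceptance": 0.8, "regulation": 0.9},
    "district_heat": {"cost": 0.3, "acceptance": 0.6, "regulation": 0.5},
    "rainwater":     {"cost": 0.8, "acceptance": 0.5, "regulation": 0.7},
}

def feasibility(scores):
    """Weighted-sum feasibility score for one policy."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank policies by feasibility, highest first, to prioritise selection.
ranking = sorted(policies, key=lambda p: feasibility(policies[p]), reverse=True)
print(ranking)
```

    Ranking policies per settlement in this way is what curbs cherry-picking: the ordering follows from the evidence-based scores rather than the decision maker's preference.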

  8. Bayesian analysis of heat pipe life test data for reliability demonstration testing

    International Nuclear Information System (INIS)

    Bartholomew, R.J.; Martz, H.F.

    1985-01-01

    The demonstration testing duration requirements to establish a quantitative measure of assurance of the expected lifetime of heat pipes were determined. The heat pipes are candidate devices for transporting heat generated in a nuclear reactor core to thermoelectric converters for use in a space-based electric power plant. A Bayesian analysis technique is employed, utilizing a limited Delphi survey and a geometric mean accelerated test criterion involving heat pipe power (P) and temperature (T). The resulting calculations indicate that considerable test savings can be achieved by employing the method, but development testing to determine heat pipe failure mechanisms should not be circumvented
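
    A minimal sketch of the kind of Bayesian lifetime calculation involved, assuming exponential lifetimes with a conjugate gamma prior on the failure rate (the paper's actual model, priors, and acceleration criterion are not reproduced here); the prior parameters and test exposure are hypothetical.

```python
# Gamma prior on failure rate lam; for exponential lifetimes, the posterior
# after observing k failures in T accumulated test hours is Gamma(a + k, b + T).
a0, b0 = 2.0, 1000.0                 # hypothetical prior: mean rate 2/1000 per hour
failures, total_hours = 0, 5000.0    # hypothetical demonstration test: no failures
a1, b1 = a0 + failures, b0 + total_hours
posterior_mean_rate = a1 / b1
mean_life = 1.0 / posterior_mean_rate
print(f"posterior mean rate {posterior_mean_rate:.6f}/h, mean life {mean_life:.0f} h")
```

    The prior (here standing in for the Delphi survey input) lets a shorter failure-free test support a given lifetime claim, which is the source of the test savings noted above.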

  9. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Science.gov (United States)

    2010-10-01

    ... Institute for Occupational Safety and Health, the American Society for Testing and Materials, and the American Public Health Association. (iii) Methods selected for air sampling and analysis shall be methods... stresses simulated; (B) How the model approximates the geohydrological framework of the assessment area; (C...

  10. Investigation on method of elasto-plastic analysis for piping system (benchmark analysis)

    International Nuclear Information System (INIS)

    Kabaya, Takuro; Kojima, Nobuyuki; Arai, Masashi

    2015-01-01

    This paper provides a method of elasto-plastic analysis for the practical seismic design of nuclear piping systems. JSME started a task to establish a method of elasto-plastic analysis for nuclear piping systems; benchmark analyses have been performed in the task to investigate such methods, and our company has participated in them. As a result, we settled on a method which accurately simulates the results of piping excitation tests. The recommended method of elasto-plastic analysis is as follows: 1) The elasto-plastic analysis is composed of a dynamic analysis of the piping system modeled with beam elements and a static analysis of the deformed elbow modeled with shell elements. 2) A bi-linear model is applied as the elasto-plastic property: the yield point is the standardized yield point multiplied by 1.2, the second gradient is 1/100 of Young's modulus, and kinematic hardening is used as the hardening rule. 3) The fatigue life is evaluated from the strain ranges obtained by the elasto-plastic analysis, using the rain flow method and the fatigue curves of previous studies. (author)
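
    Item 2) can be sketched as a uniaxial return-mapping stress update for a bi-linear model with kinematic hardening. The material constants below are hypothetical; the yield point is taken as 1.2 times a nominal yield stress and the second gradient as E/100, following the recommendation above.

```python
# Uniaxial return-mapping for a bi-linear model with kinematic hardening:
# E = Young's modulus, second gradient Et = E/100, yield = 1.2 * sigma_y0.
def bilinear_response(strains, E=200e3, sigma_y0=250.0):
    Et = E / 100.0
    H = E * Et / (E - Et)          # kinematic hardening modulus giving slope Et
    sy = 1.2 * sigma_y0            # standardized yield point multiplied by 1.2
    eps_p, back = 0.0, 0.0         # plastic strain and back stress
    out = []
    for eps in strains:
        trial = E * (eps - eps_p)
        f = abs(trial - back) - sy
        if f > 0:                  # plastic correction (return mapping)
            dgamma = f / (E + H)
            sign = 1.0 if trial - back > 0 else -1.0
            eps_p += dgamma * sign
            back += H * dgamma * sign
            trial = E * (eps - eps_p)
        out.append(trial)
    return out

# hypothetical monotonic strain history; post-yield slope should equal Et
s = bilinear_response([0.001, 0.002, 0.004])
print(s)
```

    The strain ranges produced by such a model over a seismic time history are what the rain flow counting in item 3) would then reduce to fatigue cycles.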

  11. 40 CFR 80.3 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Test methods. 80.3 Section 80.3... FUELS AND FUEL ADDITIVES General Provisions § 80.3 Test methods. The lead and phosphorus content of gasoline shall be determined in accordance with test methods set forth in the appendices to this part. [47...

  12. 40 CFR 63.1546 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true Test methods. 63.1546 Section 63.1546... Hazardous Air Pollutants for Primary Lead Smelting § 63.1546 Test methods. (a) The following procedure shall....1543(a)(1) through § 63.1543(a)(9) shall be determined according to the following test methods in...

  13. 30 CFR 27.31 - Testing methods.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Testing methods. 27.31 Section 27.31 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF MINING PRODUCTS METHANE-MONITORING SYSTEMS Test Requirements § 27.31 Testing methods. A methane...

  14. Testing sequential extraction methods for the analysis of multiple stable isotope systems from a bone sample

    Science.gov (United States)

    Sahlstedt, Elina; Arppe, Laura

    2017-04-01

    Stable isotope compositions of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carry important climatological and ecological information and are therefore widely used in paleontological and archaeological research. For stable isotope analysis, both phases, hydroxyapatite and collagen, have more or less well-established separation and analytical techniques. Recent developments in IRMS and wet chemical extraction methods have facilitated the analysis of very small bone fractions (500 μg or less of starting material) for the oxygen isotope composition of phosphate (PO4). However, the uniqueness and (pre)historical value of each archaeological and paleontological find leave precious little material available for stable isotope analyses, encouraging further development of microanalytical methods. Here we present the first results of developing extraction methods that combine collagen C- and N-isotope analyses with phosphate O-isotope analyses from a single bone sample fraction. We tested sequential extraction starting with dilute acid demineralization and collection of both the collagen and phosphate fractions, followed by a further purification step with H2O2 (phosphate fraction). First results show that bone sample separates as small as 2 mg may be analysed for their δ15N, δ13C and δ18OPO4 values. The method may be incorporated into detailed investigations of sequentially developing skeletal material such as teeth, potentially allowing the investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.

  15. A Teaching Method on Basic Chemistry for Freshman : Teaching Method with Pre-test and Post-test

    OpenAIRE

    立木, 次郎; 武井, 庚二

    2003-01-01

    This report deals with a teaching method for basic chemistry for freshmen. The teaching method contains guidance and instruction on how to understand basic chemistry. A pre-test and a post-test were put into practice each time, and each test was returned to the students in class in the following weeks.

  16. Advances in the analysis of pressure interference tests

    Energy Technology Data Exchange (ETDEWEB)

    Martinez R, N. [Petroleos Mexicanos, PEMEX, Mexico City (Mexico); Samaniego V, F. [Univ. Nacional Autonoma de Mexico (Mexico)

    2010-12-15

    This paper presented an extension for radial, linear, and spherical flow conditions of the El-Khatib method for analyzing pressure interference tests through utilization of the pressure derivative. Conventional analysis of interference tests considers only radial flow, but some reservoirs have physical field conditions in which linear or spherical flow conditions prevail. The INTERFERAN system, a friendly computer code for the automatic analysis of pressure interference tests, was also discussed and demonstrated by way of 2 field cases. INTERFERAN relies on the principle of superposition in time and space to interpret a test of several wells with variable histories of production or injection or both. The first field case addressed interference tests conducted in the naturally fractured geothermal field of Klamath Falls, and the second field case was conducted in a river-formed bed in which linear flow conditions are dominant. The analysis was deemed to be reliable. 13 refs., 1 tab., 7 figs.

  17. The same analysis approach: Practical protection against the pitfalls of novel neuroimaging analysis methods.

    Science.gov (United States)

    Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan

    2017-12-27

    Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven e.g. by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for the experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail, and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
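
    The simulated-null-data check suggested above can be sketched as follows: run the same decoding analysis (here a simple leave-one-out nearest-class-mean decoder) on data that carry no label information and verify that the accuracy is near chance. The decoder and data sizes are illustrative, not those of the paper.

```python
import random

def loo_decode(X, y):
    """Leave-one-out nearest-class-mean decoding accuracy."""
    hits = 0
    for i in range(len(X)):
        means = {}
        for cls in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == cls]
            means[cls] = [sum(col) / len(col) for col in zip(*rows)]
        pred = min(means, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(X[i], means[c])))
        hits += pred == y[i]
    return hits / len(X)

random.seed(1)
# simulated null data: the features carry no information about the labels
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(60)]
y = [i % 2 for i in range(60)]
acc = loo_decode(X, y)
print(f"null-data accuracy: {acc:.2f}")   # should hover near chance (0.5)
```

    If the same pipeline applied to null data produced accuracies systematically away from chance, that would flag a confound of the kind the Same Analysis Approach is designed to catch.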

  18. Study of the test method for prediction of air conditioning equipment seasonal performance

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, S.B.

    1980-05-01

    The test procedure, Method of Testing, Rating and Estimating the Seasonal Performance of Central Air-Conditioners and Heat Pumps Operating in the Cooling Mode, has been analyzed. The analysis of the test procedure had two main objectives: (1) to determine the validity of the test procedure; and (2) to determine whether there are alternate methods of obtaining the same results with less testing burden. Data were collected from industry and analyzed for significant trends. Conclusions are drawn about the energy efficiency ratios, degradation coefficients, and seasonal energy efficiency ratios. An error analysis was performed on the test procedure to determine the approximate error incurred when using it. A semi-empirical model assuming a first-order system response was developed to determine the factors that affect the part-load and cooling-load factors; the corresponding transient characteristics are then determined in terms of a single time constant. A thermostat demand cycle is used to determine the relationship between on-time and cycle-time. Recommendations are made regarding an alternate method for determining the seasonal energy efficiency ratio.
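
    A sketch of the standard part-load relation connecting the degradation coefficient to the seasonal rating, assuming the usual form PLF = 1 - CD·(1 - CLF) with SEER evaluated at a cooling-load factor of 0.5; the EER and CD values below are hypothetical.

```python
# Part-load factor from the degradation coefficient CD and cooling-load factor CLF.
def plf(clf, cd):
    return 1.0 - cd * (1.0 - clf)

# SEER approximated as the steady-state EER at the B test condition times the
# part-load factor at CLF = 0.5 (the conventional rating-point assumption).
def seer_estimate(eer_b, cd):
    return eer_b * plf(0.5, cd)

print(seer_estimate(10.0, 0.25))   # -> 8.75
```

    The single time constant of the first-order model above is what determines CD in practice: slower transient response means more cycling loss and a lower PLF.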

  19. Impact response analysis of cask for spent fuel by dimensional analysis and mode superposition method

    International Nuclear Information System (INIS)

    Kim, Y. J.; Kim, W. T.; Lee, Y. S.

    2006-01-01

    Full text: Due to the potential for accidents, the transportation safety of radioactive material has become extremely important. The most important means of accomplishing safety in the transportation of radioactive material is the integrity of the cask. A cask for spent fuel generally consists of a cask body and two impact limiters, attached at the upper and lower ends of the cask body. The cask must satisfy general requirements and test requirements for normal transport conditions and hypothetical accident conditions in accordance with IAEA regulations. Among the test requirements for hypothetical accident conditions, the 9 m drop test, in which the cask is dropped from a height of 9 m onto an unyielding surface to produce maximum damage, is a very important requirement because it can affect the structural soundness of the cask. So far, the impact response analysis for the 9 m drop test has been performed by the finite element method, which requires a complex computational procedure. In this study, empirical equations for the impact forces in the 9 m drop test are formulated by dimensional analysis, and these equations are then used to analyse the characteristics of the materials used for the impact limiters. The dynamic impact response of the cask body is analysed using the mode superposition method, and an analysis method is proposed. The results are validated by comparison with previous experimental results and finite element analysis results. The present method is simpler than the finite element method and can be used to predict the impact response of the cask
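
    The mode superposition step can be sketched for a simple 2-DOF system with unit masses: decompose the stiffness matrix into eigenmodes and sum each mode's response to an initial (impulsive) velocity. The stiffness values are hypothetical, and the cask-specific empirical impact forces from the dimensional analysis are not modelled.

```python
import math

# Mode superposition for a 2-DOF system with unit masses (M = I):
# K phi = w^2 phi; the response to an initial velocity v0 (impulsive loading) is
# u(t) = sum_i phi_i * (phi_i . v0 / w_i) * sin(w_i t).
def modes_2dof(k11, k12, k22):
    tr, det = k11 + k22, k11 * k22 - k12 * k12
    disc = math.sqrt(tr * tr - 4 * det)
    lams = [(tr - disc) / 2, (tr + disc) / 2]
    phis = []
    for lam in lams:
        v = (1.0, (lam - k11) / k12) if k12 else (
            (1.0, 0.0) if abs(lam - k11) < 1e-12 else (0.0, 1.0))
        n = math.hypot(*v)
        phis.append((v[0] / n, v[1] / n))
    return [math.sqrt(l) for l in lams], phis

def response(t, v0, k11, k12, k22):
    ws, phis = modes_2dof(k11, k12, k22)
    u = [0.0, 0.0]
    for w, phi in zip(ws, phis):
        q = (phi[0] * v0[0] + phi[1] * v0[1]) / w   # modal amplitude
        u[0] += phi[0] * q * math.sin(w * t)
        u[1] += phi[1] * q * math.sin(w * t)
    return u

# hypothetical stiffnesses; for small t the response approximates v0 * t
u = response(0.1, (1.0, 0.0), 2.0, -1.0, 2.0)
print(u)
```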

  20. 40 CFR 63.344 - Performance test requirements and test methods.

    Science.gov (United States)

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  1. Novel approach in quantitative analysis of shearography method

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2002-01-01

    The application of laser interferometry in industrial non-destructive testing and material characterization is becoming more prevalent, since this method provides non-contact, full-field inspection of the test object. However, its application has been limited to qualitative analysis; the current trend is to extend the method with quantitative analysis, which attempts to characterize the examined defect in detail, and to design for the range of object sizes to be examined. The growing commercial demand for quantitative analysis for NDT and material characterization is driving the quality of optical and analysis instruments. However, very little attention is currently being paid to understanding, quantifying, and compensating for the numerous error sources that are a function of the interferometer. This paper presents a comparison of measurement analysis using the established theoretical approach and a new approach that takes into account the divergence of the illumination and other geometrical factors. The differences between the measurement systems can be associated with these error factors. (Author)

  2. Test Capability of Comparative NAA Method in Analysis of Long Lived Element in SRM 1648

    International Nuclear Information System (INIS)

    Sri-Wardani

    2005-01-01

    The comparative NAA method was examined through the analysis of the long-lived elements in the air particulate sample NIST SRM 1648, in order to evaluate the capability of the comparative NAA method used at P2TRR. The analysis determined the elements contained in the sample, namely Sc, Co, Zn, Br, Rb, Sb, Hf and Th, with optimum results within a bias of 10%. The optimum results for the long-lived elements were obtained with good accuracy and precision. The analysis data show that the comparative NAA method, with the Gamma Trac and APTEC software, is capable of analyzing several kinds of elements in environmental samples. Therefore, this method could be implemented for biological and health-related samples. (author)

  3. 7 CFR 58.644 - Test methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Test methods. 58.644 Section 58.644 Agriculture... Procedures § 58.644 Test methods. (a) Microbiological. Microbiological determinations shall be made in accordance with the methods described in the latest edition of Standard Methods for the Examination of Dairy...

  4. Research on friction coefficient of nuclear Reactor Vessel Internals Hold Down Spring: Stress coefficient test analysis method

    International Nuclear Information System (INIS)

    Linjun, Xie; Guohong, Xue; Ming, Zhang

    2016-01-01

    Graphical abstract: HDS stress coefficient test apparatus. - Highlights: • This paper performs a mathematical derivation for the physical model of the Hold Down Spring (HDS), establishes a mathematical model of the axial load P and the stress, the stress coefficient and the friction coefficient, and designs a set of test apparatuses for simulating the pretightening process of the HDS, for the first time, according to a model similarity criterion. • The mathematical relation between the load and the strain of the HDS is obtained, and the mathematical model of the stress coefficient and the friction coefficient is established. A set of test apparatuses for obtaining the stress coefficient is designed according to the model scaling criterion, and the friction coefficient of the K1000 HDS is calculated to be 0.336 from the obtained stress coefficient. • The relation curve between the theoretical load and the friction coefficient, obtained through analysis, indicates that a change in the friction coefficient f would influence the pretightening load under the designed stress condition. The necessary pretightening load in the design process is calculated to be 5469 kN from the obtained friction coefficient. Therefore, the friction coefficient and the pretightening load under the design conditions can provide accurate pretightening data for the analysis and design of the reactor HDS. - Abstract: This paper performs a mathematical derivation for the physical model of the Hold Down Spring (HDS), establishes a mathematical model of the axial load P and the stress, the stress coefficient and the friction coefficient, and designs a set of test apparatuses for simulating the pretightening process of the HDS, for the first time, according to a model similarity criterion. By carrying out tests through a stress testing technique, P–σ curves in the loading and unloading processes of the HDS are obtained and the stress coefficient k_f of the HDS is obtained. So, the

  5. Research on friction coefficient of nuclear Reactor Vessel Internals Hold Down Spring: Stress coefficient test analysis method

    Energy Technology Data Exchange (ETDEWEB)

    Linjun, Xie, E-mail: linjunx@zjut.edu.cn [College of Mechanical Engineering, Zhejiang University of Technology, Hangzhou 310014 (China); Guohong, Xue; Ming, Zhang [Shanghai Nuclear Engineering Research & Design Institute, Shanghai 200233 (China)

    2016-08-01

    Graphical abstract: HDS stress coefficient test apparatus. - Highlights: • This paper performs a mathematical derivation for the physical model of the Hold Down Spring (HDS), establishes a mathematical model of the axial load P and the stress, the stress coefficient and the friction coefficient, and designs a set of test apparatuses for simulating the pretightening process of the HDS, for the first time, according to a model similarity criterion. • The mathematical relation between the load and the strain of the HDS is obtained, and the mathematical model of the stress coefficient and the friction coefficient is established. A set of test apparatuses for obtaining the stress coefficient is designed according to the model scaling criterion, and the friction coefficient of the K1000 HDS is calculated to be 0.336 from the obtained stress coefficient. • The relation curve between the theoretical load and the friction coefficient, obtained through analysis, indicates that a change in the friction coefficient f would influence the pretightening load under the designed stress condition. The necessary pretightening load in the design process is calculated to be 5469 kN from the obtained friction coefficient. Therefore, the friction coefficient and the pretightening load under the design conditions can provide accurate pretightening data for the analysis and design of the reactor HDS. - Abstract: This paper performs a mathematical derivation for the physical model of the Hold Down Spring (HDS), establishes a mathematical model of the axial load P and the stress, the stress coefficient and the friction coefficient, and designs a set of test apparatuses for simulating the pretightening process of the HDS, for the first time, according to a model similarity criterion. By carrying out tests through a stress testing technique, P–σ curves in the loading and unloading processes of the HDS are obtained and the stress coefficient k_f of the HDS is obtained. So, the

  6. Reliability test and failure analysis of high power LED packages

    International Nuclear Information System (INIS)

    Chen Zhaohui; Zhang Qin; Wang Kai; Luo Xiaobing; Liu Sheng

    2011-01-01

    A new application-specific light emitting diode (LED) package (ASLP) with a freeform polycarbonate lens for street lighting is developed, whose manufacturing processes are compatible with a typical LED packaging process. The reliability test methods and failure criteria from different vendors are reviewed and compared. It is found that test methods and failure criteria differ widely; rapid reliability assessment standards are urgently needed for the LED industry. An 85 °C/85% RH test at 700 mA is used to test our LED modules alongside those of three other vendors for 1000 h, showing no visible degradation in optical performance for our modules, while the modules of two other vendors show significant degradation. Failure analysis methods such as C-SAM, nano X-ray CT and optical microscopy are applied to the LED packages. Failure mechanisms such as delaminations and cracks are detected in the LED packages after the accelerated reliability testing. The finite element simulation method is helpful for the failure analysis and reliability design of LED packaging. One example shows that a module currently used in industry is vulnerable and may not easily pass harsh thermal cycle testing. (semiconductor devices)

  7. A Fuzzy Logic Based Method for Analysing Test Results

    Directory of Open Access Journals (Sweden)

    Le Xuan Vinh

    2017-11-01

    Full Text Available Network operators must perform many tasks to ensure smooth operation of the network, such as planning, monitoring, etc. Among those tasks, regular testing of network performance and network errors and troubleshooting is very important. Meaningful test results allow the operators to evaluate network performance, identify any shortcomings and better plan for network upgrades. Due to the diverse and mainly unquantifiable nature of network testing results, there is a need to develop a method for systematically and rigorously analysing these results. In this paper, we present STAM (System Test-result Analysis Method), which employs a bottom-up hierarchical processing approach using fuzzy logic. STAM is capable of combining all test results into a quantitative description of the network performance in terms of network stability, the significance of various network errors, and the performance of each function block within the network. The validity of this method has been successfully demonstrated in assisting the testing of a VoIP system at the Research Institute of Post and Telecoms in Vietnam. The paper is organized as follows. The first section gives an overview of the fuzzy logic theory whose concepts will be used in the development of STAM. The next section describes STAM. The last section, demonstrating STAM’s capability, presents a success story in which STAM is successfully applied.

  8. Well Test Analysis of Naturally Fractured Vuggy Reservoirs with an Analytical Triple Porosity – Double Permeability Model and a Global Optimization Method

    Directory of Open Access Journals (Sweden)

    Gómez Susana

    2014-07-01

    Full Text Available The aim of this work is to study the automatic characterization of Naturally Fractured Vuggy Reservoirs via well test analysis, using a triple porosity-dual permeability model. The inter-porosity flow parameters, the storativity ratios, the permeability ratio, the wellbore storage effect, the skin and the total permeability are identified as parameters of the model. In this work, we perform the well test interpretation in Laplace space, using numerical algorithms to transfer the discrete real data given in fully dimensional time to Laplace space. The well test interpretation problem in Laplace space is posed as a nonlinear least squares optimization problem with box constraints and a linear inequality constraint, which is usually solved using local Newton-type methods with a trust region. However, local methods, such as the one used in our work, called TRON, or the well-known Levenberg-Marquardt method, are often unable to find an optimal solution with a good fit of the data. Moreover, well test analysis with the triple porosity-double permeability model, like most inverse problems, can yield multiple solutions with a good match to the data. To deal with these specific characteristics, we use a global optimization algorithm called the Tunneling Method (TM). In the design of the algorithm, we take into account features of the problem such as the need for high-precision parameter estimation, the presence of noise in the measurements and the need to solve the problem computationally fast. We demonstrate that the TM is an efficient and robust alternative for the well test characterization, as several optimal solutions with a very good match to the data were obtained.
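The bounded least-squares fitting described above can be sketched in a few lines. This is a hedged illustration only: the placeholder model is a simple semilog drawdown curve, not the paper's triple porosity-dual permeability solution, and a plain multistart loop stands in for the Tunneling Method.

```python
# Sketch: bounded nonlinear least squares with multistart, standing in
# for the global (Tunneling Method) search described in the abstract.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def model(params, t):
    # Placeholder single-porosity drawdown model (NOT the paper's
    # triple porosity-dual permeability solution): p = a*log(t) + b.
    a, b = params
    return a * np.log(t) + b

t = np.logspace(0, 3, 50)
p_obs = model([2.0, 5.0], t) + rng.normal(0, 0.05, t.size)

def residuals(params):
    return model(params, t) - p_obs

lb, ub = [0.1, 0.0], [10.0, 20.0]       # box constraints on the parameters
best = None
for _ in range(10):                     # multistart over the box
    x0 = rng.uniform(lb, ub)
    fit = least_squares(residuals, x0, bounds=(lb, ub))
    if best is None or fit.cost < best.cost:
        best = fit

print(best.x)  # close to the true parameters [2.0, 5.0]
```

A real implementation would replace the placeholder model with the Laplace-space reservoir solution and a numerical inverse transform, but the optimization scaffolding is the same.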

  9. Development of Ultraviolet Spectrophotometric Method for Analysis ...

    African Journals Online (AJOL)

    HP

    Method for Analysis of Lornoxicam in Solid Dosage Forms. Sunit Kumar Sahoo ... testing. Mean recovery was 100.82% for tablets. Low values of % RSD indicate .... Saharty E, Refaat YS, Khateeb ME. Stability-Indicating Spectrophotometric.

  10. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    International Nuclear Information System (INIS)

    Anggraeni, Novia Antika

    2015-01-01

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially for a volcano with inhabited slopes such as Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One of the methods that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For data quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the prediction. The graph analysis shows that the predicted eruption times deviate from the actual eruption times by between −2.86 and 5.49 days

  11. Seismic energy data analysis of Merapi volcano to test the eruption time prediction using materials failure forecast method (FFM)

    Energy Technology Data Exchange (ETDEWEB)

    Anggraeni, Novia Antika, E-mail: novia.antika.a@gmail.com [Geophysics Sub-department, Physics Department, Faculty of Mathematic and Natural Science, Universitas Gadjah Mada. BLS 21 Yogyakarta 55281 (Indonesia)

    2015-04-24

    The test of eruption time prediction is an effort to prepare volcanic disaster mitigation, especially for a volcano with inhabited slopes such as Merapi Volcano. The test can be conducted by observing increases in volcanic activity, such as seismicity, deformation and SO2 gas emission. One of the methods that can be used to predict the time of eruption is the Materials Failure Forecast Method (FFM), a predictive method for determining the time of volcanic eruption introduced by Voight (1988). This method requires an increase in the rate of change, or acceleration, of the observed volcanic activity parameters. The parameter used in this study is the seismic energy of Merapi Volcano from 1990 to 2012. The data were plotted as graphs of the inverse seismic energy rate versus time, and the FFM graphical technique was applied using simple linear regression. For data quality control, the correlation coefficient of the inverse seismic energy rate versus time was used to increase the precision of the prediction. The graph analysis shows that the predicted eruption times deviate from the actual eruption times by between −2.86 and 5.49 days.
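The FFM graphical technique described above can be sketched in a few lines: regress the inverse of an activity rate against time and extrapolate to its zero crossing, which estimates the failure (eruption) time. The data below are synthetic and noise-free, not Merapi observations.

```python
# Sketch of the FFM graphical technique: the inverse rate of an
# accelerating precursor is roughly linear in time, and its
# extrapolated zero crossing estimates the failure time.
import numpy as np

t = np.arange(0.0, 9.0)           # days of observation
true_failure = 10.0
rate = 1.0 / (true_failure - t)   # accelerating rate: 1/rate is linear in t
inv_rate = 1.0 / rate

slope, intercept = np.polyfit(t, inv_rate, 1)   # simple linear regression
t_predicted = -intercept / slope                # time where 1/rate reaches zero
print(round(t_predicted, 2))                    # 10.0 for this noise-free example
```

With real, noisy data the correlation coefficient of the fit serves as the quality-control measure mentioned in the abstract.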

  12. Gene set analysis using variance component tests.

    Science.gov (United States)

    Huang, Yen-Tsung; Lin, Xihong

    2013-06-28

    Gene set analyses have become increasingly important in genomic research, as many complex diseases are jointly influenced by alterations of numerous genes. Genes often coordinate as a functional repertoire, e.g., a biological pathway/network, and are highly correlated. However, most of the existing gene set analysis methods do not fully account for the correlation among the genes. Here we propose to exploit this important feature of a gene set to improve statistical power in gene set analyses. We propose to model the effects of an independent variable, e.g., exposure/biological status (yes/no), on multiple gene expression values in a gene set using a multivariate linear regression model, where the correlation among the genes is explicitly modeled using a working covariance matrix. We develop TEGS (Test for the Effect of a Gene Set), a variance component test for the gene set effects that assumes a common distribution for the regression coefficients in the multivariate linear regression model, and calculate the p-values using permutation and a scaled chi-square approximation. We show using simulations that the type I error is protected under different choices of working covariance matrices and that power improves as the working covariance approaches the true covariance. The global test is a special case of TEGS in which the correlation among genes in a gene set is ignored. Using both simulated data and a published diabetes dataset, we show that our test outperforms the commonly used approaches, the global test and gene set enrichment analysis (GSEA). We develop a gene set analysis method (TEGS) under the multivariate regression framework, which directly models the interdependence of the expression values in a gene set using a working covariance. TEGS outperforms two widely used methods, GSEA and the global test, in both simulations and a diabetes microarray dataset.
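The permutation approach mentioned above can be illustrated with a much simpler statistic than TEGS itself. The sketch below uses the sum of squared per-gene mean differences on synthetic data; it is not the variance-component statistic with a working covariance that the paper develops.

```python
# Illustrative permutation test for a gene-set effect: permute group
# labels to build a null distribution for a set-level statistic.
import numpy as np

rng = np.random.default_rng(1)
n, g = 40, 10                                  # samples, genes in the set
group = np.repeat([0, 1], n // 2)              # exposure status (yes/no)
expr = rng.normal(size=(n, g))                 # expression matrix
expr[group == 1, :3] += 1.0                    # 3 genes truly differential

def stat(y, x):
    # Simple set-level statistic: sum of squared per-gene mean differences
    diff = y[x == 1].mean(axis=0) - y[x == 0].mean(axis=0)
    return (diff ** 2).sum()

observed = stat(expr, group)
perms = [stat(expr, rng.permutation(group)) for _ in range(999)]
p_value = (1 + sum(s >= observed for s in perms)) / 1000
print(p_value)  # small, since the set contains true signal
```

The "+1" correction in the p-value keeps the estimate valid under permutation (the observed arrangement counts as one permutation).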

  13. Horizontal crash testing and analysis of model flatrols

    International Nuclear Information System (INIS)

    Dowler, H.J.; Soanes, T.P.T.

    1985-01-01

    To assess the behaviour of a full scale flask and flatrol during a proposed demonstration impact into a tunnel abutment, a mathematical modelling technique was developed and validated. The work was performed at quarter scale and comprised both scale model tests and mathematical analysis in one and two dimensions. Good agreement between the model test results of the 26.8 m/s (60 mph) abutment impacts and the mathematical analysis validated the modelling techniques. The modelling method may be used with confidence to predict the outcome of the proposed full scale demonstration. (author)

  14. 30 CFR 36.41 - Testing methods.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Testing methods. 36.41 Section 36.41 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR TESTING, EVALUATION, AND APPROVAL OF... Requirements § 36.41 Testing methods. Mobile diesel-powered transportation equipment submitted for...

  15. [A study of biomechanical method for urine test based on color difference estimation].

    Science.gov (United States)

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Zhou, Fengkun

    2008-02-01

    The biochemical analysis of urine is an important inspection and diagnosis method in hospitals. Conventional urine analysis relies mainly on colorimetric visual appraisal and automated detection; the visual appraisal technique has been largely superseded, and automated detection is now standard in hospitals. However, the price of a urine biochemical analyzer on the market is around twenty thousand RMB yuan (¥), which puts it out of reach of ordinary families. A computer vision system is not subject to the physiological and psychological influences of a human observer; its appraisal standard is objective and stable. Therefore, based on color theory, we have established a computer vision system that carries out collection, management, display and appraisal of the color difference between the standard threshold color and the color of the urine test paper after reaction with the urine sample, from which the severity of an illness can be judged accurately. In this paper, we introduce this urine biochemical test analysis method, which is new and suitable for use in the home. Experimental results show that the method is easy to use and cost-effective, can support monitoring over a whole course of treatment, and can find extensive applications.
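The color-difference idea behind such a reader can be sketched as a nearest-reference classification. The reference colors and levels below are hypothetical stand-ins, not the calibration values used in the study.

```python
# Minimal sketch: classify a measured test-pad color by Euclidean
# distance to a set of reference threshold colors in RGB space.
import math

reference = {                     # hypothetical calibration colors (RGB)
    "negative": (250, 245, 200),
    "trace":    (220, 210, 150),
    "positive": (160, 120, 60),
}

def color_distance(c1, c2):
    return math.dist(c1, c2)     # Euclidean distance in RGB space

def classify(measured):
    # Pick the reference level whose color is closest to the measurement
    return min(reference, key=lambda k: color_distance(measured, reference[k]))

print(classify((225, 205, 155)))  # "trace"
```

A production system would work in a perceptually uniform color space (e.g. CIELAB with a ΔE metric) rather than raw RGB, but the nearest-reference logic is the same.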

  16. Authentication Test-Based the RFID Authentication Protocol with Security Analysis

    Directory of Open Access Journals (Sweden)

    Minghui Wang

    2014-08-01

    Full Text Available Many recently proposed RFID authentication protocols have soon been found to have security holes. We analyzed the main reason: the protocol designs are not rigorous, and the correctness of the protocols cannot be guaranteed. To this end, the authentication test method was adopted for the formal analysis and strict proof of the RFID protocol proposed in this paper. Authentication Test is a new type of analysis and design method for security protocols based on the Strand space model, and it can be used for most types of security protocols. The security analysis shows that the proposed protocol meets the RFID security demands: information confidentiality, data integrity and identity authentication.

  17. Seismic analysis and testing of nuclear power plants

    International Nuclear Information System (INIS)

    1979-01-01

    The following subjects are discussed in this guide: General Recommendations for seismic classification, loading combinations and allowable limits; seismic analysis methods; implications for seismic design; seismic testing and qualification; seismic instrumentation; modelling techniques; material property characterization; seismic response of soil deposits and earth structures; liquefaction and ground failure; slope stability; sloshing effects in water pools; qualification testing by means of the transport vehicle

  18. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method

    OpenAIRE

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-01-01

    Objective: To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. Methods: TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Results: Both assays provided good linearity, accuracy, reproducibility and selectivity for dete...

  19. Standard Test Methods for Wet Insulation Integrity Testing of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 These test methods provide procedures to determine the insulation resistance of a photovoltaic (PV) module, i.e. the electrical resistance between the module's internal electrical components and its exposed, electrically conductive, non-current carrying parts and surfaces. 1.2 The insulation integrity procedures are a combination of wet insulation resistance and wet dielectric voltage withstand test procedures. 1.3 These procedures are similar to and reference the insulation integrity test procedures described in Test Methods E 1462, with the difference being that the photovoltaic module under test is immersed in a wetting solution during the procedures. 1.4 These test methods do not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of these test methods. 1.5 The values stated in SI units are to be regarded as the standard. 1.6 There is no similar or equivalent ISO standard. 1.7 This standard does not purport to address all of the safety conce...

  20. Winston-Lutz Test: A quantitative analysis; Teste de Winston-Lutz: uma analise quantitativa

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Aline Garcia; Nandi, Dorival Menegaz; Saraiva, Crystian Wilian Chagas, E-mail: dnandi@gmail.com [Universidade Federal de Santa Catarina (UFSC), Florianopolis (Brazil); Instituto Federal de Santa Catarina (IFSC), Florianopolis (Brazil); Hospital do Coracao, Sao Paulo, SP (Brazil)

    2017-11-01

    Objective: To describe a method of quantitative analysis for the Winston-Lutz test. Materials and methods: The research is a qualitative exploratory study. The materials used were: portal film; Winston-Lutz test tools; and Omni Pro software. Sixteen portal films were used as samples and were analyzed by five different technicians to measure the deviation between the radiation and mechanical isocenters. Results: Two combinations with offset values greater than 1 mm were identified. In addition, when the developed method was compared with the one previously studied, the data obtained were very close, with a maximum percentage deviation of 32.5%, which demonstrates its efficacy in reducing dependence on the performer. Conclusion: The results show that the method is reproducible and practical, which constitutes one of the fundamental factors for its implementation. (author)

  1. Myasthenia Gravis: Tests and Diagnostic Methods

    Science.gov (United States)

    Test & Diagnostic methods: In addition to a complete medical and neurological ...

  2. Development of Dissolution Test Method for Drotaverine ...

    African Journals Online (AJOL)

    Development of Dissolution Test Method for Drotaverine ... Methods: Sink conditions, drug stability and specificity in different dissolution media were tested to optimize a dissolution test .... test by Prism 4.0 software, and differences between ...

  3. New method of analyzing well tests in fractured wells using sandface pressure and rate data

    Energy Technology Data Exchange (ETDEWEB)

    Osman, M.; Almehaideb, R.; Abou-Kassem, J. [U.A.E. University, Al-Ain (United Arab Emirates)

    1998-05-01

    Analysis of variable flow rate tests has been of special interest recently because in many cases it is impractical to keep a flow rate constant long enough to perform a drawdown test. Further, in many other drawdown and buildup tests, the early data were influenced by wellbore storage effects, and the duration of these effects could be quite long for low-permeability reservoirs. This paper presents a mathematical model which describes drawdown and buildup tests in hydraulically fractured wells. This new method uses a specialized plot approach to analyze the linear flow data and combines it with the superposition of constant-rate solution method for the analysis of pseudoradial flow data. It does not require prior knowledge of the fracture type (uniform-flux or infinite-conductivity); in fact it predicts the fracture type. This method is useful for the analysis of simultaneously measured downhole pressure and sandface rate data. 12 refs., 11 figs., 3 tabs.

  4. New Graphical Methods and Test Statistics for Testing Composite Normality

    Directory of Open Access Journals (Sweden)

    Marc S. Paolella

    2015-07-01

    Full Text Available Several graphical methods for testing univariate composite normality from an i.i.d. sample are presented. They are endowed with correct simultaneous error bounds and yield size-correct tests. As all are based on the empirical CDF, they are also consistent for all alternatives. For one test, called the modified stabilized probability test, or MSP, a highly simplified computational method is derived, which delivers the test statistic and also a highly accurate p-value approximation, essentially instantaneously. The MSP test is demonstrated to have higher power against asymmetric alternatives than the well-known and powerful Jarque-Bera test. A further size-correct test, based on combining two test statistics, is shown to have yet higher power. The methodology employed is fully general and can be applied to any i.i.d. univariate continuous distribution setting.
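The Jarque-Bera benchmark mentioned above is available in scipy; the MSP test itself is not, so the sketch below only shows the Jarque-Bera side on a symmetric and an asymmetric sample.

```python
# Applying the Jarque-Bera normality test (the benchmark the MSP test
# is compared against) to a normal and an asymmetric sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
normal_sample = rng.normal(size=500)
skewed_sample = rng.exponential(size=500)   # asymmetric alternative

jb_norm = stats.jarque_bera(normal_sample)
jb_skew = stats.jarque_bera(skewed_sample)
# The skewed sample yields a far larger statistic (smaller p-value),
# since Jarque-Bera is driven by sample skewness and excess kurtosis.
print(f"normal p={jb_norm.pvalue:.3f}, skewed p={jb_skew.pvalue:.3g}")
```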

  5. Standard test methods for arsenic in uranium hexafluoride

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 These test methods are applicable to the determination of total arsenic in uranium hexafluoride (UF6) by atomic absorption spectrometry. Two test methods are given: Test Method A—Arsine Generation-Atomic Absorption (Sections 5-10), and Test Method B—Graphite Furnace Atomic Absorption (Appendix X1). 1.2 The test methods are equivalent. The limit of detection for each test method is 0.1 μg As/g U when using a sample containing 0.5 to 1.0 g U. Test Method B does not have the complete collection details for precision and bias data thus the method appears as an appendix. 1.3 Test Method A covers the measurement of arsenic in uranyl fluoride (UO2F2) solutions by converting arsenic to arsine and measuring the arsine vapor by flame atomic absorption spectrometry. 1.4 Test Method B utilizes a solvent extraction to remove the uranium from the UO2F2 solution prior to measurement of the arsenic by graphite furnace atomic absorption spectrometry. 1.5 Both insoluble and soluble arsenic are measured when UF6 is...

  6. 40 CFR 63.465 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 9 2010-07-01 2010-07-01 false Test methods. 63.465 Section 63.465... Halogenated Solvent Cleaning § 63.465 Test methods. (a) Except as provided in paragraphs (f) and (g) of this... Reference Method 307 in appendix A of this part. (b) Except as provided in paragraph (g) of this section for...

  7. [Comparison of application of Cochran-Armitage trend test and linear regression analysis for rate trend analysis in epidemiology study].

    Science.gov (United States)

    Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H

    2017-05-10

    We described the time trend of acute myocardial infarction (AMI) incidence in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value < linear regression P value). The statistical power of the CAT test decreased, while the result of linear regression analysis remained the same, when the population size was reduced by a factor of 100 and the AMI incidence rate remained unchanged. The two statistical methods have their respective advantages and disadvantages. It is necessary to choose the statistical method according to how well it fits the data, or to comprehensively analyze the results of both methods.
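The Cochran-Armitage trend test is short enough to implement directly. The sketch below uses equally spaced scores and synthetic counts, not the Tianjin AMI data.

```python
# From-scratch Cochran-Armitage trend test for a linear trend in
# proportions across ordered groups (e.g. incidence by year).
import math

def cochran_armitage(cases, totals, scores=None):
    k = len(cases)
    scores = scores or list(range(k))          # default: equally spaced scores
    N = sum(totals)
    p = sum(cases) / N                         # pooled proportion
    # Trend statistic: score-weighted deviations from the pooled proportion
    T = sum(s * (r - n * p) for s, r, n in zip(scores, cases, totals))
    s_bar = sum(s * n for s, n in zip(scores, totals)) / N
    var = p * (1 - p) * sum(n * (s - s_bar) ** 2 for s, n in zip(scores, totals))
    z = T / math.sqrt(var)
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, standard normal
    return z, p_value

# Rising event counts across five equally sized populations
z, p = cochran_armitage(cases=[10, 15, 22, 30, 41], totals=[1000] * 5)
print(round(z, 2), p < 0.001)
```

As the abstract notes, the test works on counts relative to the population at risk, so its power shrinks with population size even when the rates themselves are unchanged.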

  8. Usage of Latent Class Analysis in Diagnostic Microbiology in the Absence of Gold Standard Test

    Directory of Open Access Journals (Sweden)

    Gul Bayram Abiha

    2016-12-01

    Full Text Available The evaluation of the performance of various diagnostic tests in the absence of a gold standard is an important problem. Latent class analysis (LCA) is a statistical method that has been known for many years and, especially in the absence of a gold standard for the evaluation of diagnostic tests, has found a wide area of application. During the last decade, the LCA method has been widely used for determining the sensitivity and specificity of different microbiological tests. It has been investigated in the diagnosis of Mycobacterium tuberculosis, Mycobacterium bovis, human papilloma virus, Bordetella pertussis, influenza viruses, hepatitis E virus (HEV), hepatitis C virus (HCV) and various other viral infections. Researchers have compared several diagnostic tests for the diagnosis of different pathogens with LCA. We aimed to evaluate the performance of the LCA method as used in microbiological diagnosis across several studies. Taking all of these test results into account, we consider LCA a good statistical method for assessing the performance of different tests in the absence of a gold standard. [Archives Medical Review Journal 2016; 25(4): 467-488]
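A minimal version of the LCA idea can be sketched as a two-class EM fit for three conditionally independent binary tests, the setting described above when no gold standard exists. Everything below (data, starting values, parameter names) is an illustrative assumption, not taken from the article.

```python
# Toy two-class latent class model for three conditionally independent
# binary diagnostic tests, fitted by expectation-maximization (EM).
import numpy as np

rng = np.random.default_rng(7)

# Simulate 2000 subjects: 30% diseased; per-test positivity probabilities
true_prev = 0.3
sens = np.array([0.90, 0.85, 0.80])   # P(test + | diseased)
fpr = np.array([0.10, 0.05, 0.15])    # P(test + | healthy)
n = 2000
disease = rng.random(n) < true_prev
X = (rng.random((n, 3)) < np.where(disease[:, None], sens, fpr)).astype(float)

# EM: estimate prevalence and class-conditional positivity probabilities
prev, p1, p0 = 0.5, np.full(3, 0.6), np.full(3, 0.4)
for _ in range(300):
    l1 = prev * np.prod(p1 ** X * (1 - p1) ** (1 - X), axis=1)
    l0 = (1 - prev) * np.prod(p0 ** X * (1 - p0) ** (1 - X), axis=1)
    w = l1 / (l1 + l0)                # E-step: posterior P(diseased | results)
    prev = w.mean()                   # M-step: update parameters
    p1 = (w[:, None] * X).sum(axis=0) / w.sum()
    p0 = ((1 - w)[:, None] * X).sum(axis=0) / (1 - w).sum()

print(round(prev, 2), np.round(p1, 2))  # near 0.3 and the true sensitivities
```

With three tests and two classes the model is just identified (7 free parameters against 7 degrees of freedom in the 2×2×2 table), which is why at least three tests are usually required for this kind of analysis.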

  9. Dissolution testing of intermediary products in uranium dioxide production by the sol-gel method

    International Nuclear Information System (INIS)

    Melichar, F.; Landspersky, H.; Urbanek, V.

    1979-01-01

    A method was developed for dissolving polyuranates and uranium dioxides in sulphuric acid and in carbonate solutions for testing intermediate products in the sol-gel preparation of uranium dioxide. A detailed granulometric analysis of the spherical particle dispersion was included as part of the tests. Two different production methods were used for the two types of studied materials. The test results show that the method is suitable for determining the temperature sensitivity of the materials to the dissolution reaction. The geometrical distribution of impurities in the spherical particles can be determined from the dissolution kinetics. The method allows the determination of the effect of carbon from impurities on the process of uranium dioxide leaching and is thus applicable for testing materials prepared by the sol-gel method. (Z.M.)

  10. Application of Software Safety Analysis Methods

    International Nuclear Information System (INIS)

    Park, G. Y.; Hur, S.; Cheon, S. W.; Kim, D. H.; Lee, D. Y.; Kwon, K. C.; Lee, S. J.; Koo, Y. H.

    2009-01-01

    A fully digitalized reactor protection system, which is called the IDiPS-RPS, was developed through the KNICS project. The IDiPS-RPS has four redundant and separated channels. Each channel is mainly composed of a group of bistable processors which redundantly compare process variables with their corresponding setpoints and a group of coincidence processors that generate a final trip signal when a trip condition is satisfied. Each channel also contains a test processor called the ATIP and a display and command processor called the COM. All the functions were implemented in software. During the development of the safety software, various software safety analysis methods were applied, in parallel to the verification and validation (V and V) activities, along the software development life cycle. The software safety analysis methods employed were the software hazard and operability (Software HAZOP) study, the software fault tree analysis (Software FTA), and the software failure modes and effects analysis (Software FMEA)

  11. A new method for flight test determination of propulsive efficiency and drag coefficient

    Science.gov (United States)

    Bull, G.; Bridges, P. D.

    1983-01-01

    A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.

  12. Examining Method Effect of Synonym and Antonym Test in Verbal Abilities Measure

    Directory of Open Access Journals (Sweden)

    Wahyu Widhiarso

    2015-08-01

    Full Text Available Many researchers have assumed that different methods can be substituted to measure the same attributes in assessment. Various models have been developed to accommodate the amount of variance attributable to the methods, but application of these models in empirical research is rare. The present study applied one of those models to examine whether method effects were present in synonym and antonym tests. Study participants were 3,469 applicants to graduate school. The instrument used was the Graduate Academic Potential Test (PAPS), which includes synonym and antonym questions to measure verbal abilities. Our analysis showed that a measurement model using correlated trait–correlated methods minus one, CT-C(M–1), which separates trait and method effects into distinct latent constructs, yielded slightly better values for multiple goodness-of-fit indices than a one-factor model. However, for both the synonym and antonym items, the proportion of variance accounted for by the method is smaller than the trait variance. The correlation between the factor scores of both methods is high (r = 0.994). These findings confirm that synonym and antonym tests represent the same attribute, so the two cannot be treated as unique methods for measuring verbal ability.

  13. Examining Method Effect of Synonym and Antonym Test in Verbal Abilities Measure.

    Science.gov (United States)

    Widhiarso, Wahyu; Haryanta

    2015-08-01

    Many researchers have assumed that different methods can be substituted to measure the same attributes in assessment. Various models have been developed to accommodate the amount of variance attributable to the methods, but application of these models in empirical research is rare. The present study applied one of those models to examine whether method effects were present in synonym and antonym tests. Study participants were 3,469 applicants to graduate school. The instrument used was the Graduate Academic Potential Test (PAPS), which includes synonym and antonym questions to measure verbal abilities. Our analysis showed that a measurement model using correlated trait-correlated methods minus one, CT-C(M-1), which separates trait and method effects into distinct latent constructs, yielded slightly better values for multiple goodness-of-fit indices than a one-factor model. However, for both the synonym and antonym items, the proportion of variance accounted for by the method is smaller than the trait variance. The correlation between the factor scores of both methods is high (r = 0.994). These findings confirm that synonym and antonym tests represent the same attribute, so the two cannot be treated as unique methods for measuring verbal ability.

  14. Direct methods of soil-structure interaction analysis for earthquake loadings(II)

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Chung Bang; Lee, S R; Kim, J M; Park, K L; Oh, S B; Choi, J S; Kim, Y S [Korea Advanced Institute of Science Technology, Daejeon (Korea, Republic of)

    1994-07-15

    In this study, methods for 3-D soil-structure interaction analysis have been studied. They are 3-D axisymmetric analysis method, 3-D axisymmetric finite element method incorporating infinite elements, and 3-D boundary element methods. The computer code, named as 'KIESSI - PF', has been developed which is based on the 3-D axisymmetric finite element method coupled with infinite element method. It is able to simulate forced vibration test results of a soil-structure interaction system. The Hualien FVT post-correlation analysis before backfill and the blind prediction analysis after backfill have been carried out using the developed computer code 'KIESSI - PF'.

  15. Direct methods of soil-structure interaction analysis for earthquake loadings(II)

    International Nuclear Information System (INIS)

    Yun, Chung Bang; Lee, S. R.; Kim, J. M.; Park, K. L.; Oh, S. B.; Choi, J. S.; Kim, Y. S.

    1994-07-01

    In this study, methods for 3-D soil-structure interaction analysis have been studied: the 3-D axisymmetric analysis method, the 3-D axisymmetric finite element method incorporating infinite elements, and 3-D boundary element methods. A computer code named 'KIESSI - PF' has been developed, based on the 3-D axisymmetric finite element method coupled with the infinite element method. It is able to simulate forced vibration test results of a soil-structure interaction system. The Hualien FVT post-correlation analysis before backfill and the blind prediction analysis after backfill have been carried out using the developed computer code 'KIESSI - PF'.

  16. Blood transport method for chromosome analysis of residents living near Semipalatinsk nuclear test site.

    Science.gov (United States)

    Rodzi, Mohd; Ihda, Shozo; Yokozeki, Masako; Takeichi, Nobuo; Tanaka, Kimio; Hoshi, Masaharu

    2009-12-01

    A study was conducted to compare the storage conditions and transportation period for blood samples collected from residents living in areas near the Semipalatinsk nuclear test site (SNTS). Experiments were performed to simulate storage and shipping environments. Phytohaemagglutinin (PHA)-stimulated blood was stored in 15-ml tubes (condition A: current transport method) in the absence or in 50-ml flasks (condition B: previous transport method) in the presence of RPMI-1640 and 20% fetal bovine serum (FBS). Samples were kept refrigerated at 4 degrees C and cell viability was assessed after 3, 8, 12 and 14 days of storage. RPMI-1640, 20% FBS and further PHA were added to blood samples under condition A in 50-ml flasks for culture. Whole-blood samples under condition B were directly incubated without further sub-culturing process, neither media nor PHA were added, to adopt a similar protocol to that employed in the previous transport method. Samples in condition A and condition B were incubated for 48 hr at 37 degrees C and their mitotic index was determined. The results showed that viable lymphocytes were consistent in both storage conditions but the mitotic index was higher in condition A than in condition B. Although further confirmation studies have to be carried out, previous chromosomal studies and the present experiment have shown that PHA-stimulated blood could be stored without culture medium for up to 8 days under condition A. The present results will be useful for cytogenetic analysis of blood samples that have been transported long distances wherever a radiation accident has occurred.

  17. Standard test method for dynamic tear testing of metallic materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1983-01-01

    1.1 This test method covers the dynamic tear (DT) test using specimens that are 3/16 in. to 5/8 in. (5 mm to 16 mm) inclusive in thickness. 1.2 This test method is applicable to materials with a minimum thickness of 3/16 in. (5 mm). 1.3 The pressed-knife procedure described for sharpening the notch tip generally limits this test method to materials with a hardness level less than 36 HRC. Note 1—The designation 36 HRC is a Rockwell hardness number of 36 on Rockwell C scale as defined in Test Methods E 18. 1.4 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. 1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  18. Using Response Surface Methods to Correlate the Modal Test of an Inflatable Test Article

    Science.gov (United States)

    Gupta, Anju

    2013-01-01

    This paper presents a practical application of response surface methods (RSM) to correlate a finite element model of a structural modal test. The test article is a quasi-cylindrical inflatable structure which primarily consists of a fabric weave, with an internal bladder and metallic bulkheads on either end. To mitigate model size, the fabric weave was simplified by representing it with shell elements. The task at hand is to represent the material behavior of the weave. The success of the model correlation is measured by comparing the four major modal frequencies of the analysis model to the four major modal frequencies of the test article. Given that only individual strap material properties were provided and material properties of the overall weave were not available, defining the material properties of the finite element model became very complex. First it was necessary to determine which material properties (modulus of elasticity in the hoop and longitudinal directions, shear modulus, Poisson's ratio, etc.) affected the modal frequencies. Then a Latin Hypercube of the parameter space was created to form an efficiently distributed finite case set. Each case was then analyzed with the results input into RSM. In the resulting response surface it was possible to see how each material parameter affected the modal frequencies of the analysis model. If the modal frequencies of the analysis model and its corresponding parameters match the test with acceptable accuracy, it can be said that the model correlation is successful.
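The workflow this record describes (a Latin Hypercube over the material-parameter space, then a fitted response surface) can be sketched as follows. This is a minimal illustration with a synthetic stand-in for the analysis model: the two parameters, the 30-point design, and the quadratic surface form are all assumptions, not details from the paper.

```python
import numpy as np
from scipy.stats import qmc

# Two hypothetical material parameters, scaled to [0, 1] (e.g. hoop modulus
# and shear modulus of the weave); names and ranges are assumptions.
sampler = qmc.LatinHypercube(d=2, seed=42)
X = sampler.random(n=30)  # 30 efficiently spread design points

# Stand-in "analysis model": a synthetic modal-frequency response,
# NOT the paper's finite element model.
def modal_freq(x):
    e_hoop, g = x[:, 0], x[:, 1]
    return 10.0 + 4.0 * e_hoop + 2.0 * g + 1.5 * e_hoop * g

y = modal_freq(X)

# Quadratic response surface fitted by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef  # surface evaluated at the design points
```

Because the synthetic response here is itself quadratic, the fitted surface reproduces the sampled responses almost exactly; with real FEM output the residuals would indicate how well the surface form captures the parameter effects.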

  19. Analysis of unbalanced sensor in eddy current method of non destructive testing

    International Nuclear Information System (INIS)

    Chegodaev, V.V.

    2001-01-01

    Different types of sensors are used in the eddy current method of non-destructive testing. The choice of sensor type depends on the control object. Different types of sensors can share the same connection schemes in the device that forms the information signal. The most common sensor connection scheme is presented. The output voltage is calculated for a sensor placed on a defect-free segment of the control object. The balancing conditions are given, and it is shown that balancing the sensor is very difficult. Methods for compensating for, or accounting for, the imbalance voltage are indicated. (author)

  20. Results and Analysis from Space Suit Joint Torque Testing

    Science.gov (United States)

    Matty, Jennifer

    2010-01-01

    This joint mobility KC lecture included information from two papers, "A Method for and Issues Associated with the Determination of Space Suit Joint Requirements" and "Results and Analysis from Space Suit Joint Torque Testing," as presented for the International Conference on Environmental Systems in 2009 and 2010, respectively. The first paper discusses historical joint torque testing methodologies and approaches that were tested in 2008 and 2009. The second paper discusses the testing that was completed in 2009 and 2010.

  1. Testing all six person-oriented principles in dynamic factor analysis.

    Science.gov (United States)

    Molenaar, Peter C M

    2010-05-01

    All six person-oriented principles identified by Sterba and Bauer's Keynote Article can be tested by means of dynamic factor analysis in its current form. In particular, it is shown how complex interactions and interindividual differences/intraindividual change can be tested in this way. In addition, the necessity of using single-subject methods in the analysis of developmental processes is emphasized, and attention is drawn to the possibility of optimally treating developmental psychopathology by means of new computational techniques that can be integrated with dynamic factor analysis.

  2. 40 CFR 59.207 - Test methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Test methods. 59.207 Section 59.207 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL... Compound Emission Standards for Consumer Products § 59.207 Test methods. Each manufacturer or importer...

  3. Detection Of Cracks In Composite Materials Using Hybrid Non-Destructive Testing Method Based On Vibro-Thermography And Time-Frequency Analysis Of Ultrasonic Excitation Signal

    Directory of Open Access Journals (Sweden)

    Prokopowicz Wojciech

    2015-09-01

    Full Text Available The theme of the publication is to determine the possibility of diagnosing damage in composite materials using vibro-thermography together with frequency and time-frequency analysis of the excitation signal. In order to verify the proposed method, experiments were performed on a sample of a composite made using prepreg pressing technology. Analysis of the recorded signals and thermograms was performed in the MatLab environment. The hybrid non-destructive testing method, based on thermograms and an appropriate signal processing algorithm, clearly revealed the damage in the composite material sample.

  4. Standard Test Method for Sandwich Corrosion Test

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method defines the procedure for evaluating the corrosivity of aircraft maintenance chemicals, when present between faying surfaces (sandwich) of aluminum alloys commonly used for aircraft structures. This test method is intended to be used in the qualification and approval of compounds employed in aircraft maintenance operations. 1.2 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information. 1.3 This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. Specific hazard statements appear in Section 9.

  5. Test Methods for Robot Agility in Manufacturing.

    Science.gov (United States)

    Downs, Anthony; Harrison, William; Schlenoff, Craig

    2016-01-01

    The paper aims to define and describe test methods and metrics to assess industrial robot system agility in both simulation and in reality. The paper describes test methods and associated quantitative and qualitative metrics for assessing robot system efficiency and effectiveness which can then be used for the assessment of system agility. The paper describes how the test methods were implemented in a simulation environment and real world environment. It also shows how the metrics are measured and assessed as they would be in a future competition. The test methods described in this paper will push forward the state of the art in software agility for manufacturing robots, allowing small and medium manufacturers to better utilize robotic systems. The paper fulfills the identified need for standard test methods to measure and allow for improvement in software agility for manufacturing robots.

  6. A Systematic Method For Tracer Test Analysis: An Example Using Beowawe Tracer Data

    Energy Technology Data Exchange (ETDEWEB)

    G. Michael Shook

    2005-01-01

    Quantitative analysis of tracer data using moment analysis requires a strict adherence to a set of rules which include data normalization, correction for thermal decay, deconvolution, extrapolation, and integration. If done correctly, the method yields specific information on swept pore volume, flow geometry and fluid velocity, and an understanding of the nature of reservoir boundaries. All calculations required for the interpretation can be done in a spreadsheet. The steps required for moment analysis are reviewed in this paper. Data taken from the literature is used in an example calculation.
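The spreadsheet steps this record outlines can be sketched with temporal moments of a tracer return curve. In this illustration the tracer curve, the injection rate, and the units are invented; only the moment relations (mean residence time from the first moment, swept pore volume as rate times mean residence time) follow the method described.

```python
import numpy as np

# Synthetic, already-normalized tracer return curve: times in days and
# concentration C(t); a Gaussian arrival peak stands in for field data.
t = np.linspace(0.0, 100.0, 501)
dt = t[1] - t[0]
c = np.exp(-((t - 30.0) ** 2) / (2 * 8.0 ** 2))

# Zeroth and first temporal moments (rectangle-rule integration).
m0 = np.sum(c) * dt        # area under the curve
m1 = np.sum(t * c) * dt    # first moment
t_mean = m1 / m0           # mean residence time, days

q = 50.0                           # assumed constant injection rate, m^3/day
swept_pore_volume = q * t_mean     # m^3, from the first-moment relation
```

With real data the curve would first be normalized, corrected for thermal decay, deconvolved, and extrapolated, as the abstract notes, before the integrals are taken.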

  7. Direct methods of soil-structure interaction analysis for earthquake loadings(II)

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Chung Bang; Lee, S. R.; Kim, J. M.; Park, K. L.; Oh, S. B.; Choi, J. S.; Kim, Y. S. [Korea Advanced Institute of Science Technology, Daejeon (Korea, Republic of)

    1994-07-15

    In this study, methods for 3-D soil-structure interaction analysis have been studied: the 3-D axisymmetric analysis method, the 3-D axisymmetric finite element method incorporating infinite elements, and 3-D boundary element methods. A computer code named 'KIESSI - PF' has been developed, based on the 3-D axisymmetric finite element method coupled with the infinite element method. It is able to simulate forced vibration test results of a soil-structure interaction system. The Hualien FVT post-correlation analysis before backfill and the blind prediction analysis after backfill have been carried out using the developed computer code 'KIESSI - PF'.

  8. Automated Test Methods for XML Metadata

    Science.gov (United States)

    2017-12-28

    Method for Testing Syntax. The test method is as follows. 1. Initialize the programming environment. 2. Write test application code to use the

  9. Non-destructive testing of full-length bonded rock bolts based on HHT signal analysis

    Science.gov (United States)

    Shi, Z. M.; Liu, L.; Peng, M.; Liu, C. C.; Tao, F. J.; Liu, C. S.

    2018-04-01

    Full-length bonded rock bolts are commonly used in mining, tunneling and slope engineering because of their simple design and resistance to corrosion. However, the length of a rock bolt and grouting quality do not often meet the required design standards in practice because of the concealment and complexity of bolt construction. Non-destructive testing is preferred when testing a rock bolt's quality because of the convenience, low cost and wide detection range. In this paper, a signal analysis method for the non-destructive sound wave testing of full-length bonded rock bolts is presented, which is based on the Hilbert-Huang transform (HHT). First, we introduce the HHT analysis method to calculate the bolt length and identify defect locations based on sound wave reflection test signals, which includes decomposing the test signal via empirical mode decomposition (EMD), selecting the intrinsic mode functions (IMF) using the Pearson Correlation Index (PCI) and calculating the instantaneous phase and frequency via the Hilbert transform (HT). Second, six model tests are conducted using different grouting defects and bolt protruding lengths to verify the effectiveness of the HHT analysis method. Lastly, the influence of the bolt protruding length on the test signal, identification of multiple reflections from defects, bolt end and protruding end, and mode mixing from EMD are discussed. The HHT analysis method can identify the bolt length and grouting defect locations from signals that contain noise at multiple reflected interfaces. The reflection from the long protruding end creates an irregular test signal with many frequency peaks on the spectrum. The reflections from defects barely change the original signal because they are low energy, which cannot be adequately resolved using existing methods. The HHT analysis method can identify reflections from the long protruding end of the bolt and multiple reflections from grouting defects based on mutations in the instantaneous phase.
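The Hilbert-transform step of the HHT workflow (instantaneous phase and frequency of a decomposed mode) can be sketched as follows. The EMD stage itself typically requires a third-party package, so a pure 5 kHz sine is used here as a stand-in for one intrinsic mode function, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000.0                  # assumed sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)
# A pure 5 kHz sine stands in for one intrinsic mode function (IMF)
# that EMD would extract from the sound wave test signal.
imf = np.sin(2 * np.pi * 5000.0 * t)

analytic = hilbert(imf)                          # analytic signal
phase = np.unwrap(np.angle(analytic))            # instantaneous phase, rad
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # instantaneous frequency, Hz
```

In the method described, abrupt changes (mutations) in the instantaneous phase along the bolt signal mark reflection interfaces; here the instantaneous frequency simply sits near 5 kHz away from the signal edges.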

  10. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    OpenAIRE

    Sandurska, Elżbieta; Szulc, Aleksandra

    2016-01-01

    Sandurska Elżbieta, Szulc Aleksandra. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated. Journal of Education Health and Sport. 2016;6(13):275-287. eISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.293762 http://ojs.ukw.edu.pl/index.php/johs/article/view/4278

  11. Flow analysis of HANARO flow simulated test facility

    International Nuclear Information System (INIS)

    Park, Yong-Chul; Cho, Yeong-Garp; Wu, Jong-Sub; Jun, Byung-Jin

    2002-01-01

    The HANARO, a multi-purpose research reactor of 30 MWth open-tank-in-pool type, has been under normal operation since its initial criticality in February 1995. Many experiments should be safely performed to promote the utilization of the HANARO. A flow simulated test facility is being developed for the endurance testing of reactivity control units over extended lifetimes and for the verification of the structural integrity of experimental facilities prior to loading in the HANARO. This test facility is composed of three major parts: a half-core structure assembly, a flow circulation system and a support system. The half-core structure assembly is composed of a plenum, a grid plate, core channels with flow tubes, a chimney and a dummy pool. The flow channels are to be fitted with flow orifices to simulate the core channels. The test facility must reproduce flow characteristics similar to those of the HANARO. This paper therefore describes a computational analysis of the flow behavior of the test facility. The computational flow analysis has been performed to verify the flow structure and similarity of the test facility, assuming that the flow rates and pressure differences of the core channel are constant. The shapes of the flow orifices were determined by trial and error based on the design requirements of the core channel. A computational analysis program with the standard k - ε turbulence model was applied to the three-dimensional analysis. The results of the flow simulation showed flow characteristics similar to those of the HANARO and satisfied the design requirements of the test facility. The shape of the flow orifices used in this numerical simulation can be adapted to manufacturing requirements. The flow rate and the pressure difference through the core channel obtained from this simulation can be used as design requirements for the flow system. The analysis results will be verified against the results of the flow test after construction of the flow system. (author)

  12. New test and characterization methods for PV modules and cells

    Energy Technology Data Exchange (ETDEWEB)

    Van Aken, B.; Sommeling, P. [ECN Solar Energy, Petten (Netherlands); Scholten, H. [Solland, Heerlen (Netherlands); Muller, J. [Moser-Baer, Eindhoven (Netherlands); Grossiord, N. [Holst Centre, Eindhoven (Netherlands); Smits, C.; Blanco Mantecon, M. [Holland Innovative, Eindhoven (Netherlands); Verheijen, M.; Van Berkum, J. [Philips Innovation Services, Eindhoven (Netherlands)

    2012-08-15

    The results of the project geZONd (shared facility for solar module analysis and reliability testing) are described. The project was set up by Philips, ECN, Holst, Solland, OM and T and Holland Innovative. The partners have shared most of their testing and analysis equipment for PV modules and cells, and together developed new or improved methods (including the necessary application know-how). This enables faster and more efficient innovation projects for each partner, and via commercial exploitation for other interested parties. The project has concentrated on five failure modes: corrosion, delamination, moisture ingress, UV irradiation, and mechanical bending. Test samples represented all main PV technologies: wafer based PV and rigid and flexible thin-film PV. Breakthroughs are in very early detection of corrosion, in quantitative characterization of adhesion, in-situ detection of humidity and oxygen inside modules, and ultra-fast screening of materials on UV stability.

  13. Optimization of an Optical Inspection System Based on the Taguchi Method for Quantitative Analysis of Point-of-Care Testing

    Directory of Open Access Journals (Sweden)

    Chia-Hsien Yeh

    2014-09-01

    Full Text Available This study presents an optical inspection system for detecting a commercial point-of-care testing product and a new detection model covering qualitative to quantitative analysis. Human chorionic gonadotropin (hCG) strips (the cut-off value of the hCG commercial product is 25 mIU/mL) were the detection target in our study. We used a complementary metal-oxide semiconductor (CMOS) sensor to detect the colors of the test line and control line in the specific strips and to reduce the observation errors of the naked eye. To achieve better linearity between the grayscale and the concentration, and to decrease the standard deviation (i.e., increase the signal-to-noise ratio, S/N), the Taguchi method was used to find the optimal parameters for the optical inspection system. The pregnancy test used the principles of the lateral flow immunoassay, and the colors of the test and control lines were produced by gold nanoparticles. Because of the sandwich immunoassay model, the color of the gold nanoparticles in the test line darkened with increasing hCG concentration. As the results reveal, the S/N increased from 43.48 dB to 53.38 dB, and the hCG detection range extended from 6.25 to 50 mIU/mL with a standard deviation of less than 10%. With the optimal parameters determined by the Taguchi method to decrease the detection limit and increase the linearity, the optical inspection system can be applied to various commercial rapid tests, such as those for ketamine, troponin I, and fatty acid binding protein (FABP).
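The larger-is-better Taguchi signal-to-noise ratio used to rank parameter settings can be sketched as below. The grayscale readings are invented for illustration, not data from the study.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-is-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Invented grayscale readings from repeated strip measurements at two
# candidate settings of the optical inspection system.
baseline = [140.0, 150.0, 145.0]
tuned = [152.0, 150.0, 151.0]

# The setting with higher and more consistent readings scores a higher
# S/N and would be preferred in the Taguchi parameter study.
```

Each row of an orthogonal array would be scored this way, and the parameter levels that maximize the S/N chosen as the optimum.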

  14. A Modified Generalized Fisher Method for Combining Probabilities from Dependent Tests

    Directory of Open Access Journals (Sweden)

    Hongying (Daisy) Dai

    2014-02-01

    Full Text Available Rapid developments in molecular technology have yielded a large amount of high throughput genetic data to understand the mechanism for complex traits. The increase of genetic variants requires hundreds and thousands of statistical tests to be performed simultaneously in analysis, which poses a challenge to control the overall Type I error rate. Combining p-values from multiple hypothesis testing has shown promise for aggregating effects in high-dimensional genetic data analysis. Several p-value combining methods have been developed and applied to genetic data; see [Dai, et al. 2012b] for a comprehensive review. However, there is a lack of investigations conducted for dependent genetic data, especially for weighted p-value combining methods. Single nucleotide polymorphisms (SNPs are often correlated due to linkage disequilibrium. Other genetic data, including variants from next generation sequencing, gene expression levels measured by microarray, protein and DNA methylation data, etc. also contain complex correlation structures. Ignoring correlation structures among genetic variants may lead to severe inflation of Type I error rates for omnibus testing of p-values. In this work, we propose modifications to the Lancaster procedure by taking the correlation structure among p-values into account. The weight function in the Lancaster procedure allows meaningful biological information to be incorporated into the statistical analysis, which can increase the power of the statistical testing and/or remove the bias in the process. Extensive empirical assessments demonstrate that the modified Lancaster procedure largely reduces the Type I error rates due to correlation among p-values, and retains considerable power to detect signals among p-values. We applied our method to reassess published renal transplant data, and identified a novel association between B cell pathways and allograft tolerance.
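The unweighted and weighted combination rules underlying this record can be sketched with scipy. This shows Fisher's method and the Lancaster generalization under independence; the paper's modification for correlated p-values is not reproduced here, and the example p-values are invented.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    # Fisher's method: -2 * sum(log p) ~ chi-square with 2k df,
    # valid only for independent p-values.
    stat = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(stat, df=2 * len(pvals))

def lancaster_combine(pvals, weights):
    # Lancaster's weighted generalization: each p-value is transformed to a
    # chi-square variate with its weight as the degrees of freedom.
    # Independence is assumed in this sketch.
    stat = sum(chi2.ppf(1.0 - p, df=w) for p, w in zip(pvals, weights))
    return chi2.sf(stat, df=sum(weights))

pvals = np.array([0.01, 0.04, 0.50])
p_fisher = fisher_combine(pvals)
p_equal = lancaster_combine(pvals, [2, 2, 2])  # equal weights of 2 recover Fisher
```

The weights are where biological information enters the Lancaster procedure; setting every weight to 2 makes the two functions agree exactly, since a chi-square quantile with 2 df is -2 ln p.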

  15. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    Science.gov (United States)

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media and plate-count techniques on the estimation of microbial count, while a highly significant effect of other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) was established. The most appropriate method of statistical analysis for such data was ANOVA, which enabled not only the effects of individual factors to be estimated but also their interactions. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
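The mathematical combination of uncertainty elements mentioned here can be sketched with a GUM-style root-sum-of-squares of independent relative components. The component names echo the factors listed in the abstract, but the numeric values are illustrative assumptions, not the study's data.

```python
import numpy as np

# Assumed relative standard uncertainties (as fractions) of the dominant
# contributions named in the abstract; the numbers are illustrative only.
components = {
    "microorganism_type": 0.15,
    "pharmaceutical_product": 0.20,
    "reading_and_interpreting": 0.18,
}

# GUM-style combination of independent components: root sum of squares.
u_combined = float(np.sqrt(sum(u ** 2 for u in components.values())))
# With these illustrative inputs the combined relative uncertainty stays
# below the 35% figure quoted in the abstract.
```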

  16. ASTM F739 method for testing the permeation resistance of protective clothing materials: critical analysis with proposed changes in procedure and test-cell design.

    Science.gov (United States)

    Anna, D H; Zellers, E T; Sulewski, R

    1998-08-01

    ASTM (American Society for Testing and Materials) Method F739-96 specifies a test-cell design and procedures for measuring the permeation resistance of chemical protective clothing. Among the specifications are open-loop collection stream flow rates of 0.050 to 0.150 L/min for a gaseous medium. At elevated temperatures the test must be maintained within 1 degree C of the set point. This article presents a critical analysis of the effect of the collection stream flow rate on the measured permeation rate and on the temperature uniformity within the test cell. Permeation tests were conducted on four polymeric glove materials with 44 solvents at 25 degrees C. Flow rates > 0.5 L/min were necessary to obtain accurate steady-state permeation rate (SSPR) values in 25 percent of the tests. At the lower flow rates the true SSPR typically was underestimated by a factor of two or less, but errors of up to 33-fold were observed. No clear relationship could be established between the need for a higher collection stream flow rate and either the vapor pressure or the permeation rate of the solvent, but test results suggest that poor mixing within the collection chamber was a contributing factor. Temperature gradients between the challenge and collection chambers and between the bottom and the top of the collection chamber increased with the water-bath temperature and the collection stream flow rate. Use of a test cell modified to permit deeper submersion reduced the gradients to < or = 0.5 degrees C. It is recommended that all SSPR measurements include verification of the adequacy of the collection stream flow rate. For testing at nonambient temperatures, the modified test cell described here could be used to ensure temperature uniformity throughout the cell.

  17. A method of statistical analysis in the field of sports science when assumptions of parametric tests are not violated

    Directory of Open Access Journals (Sweden)

    Elżbieta Sandurska

    2016-12-01

    Full Text Available Introduction: Application of statistical software typically does not require extensive statistical knowledge, allowing to easily perform even complex analyses. Consequently, test selection criteria and important assumptions may be easily overlooked or given insufficient consideration. In such cases, the results may likely lead to wrong conclusions. Aim: To discuss issues related to assumption violations in the case of Student's t-test and one-way ANOVA, two parametric tests frequently used in the field of sports science, and to recommend solutions. Description of the state of knowledge: Student's t-test and ANOVA are parametric tests, and therefore some of the assumptions that need to be satisfied include normal distribution of the data and homogeneity of variances in groups. If the assumptions are violated, the original design of the test is impaired, and the test may then be compromised giving spurious results. A simple method to normalize the data and to stabilize the variance is to use transformations. If such approach fails, a good alternative to consider is a nonparametric test, such as Mann-Whitney, the Kruskal-Wallis or Wilcoxon signed-rank tests. Summary: Thorough verification of the parametric tests assumptions allows for correct selection of statistical tools, which is the basis of well-grounded statistical analysis. With a few simple rules, testing patterns in the data characteristic for the study of sports science comes down to a straightforward procedure.
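The decision procedure described in this abstract (verify normality and variance homogeneity, then choose between Student's t-test and a nonparametric alternative) can be sketched with scipy. The groups are simulated and the measurement labels are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(50.0, 5.0, size=30)  # e.g. sprint times, squad A
group_b = rng.normal(53.0, 5.0, size=30)  # e.g. sprint times, squad B

# Check normality in each group and homogeneity of variances.
normal = (stats.shapiro(group_a).pvalue > 0.05
          and stats.shapiro(group_b).pvalue > 0.05)
equal_var = stats.levene(group_a, group_b).pvalue > 0.05

if normal and equal_var:
    result = stats.ttest_ind(group_a, group_b)                   # Student's t
elif normal:
    result = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t
else:
    result = stats.mannwhitneyu(group_a, group_b)                # nonparametric
```

As the abstract suggests, a transformation of the data could also be attempted before falling back to the Mann-Whitney test.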

  18. EXPLANATORY METHODS OF MARKETING DATA ANALYSIS – THEORETICAL AND METHODOLOGICAL CONSIDERATIONS

    Directory of Open Access Journals (Sweden)

    Rozalia GABOR

    2010-01-01

    Full Text Available Explanatory methods of data analysis, also called supervised learning methods by some authors, enable researchers to identify and analyse configurations of relations between two or several variables, most of them with high accuracy, since statistical significance can be tested by calculating the confidence level associated with validating the relation concerned across the entire population and not only the surveyed sample. The paper presents some of these methods: analysis of variance, analysis of covariance, segmentation and discriminant analysis, noting for each method its area of applicability in marketing research.

  19. A method for crack sizing using Bayesian inference arising in eddy current testing

    International Nuclear Information System (INIS)

    Kojima, Fumio; Kikuchi, Mitsuhiro

    2008-01-01

    This paper is concerned with a crack sizing methodology using Bayesian inference arising in eddy current testing. There is often uncertainty about data obtained through quantitative nondestructive testing measurements, and this can yield misleading inferences of crack size in on-site monitoring. In this paper, we propose optimal measurement strategies in eddy current testing using Bayesian prior-to-posterior analysis. First, our likelihood function is given by a Gaussian distribution with a measurement model based on the hybrid use of finite and boundary element methods. Second, given a priori distributions of crack size, we propose a method for estimating the region of interest for sizing cracks. Finally, an optimal sensing method is demonstrated using our idea. (author)
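The prior-to-posterior update with a Gaussian likelihood can be sketched on a grid of candidate crack depths. The linear forward model below is a toy assumption standing in for the paper's hybrid finite/boundary element model, and the measurement values are invented.

```python
import numpy as np

# Toy forward model (an assumption): ECT signal amplitude grows linearly
# with crack depth; the paper uses a finite/boundary element model instead.
def forward(depth_mm):
    return 0.8 * depth_mm

depths = np.linspace(0.0, 5.0, 501)          # candidate crack depths, mm
prior = np.ones_like(depths) / len(depths)   # flat prior over the grid

measured = 2.0   # observed signal, arbitrary units (invented)
sigma = 0.2      # assumed measurement noise standard deviation

# Gaussian likelihood and grid posterior (Bayes' rule, normalized numerically).
likelihood = np.exp(-0.5 * ((measured - forward(depths)) / sigma) ** 2)
posterior = prior * likelihood
posterior /= posterior.sum()

map_depth = float(depths[np.argmax(posterior)])  # posterior mode
```

The spread of the posterior around its mode is what an optimal sensing strategy would try to shrink by choosing informative measurements.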

  20. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    Science.gov (United States)

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only 1 part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required and their use agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  1. Powerful Tests for Multi-Marker Association Analysis Using Ensemble Learning.

    Directory of Open Access Journals (Sweden)

    Badri Padhukasahasram

    Full Text Available Multi-marker approaches have received a lot of attention recently in genome wide association studies and can enhance power to detect new associations under certain conditions. Gene-, gene-set- and pathway-based association tests are increasingly being viewed as useful supplements to the more widely used single marker association analysis, which has successfully uncovered numerous disease variants. A major drawback of single-marker based methods is that they do not look at the joint effects of multiple genetic variants which individually may have weak or moderate signals. Here, we describe novel tests for multi-marker association analyses that are based on phenotype predictions obtained from machine learning algorithms. Instead of assuming a linear or logistic regression model, we propose the use of ensembles of diverse machine learning algorithms for prediction. We show that phenotype predictions obtained from ensemble learning algorithms provide a new framework for multi-marker association analysis. They can be used for constructing tests for the joint association of multiple variants, adjusting for covariates and testing for the presence of interactions. To demonstrate the power and utility of this new approach, we first apply our method to simulated SNP datasets. We show that the proposed method has the correct Type-1 error rates and can be considerably more powerful than alternative approaches in some situations. Then, we apply our method to previously studied asthma-related genes in two independent asthma cohorts to conduct association tests.

  2. Test methods for the dynamic mechanical properties of polymeric materials. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baker, G.K.

    1980-06-01

    Various test geometries and procedures for the dynamic mechanical analysis of polymers employing a mechanical spectrometer have been evaluated. The methods and materials included in this work are forced torsional pendulum testing of Kevlar/epoxy laminates and rigid urethane foams, oscillatory parallel plate testing to determine the kinetics of the cure of VCE with Hylene MP, oscillatory compressive testing of B-3223 cellular silicone, and oscillatory tensile testing of Silastic E and single Kevlar filaments. Fundamental dynamic mechanical properties, including the storage and loss moduli and loss tangent of the materials tested, were determined as a function of temperature and sometimes of frequency.

  3. Computational methods for nuclear criticality safety analysis

    International Nuclear Information System (INIS)

    Maragni, M.G.

    1992-01-01

    Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code shows conservative results when the generalized geometry option is not used. (author)

  4. Validation test case generation based on safety analysis ontology

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Wang, Wen-Shing

    2012-01-01

    Highlights: ► Current practice in validation test case generation for nuclear system is mainly ad hoc. ► This study designs a systematic approach to generate validation test cases from a Safety Analysis Report. ► It is based on a domain-specific ontology. ► Test coverage criteria have been defined and satisfied. ► A computerized toolset has been implemented to assist the proposed approach. - Abstract: Validation tests in the current nuclear industry practice are typically performed in an ad hoc fashion. This study presents a systematic and objective method of generating validation test cases from a Safety Analysis Report (SAR). A domain-specific ontology was designed and used to mark up a SAR; relevant information was then extracted from the marked-up document for use in automatically generating validation test cases that satisfy the proposed test coverage criteria; namely, single parameter coverage, use case coverage, abnormal condition coverage, and scenario coverage. The novelty of this technique is its systematic rather than ad hoc test case generation from a SAR to achieve high test coverage.

  5. Application of advanced irradiation analysis methods to light water reactor pressure vessel test and surveillance programs

    International Nuclear Information System (INIS)

    Odette, R.; Dudey, N.; McElroy, W.; Wullaert, R.; Fabry, A.

    1977-01-01

    Inaccurate characterization and inappropriate application of neutron irradiation exposure variables contribute a substantial amount of uncertainty to embrittlement analysis of light water reactor pressure vessels. Damage analysis involves characterization of the irradiation environment (dosimetry), correlation of test and surveillance metallurgical and dosimetry data, and projection of such data to service conditions. Errors in available test and surveillance dosimetry data are estimated to contribute a factor of approximately 2 to the data scatter. Non-physical (empirical) correlation procedures and the need to extrapolate to the vessel may add further error. Substantial reductions in these uncertainties in future programs can be obtained from a more complete application of available damage analysis tools which have been developed for the fast reactor program. An approach to reducing embrittlement analysis errors is described, and specific examples of potential applications are given. The approach is based on damage analysis techniques validated and calibrated in benchmark environments

  6. CPAS Preflight Drop Test Analysis Process

    Science.gov (United States)

    Englert, Megan E.; Bledsoe, Kristin J.; Romero, Leah M.

    2015-01-01

    Throughout the Capsule Parachute Assembly System (CPAS) drop test program, the CPAS Analysis Team has developed a simulation and analysis process to support drop test planning and execution. This process includes multiple phases focused on developing test simulations and communicating results to all groups involved in the drop test. CPAS Engineering Development Unit (EDU) series drop test planning begins with the development of a basic operational concept for each test. Trajectory simulation tools include the Flight Analysis and Simulation Tool (FAST) for single bodies, and the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulation for the mated vehicle. Results are communicated to the team at the Test Configuration Review (TCR) and Test Readiness Review (TRR), as well as at Analysis Integrated Product Team (IPT) meetings in earlier and intermediate phases of the pre-test planning. The ability to plan and communicate efficiently with rapidly changing objectives and tight schedule constraints is a necessity for safe and successful drop tests.

  7. SMORN-III benchmark test on reactor noise analysis methods

    International Nuclear Information System (INIS)

    Shinohara, Yoshikuni; Hirota, Jitsuya

    1984-02-01

    A computational benchmark test was performed in conjunction with the Third Specialists Meeting on Reactor Noise (SMORN-III) which was held in Tokyo, Japan in October 1981. This report summarizes the results of the test as well as the works made for preparation of the test. (author)

  8. Comparison of the analysis result between two laboratories using different methods

    International Nuclear Information System (INIS)

    Sri Murniasih; Agus Taftazani

    2017-01-01

    Comparison of the analysis results for a volcanic ash sample between two laboratories using different analysis methods. The research aims to improve testing laboratory quality and to foster cooperation with testing laboratories from other countries. Samples were tested at the Center for Accelerator Science and Technology (CAST)-NAA laboratory using NAA, and at the University of Texas (UT), USA, using the ICP-MS and ENAA methods. Of the 12 target elements, the CAST-NAA laboratory was able to present analysis data for 11 elements. The comparison shows that the results for the K, Mn, Ti and Fe elements from the two laboratories agree very well, as indicated by the RSD values and correlation coefficients of the two laboratories' results. Examination of the differences shows that the results for the Al, Na, K, Fe, V, Mn, Ti, Cr and As elements from the two laboratories are not significantly different. Of the 11 elements reported, only Zn had significantly different values between the two laboratories. (author)

  9. The Validation of NAA Method Used as Test Method in Serpong NAA Laboratory

    International Nuclear Information System (INIS)

    Rina-Mulyaningsih, Th.

    2004-01-01

    The Validation of the NAA Method Used as a Test Method in the Serpong NAA Laboratory. The NAA method is a non-standard testing method. A testing laboratory shall validate the methods it uses to ensure and confirm that they are suitable for the application. The NAA methods have been validated with respect to the parameters of accuracy, precision, repeatability and selectivity. The NIST 1573a Tomato Leaves and NIES 10C Rice Flour Unpolished reference materials and standard elements were used in this testing program. The results of testing with NIST 1573a showed that the elements Na, Zn, Al and Mn meet the acceptance criteria for accuracy and precision, whereas Co is rejected. The results of testing with NIES 10C showed that the Na and Zn elements meet the acceptance criteria for accuracy and precision, but the Mn element is rejected. The result of the selectivity test showed that the quantifiable amount is between 0.1 and 2.5 μg, depending on the element. (author)
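A common way to operationalize an accuracy acceptance criterion against a certified reference material is a z-score that combines the measurement and certificate uncertainties. A minimal sketch follows; the numbers are hypothetical, not values from this study:

```python
import math

def z_score(measured_mean, certified, u_measured, u_certified):
    # Accuracy check against a CRM: |z| <= 2 is commonly taken as acceptable.
    return (measured_mean - certified) / math.hypot(u_measured, u_certified)

def relative_bias_percent(measured_mean, certified):
    # Recovery bias of the measured mean relative to the certified value.
    return 100.0 * (measured_mean - certified) / certified

# Hypothetical Zn result versus a certified value (illustrative numbers only).
z = z_score(measured_mean=31.2, certified=30.9, u_measured=0.8, u_certified=0.6)
bias = relative_bias_percent(31.2, 30.9)
accepted = abs(z) <= 2.0
```

Precision and repeatability would be judged separately, e.g. from the relative standard deviation of replicate measurements.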

  10. Evaluation of Test Method for Solar Collector Efficiency

    DEFF Research Database (Denmark)

    Fan, Jianhua; Shah, Louise Jivan; Furbo, Simon

    The test method of the standard EN12975-2 (European Committee for Standardization, 2004) is used by European test laboratories to determine the efficiency of solar collectors. In the test method, the mean solar collector fluid temperature in the solar collector, Tm, is determined by the approximated equation Tm = (Tin + Tout)/2, where Tin is the inlet temperature to the collector and Tout is the outlet temperature from the collector. The specific heat of the solar collector fluid is, as an approximation, taken in the test method to be a constant equal to the specific heat of the solar collector fluid at the temperature Tm. The investigations also consider the influence of test conditions such as the ambient and sky temperatures. Based on the investigations, recommendations for changes to the test method and the test conditions are considered. The investigations are carried out within the NEGST (New Generation of Solar Thermal Systems) project financed by the EU.

  11. Methods for Equating Mental Tests.

    Science.gov (United States)

    1984-11-01

    1983) compared conventional and IRT methods for equating the Test of English as a Foreign Language (TOEFL) after chaining. Three conventional and three IRT equating methods were examined in this study; two sections of the TOEFL were each (separately) equated. The IRT methods included the following: (a) ... group. A separate base form was established for each of the six equating methods. Instead of equating the base-form TOEFL to itself, the last (eighth

  12. Standard test method for isotopic abundance analysis of uranium hexafluoride and uranyl nitrate solutions by multi-collector, inductively coupled plasma-mass spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2014-01-01

    1.1 This test method covers the isotopic abundance analysis of 234U, 235U, 236U and 238U in samples of hydrolysed uranium hexafluoride (UF6) by inductively coupled plasma source, multicollector, mass spectrometry (ICP-MC-MS). The method applies to material with 235U abundance in the range of 0.2 to 6 % mass. This test method is also described in ASTM STP 1344. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  13. A Study on the Improvement of Safety Testing Standards and Methods for Mammography

    International Nuclear Information System (INIS)

    Choi, Seon Hyeong; Jung, Ah Young; Yong, Hwan Seok; Kim, Do Wan; Jang, Gi Won; Cha, Sang Hoon; Jo, Sang Won; Park, Ji Koon

    2012-01-01

    To establish improved national safety testing standards and methods for mammography. We investigated and compared the current status of mammographic equipment installation with the national and international safety and quality control programs and methods. We established and verified a draft of the safety testing standards and methods. We propose that the investigations of the conductor system, hardware leakage radiation profile, illumination intensity test, comparison between X-ray and light photon exposure, X-ray dose exposure on the chest wall, compression equipment size, timing equipment, and the average effective radiation dose should all be maintained in their present state without change. However, the exposure radiation dose reproducibility, kVp and mAs, and half value layer tests should be reconsidered and revised. Moreover, compression pressure and automatic exposure control (AEC) system tests should be included as new criteria. Other parameter controls included in the phantom image analysis, which overlap with total quality assurance, should be excluded. We recommend that the AEC and compression pressure tests be included as new criteria, and that the methods for the exposure radiation dose reproducibility, kVp and mAs, and half value layer tests be reconsidered and revised.

  14. Thermal test requirements and their verification by different test methods

    International Nuclear Information System (INIS)

    Droste, B.; Wieser, G.; Probst, U.

    1993-01-01

    The paper discusses the parameters influencing the thermal test conditions for type B-packages. Criteria for different test methods (by analytical as well as by experimental means) will be developed. A comparison of experimental results from fuel oil pool and LPG fire tests will be given. (J.P.N.)

  15. Standard test methods for chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade plutonium dioxide powders and pellets

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 These test methods cover procedures for the chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade plutonium dioxide powders and pellets to determine compliance with specifications. 1.2 The analytical procedures appear in the following order: Sections Plutonium Sample Handling 8 to 10 Plutonium by Controlled-Potential Coulometry Plutonium by Ceric Sulfate Titration Plutonium by Amperometric Titration with Iron(II) Plutonium by Diode Array Spectrophotometry Nitrogen by Distillation Spectrophotometry Using Nessler Reagent 11 to 18 Carbon (Total) by Direct Combustion–Thermal Conductivity 19 to 30 Total Chlorine and Fluorine by Pyrohydrolysis 31 to 38 Sulfur by Distillation Spectrophotometry 39 to 47 Plutonium Isotopic Analysis by Mass Spectrometry Rare Earth Elements by Spectroscopy 48 to 55 Trace Elements by Carrier–Distillation Spectroscopy 56 to 63 Impurities by ICP-AES Impurity Elements by Spark-Source Mass Spectrography 64 to 70 Moisture by the Coulomet...

  16. Analysis of beryllium and depleted uranium: An overview of detection methods in aerosols and soils

    International Nuclear Information System (INIS)

    Camins, I.; Shinn, J.H.

    1988-06-01

    We conducted a survey of commercially available methods for analysis of beryllium and depleted uranium in aerosols and soils to find a reliable, cost-effective, and sufficiently precise method for researchers involved in environmental testing at the Yuma Proving Ground, Yuma, Arizona. Criteria used for evaluation include cost, method of analysis, specificity, sensitivity, reproducibility, applicability, and commercial availability. We found that atomic absorption spectrometry with graphite furnace meets these criteria for testing samples for beryllium. We found that this method can also be used to test samples for depleted uranium. However, atomic absorption with graphite furnace is not as sensitive a measurement method for depleted uranium as it is for beryllium, so we recommend that quality control of depleted uranium analysis be maintained by testing 10 of every 1000 samples by neutron activation analysis. We also evaluated 45 companies and institutions that provide analyses of beryllium and depleted uranium. 5 refs., 1 tab

  17. Inverse thermal analysis method to study solidification in cast iron

    DEFF Research Database (Denmark)

    Dioszegi, Atilla; Hattel, Jesper

    2004-01-01

    Solidification modelling of cast metals is widely used to predict the final properties of cast components. Accurate models necessitate good knowledge of the solidification behaviour. The present study includes a re-examination of the Fourier thermal analysis method. This involves an inverse numerical solution of a 1-dimensional heat transfer problem connected to the solidification of cast alloys. In the analysis, the relation between the thermal state and the fraction solid of the metal is evaluated by a numerical method. This method contains an iteration algorithm controlled by an under-relaxation term. The proposed inverse thermal analysis was tested on both experimental and simulated data.

  18. Validation of spectral methods for the seismic analysis of multi-supported structures

    International Nuclear Information System (INIS)

    Viola, B.

    1999-01-01

    There are many methodologies for the seismic analysis of buildings. When a seism occurs, structures such as piping systems in nuclear power plants are subjected to motions that may be different at each support point. It is therefore necessary to develop methods that take the multi-support effect into account. First, a bibliographic review of the existing methods was carried out, the aim being to find a method applicable to the study of piping systems. The second step of this work consisted in developing a program that may be used to test and compare the different selected methods. Spectral methods have the advantage of giving an estimate of the maximum strain values in the structure in reduced calculation time. The time history analysis is used as the reference for the tests. (author)

  19. Substructure method of soil-structure interaction analysis for earthquake loadings

    Energy Technology Data Exchange (ETDEWEB)

    Park, H. G.; Joe, Y. H. [Industrial Development Research Center, Univ. of Incheon, Incheon (Korea, Republic of)

    1997-07-15

    The substructure method has commonly been adopted for soil-structure interaction analysis because of its simplicity and economy in practical application. However, the substructure method has some limitations and does not always give reliable results, especially for embedded structures or layered soil conditions. The objective of this study is to validate the reliability of soil-structure interaction analysis results obtained by the proposed substructure method using a lumped-parameter model, and to suggest a method for the seismic design of nuclear power plant structures with specific design conditions. In this study, the theoretical background and modeling techniques of the soil-structure interaction phenomenon have been reviewed, and an analysis technique based on the substructure method using a lumped-parameter model has been suggested. The practicality and reliability of the proposed method have been validated through its application to the seismic analysis of large-scale seismic test models. A technical guide for the practical application and evaluation of the proposed method has also been provided through various parametric studies.

  20. Applicability of soil-structure interaction analysis methods for earthquake loadings (IV)

    International Nuclear Information System (INIS)

    Chang, S. P.; Ko, H. M.; Kim, J. K.; Yoon, J. Y.; Chin, B. M.; Yang, T. S.; Park, D. H.; Chung, W.; Park, J. Y.

    1996-07-01

    The ultimate goals of this research are to cultivate the capability for accurate SSI analysis and to develop an effective soil-structure interaction analysis method and computer program by comparing analysis results obtained in the Lotung/Hualien LSST projects. In this research, a computer analysis program using hyper elements was developed to analyze the forced vibration tests and seismic tests of the ongoing Hualien LSST project. Prediction and post-prediction analyses of the Hualien LSST forced vibration and seismic responses were executed with the developed program. This report is therefore mainly composed of two parts: a summary of the theoretical background of the hyper element, and the prediction and post-prediction analysis results for the Hualien LSST forced vibration and seismic response tests executed with the developed program. Also, a coupling method between the hyper element and a generalized three-dimensional finite element or a general axisymmetric finite element is presented for the further development of a computer analysis program for three-dimensional hybrid soil-structure interaction; for verification, the dynamic stiffnesses of rigid circular/rectangular foundations are calculated. It is confirmed that the program using hyper elements is efficient and practical, because it can easily account for non-homogeneity and executes the analysis in a short time by using an analytic solution in the horizontal direction.

  1. Multiple predictor smoothing methods for sensitivity analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
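The first of the listed techniques, locally weighted regression, can be sketched in a few lines. The code below is a minimal stdlib LOESS (tricube weights, local linear fit), not the authors' procedure; it recovers a nonmonotonic input-output relation that a sensitivity measure based on plain linear regression would score near zero:

```python
import math

def loess_fit(xs, ys, x0, frac=0.5):
    # Locally weighted linear regression (LOESS) at a single point x0,
    # using tricube weights over the nearest `frac` share of the data.
    n = len(xs)
    k = max(2, int(frac * n))
    h = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-12  # local bandwidth
    w = [(1 - min(1.0, abs(x - x0) / h) ** 3) ** 3 for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)) or 1e-12
    return my + (num / den) * (x0 - mx)

# Nonmonotonic input-output relation that linear regression would miss.
xs = [i / 50 for i in range(-50, 51)]
ys = [x * x for x in xs]
smooth = [loess_fit(xs, ys, x) for x in xs]
resid_var = sum((y - s) ** 2 for y, s in zip(ys, smooth)) / len(ys)
total_var = sum((y - sum(ys) / len(ys)) ** 2 for y in ys) / len(ys)
r2 = 1 - resid_var / total_var   # variance explained by the smoother
```

The variance explained by the smoother (here near 1, versus essentially 0 for a straight-line fit to this symmetric relation) is the kind of nonparametric sensitivity indicator the abstract contrasts with linear and rank regression.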

  2. Multiple predictor smoothing methods for sensitivity analysis

    International Nuclear Information System (INIS)

    Helton, Jon Craig; Storlie, Curtis B.

    2006-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  3. Advanced Testing Method for Ground Thermal Conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiaobing [ORNL; Clemenzi, Rick [Geothermal Design Center Inc.; Liu, Su [University of Tennessee (UT)

    2017-04-01

    A new method is developed that can quickly and more accurately determine the effective ground thermal conductivity (GTC) based on thermal response test (TRT) results. Ground thermal conductivity is an important parameter for sizing ground heat exchangers (GHEXs) used by geothermal heat pump systems. The conventional GTC test method usually requires a TRT for 48 hours with a very stable electric power supply throughout the entire test. In contrast, the new method reduces the required test time by 40%–60% or more, and it can determine GTC even with an unstable or intermittent power supply. Consequently, it can significantly reduce the cost of GTC testing and increase its use, which will enable optimal design of geothermal heat pump systems. Further, this new method provides more information about the thermal properties of the GHEX and the ground than previous techniques. It can verify the installation quality of GHEXs and has the potential, if developed, to characterize the heterogeneous thermal properties of the ground formation surrounding the GHEXs.
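The conventional 48-hour analysis that such a method builds on is the infinite line-source fit: after early times, the mean fluid temperature rises linearly in ln(t), and the slope yields the conductivity. A sketch with synthetic data follows (illustrative numbers only, not from this report):

```python
import math

def conductivity_from_trt(times_h, temps_c, q_watts, depth_m):
    # Conventional infinite-line-source analysis: the mean fluid temperature
    # grows linearly in ln(t); then k = Q / (4 * pi * H * slope).
    xs = [math.log(t) for t in times_h]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(temps_c) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, temps_c))
             / sum((x - mx) ** 2 for x in xs))
    return q_watts / (4 * math.pi * depth_m * slope)

# Synthetic 48 h test: 5 kW injected into a 100 m borehole, k_true = 2.5 W/(m.K).
k_true, q, depth = 2.5, 5000.0, 100.0
times = list(range(10, 49))                             # hours 10..48
temps = [15.0 + q / (4 * math.pi * k_true * depth) * math.log(t) for t in times]
k_est = conductivity_from_trt(times, temps, q, depth)
```

Because only the slope matters, the intercept absorbs borehole resistance and the choice of time unit; real data would add noise and require discarding the early hours where the line-source assumption does not yet hold.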

  4. Alternative Test Method for Olefins in Gasoline

    Science.gov (United States)

    This action proposes to allow for an additional alternative test method for olefins in gasoline, ASTM D6550-05. The allowance of this additional alternative test method will provide more flexibility to the regulated industry.

  5. Post-test analysis of PANDA test P4

    International Nuclear Information System (INIS)

    Hart, J.; Woudstra, A.; Koning, H.

    1999-01-01

    The results of a post-test analysis of the integral system test P4, which was executed in the PANDA facility at PSI in Switzerland within the framework of Work Package 2 of the TEPSS project, are presented. The post-test analysis comprises an evaluation of the PANDA test P4 and a comparison of the test results with the results of simulations using the RELAP5/MOD3.2, TRAC-BF1, and MELCOR 1.8.4 codes. The PANDA test P4 has provided adequate data about how trapped air released from the drywell later in the transient affects PCCS performance. The well-defined measurements can serve as an important database for the assessment of thermal-hydraulic system analysis codes, especially for conditions that could be met in passively operated advanced reactors, i.e. low pressure and small driving forces. Based on the analysis of the test data, the test acceptance criteria have been met. The test P4 has been successfully completed and the instrument readings were within the permitted ranges. The PCCs showed a favorable and robust performance and a wide margin for decay heat removal from the containment. The PANDA P4 test demonstrated that trapped air, released from the drywell later in the transient, only temporarily and only slightly affected the performance of the passive containment cooling system. The analysis of the results of the RELAP5 code showed that the overall behaviour of the test has been calculated quite well with regard to pressure, mass flow rates, and pool boil-down. This holds for both the pre-test and the post-test simulations. However, due to the one-dimensional, stacked-volume modeling of the PANDA DW, WW, and GDCS vessels, 3D effects such as in-vessel mixing and recirculation could not be calculated. The post-test MELCOR simulation showed an overall behaviour that is comparable to that of RELAP5. However, MELCOR calculated almost no air trapping in the PCC tubes that could hinder the steam condensation rate. This resulted in lower calculated

  6. Development of spectral history methods for pin-by-pin core analysis method using three-dimensional direct response matrix

    International Nuclear Information System (INIS)

    Mitsuyasu, T.; Ishii, K.; Hino, T.; Aoyama, M.

    2009-01-01

    Spectral history methods for a pin-by-pin core analysis method using the three-dimensional direct response matrix have been developed. The direct response matrix is formalized by four sub-response matrices in order to respond to a core eigenvalue k and thus can be recomposed at each outer iteration in the core analysis. For core analysis, it is necessary to take into account the burn-up effect related to spectral history. One of the methods is to evaluate the nodal burn-up spectrum obtained using the out-going neutron current. The other is to correct the fuel rod neutron production rates using a pin-by-pin correction. These spectral history methods were tested in a heterogeneous system. The test results show that the neutron multiplication factor error can be reduced by half during burn-up, and the nodal neutron production rate errors can be reduced by 30% or more. The root-mean-square differences between the relative fuel rod neutron production rate distributions can be reduced to within 1.1%. This means that these methods can accurately reflect the effects of intra- and inter-assembly heterogeneities during burn-up and can be used for core analysis. Core analysis with the DRM method was carried out for an ABWR quarter core, and it was found that both the thermal power and coolant-flow distributions converged smoothly. (authors)

  7. Validation of the X-ray fluorescence analysis method for coffee grain testing

    International Nuclear Information System (INIS)

    Samaniego, Carlos

    1992-01-01

    Trace elements were qualitatively and quantitatively determined in coffee samples. The samples for analysis were prepared in tablet form before irradiation, the latter having been performed with a Cd-109 radioactive source and with an X-ray tube with ZnO as the secondary target. Several spectra were obtained. The areas of the spectral peaks were adjusted with the aid of the AXIL computer program, which is based on the least squares method. Elemental concentrations were then determined by means of sensitivity and regression curves (intensity vs. concentration), methods that required the use of pertinent standards with certified concentrations in organic reference samples. Furthermore, atomic absorption spectrometry was also used to perform comparative checks on the results.

  8. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the statistical analysis of the results of computer-based testing for the evaluation of students' educational achievements. The issues are relevant because computer-based testing in Russian universities has become an important method for evaluating students' educational achievements and the quality of the modern educational process. The use of modern methods and programs for the statistical analysis of computer-based testing results and the assessment of the quality of the developed tests is a topical problem for every university teacher. The article shows how the authors solve this problem using their own program “StatInfo”. For several years the program has been successfully applied in a credit-based system of education at such technological stages as loading computer-based testing protocols into a database, forming queries, and generating reports, lists, and answer matrices for the statistical analysis of test item quality. The methodology, experience and some results of its use by university teachers are described in the article. Related topics, such as test development, models, algorithms, technologies, and software for large-scale computer-based testing, have been discussed by the authors in previous publications, which are listed in the reference list.

  9. Neutron activation analysis method - international ring test for proficiency assessment

    International Nuclear Information System (INIS)

    Barbos, D.; Bucsa, A. F.

    2016-01-01

    The main objective of this test is to assess the quality control of analytical procedures for soils and plants, which is of utmost importance for producing reliable and reproducible analytical data. For this purpose, first-, second-, and third-line quality control measures are taken in analytical laboratories. For first-line control, certified reference materials (CRMs) are preferred. However, the number and matrix variation of CRMs for environmental analytical research are still very limited. For second-line control, internal reference samples are often used, but here again the values for many element and parameter concentrations are questionable, since almost no check against CRMs is possible. For third-line control, participation in laboratory-evaluating exchange programs is recommended. This article contains the results achieved by our neutron activation analysis laboratory after an irradiation experiment on soil and vegetation samples in the TRIGA reactor. All the samples were irradiated in the same location of the reactor under roughly similar conditions. (authors)

  10. Standard test method for nondestructive assay of special nuclear material holdup using Gamma-Ray spectroscopic methods

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2007-01-01

    1.1 This test method describes gamma-ray methods used to nondestructively measure the quantity of 235U or 239Pu remaining as holdup in nuclear facilities. Holdup occurs in all facilities where nuclear material is processed: in process equipment, in exhaust ventilation systems, and in building walls and floors. 1.2 This test method includes information useful for management, planning, selection of equipment, consideration of interferences, measurement program definition, and the utilization of resources (1, 2, 3, 4). 1.3 The measurement of nuclear material holdup in process equipment requires knowledge of radiation sources and detectors, transmission of radiation, calibration, facility operations, and error analysis. It is subject to the constraints of the facility, management, budget, and schedule; health and safety requirements; and the laws of physics. The measurement process includes defining measurement uncertainties and is sensitive to the form and distribution of the material...

  11. Lessons learned in preparing method 29 filters for compliance testing audits.

    Science.gov (United States)

    Martz, R F; McCartney, J E; Bursey, J T; Riley, C E

    2000-01-01

    difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: Filters for Method 29 testing should be purchased in lots as large as possible. Testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing. Random analysis of three filters (top, middle, bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach. A box of filters from a given vendor should be screened, and filters from this screened box should be used both for testing and as field blanks in each test scenario to provide the level of quality assurance required for stationary source testing.

  12. Standard Test Method for Hot Spot Protection Testing of Photovoltaic Modules

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method provides a procedure to determine the ability of a photovoltaic (PV) module to endure the long-term effects of periodic “hot spot” heating associated with common fault conditions such as severely cracked or mismatched cells, single-point open-circuit failures (for example, interconnect failures), and partial (or non-uniform) shadowing or soiling. Such effects typically include solder melting or deterioration of the encapsulation, but in severe cases could progress to combustion of the PV module and surrounding materials. 1.2 There are two ways that cells can cause a hot-spot problem: by having a high resistance, so that there is a large series resistance in the circuit, or by having a low-resistance area (shunt), such that there is a high current flow in a localized region. This test method selects cells of both types to be stressed. 1.3 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method....

  13. Application of econometric and ecology analysis methods in physics software

    Science.gov (United States)

    Han, Min Cheol; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Sung Hun; Grazia Pia, Maria; Ronchieri, Elisabetta; Saracco, Paolo

    2017-10-01

    Some data analysis methods typically used in econometric studies and in ecology have been evaluated and applied in physics software environments. They concern the evolution of observables through objective identification of change points and trends, and measurements of inequality, diversity, and evenness across a data set. Within each analysis area, various statistical tests and measures have been examined. This conference paper gives a brief overview of some of these methods.
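As one concrete example of the change-point side of such analyses, a basic least-squares scan for a single mean shift can be sketched as follows. The specific tests used in the paper are not given in the abstract, so this is only an assumed illustration:

```python
import numpy as np

def mean_shift_changepoint(x):
    """Locate the single most likely mean-shift change point by maximizing
    the between-segment sum of squares (a basic least-squares scan; the
    paper's actual change-point tests may differ).  Returns the index k
    at which the second segment begins."""
    x = np.asarray(x, float)
    n = len(x)
    best_k, best_stat = None, -np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        stat = (k * (left.mean() - x.mean()) ** 2
                + (n - k) * (right.mean() - x.mean()) ** 2)
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k
```

Applied to a series of, say, build times or code-size metrics, the returned index marks the release at which the observable's mean level shifted most sharply.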

  14. Design, Analysis and Test of Logic Circuits Under Uncertainty

    CERN Document Server

    Krishnaswamy, Smita; Hayes, John P

    2013-01-01

    Integrated circuits (ICs) increasingly exhibit uncertain characteristics due to soft errors, inherently probabilistic devices, and manufacturing variability. As device technologies scale, these effects can be detrimental to the reliability of logic circuits.  To improve future semiconductor designs, this book describes methods for analyzing, designing, and testing circuits subject to probabilistic effects. The authors first develop techniques to model inherently probabilistic methods in logic circuits and to test circuits for determining their reliability after they are manufactured. Then, they study error-masking mechanisms intrinsic to digital circuits and show how to leverage them to design more reliable circuits.  The book describes techniques for:   • Modeling and reasoning about probabilistic behavior in logic circuits, including a matrix-based reliability-analysis framework;   • Accurate analysis of soft-error rate (SER) based on functional-simulation, sufficiently scalable for use in gate-l...

  15. Standard test method for accelerated leach test for diffusive releases from solidified waste and a computer program to model diffusive, fractional leaching from cylindrical waste forms

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method provides procedures for measuring the leach rates of elements from a solidified matrix material, determining if the releases are controlled by mass diffusion, computing values of diffusion constants based on models, and verifying projected long-term diffusive releases. This test method is applicable to any material that does not degrade or deform during the test. 1.1.1 If mass diffusion is the dominant step in the leaching mechanism, then the results of this test can be used to calculate diffusion coefficients using mathematical diffusion models. A computer program developed for that purpose is available as a companion to this test method (Note 1). 1.1.2 It should be verified that leaching is controlled by diffusion by a means other than analysis of the leach test solution data. Analysis of concentration profiles of species of interest near the surface of the solid waste form after the test is recommended for this purpose. 1.1.3 Potential effects of partitioning on the test results can...
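If mass diffusion does control the release, the incremental effective diffusion coefficient is commonly computed from a semi-infinite-medium model. The sketch below uses the widely cited ANS 16.1-style interval formula as an assumed stand-in; it is not the companion computer program mentioned in the abstract:

```python
import math

def effective_diffusivity(a_n, A0, t1, t2, V, S):
    """Effective diffusion coefficient (cm^2/s) for one leach interval,
    from the semi-infinite-medium model used in ANS 16.1-type analyses
    (an assumed illustration, not the ASTM companion program).
    a_n: amount released in the interval; A0: initial amount in specimen;
    t1, t2: interval start/end times (s); V: specimen volume (cm^3);
    S: specimen surface area (cm^2)."""
    T = ((math.sqrt(t2) + math.sqrt(t1)) / 2.0) ** 2   # mean-time factor
    return math.pi * ((a_n / A0) / (t2 - t1)) ** 2 * (V / S) ** 2 * T
```

A roughly constant value of D across successive intervals is one indication that diffusion dominates, consistent with the verification requirement in 1.1.2.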

  16. Falsification Testing of Instrumental Variables Methods for Comparative Effectiveness Research.

    Science.gov (United States)

    Pizer, Steven D

    2016-04-01

    To demonstrate how falsification tests can be used to evaluate instrumental variables methods applicable to a wide variety of comparative effectiveness research questions. Brief conceptual review of instrumental variables and falsification testing principles and techniques accompanied by an empirical application. Sample STATA code related to the empirical application is provided in the Appendix. Comparative long-term risks of sulfonylureas and thiazolidinediones for management of type 2 diabetes. Outcomes include mortality and hospitalization for an ambulatory care-sensitive condition. Prescribing pattern variations are used as instrumental variables. Falsification testing is an easily computed and powerful way to evaluate the validity of the key assumption underlying instrumental variables analysis. If falsification tests are used, instrumental variables techniques can help answer a multitude of important clinical questions. © Health Research and Educational Trust.
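The logic of a falsification test can be illustrated with a small simulation: a simple Wald IV estimate recovers the treatment effect on a real outcome, while applying the same estimator to an outcome the treatment cannot affect should return roughly zero. This is a hedged synthetic-data sketch, not the paper's Stata code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.integers(0, 2, n)             # instrument: prescribing-pattern indicator
u = rng.normal(size=n)                # unobserved confounder
# Treatment depends on the instrument and the confounder
d = (0.5 * z + 0.5 * u + rng.normal(size=n) > 0.5).astype(float)
y = 2.0 * d + u + rng.normal(size=n)  # true treatment effect = 2
y_falsif = u + rng.normal(size=n)     # outcome the treatment cannot affect

def wald_iv(y, d, z):
    """Wald IV estimator: cov(y, z) / cov(d, z)."""
    return np.cov(y, z)[0, 1] / np.cov(d, z)[0, 1]

beta = wald_iv(y, d, z)               # should be close to the true effect
beta_f = wald_iv(y_falsif, d, z)      # falsification estimate, near zero
```

A falsification estimate significantly different from zero would cast doubt on the instrument's validity; here, by construction, the instrument is valid and the estimate is small.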

  17. Comparison of methods for estimating the cost of human immunodeficiency virus-testing interventions.

    Science.gov (United States)

    Shrestha, Ram K; Sansom, Stephanie L; Farnham, Paul G

    2012-01-01

    The Centers for Disease Control and Prevention (CDC), Division of HIV/AIDS Prevention, spends approximately 50% of its $325 million annual human immunodeficiency virus (HIV) prevention funds on HIV-testing services. An accurate estimate of the costs of HIV testing in various settings is essential for efficient allocation of HIV prevention resources. To assess the costs of HIV-testing interventions using different costing methods. We used the microcosting-direct measurement method to assess the costs of HIV-testing interventions in nonclinical settings, and we compared these results with those from 3 other costing methods: microcosting-staff allocation, where the labor cost was derived from the proportion of each staff person's time allocated to HIV-testing interventions; gross costing, where the New York State Medicaid payment for HIV testing was used to estimate program costs; and program budget, where the program cost was assumed to be the total funding provided by the Centers for Disease Control and Prevention. Total program cost, cost per person tested, and cost per person notified of a new HIV diagnosis. The median costs per person notified of a new HIV diagnosis were $12 475, $15 018, $2697, and $20 144 based on the microcosting-direct measurement, microcosting-staff allocation, gross-costing, and program-budget methods, respectively. Compared with the microcosting-direct measurement method, the cost was 78% lower with gross costing, and 20% and 61% higher using the microcosting-staff allocation and program-budget methods, respectively. Our analysis showed that HIV-testing program cost estimates vary widely by costing method. However, the choice of a particular costing method may depend on the research question being addressed. Although the program-budget and gross-costing methods may be attractive because of their simplicity, only the microcosting-direct measurement method can identify important determinants of the program costs and provide guidance to improve
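The reported spread across costing methods follows directly from the medians quoted in the abstract; a quick arithmetic check of the percentage differences relative to the microcosting-direct measurement baseline:

```python
# Median cost per person notified of a new HIV diagnosis, by costing method
# (figures as reported in the abstract)
medians = {
    "microcosting-direct": 12_475,
    "microcosting-staff-allocation": 15_018,
    "gross-costing": 2_697,
    "program-budget": 20_144,
}
base = medians["microcosting-direct"]
# Percent difference of each method relative to the direct-measurement baseline
pct_diff = {m: round(100 * (c - base) / base) for m, c in medians.items()}
```

This reproduces the abstract's figures: gross costing about 78% lower, staff allocation about 20% higher, and program budget about 61% higher than the direct-measurement estimate.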

  18. Standard test method for the analysis of refrigerant 114, plus other carbon-containing and fluorine-containing compounds in uranium hexafluoride via fourier-transform infrared (FTIR) spectroscopy

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2004-01-01

    1.1 This test method covers determining the concentrations of refrigerant-114, other carbon-containing and fluorine-containing compounds, hydrocarbons, and partially or completely substituted halohydrocarbons that may be impurities in uranium hexafluoride. The two options are outlined for this test method. They are designated as Part A and Part B. 1.1.1 To provide instructions for performing Fourier-Transform Infrared (FTIR) spectroscopic analysis for the possible presence of Refrigerant-114 impurity in a gaseous sample of uranium hexafluoride, collected in a "2S" container or equivalent at room temperature. The all gas procedure applies to the analysis of possible Refrigerant-114 impurity in uranium hexafluoride, and to the gas manifold system used for FTIR applications. The pressure and temperatures must be controlled to maintain a gaseous sample. The concentration units are in mole percent. This is Part A. 1.2 Part B involves a high pressure liquid sample of uranium hexafluoride. This method can be appli...

  19. Comparison of testing methods for particulate filters

    International Nuclear Information System (INIS)

    Ullmann, W.; Przyborowski, S.

    1983-01-01

    Four testing methods for particulate filters were compared using the test rigs of the National Board of Nuclear Safety and Radiation Protection: 1) measurement of filter penetration P as a function of particle size d, using a polydisperse NaCl test aerosol and a scintillation particle counter; 2) a modified sodium flame test for measurement of total filter penetration P for various polydisperse NaCl test aerosols; 3) measurement of total filter penetration P for a polydisperse NaCl test aerosol labelled with short-lived radon daughter products; 4) measurement of total filter penetration P for a special paraffin-oil test aerosol (oil fog test used in the FRG according to DIN 24 184, test aerosol A). The investigations were carried out on sheets of glass fibre paper (five grades of paper). Detailed information about the four testing methods and the particle size distributions used is given. The differing results of the various methods form the basis for a discussion of the most important parameters that influence the filter penetration P. The course of the function P=f(d) shows the great influence of particle size. As expected, a strong dependence was also found both on the test aerosol and on the principle and measuring range of the aerosol-measuring device. The lower the penetration, the greater the differences between the results of the various test methods. Using NaCl test aerosols with various particle size distributions yields large differences in the respective penetration values. On the basis of these results and the values given by Dorman, conclusions are drawn about the investigation of particulate filters, both for the determination of filter penetration P and for the leak testing of installed filters
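All four methods ultimately reduce to comparing aerosol concentrations on either side of the filter; a trivial sketch of the defining relation (illustrative only, with hypothetical concentration units):

```python
def penetration(c_downstream, c_upstream):
    """Filter penetration P (fraction) and collection efficiency E = 1 - P,
    from aerosol concentrations measured downstream and upstream of the
    filter in the same units."""
    p = c_downstream / c_upstream
    return p, 1.0 - p
```

For a high-efficiency filter, P is very small, which is exactly the regime where, as noted above, the different test methods disagree most.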

  20. Methods study of homogeneity and stability test from cerium oxide CRM candidate

    International Nuclear Information System (INIS)

    Samin; Susanna TS

    2016-01-01

    Homogeneity and stability test methods for a cerium oxide CRM candidate have been studied based on ISO 13528 and KAN DP.01.34. The purpose of this study was to select robust homogeneity and stability test methods for the preparation of a cerium oxide CRM. Ten randomly selected sub-samples of cerium oxide were prepared for the analytes representing two compounds, namely CeO_2 and La_2O_3. In each of the 10 sub-samples, the CeO_2 and La_2O_3 contents were analyzed in duplicate with the same analytical method, by the same analyst, and in the same laboratory. The results were evaluated statistically based on ISO 13528 and KAN DP.01.34. According to ISO 13528, the cerium oxide samples are said to be homogeneous if Ss ≤ 0.3 σ and stable if |Xr − Yr| ≤ 0.3 σ. In this study, the homogeneity test for CeO_2 gave Ss = 2.073 x 10^-4, smaller than 0.3 σ (0.5476), and the stability test gave |Xr − Yr| = 0.225, which is < 0.3 σ. For La_2O_3, the homogeneity test gave Ss = 1.649 x 10^-4, smaller than 0.3 σ (0.4865), and the stability test gave |Xr − Yr| = 0.2185, which is < 0.3 σ. Compared with the method from KAN, the cerium oxide sample was also found to be homogeneous, since Fcalc < Ftable, and stable, since |Xi − Xhm| < 0.3 x n IQR. Given that the homogeneity and stability test data for the CeO_2 CRM candidate processed with the statistical methods of ISO 13528 do not differ significantly from those processed with the statistical methods of KAN DP.01.34, both meeting the requirements of homogeneity and stability, the homogeneity and stability test methods based on ISO 13528 can be used to make a cerium oxide CRM. (author)
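An ISO 13528-style homogeneity check from duplicate results can be sketched as follows: the between-sample standard deviation Ss is compared against 0.3 σ. This is a generic sketch of the standard computation, not a reproduction of the exact calculations behind the quoted figures:

```python
import math

def homogeneity_check(duplicates, sigma_pt):
    """ISO 13528-style homogeneity test from duplicate measurements.

    duplicates: list of (x1, x2) result pairs, one pair per sub-sample.
    sigma_pt: standard deviation for proficiency assessment (sigma).
    Returns (ss, passed) where passed means ss <= 0.3 * sigma_pt.
    """
    g = len(duplicates)
    means = [(a + b) / 2 for a, b in duplicates]
    ranges = [abs(a - b) for a, b in duplicates]
    grand = sum(means) / g
    sx2 = sum((m - grand) ** 2 for m in means) / (g - 1)  # variance of sample means
    sw2 = sum(w * w for w in ranges) / (2 * g)            # within-sample variance
    ss = math.sqrt(max(sx2 - sw2 / 2, 0.0))               # between-sample SD
    return ss, ss <= 0.3 * sigma_pt
```

With the very small Ss values reported above relative to 0.3 σ, such a check passes by a wide margin for both analytes.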

  1. Application of software quality assurance methods in validation and maintenance of reactor analysis computer codes

    International Nuclear Information System (INIS)

    Reznik, L.

    1994-01-01

    Various computer codes employed at the Israel Electricity Company for preliminary reactor design analysis and fuel cycle scoping calculations have often been subject to program source modifications. Although most changes were due to computer or operating system compatibility problems, a number of significant modifications were due to model improvements and enhancements of algorithm efficiency and accuracy. With the growing acceptance of software quality assurance requirements and methods, a program of implementing extensive testing of modified software has been adopted within the regular maintenance activities. In this work a survey has been performed of various software quality assurance methods of software testing, which belong mainly to the two major categories of implementation ('white box') and specification-based ('black box') testing. The results of this survey exhibit a clear preference for specification-based testing. In particular, the equivalence class partitioning method and the boundary value method have been selected as functional methods especially suitable for testing reactor analysis codes. A separate study of software quality assurance methods and techniques has been performed in this work with the objective of establishing appropriate pre-test software specification methods. Two methods of software analysis and specification have been selected as the most suitable for this purpose: the method of data flow diagrams has been shown to be particularly valuable for performing the functional/procedural software specification, while entity-relationship diagrams have proved efficient for specifying the software data/information domain. The feasibility of these two methods has been analyzed in particular for software uncertainty analysis and overall code accuracy estimation. (author). 14 refs

  2. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.

  3. Standard test method for tension testing of structural alloys in liquid helium

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method describes procedures for the tension testing of structural alloys in liquid helium. The format is similar to that of other ASTM tension test standards, but the contents include modifications for cryogenic testing which requires special apparatus, smaller specimens, and concern for serrated yielding, adiabatic heating, and strain-rate effects. 1.2 To conduct a tension test by this standard, the specimen in a cryostat is fully submerged in normal liquid helium (He I) and tested using crosshead displacement control at a nominal strain rate of 10−3 s−1 or less. Tests using force control or high strain rates are not considered. 1.3 This standard specifies methods for the measurement of yield strength, tensile strength, elongation, and reduction of area. The determination of the elastic modulus is treated in Test Method E 111. Note 1—The boiling point of normal liquid helium (He I) at sea level is 4.2 K (−269°C or −452.1°F or 7.6°R). It decreases with geographic elevation and is...

  4. Application of the IPEBS method to dynamic contingency analysis

    Energy Technology Data Exchange (ETDEWEB)

    Martins, A C.B. [FURNAS, Rio de Janeiro, RJ (Brazil); Pedroso, A S [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil)

    1994-12-31

    Dynamic contingency analysis is certainly a demanding task in the context of dynamic performance evaluation. This paper presents the results of a test for checking the contingency screening capability of the IPEBS method. A Brazilian 1100-bus, 112-generator system was used in the test; the ranking of the contingencies based on critical clearing times obtained with IPEBS was compared with the ranking derived from detailed time-domain simulation. The results of this comparison encourage us to recommend the use of the method in industry applications, on a complementary basis to the current method of time-domain simulation. (author) 5 refs., 1 fig., 2 tabs.

  5. Contribution to interplay between a delamination test and a sensory analysis of mid-range lipsticks.

    Science.gov (United States)

    Richard, C; Tillé-Salmon, B; Mofid, Y

    2016-02-01

    Lipstick is currently one of the most sold products of the cosmetics industry, and the competition between the various manufacturers is significant. Customers mainly seek products with high spreadability, especially long-lasting or long-wear on the lips. Evaluation tests of cosmetics are usually performed by sensory analysis, which can represent a considerable cost. The object of this study was to develop a fast and simple delamination test (an objective method with calibrated instruments) and to correlate the obtained results with those of a discriminative sensory analysis (a subjective method) in order to show the relevance of the instrumental test. Three mid-range lipsticks were randomly chosen and tested. Their compositions were as described by the International Nomenclature of Cosmetic Ingredients (INCI). Instrumental characterization was performed by texture profile analysis and by a special delamination test. The sensory analysis was deliberately conducted with an untrained panel as a blind test to confirm or refute the possible correlation. The two approaches or methods gave the same type of classification. The high-fat lipstick had the worst behaviour in the delamination test and the worst rating of the intensity of descriptors in the sensory analysis. There is a high correlation between the sensory analysis and the instrumental measurements in this study. The delamination test carried out should make it possible to quickly determine lasting wear (screening test) and consequently optimize the basic formula of lipsticks. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  6. Cathodic Delamination Accelerated Life Test Method

    National Research Council Canada - National Science Library

    Ramotowski, Thomas S

    2007-01-01

    A method for conducting an accelerated life test of a polymer coated metallic sample includes placing the sample below the water surface in a test tank containing water and an oxygen containing gas...

  7. Standard practice for analysis and interpretation of physics dosimetry results for test reactors

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    This practice describes the methodology, summarized in Annex A1, to be used in the analysis and interpretation of physics-dosimetry results from test reactors. This practice relies on, and ties together, the application of several supporting ASTM standard practices, guides, and methods that are in various stages of completion (see Fig. 1). Supporting subject areas that are discussed include reactor physics calculations, dosimeter selection and analysis, exposure units, and neutron spectrum adjustment methods. This practice is directed towards the development and application of physics-dosimetry-metallurgical data obtained from test reactor irradiation experiments that are performed in support of the operation, licensing, and regulation of LWR nuclear power plants. It specifically addresses the physics-dosimetry aspects of the problem. Procedures related to the analysis, interpretation, and application of both test and power reactor physics-dosimetry-metallurgy results are addressed in Practice E 853, Practice E 560, Matrix E 706(IE), Practice E 185, Matrix E 706(IG), Guide E 900, and Method E 646

  8. Methods Used in Economic Evaluations of Chronic Kidney Disease Testing — A Systematic Review

    Science.gov (United States)

    Sutton, Andrew J.; Breheny, Katie; Deeks, Jon; Khunti, Kamlesh; Sharpe, Claire; Ottridge, Ryan S.; Stevens, Paul E.; Cockwell, Paul; Kalra, Philp A.; Lamb, Edmund J.

    2015-01-01

    Background The prevalence of chronic kidney disease (CKD) is high in general populations around the world. Targeted testing and screening for CKD are often conducted to help identify individuals that may benefit from treatment to ameliorate or prevent their disease progression. Aims This systematic review examines the methods used in economic evaluations of testing and screening in CKD, with a particular focus on whether test accuracy has been considered, and how analysis has incorporated issues that may be important to the patient, such as the impact of testing on quality of life and the costs they incur. Methods Articles that described model-based economic evaluations of patient testing interventions focused on CKD were identified through the searching of electronic databases and the hand searching of the bibliographies of the included studies. Results The initial electronic searches identified 2,671 papers of which 21 were included in the final review. Eighteen studies focused on proteinuria, three evaluated glomerular filtration rate testing and one included both tests. The full impact of inaccurate test results was frequently not considered in economic evaluations in this setting as a societal perspective was rarely adopted. The impact of false positive tests on patients in terms of the costs incurred in re-attending for repeat testing, and the anxiety associated with a positive test was almost always overlooked. In one study where the impact of a false positive test on patient quality of life was examined in sensitivity analysis, it had a significant impact on the conclusions drawn from the model. Conclusion Future economic evaluations of kidney function testing should examine testing and monitoring pathways from the perspective of patients, to ensure that issues that are important to patients, such as the possibility of inaccurate test results, are properly considered in the analysis. PMID:26465773

  9. Tensile strength of concrete under static and intermediate strain rates: Correlated results from different testing methods

    International Nuclear Information System (INIS)

    Wu Shengxing; Chen Xudong; Zhou Jikai

    2012-01-01

    Highlights: ► Tensile strength of concrete increases with increase in strain rate. ► Strain rate sensitivity of tensile strength of concrete depends on test method. ► High stressed volume method can correlate results from various test methods. - Abstract: This paper presents a comparative experiment and analysis of three different methods (direct tension, splitting tension and four-point loading flexural tests) for determination of the tensile strength of concrete under low and intermediate strain rates. In addition, the objective of this investigation is to analyze the suitability of the high stressed volume approach and Weibull effective volume method to the correlation of the results of different tensile tests of concrete. The test results show that the strain rate sensitivity of tensile strength depends on the type of test, splitting tensile strength of concrete is more sensitive to an increase in the strain rate than flexural and direct tensile strength. The high stressed volume method could be used to obtain a tensile strength value of concrete, free from the influence of the characteristics of tests and specimens. However, the Weibull effective volume method is an inadequate method for describing failure of concrete specimens determined by different testing methods.
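The Weibull effective-volume approach mentioned above rests on the size-effect relation sigma2 = sigma1 * (V1/V2)^(1/m), which rescales a strength measured at one effective volume to another. A minimal sketch follows; the numerical values in the test are assumed for illustration, not taken from the paper:

```python
def weibull_strength_scaling(sigma1, V1, V2, m):
    """Predicted characteristic strength at effective volume V2, given a
    strength sigma1 measured at effective volume V1 and Weibull modulus m:
        sigma2 = sigma1 * (V1 / V2) ** (1 / m)
    Larger effective volumes sample more flaws, so predicted strength drops.
    """
    return sigma1 * (V1 / V2) ** (1.0 / m)
```

This is the mechanism by which results from direct-tension, splitting, and flexural specimens, each with a different effective (or highly stressed) volume, can be brought onto a common basis.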

  10. Polynomial regression analysis and significance test of the regression function

    International Nuclear Information System (INIS)

    Gao Zhengming; Zhao Juan; He Shengping

    2012-01-01

    In order to analyze the decay heating power per kilogram of a certain radioactive isotope with the polynomial regression method, the paper first demonstrates the broad usage of polynomial functions and deduces their parameters with the ordinary least-squares estimate. Then a significance test method for the polynomial regression function is derived, considering the similarity between the polynomial regression model and the multivariable linear regression model. Finally, polynomial regression analysis and a significance test of the polynomial function are applied to the decay heating power of the isotope per kilogram, in accordance with the authors' real work. (authors)
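In the standard formulation, the overall significance test for a fitted polynomial is an F test comparing the regression mean square to the residual mean square. A minimal sketch of that standard procedure follows (generic illustration with assumed data, not the authors' isotope measurements):

```python
import numpy as np

def poly_fit_ftest(x, y, degree):
    """Ordinary least-squares polynomial fit plus the overall F statistic
    of the regression (H0: all non-constant coefficients are zero).
    F = (SS_reg / p) / (SS_res / (n - p - 1)), with p = degree."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n, p = len(x), degree
    coeffs = np.polyfit(x, y, degree)      # highest-degree coefficient first
    yhat = np.polyval(coeffs, x)
    ss_reg = np.sum((yhat - y.mean()) ** 2)
    ss_res = np.sum((y - yhat) ** 2)
    F = (ss_reg / p) / (ss_res / (n - p - 1))
    return coeffs, F
```

The computed F would then be compared with the critical value of the F(p, n - p - 1) distribution at the chosen significance level.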

  11. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    Energy Technology Data Exchange (ETDEWEB)

    Wen, N; Chetty, I [Henry Ford Health System, Detroit, MI (United States); Snyder, K [Henry Ford Hospital System, Detroit, MI (United States); Scheib, S [Varian Medical System, Barton (Switzerland); Qin, Y; Li, H [Henry Ford Health System, Detroit, Michigan (United States)

    2016-06-15

    Purpose: To implement a novel image analysis technique, the “center pixel method”, to quantify the end-to-end test accuracy of a frameless, image-guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness, and the treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston-Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator, and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance were 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw-, MLC-, and cone-defined field sizes, respectively. When the center pixel method was used, the mean and standard deviation were 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm, respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American
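In either analysis method, the reported deviation is a Euclidean distance in the imager plane scaled by the pixel pitch. A minimal sketch follows; the pixel size and coordinates used in the example are assumed values for illustration, not the paper's:

```python
import math

def radial_deviation(bb_center, reference, pixel_mm=0.392):
    """Radial deviation (mm) between the detected BB centroid and a
    reference point (e.g. the field centroid, or the imager's central
    pixel corrected for the known projection offset), both given in
    pixel coordinates.  pixel_mm is the EPID pixel pitch; the default
    here is an assumed example value."""
    dx = (bb_center[0] - reference[0]) * pixel_mm
    dy = (bb_center[1] - reference[1]) * pixel_mm
    return math.hypot(dx, dy)
```

Averaging this quantity over the 15 gantry/collimator/couch combinations yields the summary statistics reported in the Results.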

  12. Analysis and Testing of Mobile Wireless Networks

    Science.gov (United States)

    Alena, Richard; Evenson, Darin; Rundquist, Victor; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Wireless networks are being used to connect mobile computing elements in more applications as the technology matures. There are now many products (such as 802.11 and 802.11b) which run in the ISM frequency band and comply with wireless network standards. They are being used increasingly to link mobile intranets into wired networks. Standard methods of analyzing and testing their performance and compatibility are needed to determine the limits of the technology. This paper presents analytical and experimental methods of determining network throughput, range and coverage, and interference sources. Both radio frequency (RF) domain and network domain analysis have been applied to determine wireless network throughput and range in the outdoor environment. Comparison of field test data taken under optimal conditions with performance predicted from RF analysis yielded quantitative results applicable to future designs. Layering multiple wireless networks can increase performance. Wireless network components can be set to different radio frequency-hopping sequences or spreading functions, allowing more than one network to coexist. Therefore, we ran multiple 802.11-compliant systems concurrently in the same geographical area to determine interference effects and scalability. The results can be used to design more robust networks which have multiple layers of wireless data communication paths and provide increased throughput overall.
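    The RF-domain range analysis mentioned in the abstract can be illustrated with the standard Friis free-space path-loss formula. This is a generic sketch, not the paper's actual link-budget model, which is not reproduced here:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 3.0e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# An 802.11b link at 2.4 GHz over 100 m loses roughly 80 dB in free space;
# subtracting this loss from the transmit power plus antenna gains gives the
# received signal level to compare against the receiver sensitivity.
loss = fspl_db(100, 2.4e9)
```

In practice, measured outdoor range falls short of the free-space prediction because of multipath and ground reflections, which is why the paper compares field test data against the RF-domain estimate.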

  13. Methods to diagnose acute anterior cruciate ligament rupture: a meta-analysis of instrumented knee laxity tests

    NARCIS (Netherlands)

    van Eck, Carola F.; Loopik, Miette; van den Bekerom, Michel P.; Fu, Freddie H.; Kerkhoffs, Gino M. M. J.

    2013-01-01

    The aims of this meta-analysis were to determine the sensitivity and specificity of the KT 1000 Arthrometer, Stryker Knee Laxity Tester and Genucom Knee Analysis System for ACL rupture. It was hypothesized that the KT 1000 test is the most sensitive and specific. Secondly, it was hypothesized that

  14. Thermal stresses in the space shuttle orbiter: Analysis versus test

    International Nuclear Information System (INIS)

    Grooms, H.R.; Gibson, W.F. Jr.; Benson, P.L.

    1984-01-01

    Significant temperature differences occur between the internal structure and the outer skin of the Space Shuttle Orbiter as it returns from space. These temperature differences cause important thermal stresses. A finite element model containing thousands of degrees of freedom is used to predict these stresses. A ground test was performed to verify the prediction method. The analysis and test results compare favorably. (orig.)

  15. Seismic analysis of the mirror fusion test facility building

    International Nuclear Information System (INIS)

    Coats, D.W.

    1978-01-01

    This report describes a seismic analysis of the present Mirror Fusion Test Facility (MFTF) building at the Lawrence Livermore Laboratory. The analysis was conducted to evaluate how the structure would withstand the postulated design-basis earthquake (DBE). We discuss the methods of analysis used and results obtained. Also presented are a detailed description of the building, brief discussions of site geology, seismicity, and soil conditions, the approach used to postulate the DBE, and two methods for incorporating the effects of ductility. Floor spectra for the 2nd, 3rd, and 4th floors developed for preliminary equipment design are also included. The results of the analysis, based on best-estimate equipment loadings, indicate additional bracing and upgrading of connection details are required for the structure to survive the postulated design-basis earthquake. Specific recommendations are made

  16. Assessment of Technologies for the Space Shuttle External Tank Thermal Protection System and Recommendations for Technology Improvement - Part III: Material Property Characterization, Analysis, and Test Methods

    Science.gov (United States)

    Gates, Thomas S.; Johnson, Theodore F.; Whitley, Karen S.

    2005-01-01

    The objective of this report is to contribute to the independent assessment of the Space Shuttle External Tank Foam Material. This report specifically addresses material modeling, characterization testing, data reduction methods, and data pedigree. A brief description of the External Tank foam materials, locations, and standard failure modes is provided to develop suitable background information. A review of mechanics based analysis methods from the open literature is used to provide an assessment of the state-of-the-art in material modeling of closed cell foams. Further, this report assesses the existing material property database and investigates sources of material property variability. The report presents identified deficiencies in testing methods and procedures, recommendations for additional testing as required, identification of near-term improvements that should be pursued, and long-term capabilities or enhancements that should be developed.

  17. Synthetic methods in phase equilibria: A new apparatus and error analysis of the method

    DEFF Research Database (Denmark)

    Fonseca, José; von Solms, Nicolas

    2014-01-01

    of the equipment was confirmed through several tests, including measurements along the three phase co-existence line for the system ethane + methanol, the study of the solubility of methane in water, and of carbon dioxide in water. An analysis regarding the application of the synthetic isothermal method...

  18. Quantitative analysis of γ-oryzanol content in cold pressed rice bran oil by TLC-image analysis method.

    Science.gov (United States)

    Sakunpak, Apirak; Suksaeree, Jirapornchai; Monton, Chaowalit; Pathompak, Pathamaporn; Kraisintu, Krisana

    2014-02-01

    To develop and validate an image analysis method for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. TLC-densitometric and TLC-image analysis methods were developed, validated, and used for quantitative analysis of γ-oryzanol in cold pressed rice bran oil. The results obtained by these two different quantification methods were compared by paired t-test. Both assays provided good linearity, accuracy, reproducibility and selectivity for determination of γ-oryzanol. The TLC-densitometric and TLC-image analysis methods provided a similar reproducibility, accuracy and selectivity for the quantitative determination of γ-oryzanol in cold pressed rice bran oil. A statistical comparison of the quantitative determinations of γ-oryzanol in samples did not show any statistically significant difference between TLC-densitometric and TLC-image analysis methods. As both methods were found to be equal, they therefore can be used for the determination of γ-oryzanol in cold pressed rice bran oil.
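    The paired t-test used to compare the two TLC quantification methods can be sketched with a stdlib-only implementation. The measurement values below are hypothetical, not the study's data:

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for matched measurements."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1

# Hypothetical gamma-oryzanol results (mg/g) from the two methods on the
# same five samples:
densitometric = [2.91, 3.05, 2.98, 3.10, 3.02]
image_based = [2.95, 3.01, 3.00, 3.08, 3.05]
t, df = paired_t(densitometric, image_based)
# |t| below the two-sided 5% critical value for df = 4 (2.776) means no
# statistically significant difference between the two methods.
```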

  19. Analysis of slug tests in formations of high hydraulic conductivity.

    Science.gov (United States)

    Butler, James J; Garnett, Elizabeth J; Healey, John M

    2003-01-01

    A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.

  20. A novel method of sensitivity analysis testing by applying the DRASTIC and fuzzy optimization methods to assess groundwater vulnerability to pollution: the case of the Senegal River basin in Mali

    Science.gov (United States)

    Souleymane, Keita; Zhonghua, Tang

    2017-08-01

    Vulnerability to groundwater pollution in the Senegal River basin was studied by two different but complementary methods: the DRASTIC method (which evaluates the intrinsic vulnerability) and the fuzzy method (which assesses the specific vulnerability by taking into account the continuity of the parameters). The application was validated by comparing the spatial distribution of the established vulnerability classes with the nitrate distribution in groundwater in the study area. Three vulnerability classes (low, medium and high) were identified by both the DRASTIC method and the fuzzy method (between which the normalized model was used). An integrated analysis reveals that the high classes, with 14.64 % (DRASTIC method), 21.68 % (normalized DRASTIC method) and 18.92 % (fuzzy method), are not the most dominant. In addition, a new method for sensitivity analysis was used to identify (and confirm) the main parameters which impact the vulnerability to pollution with fuzzy membership. The results showed that the vadose zone is the parameter with the greatest impact on groundwater vulnerability to pollution, while net recharge contributes least to pollution in the study area. It was also found that the fuzzy method better assesses the vulnerability to pollution, with a coincidence rate of 81.13 % versus 77.35 % for the DRASTIC method. These results serve as a guide for policymakers to identify areas sensitive to pollution before such sites are used for socioeconomic infrastructure.
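    The intrinsic DRASTIC index is a weighted sum of seven hydrogeological ratings. A minimal sketch using the standard published weights follows; the actual ratings assigned to the Senegal basin cells are not given in the abstract, so the example values are assumptions:

```python
# Standard DRASTIC weights: Depth to water, net Recharge, Aquifer media,
# Soil media, Topography, Impact of the vadose zone, hydraulic Conductivity.
WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """Weighted sum of the seven DRASTIC ratings (each rating is 1-10)."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

# A cell rated worst (10) on every parameter reaches the maximum index of 230;
# a cell rated best (1) everywhere scores the minimum of 23. Cells are then
# binned into classes such as low / medium / high vulnerability.
worst = drastic_index({k: 10 for k in WEIGHTS})
```

The high weight on the vadose-zone impact (I = 5) is consistent with the study's finding that the vadose zone is the most influential parameter.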

  1. MCPerm: a Monte Carlo permutation method for accurately correcting the multiple testing in a meta-analysis of genetic association studies.

    Directory of Open Access Journals (Sweden)

    Yongshuai Jiang

    Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method, and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
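    The two-step hypergeometric redistribution of genotype counts described above can be sketched as follows. This is a stdlib-only illustration, not the published R package, and the helper sampler here trades speed for simplicity:

```python
import random

def hypergeom(ngood, nbad, nsample, rng):
    """Hypergeometric draw via explicit sampling without replacement."""
    return sum(1 for i in rng.sample(range(ngood + nbad), nsample) if i < ngood)

def mcperm_shuffle(case_counts, control_counts, rng=random):
    """One Monte Carlo permutation of a 2x3 genotype table.

    Redistributes each genotype total between cases and controls with two
    sequential hypergeometric draws, preserving all row and column margins,
    so no individual-level data are needed."""
    totals = [c + t for c, t in zip(case_counts, control_counts)]
    n_case, n_total = sum(case_counts), sum(totals)
    # Step 1: how many copies of genotype 0 land in the case group.
    g0 = hypergeom(totals[0], n_total - totals[0], n_case, rng)
    # Step 2: of the remaining case slots, how many are genotype 1.
    g1 = hypergeom(totals[1], totals[2], n_case - g0, rng)
    new_case = [g0, g1, n_case - g0 - g1]
    new_control = [t - c for t, c in zip(totals, new_case)]
    return new_case, new_control
```

Repeating the shuffle N times and recomputing the meta-analysis statistic on each permuted table yields the empirical permutation P-value.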

  2. Personnel planning in general practices: development and testing of a skill mix analysis method.

    NARCIS (Netherlands)

    Eitzen-Strassel, J. von; Vrijhoef, H.J.M.; Derckx, E.W.C.C.; Bakker, D.H. de

    2014-01-01

    Background: General practitioners (GPs) have to match patients’ demands with the mix of their practice staff’s competencies. However, apart from some general principles, there is little guidance on recruiting new staff. The purpose of this study was to develop and test a method which would allow GPs

  3. Personnel planning in general practices : Development and testing of a skill mix analysis method

    NARCIS (Netherlands)

    von Eitzen-Strassel, J.; Vrijhoef, H.J.M.; Derckx, E.W.C.C.; de Bakker, D.H.

    2014-01-01

    Background General practitioners (GPs) have to match patients’ demands with the mix of their practice staff’s competencies. However, apart from some general principles, there is little guidance on recruiting new staff. The purpose of this study was to develop and test a method which would allow GPs

  4. Finite element analysis and fracture resistance testing of a new intraradicular post

    Directory of Open Access Journals (Sweden)

    Eron Toshio Colauto Yamamoto

    2012-08-01

    OBJECTIVES: The objective of the present study was to evaluate a prefabricated intraradicular threaded pure titanium post, designed and developed at the São José dos Campos School of Dentistry - UNESP, Brazil. This new post was designed to minimize stresses observed with prefabricated post systems and to improve cost-benefits. MATERIAL AND METHODS: Fracture resistance testing of the post/core/root complex, fracture analysis by microscopy and stress analysis by the finite element method were used for post evaluation. The following four prefabricated metal post systems were analyzed: group 1, experimental post; group 2, modification of the experimental post; group 3, Flexi Post; and group 4, Para Post. For the analysis of fracture resistance, 40 bovine teeth were randomly assigned to the four groups (n=10) and used for the fabrication of test specimens simulating the situation in the mouth. The test specimens were subjected to compressive strength testing until fracture in an EMIC universal testing machine. After fracture of the test specimens, their roots were sectioned and analyzed by microscopy. For the finite element method, specimens of the fracture resistance test were simulated by computer modeling to determine the stress distribution pattern in the post systems studied. RESULTS: The fracture test presented the following means and standard deviations: G1 (45.63±8.77), G2 (49.98±7.08), G3 (43.84±5.52), G4 (47.61±7.23). Stress was homogeneously distributed along the body of the intraradicular post in group 1, whereas high stress concentrations in certain regions were observed in the other groups. These stress concentrations in the body of the post induced the same stress concentration in root dentin. CONCLUSIONS: The experimental post (original and modified versions) presented similar fracture resistance and better results in the stress analysis when compared with the commercial post systems tested (08/2008-PA/CEP).

  5. Two micro fatigue test methods for irradiated materials

    International Nuclear Information System (INIS)

    Nunomura, Shigetomo; Noguchi, Shinji; Okamura, Yuichi; Kumai, Shinji

    1993-01-01

    This paper demonstrates two miniature fatigue test methods developed in response to the requirements of the fusion reactor wall materials development program. It is known that the fatigue strength evaluated by the axial loading test is independent of the specimen size, while that evaluated by the bend test or torsion test depends on the size of the specimen. A new type of gripping system for the axial, tension-tension fatigue testing of TEM disk-size specimens has been developed and is described in this paper. An alignment tool assists in gripping the miniature specimen. The miniature tension-tension fatigue test method appears to provide reliable S-N curves for SUS304 and SUS316L stainless steels. An indentation method has also been developed to determine fatigue properties. A hard steel ball or ceramic ball was used to cyclically load the specimen, and an S-N curve was subsequently obtained. The main merit of this method is its simple handling. S-N curves obtained from four materials by this indentation method compared well with those obtained from the rotary bend fatigue test employing standard-size specimens.

  6. Multipath interference test method for distributed amplifiers

    Science.gov (United States)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The baseband power spectrum peak, which appears at the FSK frequency deviation, can be converted to an amount of MPI using a calibration chart. The test method has improved the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the fiber length under test, and experiments on singlemode fibers and a distributed Raman amplifier.

  7. A functional U-statistic method for association analysis of sequencing data.

    Science.gov (United States)

    Jadhav, Sneha; Tong, Xiaoran; Lu, Qing

    2017-11-01

    Although sequencing studies hold great promise for uncovering novel variants predisposing to human diseases, the high dimensionality of the sequencing data brings tremendous challenges to data analysis. Moreover, for many complex diseases (e.g., psychiatric disorders) multiple related phenotypes are collected. These phenotypes can be different measurements of an underlying disease, or measurements characterizing multiple related diseases for studying common genetic mechanism. Although jointly analyzing these phenotypes could potentially increase the power of identifying disease-associated genes, the different types of phenotypes pose challenges for association analysis. To address these challenges, we propose a nonparametric method, functional U-statistic method (FU), for multivariate analysis of sequencing data. It first constructs smooth functions from individuals' sequencing data, and then tests the association of these functions with multiple phenotypes by using a U-statistic. The method provides a general framework for analyzing various types of phenotypes (e.g., binary and continuous phenotypes) with unknown distributions. Fitting the genetic variants within a gene using a smoothing function also allows us to capture complexities of gene structure (e.g., linkage disequilibrium, LD), which could potentially increase the power of association analysis. Through simulations, we compared our method to the multivariate outcome score test (MOST), and found that our test attained better performance than MOST. In a real data application, we apply our method to the sequencing data from Minnesota Twin Study (MTS) and found potential associations of several nicotine receptor subunit (CHRN) genes, including CHRNB3, associated with nicotine dependence and/or alcohol dependence. © 2017 WILEY PERIODICALS, INC.

  8. Research on Visual Analysis Methods of Terrorism Events

    Science.gov (United States)

    Guo, Wenyue; Liu, Haiyan; Yu, Anzhu; Li, Jing

    2016-06-01

    As terrorism events occur more and more frequently throughout the world, improving the response capability to social security incidents has become an important test of a government's ability to govern. Visual analysis has become an important method of event analysis because it is intuitive and effective. To analyse events' spatio-temporal distribution characteristics, correlations among event items and development trends, terrorism events' spatio-temporal characteristics are discussed. A suitable event data table structure based on the "5W" theory is designed. Then, six types of visual analysis are proposed, and how to use thematic maps and statistical charts to realize visual analysis of terrorism events is studied. Finally, experiments were carried out using data provided by the Global Terrorism Database, and the results prove the validity of the methods.

  9. Similarity Analysis of Cable Insulations by Chemical Test

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Seog [Central Research Institute of Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2013-10-15

    As a result of this experiment, it was found that the FT-IR test for material composition and the TGA test for aging trend are applicable to similarity analysis of cable materials. OIT is recommended as an option if TGA doesn't show a good trend. Qualification of a new insulation by the EQ report of an old insulation should be based on the new insulation having a higher activation energy than the old one, in the interest of conservatism. In old nuclear power plants, it is easy to find black cables with no marking of cable information such as manufacturer, material name and voltage. If a type test is required for qualification of these cables, how should a representative cable be selected? How can the similarity of these cables be determined? If a manufacturer qualified a cable for nuclear power plants more than a decade ago and the composition of the cable material has since been changed to a similar one, is it acceptable to use the old EQ report for the recently manufactured cable? The FT-IR method is well known for determining the similarity of cable materials. Infrared spectroscopy is an easy tool for comparing the compositions of materials, but it is not suitable for comparing their aging trends. A study of similarity analysis of cable insulation by chemical tests is described herein. To study a similarity evaluation method for polymer materials, FT-IR, TGA and OIT tests were performed on two cable insulations (old and new) supplied by the same manufacturer. FT-IR gave good results for comparing material compositions, while TGA and OIT gave good results for comparing the aging characteristics of the materials.

  10. Similarity Analysis of Cable Insulations by Chemical Test

    International Nuclear Information System (INIS)

    Kim, Jong Seog

    2013-01-01

    As a result of this experiment, it was found that the FT-IR test for material composition and the TGA test for aging trend are applicable to similarity analysis of cable materials. OIT is recommended as an option if TGA doesn't show a good trend. Qualification of a new insulation by the EQ report of an old insulation should be based on the new insulation having a higher activation energy than the old one, in the interest of conservatism. In old nuclear power plants, it is easy to find black cables with no marking of cable information such as manufacturer, material name and voltage. If a type test is required for qualification of these cables, how should a representative cable be selected? How can the similarity of these cables be determined? If a manufacturer qualified a cable for nuclear power plants more than a decade ago and the composition of the cable material has since been changed to a similar one, is it acceptable to use the old EQ report for the recently manufactured cable? The FT-IR method is well known for determining the similarity of cable materials. Infrared spectroscopy is an easy tool for comparing the compositions of materials, but it is not suitable for comparing their aging trends. A study of similarity analysis of cable insulation by chemical tests is described herein. To study a similarity evaluation method for polymer materials, FT-IR, TGA and OIT tests were performed on two cable insulations (old and new) supplied by the same manufacturer. FT-IR gave good results for comparing material compositions, while TGA and OIT gave good results for comparing the aging characteristics of the materials.

  11. Comparative analysis of minor histocompatibility antigens genotyping methods

    Directory of Open Access Journals (Sweden)

    A. S. Vdovin

    2016-01-01

    A wide range of techniques can be employed to find mismatches in minor histocompatibility antigens between transplant recipients and their donors. In the current study we compared three genotyping methods based on the polymerase chain reaction (PCR) for four minor antigens. Three of the tested methods, allele-specific PCR, restriction fragment length polymorphism analysis and real-time PCR with TaqMan probes, demonstrated 100% reliability when compared to Sanger sequencing for all of the studied polymorphisms. High-resolution melting analysis was unsuitable for genotyping one of the tested minor antigens (HA-1), as it has a linked synonymous polymorphism. The data obtained can be used to select a strategy for large-scale clinical genotyping.

  12. Standard test method for determining nodularity and nodule count in ductile iron using image analysis

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2011-01-01

    1.1 This test method is used to determine the percent nodularity and the nodule count per unit area (that is, number of nodules per mm2) using a light microscopical image of graphite in nodular cast iron. Images generated by other devices, such as a scanning electron microscope, are not specifically addressed, but can be utilized if the system is calibrated in both x and y directions. 1.2 Measurement of secondary or temper carbon in other types of cast iron, for example, malleable cast iron or graphitic tool steels, is not specifically included in this standard because of the different graphite shapes and sizes inherent to such grades. 1.3 This standard deals only with the recommended test method and nothing in it should be construed as defining or establishing limits of acceptability or fitness for purpose of the material tested. 1.4 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.5 This standard does not purport to address al...

  13. Fuel integrity project: analysis of light water reactor fuel rods test results

    International Nuclear Information System (INIS)

    Dallongeville, M.; Werle, J.; McCreesh, G.

    2004-01-01

    BNFL Nuclear Sciences and Technology Services and COGEMA LOGISTICS started in the year 2000 a joint project known as FIP (Fuel Integrity Project) with the aim of developing realistic methods by which the response of LWR fuel under impact accident conditions could be evaluated. To this end BNFL organised tests on both unirradiated and irradiated fuel pin samples and COGEMA LOGISTICS took responsibility for evaluating the test results. Interpretation of test results included simple mechanical analysis as well as simulation by Finite Element Analysis. The first tests that were available for analysis were an irradiated 3 point bending commissioning trial and a lateral irradiated hull compression test, both simulating the loading during a 9 m lateral regulatory drop. The bending test span corresponded roughly to a fuel pin intergrid distance. The outcome of the test was a failure starting at about 35 mm lateral deflection and a few percent of total deformation. Calculations were carried out using the ANSYS code employing a shell and brick model. The hull lateral compaction test corresponds to a conservative compression by neighbouring pins at the upper end of the fuel pin. In this pin region there are no pellets inside. The cladding broke initially into two and later into four parts, all of which were rather similar. Initial calculations were carried out with LS-DYNA3D models. The models used were optimised in meshing, boundary conditions and material properties. The calculation results compared rather well with the test data, in particular for the detailed ANSYS approach of the 3 point bending test, and allowed good estimations of stresses and deformations under mechanical loading as well as the derivation of material rupture criteria. All this contributed to the development of realistic numerical analysis methods for the evaluation of LWR fuel rod behaviour under both normal and accident transport conditions. This paper describes the results of the 3 point bending

  14. Fuel integrity project: analysis of light water reactor fuel rods test results

    Energy Technology Data Exchange (ETDEWEB)

    Dallongeville, M.; Werle, J. [COGEMA Logistics (AREVA Group) (France); McCreesh, G. [BNFL Nuclear Sciences and Technology Services (United Kingdom)

    2004-07-01

    BNFL Nuclear Sciences and Technology Services and COGEMA LOGISTICS started in the year 2000 a joint project known as FIP (Fuel Integrity Project) with the aim of developing realistic methods by which the response of LWR fuel under impact accident conditions could be evaluated. To this end BNFL organised tests on both unirradiated and irradiated fuel pin samples and COGEMA LOGISTICS took responsibility for evaluating the test results. Interpretation of test results included simple mechanical analysis as well as simulation by Finite Element Analysis. The first tests that were available for analysis were an irradiated 3 point bending commissioning trial and a lateral irradiated hull compression test, both simulating the loading during a 9 m lateral regulatory drop. The bending test span corresponded roughly to a fuel pin intergrid distance. The outcome of the test was a failure starting at about 35 mm lateral deflection and a few percent of total deformation. Calculations were carried out using the ANSYS code employing a shell and brick model. The hull lateral compaction test corresponds to a conservative compression by neighbouring pins at the upper end of the fuel pin. In this pin region there are no pellets inside. The cladding broke initially into two and later into four parts, all of which were rather similar. Initial calculations were carried out with LS-DYNA3D models. The models used were optimised in meshing, boundary conditions and material properties. The calculation results compared rather well with the test data, in particular for the detailed ANSYS approach of the 3 point bending test, and allowed good estimations of stresses and deformations under mechanical loading as well as the derivation of material rupture criteria. All this contributed to the development of realistic numerical analysis methods for the evaluation of LWR fuel rod behaviour under both normal and accident transport conditions. This paper describes the results of the 3 point bending

  15. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
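    The log-odds-ratio estimator described above can be sketched per locus with a Wald-type standard error. This is an illustrative single-locus version with assumed allele counts; the published method calibrates the variance using genome-wide variants rather than per-locus counts:

```python
import math

def selection_difference(x1, n1, x2, n2):
    """Log odds ratio of allele counts between two populations, with a Wald
    standard error, z statistic and approximate 95% confidence interval.

    x1/n1 and x2/n2 are derived-allele counts over total allele counts in
    the two populations. A 0.5 continuity correction guards against zero
    cells."""
    a, b = x1 + 0.5, n1 - x1 + 0.5
    c, d = x2 + 0.5, n2 - x2 + 0.5
    lor = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return lor, se, lor / se, (lor - 1.96 * se, lor + 1.96 * se)

# Identical allele counts in both populations give a log odds ratio of 0,
# i.e., no evidence of a selection difference at this locus.
lor, se, z, ci = selection_difference(10, 100, 10, 100)
```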

  16. Comparison of high efficiency particulate filter testing methods

    International Nuclear Information System (INIS)

    1985-01-01

High Efficiency Particulate Air (HEPA) filters are used for the removal of submicron-size particulates from air streams. In the nuclear industry they are used as an important engineering safeguard to prevent the release of airborne radioactive particulates to the environment. HEPA filters used in the nuclear industry should therefore be manufactured and operated under strict quality control. There are three levels of testing HEPA filters: i) testing of the filter media; ii) testing of the assembled filter, including filter media and filter housing; and iii) on-site testing of the complete filter installation before it is put into operation and later for periodic control. A co-ordinated research programme on particulate filter testing methods was taken up by the Agency, and contracts were awarded to the Member Countries Belgium, the German Democratic Republic, India and Hungary. The investigations carried out by the participants of the present co-ordinated research programme cover the most frequently used HEPA filter testing methods for filter medium tests, rig tests and in-situ tests. Most of the experiments were carried out at ambient temperature and humidity, but indications were given to extend the investigations to elevated temperature and humidity in the future, for the purpose of testing the performance of HEPA filters under severe conditions. A major conclusion of the co-ordinated research programme was that it was not possible to recommend one method as a reference method for in-situ testing of high efficiency particulate air filters. Most of the present conventional methods are adequate for current requirements. The reasons why no method could be recommended were multiple, ranging from economic aspects through incompatibility of materials to national regulations.

  17. A quality quantitative method of silicon direct bonding based on wavelet image analysis

    Science.gov (United States)

    Tan, Xiao; Tao, Zhi; Li, Haiwang; Xu, Tiantong; Yu, Mingxing

    2018-04-01

The rapid development of MEMS (micro-electro-mechanical systems) has received significant attention from researchers in various fields and subjects. In particular, the MEMS fabrication process is elaborate and, as such, has been the focus of extensive research. However, in MEMS fabrication, component bonding is difficult to achieve and requires a complex approach, so improvements in bonding quality are important objectives. A higher quality bond can only be achieved with improved measurement and testing capabilities. The traditional testing methods mainly include infrared testing, tensile testing, and strength testing, but measuring bond quality with these methods is often inefficient or destructive. Therefore, this paper focuses on the development of a precise, nondestructive visual testing method based on wavelet image analysis that is shown to be highly effective in practice. The process of wavelet image analysis includes wavelet image denoising, wavelet image enhancement, and contrast enhancement, and as an end result can display an image with low background noise. In addition, because the wavelet analysis software was developed with MATLAB, it can reveal the bonding boundaries and bonding rates to precisely indicate the bond quality at all locations on the wafer. This work also presents a set of orthogonal experiments on three prebonding factors, the prebonding temperature, the positive pressure value and the prebonding time, which are used to analyze the prebonding quality. This method was used to quantify the quality of silicon-to-silicon wafer bonding, yielding standard treatment quantities that could be practical for large-scale use.
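The paper's wavelet pipeline is built in MATLAB; as a rough illustration of the denoising step only, here is a minimal single-level 2D Haar transform with soft thresholding of the detail sub-bands in plain NumPy. The wavelet choice, decomposition depth and threshold are assumptions made here; the original work's enhancement and contrast steps are not reproduced.

```python
import numpy as np

def haar2d(img):
    # single-level 2D Haar transform (assumes even dimensions)
    a = (img[0::2] + img[1::2]) / 2.0        # row pairs: average
    d = (img[0::2] - img[1::2]) / 2.0        # row pairs: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # exact inverse of haar2d
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft(x, t):
    # soft thresholding: shrink coefficients toward zero by t
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, threshold):
    # suppress small (noise-dominated) detail coefficients, keep approximation
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, threshold), soft(hl, threshold), soft(hh, threshold))
```

With a threshold of zero the round trip reconstructs the image exactly, which is a convenient sanity check for the transform pair.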

  18. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  19. Alternative Testing Methods for Predicting Health Risk from Environmental Exposures

    Directory of Open Access Journals (Sweden)

    Annamaria Colacci

    2014-08-01

Full Text Available Alternative methods to animal testing are considered promising tools to support the prediction of toxicological risks from environmental exposure. Among the alternative testing methods, the cell transformation assay (CTA) appears to be one of the most appropriate approaches to predict the carcinogenic properties of single chemicals, complex mixtures and environmental pollutants. The BALB/c 3T3 CTA shows a good degree of concordance with the in vivo rodent carcinogenesis tests. Whole-genome transcriptomic profiling is performed to identify genes that are transcriptionally regulated by different kinds of exposures. Its use in cell models representative of target organs may help in understanding the mode of action and predicting the risk for human health. Aiming to associate environmental exposure with adverse health outcomes, we used an integrated approach including the 3T3 CTA and transcriptomics on target cells, in order to evaluate the effects of airborne particulate matter (PM) on toxicologically complex endpoints. Organic extracts obtained from PM2.5 and PM1 samples were evaluated in the 3T3 CTA in order to identify effects possibly associated with different aerodynamic diameters or airborne chemical components. The effects of the PM2.5 extracts on human health were assessed by using whole-genome 44 K oligo-microarray slides. Statistical analysis by GeneSpring GX identified genes whose expression was modulated in response to the cell treatment. Then, modulated genes were associated with pathways, biological processes and diseases through an extensive biological analysis. Data derived from in vitro methods and omics techniques could be valuable for monitoring exposure to toxicants, understanding the modes of action via exposure-associated gene expression patterns, and highlighting the role of genes in key events related to adversity.

  20. Test equating methods and practices

    CERN Document Server

    Kolen, Michael J

    1995-01-01

In recent years, many researchers in the psychology and statistical communities have paid increasing attention to test equating, as issues of using multiple test forms have arisen and in response to criticisms of traditional testing techniques. This book provides a practically oriented introduction to test equating which both discusses the most frequently used equating methodologies and covers many of the practical issues involved. The main themes are: the purpose of equating; distinguishing between equating and related methodologies; the importance of test equating to test development and quality control; the differences between equating properties, equating designs, and equating methods; equating error; and the underlying statistical assumptions for equating. The authors are acknowledged experts in the field, and the book is based on numerous courses and seminars they have presented. As a result, educators, psychometricians, professionals in measurement, statisticians, and students coming to the subject for...

  1. Standard Test Method for Oxyacetylene Ablation Testing of Thermal Insulation Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers the screening of ablative materials to determine the relative thermal insulation effectiveness when tested as a flat panel in an environment of a steady flow of hot gas provided by an oxyacetylene burner. 1.2 This test method should be used to measure and describe the properties of materials, products, or assemblies in response to heat and flame under controlled laboratory conditions and should not be used to describe or appraise the fire hazard of materials, products, or assemblies under actual fire conditions. However, results of this test method may be used as elements of a fire risk assessment which takes into account all of the factors which are pertinent to an assessment of the fire hazard of a particular end use. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limi...

  2. Attitudes towards genetic testing: analysis of contradictions

    DEFF Research Database (Denmark)

    Jallinoja, P; Hakonen, A; Aro, A R

    1998-01-01

A survey study was conducted among 1169 people to evaluate attitudes towards genetic testing in Finland. Here we present an analysis of the contradictions detected in people's attitudes towards genetic testing. This analysis focuses on the approval of genetic testing as an individual choice and on the confidence in control of the process of genetic testing and its implications. Our analysis indicated that some of the respondents have contradictory attitudes towards genetic testing. It is proposed that contradictory attitudes towards genetic testing should be given greater significance both in scientific studies on attitudes towards genetic testing as well as in the health care context, e.g. in genetic counselling.

  3. Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture

    Science.gov (United States)

    West, Phillip B [Idaho Falls, ID; Novascone, Stephen R [Idaho Falls, ID; Wright, Jerry P [Idaho Falls, ID

    2011-09-27

    Earth analysis methods, subsurface feature detection methods, earth analysis devices, and articles of manufacture are described. According to one embodiment, an earth analysis method includes engaging a device with the earth, analyzing the earth in a single substantially lineal direction using the device during the engaging, and providing information regarding a subsurface feature of the earth using the analysis.

  4. The Estimation of Knowledge Solidity Based on the Comparative Analysis of Different Test Results

    Directory of Open Access Journals (Sweden)

    Y. K. Khenner

    2012-01-01

Full Text Available At present, testing techniques for knowledge assessment are widespread in the educational system. However, the method is seriously criticized, including its application to the Unified State Examinations. The research is aimed at studying the limitations of testing techniques. The authors recommend a new way of estimating knowledge solidity based on the comparative analysis of the results of various kinds of tests. While testing a large group of students, the authors found that the results of the closed and open tests differ substantially. The comparative analysis demonstrates that the open-test assessment of knowledge solidity is more adequate than that of the closed tests. As the research is based on only a single experiment, the authors recommend applying this method further, substantiating the findings concerning the differences in test results, and analyzing the advantages and disadvantages of the tests in question.

  5. Seismic Response Analysis and Test of 1/8 Scale Model for a Spent Fuel Storage Cask

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Han; Park, C. G.; Koo, G. H.; Seo, G. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yeom, S. H. [Chungnam Univ., Daejeon (Korea, Republic of); Choi, B. I.; Cho, Y. D. [Korea Hydro and Nuclear Power Co. Ltd., Daejeon (Korea, Republic of)

    2005-07-15

The seismic response tests of a 1/8 scale spent fuel dry storage cask model were performed for the typical 1940 El Centro and Kobe earthquakes. This report firstly focuses on the data generated by seismic response tests of a free-standing storage cask model, to check the overturning possibility of a storage cask and the slipping displacement on a concrete slab bed. Variations in seismic load magnitude and cask/bed interface friction were considered in the tests. The test results show that the model gives an overturning response for an extreme condition only. A FEM model was built for the 1/8 scale spent fuel dry storage cask test model using the 3D contact conditions available in ABAQUS/Explicit. The input load for this analysis was the El Centro earthquake, and the friction coefficients were obtained from the test results. The penalty and kinematic contact methods of ABAQUS were used for the mechanical contact formulation. The analysis method was verified against the rocking angle obtained in the seismic response tests. The kinematic contact method with an adequate normal contact stiffness showed good agreement with the tests. Based on the established analysis method for the 1/8 scale model, seismic response analyses of a full scale model were performed for design and beyond-design seismic loads.

  6. MODERN METHODS OF FOOD INTOLERANCE TESTING

    Directory of Open Access Journals (Sweden)

    M. Yu. Rosensteyn

    2016-01-01

Full Text Available An analytical review of modern methods of food intolerance diagnostics, based on interpretation of the markers used in the various tests, is presented. It is shown that tests based on observing the reaction of specific antibodies of the immune system to the food antigens tested are the most accurate, reliable and representative for the diagnosis of food intolerance.

  7. Analysis and elimination method of the effects of cables on LVRT testing for offshore wind turbines

    Science.gov (United States)

    Jiang, Zimin; Liu, Xiaohao; Li, Changgang; Liu, Yutian

    2018-02-01

    The current state, characteristics and necessity of the low voltage ride through (LVRT) on-site testing for grid-connected offshore wind turbines are introduced firstly. Then the effects of submarine cables on the LVRT testing are analysed based on the equivalent circuit of the testing system. A scheme for eliminating the effects of cables on the proposed LVRT testing method is presented. The specified voltage dips are guaranteed to be in compliance with the testing standards by adjusting the ratio between the current limiting impedance and short circuit impedance according to the steady voltage relationship derived from the equivalent circuit. Finally, simulation results demonstrate that the voltage dips at the high voltage side of wind turbine transformer satisfy the requirements of testing standards.
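The steady-state relationship the abstract refers to is, in the simplest single-phase view, a voltage divider between the series current-limiting impedance and the shunt short-circuit impedance. The sketch below assumes purely resistive magnitudes in per-unit; the actual testing device involves three-phase impedances, cable effects and the wind turbine transformer, none of which are modeled here.

```python
def residual_voltage(z_sc, z_lim, v_grid=1.0):
    """Voltage at the test point of a series/shunt impedance divider:
    the shunt short-circuit impedance z_sc and the series current-limiting
    impedance z_lim set the dip depth (simplified magnitudes, per unit)."""
    return v_grid * z_sc / (z_sc + z_lim)

def impedance_ratio_for_dip(residual):
    """Ratio z_sc / z_lim needed to produce a specified residual voltage,
    solved from the divider relationship above."""
    return residual / (1.0 - residual)
```

For example, a 20 % residual voltage requires the short-circuit impedance to be one quarter of the current-limiting impedance under these simplifications.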

  8. Standard test method for electrochemical critical pitting temperature testing of stainless steels

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

1.1 This test method covers a procedure for the evaluation of the resistance of stainless steel and related alloys to pitting corrosion based on the concept of the determination of a potential independent critical pitting temperature (CPT). 1.2 This test method applies to wrought and cast products including but not restricted to plate, sheet, tubing, bar, forgings, and welds (see Note 1). Note 1—Examples of CPT measurements on sheet, plate, tubing, and welded specimens for various stainless steels can be found in Ref (1). See the research reports (Section 14). 1.3 The standard parameters recommended in this test method are suitable for characterizing the CPT of austenitic stainless steels and other related alloys with a corrosion resistance ranging from that corresponding to solution annealed UNS S31600 (Type 316 stainless steel) to solution annealed UNS S31254 (6 % Mo stainless steel). 1.4 This test method may be extended to stainless steels and other alloys related to stainless steel that have a CPT...

  9. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
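The proposed approach (Monte Carlo simulation with a bootstrap test of the indirect effect) can be sketched for a simple mediation model X → M → Y. This is a bare-bones illustration, not the bmem package: normal errors, fixed standardized paths and a percentile (rather than bias-corrected) bootstrap are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ab(x, m, y):
    # a: slope of M on X; b: partial slope of Y on M, controlling for X
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a, b

def power_bootstrap_mediation(a=0.5, b=0.5, n=100,
                              n_sim=80, n_boot=150, alpha=0.05):
    """Estimate power to detect the indirect effect a*b: simulate data,
    bootstrap the a*b estimate, reject when the percentile CI excludes 0."""
    hits = 0
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        m = a * x + rng.standard_normal(n)
        y = b * m + rng.standard_normal(n)
        ab = np.empty(n_boot)
        for k in range(n_boot):
            idx = rng.integers(0, n, n)          # resample cases with replacement
            ah, bh = fit_ab(x[idx], m[idx], y[idx])
            ab[k] = ah * bh
        lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)
    return hits / n_sim
```

With medium-to-large paths (a = b = 0.5) and n = 100 the estimated power is high; setting a = 0 should drive the rejection rate down toward the nominal alpha.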

  10. Medical microbiological analysis of Apollo-Soyuz test project crewmembers

    Science.gov (United States)

    Taylor, G. R.; Zaloguev, S. N.

    1976-01-01

    The procedures and results of the Microbial Exchange Experiment (AR-002) of the Apollo-Soyuz Test Project are described. Included in the discussion of procedural aspects are methods and materials, in-flight microbial specimen collection, and preliminary analysis of microbial specimens. Medically important microorganisms recovered from both Apollo and Soyuz crewmen are evaluated.

  11. The analysis of results of comparison test for radionuclides measurement through γ spectrum analysis from 2007 to 2012

    International Nuclear Information System (INIS)

    Wu Jialong; He Jian; Sun Wei; Wang Yun

    2013-01-01

In order to test the capability of radionuclide measurement through γ spectrum analysis and to improve the ability of its technicians through inter-laboratory comparison tests, the Gansu Center for Disease Prevention and Control participated in the comparison tests organized by the China Center for Disease Prevention and Control continuously from 2007 to 2012. All of the measured values were within the qualified range, and the relative deviation of the measured values over the entire series of comparison tests ranged from -16.31% to 11.83%. The results show that the equipment for γ spectrum measurement works normally, the analysis methods used for radionuclide measurement are correct, and the data in the issued test reports are accurate and reliable. The capability of γ spectrum analysis satisfies the requirements of China Metrology Accreditation and of occupational health technical services. (authors)

  12. Extension of the pseudo dynamic method to test structures with distributed mass

    International Nuclear Information System (INIS)

    Renda, V.; Papa, L.; Bellorini, S.

    1993-01-01

The PsD method is a mixed numerical and experimental procedure. At each time step the dynamic deformation of the structure, computed by solving the equation of motion for a given input signal, is reproduced in the laboratory by means of actuators attached to the sample at specific points. The reaction forces at those points are measured and used to compute the deformation for the next time step. Since the reaction forces are known, knowledge of the stiffness of the structure is not needed, so the method can be effective even for deformations leading to strongly nonlinear behaviour of the structure. On the other hand, the mass matrix and the applied forces must be well known. For this reason the PsD method can be applied without approximations when the masses can be considered lumped at the testing points of the sample. The present work investigates the possibility of extending the PsD method to test structures with distributed mass. A standard procedure is proposed to provide an equivalent mass matrix and force vector reduced to the testing points and to verify the reliability of the model. The verification is obtained by comparing the results of a multi-degree-of-freedom dynamic analysis, performed with a Finite Element (FE) numerical program, with a simulation of the PsD method based on the reduced mass matrix and external forces, using, in place of the experimental reactions, those computed with the general FE model. The method has been applied to a numerical simulation of the behaviour of a realistic and complex structure with distributed mass: a two-storey masonry building. The FE model consists of about two thousand degrees of freedom and the condensation has been made for four testing points. A dynamic analysis has been performed with the general FE model and the reactions of the structure have been recorded in a file and used as input for the PsD simulation with the four-degree-of-freedom model. The comparison between
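The abstract does not specify the condensation scheme used to reduce the mass matrix and force vector to the testing points; Guyan (static) condensation is one standard choice and is used below purely as an illustrative assumption.

```python
import numpy as np

def guyan_reduce(K, M, F, master):
    """Condense stiffness, mass and force to the master (testing) DOFs via
    Guyan static condensation: slave DOFs follow x_s = -Kss^{-1} Ksm x_m."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kmm = K[np.ix_(master, master)]; Kms = K[np.ix_(master, slave)]
    Ksm = K[np.ix_(slave, master)];  Kss = K[np.ix_(slave, slave)]
    # transformation from master DOFs to the full [master; slave] vector
    T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])
    order = list(master) + slave                 # reorder M and F to match T
    Mo = M[np.ix_(order, order)]
    Fo = F[order]
    K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)   # Schur complement
    M_red = T.T @ Mo @ T
    F_red = T.T @ Fo
    return K_red, M_red, F_red
```

For static loading Guyan condensation is exact: solving the reduced system reproduces the master-DOF displacements of the full model, which gives a simple check of the reduction.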

  13. Standard Test Method for Wet Insulation Integrity Testing of Photovoltaic Arrays

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

1.1 This test method covers a procedure to determine the insulation resistance of a photovoltaic (PV) array (or its component strings), that is, the electrical resistance between the array's internal electrical components and its exposed, electrically conductive, non-current-carrying parts and surfaces of the array. 1.2 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  14. A general first-order global sensitivity analysis method

    International Nuclear Information System (INIS)

    Xu Chonggang; Gertner, George Zdzislaw

    2008-01-01

    Fourier amplitude sensitivity test (FAST) is one of the most popular global sensitivity analysis techniques. The main mechanism of FAST is to assign each parameter with a characteristic frequency through a search function. Then, for a specific parameter, the variance contribution can be singled out of the model output by the characteristic frequency. Although FAST has been widely applied, there are two limitations: (1) the aliasing effect among parameters by using integer characteristic frequencies and (2) the suitability for only models with independent parameters. In this paper, we synthesize the improvement to overcome the aliasing effect limitation [Tarantola S, Gatelli D, Mara TA. Random balance designs for the estimation of first order global sensitivity indices. Reliab Eng Syst Safety 2006; 91(6):717-27] and the improvement to overcome the independence limitation [Xu C, Gertner G. Extending a global sensitivity analysis technique to models with correlated parameters. Comput Stat Data Anal 2007, accepted for publication]. In this way, FAST can be a general first-order global sensitivity analysis method for linear/nonlinear models with as many correlated/uncorrelated parameters as the user specifies. We apply the general FAST to four test cases with correlated parameters. The results show that the sensitivity indices derived by the general FAST are in good agreement with the sensitivity indices derived by the correlation ratio method, which is a non-parametric method for models with correlated parameters
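A minimal version of classic FAST with integer characteristic frequencies (the variant whose aliasing and independence limitations the paper addresses) can be sketched as follows. The search function, frequency set and harmonic count below are textbook choices, not the generalized method proposed in the paper.

```python
import numpy as np

def fast_first_order(model, freqs, n=1025, harmonics=4):
    """Classic FAST for independent uniform [0, 1] inputs: assign each
    parameter an integer frequency, sample along a space-filling curve, and
    attribute output variance to each parameter via the Fourier spectrum at
    harmonics of its characteristic frequency."""
    s = np.pi * (2 * np.arange(1, n + 1) - n - 1) / n      # points in (-pi, pi)
    x = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi  # search function
    y = model(x)
    j = np.arange(1, harmonics * max(freqs) + 1)           # analysed frequencies
    A = (np.cos(np.outer(j, s)) @ y) / n                   # Fourier cosine coeffs
    B = (np.sin(np.outer(j, s)) @ y) / n                   # Fourier sine coeffs
    spectrum = A**2 + B**2
    total = spectrum.sum()
    return np.array([
        spectrum[[p * w - 1 for p in range(1, harmonics + 1)]].sum() / total
        for w in freqs
    ])

# toy model with known first-order shares: S1 = 1/5, S2 = 4/5
S = fast_first_order(lambda x: x[0] + 2 * x[1], freqs=[11, 35])
```

The frequencies 11 and 35 are chosen so their first four harmonics do not overlap, avoiding the aliasing the paper discusses; the sample size must exceed twice the highest analysed harmonic.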

  15. Three-Phase Harmonic Analysis Method for Unbalanced Distribution Systems

    Directory of Open Access Journals (Sweden)

    Jen-Hao Teng

    2014-01-01

    Full Text Available Due to the unbalanced features of distribution systems, a three-phase harmonic analysis method is essential to accurately analyze the harmonic impact on distribution systems. Moreover, harmonic analysis is the basic tool for harmonic filter design and harmonic resonance mitigation; therefore, the computational performance should also be efficient. An accurate and efficient three-phase harmonic analysis method for unbalanced distribution systems is proposed in this paper. The variations of bus voltages, bus current injections and branch currents affected by harmonic current injections can be analyzed by two relationship matrices developed from the topological characteristics of distribution systems. Some useful formulas are then derived to solve the three-phase harmonic propagation problem. After the harmonic propagation for each harmonic order is calculated, the total harmonic distortion (THD for bus voltages can be calculated accordingly. The proposed method has better computational performance, since the time-consuming full admittance matrix inverse employed by the commonly-used harmonic analysis methods is not necessary in the solution procedure. In addition, the proposed method can provide novel viewpoints in calculating the branch currents and bus voltages under harmonic pollution which are vital for harmonic filter design. Test results demonstrate the effectiveness and efficiency of the proposed method.
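Once the harmonic propagation has been solved for each order, the THD computation mentioned above is straightforward: the RMS of the harmonic voltage components divided by the fundamental. The harmonic magnitudes in the example are hypothetical per-unit values, not results from the paper.

```python
import math

def total_harmonic_distortion(v_fundamental, v_harmonics):
    """THD of a bus voltage: RMS of the harmonic components over the
    fundamental component."""
    return math.sqrt(sum(v * v for v in v_harmonics)) / v_fundamental

# e.g. hypothetical 5th and 7th harmonic voltages on a 1.0 p.u. fundamental
thd = total_harmonic_distortion(1.0, [0.05, 0.03])
```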

  16. On sine dwell or broadband methods for modal testing

    Science.gov (United States)

    Chen, Jay-Chung; Wada, Ben K.

    1987-01-01

For large, complex spacecraft structural systems, the objectives of the modal test are outlined. Based on these objectives, the comparison criteria for the modal test methods, namely the broadband excitation and sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantages and disadvantages of each method are examined. The usefulness and shortcomings of the methods are presented from a practicing engineer's viewpoint.

  17. A statistical test for outlier identification in data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-09-01

Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these ‘‘deterministic’’ frontier models, statistical theory is now mostly available. This paper deals with the statistical pared-sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
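A full implementation solves one DEA linear program per deleted observation; to keep a sketch self-contained, the toy below assumes a single input and single output, where CCR efficiency reduces to an output/input ratio against the best performer. The deletion loop mirrors the pared-sample idea: remove each unit once and watch how the efficiency distribution shifts.

```python
def efficiencies(units):
    """CCR efficiency in the single-input, single-output special case:
    each unit's output/input ratio relative to the best ratio observed."""
    best = max(y / x for x, y in units)
    return [(y / x) / best for x, y in units]

def pared_shifts(units):
    """Delete each unit in turn and record how much the mean efficiency of
    the remaining units moves; a large positive shift flags the deleted unit
    as a potential outlier (it was defining the frontier)."""
    base = sum(efficiencies(units)) / len(units)
    shifts = []
    for i in range(len(units)):
        rest = units[:i] + units[i + 1:]
        shifts.append(sum(efficiencies(rest)) / len(rest) - base)
    return shifts
```

For example, in a sample where one unit's ratio dwarfs the rest, deleting that unit produces by far the largest shift in mean efficiency.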

  18. Performing Pumping Test Data Analysis Applying Cooper-Jacob’s Method for Estimating of the Aquifer Parameters

    Directory of Open Access Journals (Sweden)

    Dana Khider Mawlood

    2016-06-01

Full Text Available A single well test is more common than an aquifer test with an observation well, since the pumping test can be conducted on the production well itself in the absence of an observation well. One kind of single well test, the step-drawdown test, is used to determine the efficiency and specific capacity of the well. In a single well test it is possible to estimate transmissivity, but the other parameter, storativity, is overestimated. The aim of this study is to analyze four pumping test data sets from the KAWRGOSK area by using Cooper-Jacob's (1946) time-drawdown approximation of the Theis method to estimate the aquifer parameters, to determine the reasons affecting the reliability of the storativity value, and to draw out the important practical aspects behind this.
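Cooper-Jacob's straight-line method fits drawdown against log time; transmissivity follows from the slope and storativity from the zero-drawdown intercept. The sketch below fits synthetic data generated from the same approximation; the discharge, radius and aquifer values are assumed for illustration (and, as the paper notes, storativity inferred from a single-well test is unreliable in practice).

```python
import numpy as np

def cooper_jacob_fit(t, s, Q, r):
    """Fit drawdown s versus log10(t) and recover transmissivity T and
    storativity S from the Cooper-Jacob straight-line approximation:
        s = (2.303*Q / (4*pi*T)) * log10(2.25*T*t / (r**2 * S))
    """
    slope, intercept = np.polyfit(np.log10(t), s, 1)  # slope = drawdown per log cycle
    T = 2.303 * Q / (4.0 * np.pi * slope)
    t0 = 10.0 ** (-intercept / slope)                 # time where the line hits s = 0
    S = 2.25 * T * t0 / r**2
    return T, S

# synthetic data generated from the same approximation (assumed values)
Q, r, T_true, S_true = 0.01, 0.2, 0.005, 1e-4        # m3/s, m, m2/s, -
t = np.array([60.0, 120.0, 300.0, 600.0, 1800.0, 3600.0])   # s
s = (2.303 * Q / (4 * np.pi * T_true)) * np.log10(2.25 * T_true * t / (r**2 * S_true))
T_est, S_est = cooper_jacob_fit(t, s, Q, r)
```

Because the data lie exactly on the Jacob line, the fit recovers the assumed parameters; with field data the scatter and the early-time validity condition of the approximation limit the accuracy, especially for S.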

  19. Methods for testing transport models

    International Nuclear Information System (INIS)

    Singer, C.; Cox, D.

    1991-01-01

    Substantial progress has been made over the past year on six aspects of the work supported by this grant. As a result, we have in hand for the first time a fairly complete set of transport models and improved statistical methods for testing them against large databases. We also have initial results of such tests. These results indicate that careful application of presently available transport theories can reasonably well produce a remarkably wide variety of tokamak data

  20. Mathematical methods in time series analysis and digital image processing

    CERN Document Server

    Kurths, J; Maass, P; Timmer, J

    2008-01-01

The aim of this volume is to bring together research directions in theoretical signal and imaging processing developed rather independently in electrical engineering, theoretical physics, mathematics and the computer sciences. In particular, mathematically justified algorithms and methods, the mathematical analysis of these algorithms and methods, as well as the investigation of connections between methods from time series analysis and image processing are reviewed. An interdisciplinary comparison of these methods, drawing upon common sets of test problems from medicine and the geophysical/environmental sciences, is also addressed. This volume coherently summarizes work carried out in the field of theoretical signal and image processing. It focuses on non-linear and non-parametric models for time series as well as on adaptive methods in image processing.

  1. Optimization of tensile method and specimen geometry in modified ring tensile test

    International Nuclear Information System (INIS)

    Kitano, Koji; Fuketa, Toyoshi; Sasajima, Hideo; Uetsuka, Hiroshi

    2001-03-01

Several techniques of ring tensile testing have been proposed in order to evaluate the mechanical properties of cladding under the hoop loading condition caused by pellet/cladding mechanical interaction (PCMI). In the modified techniques, a variety of tensile methods and specimen geometries have been proposed in order to limit deformation to the gauge section. However, the tensile method and the specimen geometry had not been settled in the modified techniques. In the present study, we have investigated the tensile method and the specimen geometry through finite element method (FEM) analysis of specimen deformation and tensile tests on specimens with various gauge section geometries. When two-piece tensile tooling is used, the mechanical properties under the hoop loading condition can be correctly evaluated when the deformation part (gauge section) is placed on top of a half-mandrel and the friction between the specimen and the half-mandrel is reduced with Teflon tape. In addition, we have shown the optimum specimen geometry for PWR 17 by 17 type cladding. (author)

  2. Light scattering methods to test inorganic PCMs for application in buildings

    Science.gov (United States)

    De Paola, M. G.; Calabrò, V.; De Simone, M.

    2017-10-01

    Thermal performance and stability over time are key parameters for the characterization and application of PCMs in the building sector. Generally, inorganic PCMs are dispersions of hydrated salts and additives in water that counteract phase segregation phenomena and subcooling. Traditional or in-house methods can be used for evaluating thermal properties, while stability can be estimated over time by using optical techniques. Considering this double approach, in this work thermal and structural analyses of Glauber salt based composite PCMs are conducted by means of non-conventional equipment: the T-history method (thermal analysis) and Turbiscan (stability analysis). Three samples with the same composition (Glauber salt with additives) were prepared using different sonication times, and their thermal performances were compared by testing both thermal cycling and thermal properties. The stability of the mixtures was verified by the identification of destabilization phenomena, the evaluation of the migration velocities of particles and the estimation of the variation of particle size.

  3. Final Test Analysis of Post Graduate Medical Residents

    Directory of Open Access Journals (Sweden)

    Maliheh Arab

    2009-04-01

    Full Text Available Background and purpose: Multiple choice questions are the most frequent test format for medical students, and it is important to analyze the overall response to individual questions in a test. The aim of this study is to analyze the questions of postgraduate medical residency tests. Methods: Final annual local (Ramadan medical school) and national tests given in 2004 to three residency groups, comprising 17 obstetrics and gynecology, 7 pediatrics and 12 internal medicine testees, were studied. In the local tests residents answered 148, 150 and 144 MCQs respectively, and in the national tests 150 MCQs. Questions were evaluated regarding cognitive domain level, difficulty index and discriminative index, and finally classified as optimal, proper, acceptable or "must be omitted" questions. Results: Questions of the local obstetrics and gynecology, pediatrics and internal medicine tests evaluated the "recall" level in 72%, 72% and 51%, and in the national tests in 71%, 35% and 19%, respectively. Questions with discriminative indices of 0.7 or more (proper) were 3% and 5% in obstetrics and gynecology, 3.5% and 1% in pediatrics, and 1% in the local and national tests. Proper difficulty indices (30-70%) were shown in 53% and 54% in obstetrics and gynecology, 34% and 43% in pediatrics, and 40% and 42% in internal medicine. Evaluating overall, "must be omitted" questions in the local and national tests were 76% in obstetrics and gynecology, 81% and 79% in pediatrics, and 91% and 85% in internal medicine. The most common causes making a question "must be omitted" in the studied tests were negative, zero or less than 0.2 discriminative indices. Conclusion: Test analysis of the final annual local (Ramadan medical school) and national tests of the obstetrics and gynecology, pediatrics and internal medicine residency programs in 2004 revealed that most of the questions are planned at the "recall" level and harbor improper

  4. Synthesis of Enterprise and Value-Based Methods for Multiattribute Risk Analysis

    International Nuclear Information System (INIS)

    Kenley, C. Robert; Collins, John W.; Beck, John M.; Heydt, Harold J.; Garcia, Chad B.

    2001-01-01

    This paper describes a method for performing multiattribute decision analysis to prioritize approaches to handling risks during the development and operation of complex socio-technical systems. The method combines risk categorization based on enterprise views, risk prioritization of the categories based on the Analytic Hierarchy Process (AHP), and more standard probability-consequence rating schemes. We also apply value-based testing methods used in software development to prioritize risk-handling approaches. We describe a tool that synthesizes the methods and performs a multiattribute analysis of the technical and programmatic risks on the Next Generation Nuclear Plant (NGNP) enterprise.

  5. Comparative study of in-situ filter test methods

    International Nuclear Information System (INIS)

    Marshall, M.; Stevens, D.C.

    1981-01-01

    Available methods of testing high efficiency particulate aerosol (HEPA) filters in-situ have been reviewed. In order to understand the relationship between the results produced by different methods a selection has been compared. Various pieces of equipment for generating and detecting aerosols have been tested and their suitability assessed. Condensation-nuclei, DOP (di-octyl phthalate) and sodium-flame in-situ filter test methods have been studied, using the 500 cfm (9000 m³/h) filter test rig at Harwell and in the field. Both the sodium-flame and DOP methods measure the penetration through leaks and filter material. However the measured penetration through filtered leaks depends on the aerosol size distribution and the detection method. Condensation-nuclei test methods can only be used to measure unfiltered leaks since condensation nuclei have a very low penetration through filtered leaks. A combination of methods would enable filtered and unfiltered leaks to be measured. A condensation-nucleus counter using n-butyl alcohol as the working fluid has the advantage of being able to detect any particle up to 1 μm in diameter, including DOP, and so could be used for this purpose. A single-particle counter has not been satisfactory because of interference from particles leaking into systems under extract, particularly downstream of filters, and because the concentration of the input aerosol has to be severely limited. The sodium-flame method requires a skilled operator and may cause safety and corrosion problems. The DOP method using a total light scattering detector has so far been the most satisfactory. It is fairly easy to use, measures reasonably low values of penetration and gives rapid results. DOP has had no adverse effect on HEPA filters over a long series of tests.
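The penetration figures discussed above reduce to a ratio of downstream to upstream aerosol concentration. A minimal sketch in Python; the concentration values are illustrative, not measurements from the Harwell rig:

```python
def penetration(upstream, downstream):
    """Filter penetration in percent: the fraction of the challenge
    aerosol concentration that is measured downstream of the filter."""
    return 100.0 * downstream / upstream

# Illustrative photometer readings for a HEPA filter test
up, down = 1.0e6, 50.0
p = penetration(up, down)
efficiency = 100.0 - p
print(p, efficiency)  # 0.005 % penetration -> 99.995 % efficiency
```

Low measured penetration like this can still hide a mix of filtered and unfiltered leak paths, which is why the abstract argues for combining methods.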

  6. Fluorescent penetration crack testing method

    International Nuclear Information System (INIS)

    Roth, A.

    1979-01-01

    The same cleaning, penetration, washing, development and evaluation agents are used in this method as in known methods. In order to accelerate and shorten testing, the material surface is dried only to optical dryness by blowing pressurized air onto it, development is performed by simply pressing the material into the developer or dusting it with the developer, and the washing water temperature is kept within the room temperature range. (RW)

  7. Improved sensitivity testing of explosives using transformed Up-Down methods

    International Nuclear Information System (INIS)

    Brown, Geoffrey W

    2014-01-01

    Sensitivity tests provide data that help establish guidelines for the safe handling of explosives. Any sensitivity test is based on assumptions to simplify the method or reduce the number of individual sample evaluations. Two common assumptions that are not typically checked after testing are 1) explosive response follows a normal distribution as a function of the applied stimulus levels and 2) the chosen test level spacing is close to the standard deviation of the explosive response function (for Bruceton Up-Down testing for example). These assumptions and other limitations of traditional explosive sensitivity testing can be addressed using Transformed Up-Down (TUD) test methods. TUD methods have been developed extensively for psychometric testing over the past 50 years and generally use multiple tests at a given level to determine how to adjust the applied stimulus. In the context of explosive sensitivity we can use TUD methods that concentrate testing around useful probability levels. Here, these methods are explained and compared to Bruceton Up-Down testing using computer simulation. The results show that the TUD methods are more useful for many cases but that they do require more tests as a consequence. For non-normal distributions, however, the TUD methods may be the only accurate assessment method.
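The Bruceton scheme that the TUD methods are compared against can be simulated directly. The sketch below uses hypothetical stimulus units and assumes the normal response function named in the abstract's first assumption: the level steps down after a "go" and up after a "no-go":

```python
import random
from statistics import NormalDist, mean

def bruceton_updown(mu, sigma, start, step, n_trials, seed=0):
    """Simulate a Bruceton up-down sensitivity test.

    Assumes the explosive's response threshold is normal(mu, sigma), so a
    trial at stimulus level x responds ("go") with probability
    Phi((x - mu) / sigma).  After a go the next level drops by `step`;
    after a no-go it rises by `step`.
    """
    rng = random.Random(seed)
    threshold = NormalDist(mu, sigma)
    level, levels = start, []
    for _ in range(n_trials):
        levels.append(level)
        go = rng.random() < threshold.cdf(level)
        level += -step if go else step
    # Crude median estimate: average the tested levels after a short burn-in.
    return mean(levels[10:])

est = bruceton_updown(mu=50.0, sigma=5.0, start=40.0, step=5.0, n_trials=400)
print(round(est, 1))  # oscillates around, and so estimates, the 50% level
```

Note the step size is set equal to sigma here, matching the second assumption discussed above; a TUD rule would instead require several same-level results before moving, concentrating trials near a chosen probability level.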

  8. In Silico Toxicology – Non-Testing Methods

    Science.gov (United States)

    Raunio, Hannu

    2011-01-01

    In silico toxicology in its broadest sense means “anything that we can do with a computer in toxicology.” Many different types of in silico methods have been developed to characterize and predict toxic outcomes in humans and environment. The term non-testing methods denote grouping approaches, structure–activity relationship, and expert systems. These methods are already used for regulatory purposes and it is anticipated that their role will be much more prominent in the near future. This Perspective will delineate the basic principles of non-testing methods and evaluate their role in current and future risk assessment of chemical compounds. PMID:21772821

  9. Flight Test Results of a GPS-Based Pitot-Static Calibration Method Using Output-Error Optimization for a Light Twin-Engine Airplane

    Science.gov (United States)

    Martos, Borja; Kiszely, Paul; Foster, John V.

    2011-01-01

    As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeed with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
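At its core, the output-error idea is fitting a pressure/airspeed error model to GPS-derived truth data by least squares. A minimal sketch with a hypothetical linear error model and made-up calibration points; the NASA method fits richer models and attaches confidence bounds:

```python
def fit_line(x, y):
    """Least-squares fit y ~ a + b*x via the normal equations."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical calibration points: indicated airspeed (kt) versus airspeed
# error (GPS-derived true airspeed minus indicated), assumed linear in speed.
ias = [80.0, 100.0, 120.0, 140.0, 160.0]
err = [1.9, 2.5, 3.1, 3.4, 4.0]
a, b = fit_line(ias, err)
print(round(a, 2), round(b, 4))  # intercept ~ -0.08 kt, slope ~ 0.0255
```

In the flight-test setting the "truth" side of each point comes from GPS ground speed flown on reciprocal or triangular headings to cancel wind, which is what lets the maneuvers stay inside a confined test area.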

  10. Optimized Method for Knee Displacement Measurement in Vehicle Sled Crash Test

    Directory of Open Access Journals (Sweden)

    Sun Hang

    2017-01-01

    Full Text Available This paper provides an optimized method for measuring a dummy's knee displacement in a vehicle sled crash test. The proposed method utilizes completely new measurement elements, namely the acceleration and angular velocity of the dummy's pelvis and the rotational angle of its femur. Compared with the traditional measurement using only camera-based high-speed motion image analysis, the optimized method can not only maintain the measuring accuracy, but also avoid the disturbance caused by dummy movement, dashboard blocking and knee deformation during the crash. An experiment is made to verify the accuracy of the proposed method, which eliminates the strong dependence on single target tracing in the traditional method. Moreover, it is very appropriate for calculating the penetration depth into the dashboard.
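Recovering displacement from an accelerometer trace, as the pelvis-mounted sensors above require, amounts to double numerical integration. A minimal sketch with a synthetic constant-acceleration signal; real crash data would also need bias removal and the femur-angle correction described in the abstract:

```python
def cumtrapz(samples, dt):
    """Cumulative trapezoidal integral of a uniformly sampled signal."""
    out = [0.0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

dt = 0.001
n = 1001                     # 1 s of data at 1 kHz
accel = [2.0] * n            # constant 2 m/s^2, e.g. pelvis x-acceleration
vel = cumtrapz(accel, dt)    # v(t) = 2 t
disp = cumtrapz(vel, dt)     # s(t) = t^2
print(round(disp[-1], 3))    # 1.0 m after 1 s
```

For this polynomial test signal the trapezoidal rule is exact, which makes it a convenient sanity check before applying the same pipeline to noisy sensor data.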

  11. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

    Science.gov (United States)

    Wilson, R. G.

    1975-01-01

    The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

  12. Standard Test Method for Sizing and Counting Particulate Contaminant In and On Clean Room Garments

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2000-01-01

    1.1 This test method covers the determination of detachable particulate contaminant 5 μm or larger, in and on the fabric of clean room garments. 1.2 This test method does not apply to nonporous fabrics such as Tyvek or Gortex. It only applies to fabrics that are porous such as cotton or polyester. 1.3 The values stated in SI units are to be regarded as the standard. The inch-pound values given in parentheses are for information only. 1.4 This test method provides not only the traditional optical microscopic analysis but also a size distribution and surface obscuration analysis for particles on a fine-textured membrane filter or in a tape lift sample. It utilizes transmitted illumination to render all particles darker than the background for gray level detection. Particles collected on opaque plates must be transferred to a suitable membrane filter. This standard may involve hazardous materials, operations, and equipment. This standard does not purport to address all of the safety concerns, if any, associat...

  13. Reduction Methods for Real-time Simulations in Hybrid Testing

    DEFF Research Database (Denmark)

    Andersen, Sebastian

    2016-01-01

    Hybrid testing constitutes a cost-effective experimental full scale testing method. The method was introduced in the 1960's by Japanese researchers as an alternative to conventional full scale testing and small scale material testing, such as shake table tests. The principle of the method is to divide a structure into a physical substructure and a numerical substructure, and couple these in a test. If the test is conducted in real-time it is referred to as real-time hybrid testing. The hybrid testing concept has developed significantly since its introduction in the 1960's, both with respect … A test is performed on a glass fibre reinforced polymer composite box girder. The test serves as a pilot test for prospective real-time tests on a wind turbine blade. The Taylor basis is implemented in the test, used to perform the numerical simulations. Despite a number of introduced errors in the real-time …

  14. A Method to Analyze Threats and Vulnerabilities by Using a Cyber Security Test-bed of an Operating NPP

    International Nuclear Information System (INIS)

    Kim, Yong Sik; Son, Choul Woong; Lee, Soo Ill

    2016-01-01

    In order to implement cyber security controls for an operating NPP, a security assessment should be conducted in advance, and it is essential to analyze threats and vulnerabilities in the cyber security risk assessment phase. It might be impossible to perform a penetration test or vulnerability scanning because the test may cause adverse effects on the inherent functions of the systems. This is the reason why we develop and construct a cyber security test-bed instead of using the real I and C systems in the operating NPP. In this paper, we propose a method to analyze threats and vulnerabilities of a specific target system by using a cyber security test-bed. The test-bed is being developed considering essential functions of the selected safety and non-safety systems. In order to develop the cyber security test-bed with both safety and non-safety functions, a test-bed functions analysis and a preliminary identification of threats and vulnerabilities have been conducted. We will determine the attack scenarios and conduct the test-bed based vulnerability analysis

  15. A Method to Analyze Threats and Vulnerabilities by Using a Cyber Security Test-bed of an Operating NPP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Sik; Son, Choul Woong; Lee, Soo Ill [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    In order to implement cyber security controls for an operating NPP, a security assessment should be conducted in advance, and it is essential to analyze threats and vulnerabilities in the cyber security risk assessment phase. It might be impossible to perform a penetration test or vulnerability scanning because the test may cause adverse effects on the inherent functions of the systems. This is the reason why we develop and construct a cyber security test-bed instead of using the real I and C systems in the operating NPP. In this paper, we propose a method to analyze threats and vulnerabilities of a specific target system by using a cyber security test-bed. The test-bed is being developed considering essential functions of the selected safety and non-safety systems. In order to develop the cyber security test-bed with both safety and non-safety functions, a test-bed functions analysis and a preliminary identification of threats and vulnerabilities have been conducted. We will determine the attack scenarios and conduct the test-bed based vulnerability analysis.

  16. Test Method for Spalling of Fire Exposed Concrete

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl; Sørensen, Lars Schiøtt

    2005-01-01

    A new material test method is presented for determining whether or not an actual concrete may suffer from explosive spalling at a specified moisture level. The method takes into account the effect of stresses from hindered thermal expansion at the fire-exposed surface. Cylinders are used, which in many countries serve as standard specimens for testing the compressive strength. Consequently, the method is quick, cheap and easy to use in comparison to the alternative of testing full-scale or semi full-scale structures with correct humidity, load and boundary conditions. A number of concretes have …

  17. Climate-conscious architecture. Design and wind testing method for climates in change

    Energy Technology Data Exchange (ETDEWEB)

    Kuismanen, K.

    2008-07-01

    The main objective of this research was to develop practical tools with which it is possible to improve the environment, micro-climate and energy economy of buildings and plans in different climate zones, and to take climate change into account. The parts of the study are: a state-of-the-art study of existing know-how about climate and planning; a study of the effects of climate change on the built environment; development of simple micro-climate, nature and built environment analysis methods; definition of the criteria for an acceptable micro-climatic environment; development of the wind test blower; presentation of ways to interpret test results and draw conclusions; and development of planning and design guidelines for different climate zones. An important part of the research is the development of the CASE wind test instrument, different wind simulation techniques, and the methods of observing the results. Bioclimatic planning and architectural design guidelines for different climate zones are produced. The analysis tools developed give a qualitative overall view, which can be deepened towards a quantitative analysis with wind testing measurements and roughness calculations. No mechanical rules are suggested; instead, complementary viewpoints and practices are introduced into a normal planning process, along with improvement of consultative knowledge. The 'method' is that there is no strict mechanical method, but a deeper understanding of bioclimatic matters. Climate-conscious planning with the developed CASE method makes it possible to design a better micro-climate for new or old built-up areas. Winds can be used to ventilate exhaust fumes and other pollutants, which improves the quality of air and the healthiness of the urban environment. The analyses and scale-model tests make it possible to shield cold windy areas and to diminish the cooling effect of wind on facades. According to studies in Scandinavian countries this will bring energy savings of 5-15 per cent. The method can

  18. Aquifer test interpretation using derivative analysis and diagnostic plots

    Science.gov (United States)

    Hernández-Espriú, Antonio; Real-Rangel, Roberto; Cortés-Salazar, Iván; Castro-Herrera, Israel; Luna-Izazaga, Gabriela; Sánchez-León, Emilio

    2017-04-01

    Pumping tests remain a method of choice to deduce fundamental aquifer properties and to assess well condition. In the oil and gas (O&G) industry, well testing has been the core technique in examining reservoir behavior over the last 50 years. The pressure derivative introduced by Bourdet is perhaps the most significant single development in the history of well test analysis. Recently, the so-called diagnostic plots (e.g. drawdown and drawdown derivative in a log-log plot) have been successfully tested in aquifers. However, this procedure is still underutilized by groundwater professionals. This research illustrates the applicability range, advantages and drawbacks (e.g. smoothing procedures) of diagnostic plots using field examples from a wide spectrum of tests (short/long tests, constant/variable flow rates, drawdown/buildup stages, pumping well/observation well) in dissimilar geological conditions. We analyze new and pre-existent aquifer tests in Mexico, USA, Canada, Germany, France and Saudi Arabia. In constant flow rate tests, our results show that derivative analysis is an easy, robust and powerful tool to assess near-borehole damage effects, formation heterogeneity, boundaries, flow regimes, infinite-acting radial stages, i.e., valid Theisian framework, and fracture-driven flow. In step tests, the effectiveness relies on high-frequency drawdown measurements. Moreover, we adapt O&G analytical solutions to cater for the conditions in groundwater systems. In this context, further parameters can be computed analytically from the plots, such as skin factor, head losses, wellbore storage, distance to the boundary, channel-aquifer and/or fracture zone width, among others. Therefore, diagnostic plots should be considered a mandatory tool for pumping test analysis among hydrogeologists. This project has been supported by DGAPA (UNAM) under the research project PAPIIT IN-112815.
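The central quantity of a diagnostic plot is the Bourdet logarithmic derivative, ds/d(ln t). A minimal sketch using the Cooper-Jacob approximation of the Theis solution with illustrative parameter values: for infinite-acting radial flow the derivative plots as a flat plateau at Q/(4πT), which is the signature the abstract calls a "valid Theisian framework":

```python
import math

Q, T, S, r = 0.01, 1e-3, 1e-4, 0.1   # illustrative SI values (m3/s, m2/s, -, m)
m = Q / (4 * math.pi * T)            # expected derivative plateau

def drawdown(t):
    # Cooper-Jacob (late-time) approximation of the Theis solution
    return m * math.log(2.25 * T * t / (r * r * S))

times = [10 * 1.3 ** i for i in range(30)]   # log-spaced sampling
s = [drawdown(t) for t in times]

# Bourdet-style logarithmic derivative ds/d(ln t) by central differences
deriv = []
for i in range(1, len(times) - 1):
    dlnt = math.log(times[i + 1]) - math.log(times[i - 1])
    deriv.append((s[i + 1] - s[i - 1]) / dlnt)

print(all(abs(d - m) / m < 1e-6 for d in deriv))  # True: flat plateau
```

On field data the differences are noisy, which is why the smoothing procedures mentioned above (e.g. widening the differentiation window in ln t) matter in practice.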

  19. Evaluation of auto-assessment method for C-D analysis based on support vector machine

    International Nuclear Information System (INIS)

    Takei, Takaaki; Ikeda, Mitsuru; Imai, Kuniharu; Kamihira, Hiroaki; Kishimoto, Tomonari; Goto, Hiroya

    2010-01-01

    Contrast-Detail (C-D) analysis is one of the visual quality assessment methods in medical imaging, and many auto-assessment methods for C-D analysis have been developed in recent years. However, for these auto-assessment methods, the effects of nonlinear image processing are not clear. Therefore, we have developed an auto-assessment method for C-D analysis using a support vector machine (SVM), and have evaluated its performance for images processed with a noise reduction method. The feature indexes used in the SVM were the normalized cross correlation (NCC) coefficient on each signal between the noise-free and noised image, the contrast to noise ratio (CNR) on each signal, the radius of each signal, and the Student's t-test statistic for the mean difference between the signal and background pixel values. The results showed that the auto-assessment method for C-D analysis using the Student's t-test statistic agreed well with the visual assessment for the non-processed images, but disagreed for the images processed with the noise reduction method. Our results also showed that the auto-assessment method for C-D analysis by the SVM based on NCC and CNR agreed well with the visual assessment for both the non-processed and noise-reduced images. Therefore, the auto-assessment method for C-D analysis by the SVM is expected to be robust to non-linear image processing. (author)
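Two of the SVM feature indexes named above, NCC and CNR, are straightforward to compute. A minimal sketch on toy 1-D patch data; in actual use these would be evaluated over each signal region of the C-D phantom image:

```python
from statistics import mean, pstdev

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equal-size patches."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def cnr(signal, background):
    """Contrast-to-noise ratio of a signal region against its background."""
    return (mean(signal) - mean(background)) / pstdev(background)

clean = [10, 10, 20, 20, 10, 10, 20, 20]   # noise-free patch values
noisy = [11, 9, 21, 19, 10, 12, 19, 21]    # same patch after noise
print(round(ncc(clean, noisy), 2))         # close to 1: structure preserved
sig, bg = [20, 21, 19, 22], [10, 11, 9, 10, 12, 8]
print(round(cnr(sig, bg), 2))              # high: signal well above noise
```

A noise reduction filter typically raises CNR while possibly lowering NCC against the noise-free reference, which is exactly the trade-off the SVM features let the classifier weigh.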

  20. 40 CFR Table 3 of Subpart Aaaaaaa... - Test Methods

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Subpart AAAAAAA, Table 3—Test Methods. For selecting the sampling locations and the number of traverse points, you must use EPA test method 1 or 1A in …

  1. Uncertainty Analysis of Resistance Tests in Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University

    Directory of Open Access Journals (Sweden)

    Cihad DELEN

    2015-12-01

    Full Text Available In this study, systematic resistance tests performed in the Ata Nutku Ship Model Testing Laboratory of Istanbul Technical University (ITU) have been analyzed in order to determine their uncertainties. Experiments conducted in the framework of mathematical and physical rules for the solution of engineering problems, together with the associated measurements and calculations, include uncertainty. To question the reliability of the obtained values, the existing uncertainties should be expressed as quantities; if the uncertainty of a measurement system is not known, its results do not carry a universal value. On the other hand, resistance is one of the most important parameters that should be considered in the process of ship design. Ship resistance during the design phase cannot be determined precisely and reliably due to the uncertainty sources taken into account in determining the resistance value. This may cause negative effects on providing the required specifications in the later design steps. The uncertainty arising from the resistance test has been estimated and compared for a displacement type ship and for high speed marine vehicles according to the ITTC 2002 and ITTC 2014 regulations on uncertainty analysis methods. The advantages and disadvantages of both ITTC uncertainty analysis methods are also discussed.

  2. Comparison of test methods for mould growth in buildings

    DEFF Research Database (Denmark)

    Bonderup, Sirid; Gunnarsen, Lars Bo; Knudsen, Sofie Marie

    2016-01-01

    The purpose of this work is to compare a range of test methods and kits for assessing whether a building structure is infested by mould fungi. A further purpose of this work is to evaluate whether air-based methods for sampling fungal emissions provide information qualifying decisions concerning … renovation needs. This is of importance when hidden surface testing would require destructive measures and subsequent renovation. After identifying available methods on the Danish market for assessing mould growth in dwellings, a case study was conducted to test the usefulness of the methods in four … The methods measure different aspects relating to mould growth and vary in selectivity and precision. The two types of air samples indicated low levels of mould growth, even where the results of the other methods indicated high to moderate growth. With methods based on culture and DNA testing some differences …

  3. Detection of progression of glaucomatous visual field damage using the point-wise method with the binomial test.

    Science.gov (United States)

    Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo

    2013-01-01

    To compare the performance of the newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR), in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open angle glaucoma patients (6.0 ± 1.5 [mean ± standard deviation] years). Using the total deviation of each point on the 2nd to 16th VFs (VF2-16), linear regression analysis was carried out. The number of VF test points with a significant trend at various probability levels (p …) was assessed with the binomial test (one-sided). A VF series was defined as "significant" if the median p-value from the binomial test was … The … of the binomial PLR method (0.14 to 0.86) was significantly higher than that of MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial PLR method gives more consistent results than MD trend analysis and PoPLR, hence it will be helpful as a tool to 'flag' possible VF deterioration.
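The one-sided binomial test at the heart of the binomial PLR approach can be sketched in a few lines. The point counts below are hypothetical; 52 is the usual number of non-blind-spot locations in a 24-2 field:

```python
from math import comb

def binom_tail(k, n, p):
    """One-sided binomial p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Suppose 8 of 52 VF test points show a negative trend significant at
# p < 0.05.  Under the null, such points arise at rate 0.05, so observing
# 8 or more by chance alone is unlikely:
p_value = binom_tail(8, 52, 0.05)
print(p_value < 0.01)  # True
```

Testing the point counts jointly like this, rather than flagging on any single point, is what buys the method its consistency across repeated VF series.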

  4. 40 CFR Appendix A to Part 63 - Test Methods

    Science.gov (United States)

    2010-07-01

    40 CFR Part 63, Appendix A—Test Methods. Method 301—Field Validation of Pollutant Measurement Methods from Various Waste Media. 1. Applicability and principle. 1.1 Applicability. This method, as specified in the applicable …

  5. EPA flow reference method testing and analysis: Findings report

    International Nuclear Information System (INIS)

    1999-06-01

    This report describes an experimental program sponsored by the US Environmental Protection Agency (EPA) to evaluate potential improvements to the Agency's current reference method for measuring volumetric flow (Method 2, 40 CFR Part 60, Appendix B). Method 2 (Determination of Stack Gas Velocity and Volumetric Flow Rate (Type S Pitot Tube)) specifies measurements to determine volumetric flow, but does not prescribe specific procedures to account for yaw or pitch angles of flow when the flow in the stack is not axial. Method 2 also allows the use of only two probe types, the Type S and the Prandtl

  6. ASTM Validates Air Pollution Test Methods

    Science.gov (United States)

    Chemical and Engineering News, 1973

    1973-01-01

    The American Society for Testing and Materials (ASTM) has validated six basic methods for measuring pollutants in ambient air as the first part of its Project Threshold. Aim of the project is to establish nationwide consistency in measuring pollutants; determining precision, accuracy and reproducibility of 35 standard measuring methods. (BL)

  7. A statistical method for testing epidemiological results, as applied to the Hanford worker population

    International Nuclear Information System (INIS)

    Brodsky, A.

    1979-01-01

    Some recent reports by Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluating actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol. 35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
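The cumulation idea rests on the fact that a sum of k independent 1-d.f. chi-square statistics is itself chi-square distributed with k degrees of freedom. A minimal sketch with hypothetical observed/expected death counts and a simplified (O-E)²/E statistic standing in for the paper's approximate form:

```python
from math import erfc, sqrt

def chi2_1df(observed, expected):
    """Simplified chi-square statistic (1 d.f.) for one matched comparison."""
    return (observed - expected) ** 2 / expected

def chi2_sf_1df(x):
    """Survival function of chi-square with 1 d.f.:
    P(X > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))."""
    return erfc(sqrt(x / 2))

# Hypothetical subgroup comparisons: (observed deaths, expected deaths)
comparisons = [(12, 10.0), (7, 8.5), (15, 13.2), (9, 10.1)]
stats = [chi2_1df(o, e) for o, e in comparisons]
total = sum(stats)  # ~ chi-square with len(comparisons) d.f. under the null
print(round(total, 2))
```

Each comparison can be screened individually with `chi2_sf_1df`, while `total` tests the overall result of the whole comparison type against a chi-square distribution with 4 d.f.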

  8. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Sung Jiun

    2014-01-01

    The V and V method has been utilized for this safety critical software, while SRGM has difficulties because of the lack of failure occurrence data in the development phase. For safety critical software, however, failure data cannot be gathered after installation in a real plant when we consider the severe consequences. Therefore, to complement the V and V method, a test-based method needs to be developed. Some studies on test-based reliability quantification methods for safety critical software have been conducted in the nuclear field. These studies provide useful guidance on generating test sets. An important concept of the guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. Actually, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here the internal state of the software which can be changed by past inputs is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of software, preceding researches insist that the inputs should take a trajectory form. However, in this approach, there are two critical problems. One is the length of the trajectory input. The input trajectory should be long enough to cover the failure mechanism, but how long is enough is not clear. What is worse, to cover some accident scenarios, one set of inputs should represent dozens of hours of successive values. The other problem is the number of tests needed. To satisfy a target reliability with a reasonable confidence level, a very large number of test sets is required. Development of this number of test sets is a herculean task

  9. Efficiency Test Method for Electric Vehicle Chargers

    DEFF Research Database (Denmark)

    Kieldsen, Andreas; Thingvad, Andreas; Martinenas, Sergejus

    2016-01-01

    This paper investigates different methods for measuring the charger efficiency of mass-produced electric vehicles (EVs), in order to compare the different models. Consumers pay little attention to losses in the charger, even though their impact on the driving cost is high. It is not a high priority...... different vehicles. A unified method for testing the efficiency of the charger in EVs, without direct access to the component, is presented. The method is validated through extensive tests of the models Renault Zoe, Nissan LEAF and Peugeot iOn. The results show a loss between 15 % and 40 %, which is far...
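
    The quoted losses correspond to a simple energy-ratio definition of charger efficiency, which can be sketched as follows (the function and the kWh figures are hypothetical, not measurements from the paper):

```python
def charger_efficiency(e_grid_kwh, e_battery_kwh):
    """Efficiency as the ratio of energy delivered to the battery to the
    AC energy drawn from the grid during a full charge."""
    if e_grid_kwh <= 0:
        raise ValueError("grid energy must be positive")
    return e_battery_kwh / e_grid_kwh

# Hypothetical charge: 24.0 kWh drawn from the grid, 20.4 kWh stored.
eff = charger_efficiency(24.0, 20.4)
print(f"efficiency {eff:.0%}, loss {1 - eff:.0%}")  # efficiency 85%, loss 15%
```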

  10. Transcriptome analysis in non-model species: a new method for the analysis of heterologous hybridization on microarrays

    Directory of Open Access Journals (Sweden)

    Jouventin Pierre

    2010-05-01

    Background: Recent developments in high-throughput methods of analyzing transcriptomic profiles are promising for many areas of biology, including ecophysiology. However, although commercial microarrays are available for most common laboratory models, transcriptome analysis in non-traditional model species remains a challenge. Indeed, the signal resulting from heterologous hybridization is low and difficult to interpret because of the weak complementarity between probe and target sequences, especially when no microarray dedicated to a genetically close species is available. Results: We show here that transcriptome analysis in a species genetically distant from laboratory models is made possible by using MAXRS, a new method of analyzing heterologous hybridization on microarrays. This method takes advantage of the design of several commercial microarrays, with different probes targeting the same transcript. To illustrate and test this method, we analyzed the transcriptome of king penguin pectoralis muscle hybridized to Affymetrix chicken microarrays, two organisms separated by an evolutionary distance of approximately 100 million years. The differential gene expression observed between different physiological situations computed by MAXRS was confirmed by real-time PCR for 10 of the 11 genes tested. Conclusions: MAXRS appears to be an appropriate method for gene expression analysis under heterologous hybridization conditions.

  11. Antifungal susceptibility testing method for resource constrained laboratories

    Directory of Open Access Journals (Sweden)

    Khan S

    2006-01-01

    Purpose: In resource-constrained laboratories of developing countries, determination of antifungal susceptibility by the NCCLS/CLSI method is not always feasible. We describe herein a simple yet comparable method for antifungal susceptibility testing. Methods: Reference MICs of 72 fungal isolates, including two quality control strains, were determined by NCCLS/CLSI methods against fluconazole, itraconazole, voriconazole, amphotericin B and cancidas. Dermatophytes were also tested against terbinafine. Subsequently, after selection of optimum conditions, MICs were determined for all the fungal isolates by the semisolid agar antifungal susceptibility method in brain heart infusion broth supplemented with 0.5% agar (BHIA) without oil overlay, and the results were compared with those obtained by the reference NCCLS/CLSI methods. Results: Comparable results were obtained by the NCCLS/CLSI and semisolid agar susceptibility (SAAS) methods against the quality control strains. MICs for the 72 isolates did not differ by more than one dilution for all drugs by SAAS. Conclusions: SAAS using BHIA without oil overlay provides a simple and reproducible method for obtaining MICs against yeasts, filamentous fungi and dermatophytes in resource-constrained laboratories.

  12. 40 CFR Table 3 of Subpart Bbbbbbb... - Test Methods

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 14 2010-07-01 2010-07-01 false Test Methods 3 Table 3 of Subpart... 3 Table 3 of Subpart BBBBBBB of Part 63—Test Methods For * * * You must use * * * 1. Selecting the sampling locations a and the number of traverse points EPA test method 1 or 1A in appendix A to part 60. 2...

  13. 26 CFR 1.401(a)(26)-7 - Testing methods.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 5 2010-04-01 2010-04-01 false Testing methods. 1.401(a)(26)-7 Section 1.401(a... (CONTINUED) INCOME TAXES Pension, Profit-Sharing, Stock Bonus Plans, Etc. § 1.401(a)(26)-7 Testing methods... the rules in § 1.401(a)(26)-5. (b) Simplified testing method. A plan is treated as satisfying the...

  14. Helium leak testing methods in nuclear applications

    International Nuclear Information System (INIS)

    Ahmad, Anis

    2004-01-01

    Helium mass-spectrometer leak test is the most sensitive leak test method. It gives very reliable and sensitive test results. In last few years application of helium leak testing has gained more importance due to increased public awareness of safety and environment pollution caused by number of growing chemical and other such industries. Helium leak testing is carried out and specified in most of the critical area applications like nuclear, space, chemical and petrochemical industries

  15. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Background: Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods: We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results: The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions: The proposed model is easy to implement and preserves the information from the alternative hypothesis.
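
    The transform-then-approximate idea can be illustrated for a one-sided z-test. The sketch below is an independent reconstruction under stated assumptions (effect size delta = 2.5, level alpha = 0.05, a two-piece step density fitted by a grid search over the breakpoint), not the authors' model or code: it computes the mean and variance of the p-value distribution under the alternative by numerical integration, fits a step density that matches the mean exactly and the variance approximately, and compares the step-model power with the exact power.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_value_moments(delta, n_grid=4000, span=8.0):
    """Mean and variance of the one-sided p-value p = 1 - Phi(Z),
    Z ~ N(delta, 1), by midpoint-rule integration over z."""
    lo = delta - span
    dz = 2.0 * span / n_grid
    m = m2 = 0.0
    for i in range(n_grid):
        z = lo + (i + 0.5) * dz
        w = math.exp(-0.5 * (z - delta) ** 2) / math.sqrt(2 * math.pi) * dz
        p = 1.0 - phi(z)
        m += p * w
        m2 += p * p * w
    return m, m2 - m * m

def fit_step_density(mean, var):
    """Two-piece step density: height h1 on [0, c], h2 on (c, 1].
    Normalization and the mean determine h1, h2 for a given breakpoint c;
    c is then chosen on a grid to best match the variance."""
    best = None
    for i in range(1, 999):
        c = i / 1000.0
        h1 = (1.0 + c - 2.0 * mean) / c
        h2 = (2.0 * mean - c) / (1.0 - c)
        if h1 < 0 or h2 < 0:
            continue  # not a valid density for this breakpoint
        v = h1 * c ** 3 / 3 + h2 * (1 - c ** 3) / 3 - mean ** 2
        err = abs(v - var)
        if best is None or err < best[0]:
            best = (err, c, h1, h2)
    _, c, h1, h2 = best
    return c, h1, h2

def step_power(alpha, c, h1, h2):
    """P(p <= alpha) under the fitted step density."""
    return h1 * alpha if alpha <= c else h1 * c + h2 * (alpha - c)

delta, alpha = 2.5, 0.05
mean, var = p_value_moments(delta)
c, h1, h2 = fit_step_density(mean, var)
exact = 1.0 - phi(1.6449 - delta)          # exact power of the z-test
approx = step_power(alpha, c, h1, h2)
print(f"exact power {exact:.3f}, step-model power {approx:.3f}")
```

    A coarser or richer step function trades simplicity against fidelity; the paper's point is that even a simple step approximation keeps enough of the alternative-hypothesis information to be useful.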

  16. Cluster cosmological analysis with X ray instrumental observables: introduction and testing of AsPIX method

    International Nuclear Information System (INIS)

    Valotti, Andrea

    2016-01-01

    Cosmology is one of the fundamental pillars of astrophysics; as such, it contains many unsolved puzzles. To investigate some of those puzzles, we analyze X-ray surveys of galaxy clusters. These surveys are possible thanks to the bremsstrahlung emission of the intra-cluster medium. The simultaneous fit of cluster counts as a function of mass and distance provides an independent measure of cosmological parameters such as Ωm, σ8, and the dark energy equation of state w0. A novel approach to cosmological analysis using galaxy cluster data, called top-down, was developed in N. Clerc et al. (2012). This top-down approach is based purely on instrumental observables that are considered in a two-dimensional X-ray color-magnitude diagram. The method self-consistently includes selection effects and scaling relationships. It also provides a means of bypassing the computation of individual cluster masses. My work presents an extension of the top-down method by introducing the apparent size of the cluster, creating a three-dimensional X-ray cluster diagram. The size of a cluster is sensitive to both the cluster mass and its angular diameter, so it must also be included in the assessment of selection effects. The performance of this new method is investigated using a Fisher analysis. In parallel, I have studied the effects of the intrinsic scatter in the cluster size scaling relation on the sample selection as well as on the obtained cosmological parameters. To validate the method, I estimate uncertainties of cosmological parameters with an MCMC method and an Amoeba minimization routine, using two simulated XMM surveys of increasing complexity. The first simulated survey is a set of toy catalogues of 100 and 10000 deg², whereas the second is a 1000 deg² catalogue that was generated using an Aardvark semi-analytical N-body simulation. This comparison corroborates the conclusions of the Fisher analysis. In conclusion, I find that a cluster diagram that accounts

  17. Using Operational Analysis to Improve Access to Pulmonary Function Testing

    Directory of Open Access Journals (Sweden)

    Ada Ip

    2016-01-01

    Background. Timely pulmonary function testing is crucial to improving diagnosis and treatment of pulmonary diseases. Perceptions of poor access at an academic pulmonary function laboratory prompted analysis of system demand and capacity to identify factors contributing to poor access. Methods. Surveys and interviews identified stakeholder perspectives on operational processes and access challenges. Retrospective data on testing demand and resource capacity were analyzed to understand utilization of testing resources. Results. Qualitative analysis demonstrated that stakeholder groups had discrepant views on access and capacity in the laboratory. Mean daily resource utilization was 0.64 (SD 0.15), with monthly average utilization consistently less than 0.75. Reserved testing slots for subspecialty clinics were poorly utilized, leaving many testing slots unfilled. When subspecialty demand exceeded the number of reserved slots, there was sufficient capacity in the pulmonary function schedule to accommodate the added demand. Findings were shared with stakeholders and influenced scheduling process improvements. Conclusion. This study highlights the importance of operational data to identify causes of poor access, guide system decision-making, and determine effects of improvement initiatives in a variety of healthcare settings. Importantly, simple operational analysis can help to improve efficiency of health systems with little or no added financial investment.

  18. Nuclear EMP: stripline test method for measuring transfer impedance

    International Nuclear Information System (INIS)

    Miller, J.S.

    1975-11-01

    A method for measuring the transfer impedance of flat metal joints for frequencies to 100 MHz has been developed which makes use of striplines. The stripline method, which has similarities to the quadraxial method used for cylindrical components, is described and sets of test results are given. The transfer impedance of a simple joint is modeled as a spurious hyperbolic curve, and a close curve fit to transfer impedance test data from various samples is demonstrated for both the stripline and the quadraxial methods. Validity checks of the test data are discussed using the curve model and other criteria. The method was developed for testing riveted joints which form the avionics bays on B-1s. The joints must provide shielding from EMP currents

  19. Alternative method of inservice hydraulic testing of difficult to test pumps

    International Nuclear Information System (INIS)

    Stockton, N.B.; Shangari, S.

    1994-01-01

    The pump test codes require that system resistance be varied until the independent variable (either the pump flow rate or differential pressure) equals its reference value. Variance from this fixed reference value is not specifically allowed. However, the design of many systems makes it impractical to set the independent variable to an exact value. Over a limited range of pump operation about the fixed reference value, linear interpolation between two points of pump operation can be used to accurately determine degradation at the reference value without repeating reference test conditions. This paper presents an overview of possible alternatives for hydraulic testing of pumps and a detailed discussion of the linear interpolation method. The approximation error associated with linear interpolation is analyzed. Methods to quantify and minimize approximation error are presented
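
    The linear interpolation the abstract describes can be sketched in a few lines: measure two operating points that bracket the fixed reference flow, then interpolate the differential pressure at the reference (the flow and pressure values below are hypothetical):

```python
def interpolate_at_reference(flow_1, dp_1, flow_2, dp_2, ref_flow):
    """Linearly interpolate pump differential pressure at the reference
    flow rate from two measured operating points that bracket it."""
    if not min(flow_1, flow_2) <= ref_flow <= max(flow_1, flow_2):
        raise ValueError("reference flow must lie between the test points")
    frac = (ref_flow - flow_1) / (flow_2 - flow_1)
    return dp_1 + frac * (dp_2 - dp_1)

# Two hypothetical test points around a 500 gpm reference flow:
dp_at_ref = interpolate_at_reference(480.0, 132.0, 520.0, 124.0, 500.0)
print(dp_at_ref)  # 128.0; compare against the reference dp to assess degradation
```

    The interpolation is only trusted over a limited operating range about the reference point, which is where the abstract's approximation-error analysis comes in.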

  20. A Simple Method for Causal Analysis of Return on IT Investment

    Science.gov (United States)

    Alemi, Farrokh; Zargoush, Manaf; Oakes, James L.; Edrees, Hanan

    2011-01-01

    This paper proposes a method for examining the causal relationship between investment in information technology (IT) and the organization's productivity. In this method, first a strong relationship among (1) investment in IT, (2) use of IT and (3) the organization's productivity is verified using correlations. Second, the assumption that IT investment preceded improved productivity is tested using partial correlation. Finally, the assumption of what may have happened in the absence of IT investment, the so-called counterfactual, is tested through forecasting productivity at different levels of investment. The paper applies the proposed method to investment in the Veterans Health Information Systems and Technology Architecture (VISTA) system. Results show that the causal analysis can be done, even with limited data. Furthermore, because the procedure relies on the organization's overall productivity, it might be more objective than when the analyst picks and chooses which costs and benefits should be included in the analysis. PMID:23019515
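
    The partial-correlation step can be sketched with the standard first-order formula, here controlling for a time trend; the yearly figures are invented for illustration, not the VISTA data:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y controlling for z (first-order formula)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Invented yearly series: IT investment, productivity, and a time trend.
invest = [1.0, 1.5, 2.1, 2.4, 3.0, 3.6]
prod = [10.2, 10.9, 11.8, 12.1, 13.0, 13.8]
year = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
print(round(partial_corr(invest, prod, year), 3))
```

    A residual association after partialling out the trend is consistent with, though not proof of, investment preceding productivity gains.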

  1. Development of new testing methods for the numerical load analysis for the drop test of steel sheet containers for the final repository Konrad

    International Nuclear Information System (INIS)

    Protz, C.; Voelzke, H.; Zencker, U.; Hagenow, P.; Gruenewald, H.

    2011-01-01

    The qualification of steel sheet containers as intermediate-level waste containers for the final repository is performed by BAM (Bundesanstalt fuer Materialforschung und -pruefung) according to the BfS (Bundesamt fuer Strahlenschutz) requirements. The testing requirements include stacking pressure tests, lifting tests, drop tests, thermal tests (fire resistance) and tightness tests. Besides verification using model or prototype tests and transferability considerations, numerical safety analyses may be performed alternatively. The authors describe the internal BAM research project ConDrop, aimed at developing extended testing methods for the drop test of steel sheet containers for the final repository Konrad using numerical load analyses. A finite element model was developed using the software LS-PrePost 3.0 and ANSYS 12.0, and the FE code LS-DYNA was used for the simulation of the drop test (5 m height). The results were verified by experimental data from instrumented drop tests. The container preserves its integrity after the drop test; plastic deformation occurred at the bottom plate, the side walls, the cask cover and the lateral uprights.

  2. Experience with leaf analysis in smoke damage tests

    Energy Technology Data Exchange (ETDEWEB)

    Garber, K

    1960-01-01

    The role of chemical analysis in examining plant material for damage caused by smoke is discussed. Most difficult is the diagnosis of SO₂ damage, even in the case of leaf discoloration, because this symptom is not specific. An increased content of sulfur in leaves can be an indication of damage by SO₂, but this proof is not reliable, since plants can manifest elevated sulfur levels through uptake from the soil. A quantitative micromethod for the detection of SO₂ in fresh leaves has been developed by G. Bredemann and H. Radeloff. Hydrochloric acid and chlorine can also be detected by a micromethod (AgNO₃), but this is no proof of damage because the natural chloride content in plants fluctuates widely. The same holds for NO₂ and NO₃. Ammonia can be detected microchemically with great reliability; fluorine can also be detected microchemically, and positive tests usually indicate the cause of damage, but the fluorine test is not always reliable. A microchemical test also exists for asphalt and tar vapors. Thus, if the circumstances of the damage and local conditions are known, microchemical leaf analysis is useful as an auxiliary method in attributing damage to a specific agent. But leaf analysis by itself does not constitute conclusive proof. 12 references.

  3. 40 CFR 60.1300 - What test methods must I use to stack test?

    Science.gov (United States)

    2010-07-01

    ... to the Administrator for approval under § 60.8(b) to use a reference method with minor changes in... 40 Protection of Environment 6 2010-07-01 2010-07-01 false What test methods must I use to stack test? 60.1300 Section 60.1300 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR...

  4. 40 CFR 76.15 - Test methods and procedures.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Test methods and procedures. 76.15 Section 76.15 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) ACID RAIN NITROGEN OXIDES EMISSION REDUCTION PROGRAM § 76.15 Test methods and procedures. (a) The...

  5. Method for determining detailed rod worth profiles at low power in the fast test reactor

    International Nuclear Information System (INIS)

    Sevenich, R.A.

    1975-08-01

    A method for obtaining a detailed rod worth profile at low power for a slow control rod insertion is presented. The accuracy of the method depends on a preparatory experiment in which the test rod is dropped quickly to yield, upon analysis, the magnitude of the rod worth and an effective source value. These numbers are employed to initialize the inverse kinetics analysis for the slow insertion. Corrections for changes in detection efficiency are not included for the simulated experiments. (U.S.)

  6. Effects of test method and participant musical training on preference ratings of stimuli with different reverberation times.

    Science.gov (United States)

    Lawless, Martin S; Vigeant, Michelle C

    2017-10-01

    Selecting an appropriate listening test design for concert hall research depends on several factors, including listening test method and participant critical-listening experience. Although expert listeners afford more reliable data, their perceptions may not be broadly representative. The present paper contains two studies that examined the validity and reliability of the data obtained from two listening test methods, a successive and a comparative method, and two types of participants, musicians and non-musicians. Participants rated their overall preference of auralizations generated from eight concert hall conditions with a range of reverberation times (0.0-7.2 s). Study 1, with 34 participants, assessed the two methods. The comparative method yielded similar results and reliability as the successive method. Additionally, the comparative method was rated as less difficult and more preferable. For study 2, an additional 37 participants rated the stimuli using the comparative method only. An analysis of variance of the responses from both studies revealed that musicians are better than non-musicians at discerning their preferences across stimuli. This result was confirmed with a k-means clustering analysis on the entire dataset that revealed five preference groups. Four groups exhibited clear preferences to the stimuli, while the fifth group, predominantly comprising non-musicians, demonstrated no clear preference.
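
    The clustering step can be sketched with a generic Lloyd's-algorithm k-means on one-dimensional mean preference ratings (an illustration only; the study clustered the participants' full response profiles, not scalar means):

```python
import random

def kmeans_1d(ratings, k, iters=100, seed=0):
    """Plain Lloyd's algorithm on scalar ratings: assign each point to the
    nearest centre, recompute centres as cluster means, repeat until stable."""
    rng = random.Random(seed)
    centres = rng.sample(ratings, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in ratings:
            nearest = min(range(k), key=lambda j: (r - centres[j]) ** 2)
            clusters[nearest].append(r)
        new_centres = [sum(c) / len(c) if c else centres[i]
                       for i, c in enumerate(clusters)]
        if new_centres == centres:
            break
        centres = new_centres
    return sorted(centres)

# Hypothetical mean preference ratings for two distinct listener groups:
print(kmeans_1d([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], k=2))  # [1.0, 5.0]
```

    In practice k is chosen by inspecting how well the resulting groups separate, which is how the five preference groups in the study would emerge.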

  7. Nondestructive testing methods for 55-gallon, waste storage drums

    International Nuclear Information System (INIS)

    Ferris, R.H.; Hildebrand, B.P.; Hockey, R.L.; Riechers, D.M.; Spanner, J.C.; Duncan, D.R.

    1993-06-01

    The Westinghouse Hanford Company (WHC) authorized Pacific Northwest Laboratory (PNL) to conduct a feasibility study to identify promising nondestructive testing (NDT) methods for detecting general and localized (both pitting and pinhole) corrosion in the 55-gal drums that are used to store solid waste materials at the Hanford Site. This document presents results obtained during a literature survey, identifies the relevant reference materials that were reviewed, provides a technical description of the methods that were evaluated, describes the laboratory tests that were conducted and their results, identifies the most promising candidate methods along with the rationale for these selections, and includes a work plan for recommended follow-on activities. This report contains a brief overview and technical description for each of the following NDT methods: magnetic testing techniques; eddy current testing; shearography; ultrasonic testing; radiographic computed tomography; thermography; and leak testing with acoustic detection

  8. Stopping test of iterative methods for solving PDE

    International Nuclear Information System (INIS)

    Wang Bangrong

    1991-01-01

    In order to assure the accuracy of the numerical solution obtained by an iterative method for solving a PDE (partial differential equation), the stopping test is very important. Stopping test formulas are proposed for iterative methods in the case where the coefficient matrix of the system of linear algebraic equations is strictly diagonally dominant or irreducibly weakly diagonally dominant. Several numerical examples are given to illustrate the applications of the stopping test formulas.
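
    A concrete instance of such a stopping test (a standard a posteriori error bound for strictly diagonally dominant matrices, not necessarily the exact formulas of the paper): the Jacobi iteration contracts with factor q < 1 in the infinity norm, so the true error of the current iterate is at most q/(1-q) times the size of the last step.

```python
def jacobi_with_stopping_test(A, b, tol=1e-10, max_iter=10_000):
    """Jacobi iteration with an a posteriori stopping test: for strictly
    diagonally dominant A, ||x_k - x*|| <= q/(1-q) * ||x_k - x_{k-1}||
    (infinity norm), so we stop once that bound drops below tol."""
    n = len(b)
    # Contraction factor q = max_i sum_{j != i} |a_ij| / |a_ii|.
    q = max(sum(abs(A[i][j]) for j in range(n) if j != i) / abs(A[i][i])
            for i in range(n))
    if q >= 1:
        raise ValueError("matrix is not strictly diagonally dominant")
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        step = max(abs(xn - xo) for xn, xo in zip(x_new, x))
        x = x_new
        if q / (1 - q) * step < tol:
            return x
    raise RuntimeError("iteration did not converge within max_iter")

x = jacobi_with_stopping_test([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
print(x)  # approximately [1/11, 7/11]
```

    Testing the bound rather than the raw step size is the point: a small step alone does not guarantee a small error unless the contraction factor is accounted for.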

  9. Non-destructive Testing by Infrared Thermography Under Random Excitation and ARMA Analysis

    Science.gov (United States)

    Bodnar, J. L.; Nicolas, J. L.; Candoré, J. C.; Detalle, V.

    2012-11-01

    Photothermal thermography is a non-destructive testing (NDT) method with many applications in the field of control and characterization of thin materials. This technique is usually implemented under CW or flash excitation. Such excitations are not adapted to the control of fragile materials or to multi-frequency analysis. To allow these analyses, this article proposes the use of a new control mode: infrared thermography under random excitation with autoregressive moving average (ARMA) analysis. First, the principle of this NDT method is presented. Then, the method is shown to permit detection, with low energy constraints, of detachments situated in mural paintings.

  10. Methods of Usability Testing in Libraries Web Sites

    Directory of Open Access Journals (Sweden)

    Eman Fawzy

    2006-03-01

    A study of the evaluation of library web sites, namely usability testing. The study defines usability testing methods and discusses their importance in web site evaluation, then details the methods: questionnaires, focus groups, experimental prototype testing, card sorting, and composite evaluation.

  11. 40 CFR Appendix A-2 to Part 60 - Test Methods 2G through 3C

    Science.gov (United States)

    2010-07-01

    ... Velocity Decay Near the Stack Wall Method 3—Gas analysis for the determination of dry molecular weight... a tester facing a test port in a vertical stack, the pitch component of flow is the vector of flow moving from the center of the stack toward or away from that test port. The pitch angle is the angle...

  12. Method validation and verification in liquid scintillation counting using the long-term uncertainty method (LTUM) on two decades of proficiency test data

    International Nuclear Information System (INIS)

    Verrezen, F.; Vasile, M.; Loots, H.; Bruggeman, M.

    2017-01-01

    Results from proficiency tests gathered over the past two decades by the laboratory for low-level radioactivity measurements by liquid scintillation counting of ³H (184 results) and ¹⁴C (74 results) are used to verify the laboratory's validated measurement methods, in particular the estimated uncertainty budget of each method and its reproducibility and stability. A linear regression approach, described in the literature as the long-term uncertainty method (LTUM), is used for the analysis of the results. The present study clearly indicates the advantages of using proficiency test results to identify possible constant or proportional bias effects, as well as the possibility of comparing the laboratory's performance with that of peer laboratories. (author)
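
    The regression step behind LTUM can be sketched as follows, with invented proficiency-test data rather than the laboratory's results: regressing reported values on reference (assigned) values, an intercept away from zero suggests constant bias, a slope away from one suggests proportional bias, and the residual standard deviation gives a long-term estimate of reproducibility.

```python
import math

def ltum_fit(reference, reported):
    """Ordinary least squares of reported = a + b * reference.
    Returns (intercept a, slope b, residual standard deviation)."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(reported) / n
    sxx = sum((x - mx) ** 2 for x in reference)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, reported))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - a - b * x) ** 2 for x, y in zip(reference, reported))
    s = math.sqrt(ss_res / (n - 2))
    return a, b, s

# Invented assigned values (e.g. Bq/L) and corresponding lab results:
assigned = [10.0, 25.0, 50.0, 100.0, 200.0]
measured = [11.2, 26.8, 52.1, 104.5, 209.0]
a, b, s = ltum_fit(assigned, measured)
print(f"constant bias {a:.2f}, proportional bias {b:.3f}, long-term sd {s:.2f}")
```

    Pooling many rounds this way captures between-round variability that a single-run repeatability estimate would miss.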

  13. Empirically Testing Thematic Analysis (ETTA)

    DEFF Research Database (Denmark)

    Gildberg, Frederik Alkier; Bradley, Stephen K.; Tingleff, Elllen B.

    2015-01-01

    Text analysis is not a question of a right or wrong way to go about it, but a question of different traditions. These tend not only to give answers as to how to conduct an analysis, but also to provide the answer as to why it is conducted in the way that it is. The problem, however, may be that the link between tradition and tool is unclear. The main objective of this article is therefore to present Empirically Testing Thematic Analysis (ETTA), a step-by-step approach to thematic text analysis, discussing its strengths and weaknesses so that others might assess its potential as an approach that they might utilize/develop for themselves. The advantage of utilizing the presented analytic approach is argued to be the integral empirical testing, which should assure systematic development, interpretation and analysis of the source textual material.

  14. Methods for testing of geometrical down-scaled rotor blades

    DEFF Research Database (Denmark)

    Branner, Kim; Berring, Peter

    further developed since then. Structures in composite materials are generally difficult and time consuming to test for fatigue resistance. Therefore, several methods for testing of blades have been developed and exist today. Those methods are presented in [1]. Current experimental test performed on full...

  15. Post-test analysis for the MIDAS DVI tests using MARS

    International Nuclear Information System (INIS)

    Bae, K. H.; Lee, Y. J.; Kwon, T. S.; Lee, W. J.; Kim, H. C.

    2002-01-01

    Various DVI tests have been performed at the MIDAS test facility, which is a scaled facility of the APR1400 applying a modified linear scale ratio. The evaluation results for the various void height tests and direct bypass tests, using the multi-dimensional best-estimate analysis code MARS, show that: (a) the MARS code has an advanced modeling capability, predicting well the major multi-dimensional thermal-hydraulic phenomena occurring in the downcomer; (b) the MARS code under-predicts the steam condensation rates, which in turn causes it to over-predict the ECC bypass rates. However, the trends of decreasing steam condensation rate and increasing ECC bypass rate with increasing steam flow rate, and the calculated ECC bypass rates under the EM analysis conditions, generally agree with the test data

  16. Phase analysis in duplex stainless steel: comparison of EBSD and quantitative metallography methods

    International Nuclear Information System (INIS)

    Michalska, J; Chmiela, B

    2014-01-01

    The purpose of the research was to work out a qualitative and quantitative analysis of the phases in DSS in the as-received state and after thermal aging. For qualitative purposes, SEM observations, EDS analyses and electron backscattered diffraction (EBSD) methods were employed. Quantitative analysis of the phases was performed by two methods: EBSD and classical quantitative metallography. A juxtaposition of different etchants for revealing the microstructure and a brief review of sample preparation methods for EBSD studies are presented. Different ways of sample preparation were tested, and based on these results a detailed methodology of DSS phase analysis was developed, including surface finishing, selective etching methods and image acquisition. The advantages and disadvantages of the applied methods were pointed out, and the accuracy of the phase analysis performed by both methods was compared

  17. USFDA-GUIDELINE BASED VALIDATION OF TESTING METHOD FOR RIFAMPICIN IN INDONESIAN SERUM SPECIMEN

    Directory of Open Access Journals (Sweden)

    Tri Joko Raharjo

    2010-06-01

    Under a new regulation from the Indonesian FDA (Badan POM-RI), all new off-patent drugs must show bioequivalence with the originator drug prior to registration. Bioequivalence testing (BE testing) has to be performed on people representative of the population to which the drug is to be administered. BE testing needs a validated bio-analytical method for a certain drug target and group of population. This research reports the specific validation of the bio-analysis of rifampicin in Indonesian serum specimens for use in BE testing. The extraction was performed using acetonitrile, while the chromatographic separation was accomplished on a RP 18 column (250 × 4.6 mm i.d., 5 µm) with a mobile phase composed of KH2PO4 10 mM-acetonitrile (40:60, v/v), and UV detection was set at 333 nm. The method showed specificity compared to blank serum specimens, with a retention time for rifampicin of 2.1 min. The lower limit of quantification (LLOQ) was 0.06 µg/mL, with a dynamic range up to 20 µg/mL (R>0.990). The precision of the method was very good, with coefficients of variation (CV) of 0.58, 7.40 and 5.56% for concentrations of 0.06, 5 and 15 µg/mL, respectively. Accuracies of the method were 3.22, 1.94 and 1.90% for concentrations of 0.06, 5 and 15 µg/mL, respectively. The average recoveries were 97.82, 95.50 and 97.31% for rifampicin concentrations of 1, 5 and 5 µg/mL, respectively. The method also showed reliable results in freeze-thaw, short-term and long-term stability tests, as well as post-preparation stability. The validation results show that the method is ready to be used for rifampicin BE testing with Indonesian subjects. Keywords: Rifampicin, Validation, USFDA-Guideline
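
    The precision, accuracy and recovery figures quoted above follow standard bioanalytical definitions, which can be sketched as follows (the replicate values are illustrative, not the study's raw data):

```python
import statistics

def cv_percent(replicates):
    """Precision: coefficient of variation, in percent."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def accuracy_percent(replicates, nominal):
    """Accuracy: relative deviation of the mean from the nominal
    concentration, in percent."""
    return 100.0 * abs(statistics.mean(replicates) - nominal) / nominal

def recovery_percent(extracted_mean, neat_mean):
    """Recovery: response of extracted spiked samples relative to neat
    standards at the same concentration, in percent."""
    return 100.0 * extracted_mean / neat_mean

# Illustrative replicate measurements at a nominal 5 ug/mL level:
reps = [5.0, 5.1, 4.9]
print(cv_percent(reps), accuracy_percent(reps, 5.0), recovery_percent(9.55, 10.0))
```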

  18. A fracture mechanics and reliability based method to assess non-destructive testings for pressure vessels

    International Nuclear Information System (INIS)

    Kitagawa, Hideo; Hisada, Toshiaki

    1979-01-01

    Quantitative evaluation has not been made of the effects of carrying out preservice and in-service nondestructive tests for securing the soundness, safety and maintainability of pressure vessels, despite the large expense and labor they require. In particular, the problems concerning the timing and interval of in-service inspections lack a reasonable, quantitative evaluation method. In this paper, these problems are treated with an analysis method the authors developed based on reliability technology and probability theory. The growth of surface cracks in pressure vessels was estimated using the results of previous studies. The effects of nondestructive inspection on the defects in pressure vessels were evaluated, and the influences of many factors, such as plate thickness, stress and the accuracy of inspection, on the effects of inspection were investigated, together with a method of evaluating inspections at unequal intervals. The reliability analysis taking in-service inspection into consideration, the evaluation of in-service inspection and other affecting factors through typical examples of analysis, and a review concerning the timing of inspection are described. The method of analyzing the reliability of pressure vessels, considering the growth of defects and preservice and in-service nondestructive tests, was systematized so as to be practically usable. (Kako, I.)
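    The kind of analysis this record describes, crack growth combined with periodic inspection of imperfect reliability, can be caricatured with a toy Monte Carlo model. All numbers below (initial sizes, growth rates, critical size, probability of detection) are invented for illustration and are not taken from the paper:

```python
import random

def leak_probability(n=20_000, years=40, inspect_every=10, pod=0.9, seed=7):
    """Toy Monte Carlo: cracks grow linearly; a periodic in-service
    inspection detects (and repairs) a crack with probability `pod`.
    Returns the fraction of simulated vessels whose crack reaches the
    critical size within the service life."""
    random.seed(seed)
    failures = 0
    for _ in range(n):
        size = random.uniform(0.5, 2.0)    # initial crack size (mm, assumed)
        rate = random.uniform(0.05, 0.2)   # growth per year (mm, assumed)
        failed = False
        for year in range(1, years + 1):
            size += rate
            if size >= 6.0:                # critical size (mm, assumed)
                failed = True
                break
            if year % inspect_every == 0 and random.random() < pod:
                size = 0.0                 # crack found and repaired
        failures += failed
    return failures / n

# Decennial inspection versus no inspection within the service life
print(leak_probability(), leak_probability(inspect_every=1000))
```

    Varying `inspect_every` in such a model is the crude analogue of the paper's question about inspection timing and intervals.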

  19. COMPUTER METHODS OF GENETIC ANALYSIS.

    Directory of Open Access Journals (Sweden)

    A. L. Osipov

    2017-02-01

    Full Text Available The article describes the basic statistical methods used in conducting genetic analysis of human traits: segregation analysis, linkage analysis and allelic association studies. Software supporting the implementation of these methods was developed.

  20. Analysis of a beryllium-copper diffusion joint after HHF test

    International Nuclear Information System (INIS)

    Guiniatouline, R.N.; Mazul, I.V.; Gorodetsky, A.E.; Zalavutdinov, R.Kh.; Rybakov, S.Yu.; Savenko, V.I.

    1996-01-01

    The development of beryllium-copper joints which can withstand relevant ITER divertor conditions is one of the important tasks at the present time. One of the main problems associated with these joints is the intermetallic layers: the strength and life of the joints significantly depend on the width and contents of these layers. The objective of this work is to study the diffusion joint of TGP-56 beryllium to OFHC copper after thermal response and thermocyclic tests with a beryllium-copper mock-up. The HHF test was performed on the e-beam facility (EBTS, SNLA). The following methods were used for analysis: roentgenographic analysis, X-ray spectrum analysis and fracture analysis. During the investigation the following studies were undertaken: the analysis of the diffusion boundary layer, which was obtained at the cross-section of one of the tiles; the analysis of the debonded surfaces of several beryllium tiles and corresponding copper parts; and the analysis of the upper surface of one of the tiles after HHF tests. The joint roentgenographic and element analyses revealed the following phases in the diffusion zone: Cu2Be (≈170 μm), CuBe (≈30 μm), CuBe2 (≈1 μm) and a solid solution of copper in beryllium. The phases Cu2Be, CuBe and the solid solution of copper in beryllium were detected by quantitative microanalysis, and the phases CuBe and CuBe2 by roentgenographic analysis. The fracture origin is located in the central part of the tiles; the crack was caused by residual stresses and thermal fatigue testing. This analysis provides important data on joint quality and may be used for all types of joints used for ITER applications. (orig.)

  1. Testing method for ceramic armour and bare ceramic tiles

    NARCIS (Netherlands)

    Carton, E.P.; Roebroeks, G.H.J.J.

    2016-01-01

    TNO developed an alternative, more configuration independent ceramic test method than the Depth-of-Penetration test method. In this alternative test ceramic tiles and ceramic based armour are evaluated as target without a semi-infinite backing layer. An energy approach is chosen to evaluate and rank

  2. Testing method for ceramic armor and bare ceramic tiles

    NARCIS (Netherlands)

    Carton, E.P.; Roebroeks, G.H.J.J.

    2014-01-01

    TNO has developed an alternative, more configuration independent ceramic test method than the standard Depth-of-Penetration test method. In this test ceramic tiles and ceramic based armor are evaluated as target without a semi-infinite backing layer. An energy approach is chosen to evaluate and rank

  3. Optimization of cooling tower performance analysis using Taguchi method

    Directory of Open Access Journals (Sweden)

    Ramkumar Ramakrishnan

    2013-01-01

    Full Text Available This study discusses the application of the Taguchi method in assessing maximum cooling tower effectiveness for a counter-flow cooling tower using expanded wire mesh packing. The experiments were planned based on Taguchi's L27 orthogonal array. The trials were performed under different inlet conditions of water flow rate, air flow rate and water temperature. Signal-to-noise (S/N) ratio analysis, analysis of variance (ANOVA) and regression were carried out in order to determine the effects of the process parameters on cooling tower effectiveness and to identify optimal factor settings. Finally, confirmation tests verified the reliability of the Taguchi method for optimization of counter-flow cooling tower performance with sufficient accuracy.
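    The S/N ratio at the heart of the Taguchi analysis in this record is a one-line formula. A sketch for a larger-the-better response such as cooling tower effectiveness (the replicate readings below are invented, not the study's measurements):

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio (dB) for a larger-the-better
    response: -10 * log10(mean(1 / y^2))."""
    return -10 * math.log10(sum(1 / y ** 2 for y in values) / len(values))

# Invented effectiveness readings (%) for the replicates of one L27 trial
trial = [62.0, 64.5, 63.2]
print(round(sn_larger_is_better(trial), 2))
```

    The factor settings that maximize this ratio across the L27 trials are the candidates the confirmation tests then verify.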

  4. The surface analysis methods

    International Nuclear Information System (INIS)

    Deville, J.P.

    1998-01-01

    Nowadays there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is aimed at non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use or the answer to a precise question. After recalling the fundamental principles which govern these analysis methods, based on the interaction of radiation (ultraviolet, X-ray) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoemission spectroscopy (ESCA or XPS). They are indeed the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or samples submitted to treatments in aggressive media. (O.M.)

  5. STOCHASTIC METHODS IN RISK ANALYSIS

    Directory of Open Access Journals (Sweden)

    Vladimíra OSADSKÁ

    2017-06-01

    Full Text Available In this paper, we review basic stochastic methods which can be used to extend state-of-the-art deterministic analytical methods for risk analysis. We conclude that the standard deterministic analytical methods depend strongly on the practical experience and knowledge of the evaluator, and therefore stochastic methods should be introduced. New risk analysis methods should consider the uncertainties in input values. We show how large the impact on the results of the analysis can be by solving a practical FMECA example with uncertainties modelled using Monte Carlo sampling.
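    The core idea of the record, replacing point FMECA scores with ranges and propagating them by Monte Carlo sampling, fits in a few lines. The score ranges below are invented, not the paper's case study:

```python
import random

def rpn_distribution(severity, occurrence, detection, n=20_000, seed=1):
    """Propagate input uncertainty through the FMECA risk priority number
    RPN = S * O * D by Monte Carlo sampling. Each factor is given as a
    (low, high) range instead of a single point score."""
    random.seed(seed)
    samples = sorted(
        random.uniform(*severity)
        * random.uniform(*occurrence)
        * random.uniform(*detection)
        for _ in range(n)
    )
    return samples[n // 2], samples[int(0.95 * n)]  # median, 95th percentile

median, p95 = rpn_distribution((6, 8), (3, 5), (4, 6))
print(round(median, 1), round(p95, 1))
```

    Reporting a median and an upper percentile instead of one number is exactly what makes the ranking less dependent on the evaluator's single guess.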

  6. Comparison of sine dwell and broadband methods for modal testing

    Science.gov (United States)

    Chen, Jay-Chung

    1989-01-01

    The objectives of modal tests for large complex spacecraft structural systems are outlined. The comparison criteria for the modal test methods, namely, the broadband excitation and the sine dwell methods, are established. Using the Galileo spacecraft modal test and the Centaur G Prime upper stage vehicle modal test as examples, the relative advantage or disadvantage of each method is examined. The usefulness or shortcomings of the methods are given from a practical engineering viewpoint.

  7. Ratcheting - experimental tests and practical method of analysis

    International Nuclear Information System (INIS)

    Cousseran, P.; Lebey, J.; Moulin, D.; Roche, R.; Clement, G.

    1980-09-01

    Ratcheting is the acceleration of deformation, under controlled load, due to imposed cyclic deformations. Attention is given to the increase of creep elongation in the presence of cyclic deformations, such as thermal straining. Tests on 304L and 316L stainless steels are described. The aim of this paper is to contribute to the establishment of a conservative design rule with a wide field of application and an easy mode of utilization.

  8. Standard test method for galling resistance of material couples

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method covers a laboratory test that ranks the galling resistance of material couples using a quantitative measure. Bare metals, alloys, nonmetallic materials, coatings, and surface modified materials may be evaluated by this test method. 1.2 This test method is not designed for evaluating the galling resistance of material couples sliding under lubricated conditions, because galling usually will not occur under lubricated sliding conditions using this test method. 1.3 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.4 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  9. Item Analysis in Introductory Economics Testing.

    Science.gov (United States)

    Tinari, Frank D.

    1979-01-01

    Computerized analysis of multiple-choice test items is explained. Examples of item analysis applications in the introductory economics course are discussed with respect to three objectives: to evaluate learning; to improve test items; and to help improve classroom instruction. Problems, costs and benefits of the procedures are identified. (JMD)
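    The two quantities a computerized item analysis typically reports, item difficulty and discrimination, can be sketched as follows. The upper/lower 27% grouping is one common convention, and the student data is invented:

```python
def item_stats(responses):
    """Classical item analysis for one multiple-choice item.
    `responses` is a list of (total_test_score, item_correct) pairs,
    with item_correct 1 or 0. Returns (difficulty, discrimination)
    using the conventional upper/lower 27% score groups."""
    ranked = sorted(responses, key=lambda r: r[0], reverse=True)
    k = max(1, round(0.27 * len(ranked)))
    difficulty = sum(c for _, c in ranked) / len(ranked)
    discrimination = (sum(c for _, c in ranked[:k])
                      - sum(c for _, c in ranked[-k:])) / k
    return difficulty, discrimination

# Invented (total score, item correct) pairs for ten students
data = [(95, 1), (90, 1), (85, 1), (80, 1), (70, 0),
        (65, 1), (60, 0), (55, 0), (50, 0), (40, 0)]
print(item_stats(data))  # → (0.5, 1.0)
```

    A difficulty near 0.5 and a high discrimination mark an item worth keeping; a negative discrimination flags an item to rewrite.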

  10. Application of risk-based methods to inservice testing of check valves

    Energy Technology Data Exchange (ETDEWEB)

    Closky, N.B.; Balkey, K.R.; McAllister, W.J. [and others]

    1996-12-01

    Research efforts have been underway in the American Society of Mechanical Engineers (ASME) and industry to define appropriate methods for the application of risk-based technology in the development of inservice testing (IST) programs for pumps and valves in nuclear steam supply systems. This paper discusses a pilot application of these methods to the inservice testing of check valves in the emergency core cooling system of Georgia Power's Vogtle nuclear power station. The results of the probabilistic safety assessment (PSA) are used to divide the check valves into risk-significant and less-risk-significant groups. This information is reviewed by a plant expert panel along with the consideration of appropriate deterministic insights to finally categorize the check valves into more safety-significant and less safety-significant component groups. All of the more safety-significant check valves are further evaluated in detail using a failure modes and causes analysis (FMCA) to assist in defining effective IST strategies. A template has been designed to evaluate how effective current and emerging tests for check valves are in detecting failures or in finding significant conditions that are precursors to failure for the likely failure causes. This information is then used to design and evaluate appropriate IST strategies that consider both the test method and frequency. A few of the less safety-significant check valves are also evaluated using this process since differences exist in check valve design, function, and operating conditions. Appropriate test strategies are selected for each check valve that has been evaluated based on safety and cost considerations. Test strategies are inferred from this information for the other check valves based on similar check valve conditions. Sensitivity studies are performed using the PSA model to arrive at an overall IST program that maintains or enhances safety at the lowest achievable cost.

  11. Systematic maintenance analysis with decision support method and tool for optimizing maintenance programme

    International Nuclear Information System (INIS)

    Laakso, K.; Simola, K.; Dorrepaal, J.; Skogberg, P.

    1999-01-01

    This report describes an approach to evaluate the effectiveness of test and maintenance programs of technical systems used during several years. The method combines an analysis of the historical data on faults and repairs with an analysis of the history of periodic testing and preventive maintenance action programs. The application of the maintenance analysis from the methodological point of view in the reliability centered maintenance (RCM) project for Barsebaeck nuclear power plant is described. In order to limit the analysis resources, a method for ranking of objects for maintenance analysis is needed. Preliminary suggestions for changes in maintenance action programs are based on signals from simple maintenance indicators and qualitative analysis of underlying data on failures and maintenance. To facilitate generation of maintenance indicators, and make the maintenance analysis more efficient, a powerful and suitable data treatment tool is needed for analysis of the work order history. In the final maintenance decisions, additional decision criteria must be taken into account, and thus a more formal decision analysis is often needed for decision support. (au)

  12. Computerized test versus personal interview as admission methods for graduate nursing studies: A retrospective cohort study.

    Science.gov (United States)

    Hazut, Koren; Romem, Pnina; Malkin, Smadar; Livshiz-Riven, Ilana

    2016-12-01

    The purpose of this study was to compare the predictive validity, economic efficiency, and faculty staff satisfaction of a computerized test versus a personal interview as admission methods for graduate nursing studies. A mixed method study was designed, including cross-sectional and retrospective cohorts, interviews, and cost analysis. One hundred and thirty-four students in the Master of Nursing program participated. The success of students in required core courses was similar in both admission method groups. The personal interview method was found to be a significant predictor of success, with cognitive variables the only significant contributors to the model. Higher satisfaction levels were reported with the computerized test compared with the personal interview method. The cost of the personal interview method, in annual hourly work, was 2.28 times higher than the computerized test. These findings may promote discussion regarding the cost benefit of the personal interview as an admission method for advanced academic studies in healthcare professions. © 2016 John Wiley & Sons Australia, Ltd.

  13. A Novel and Effective Multivariate Method for Compositional Analysis using Laser Induced Breakdown Spectroscopy

    International Nuclear Information System (INIS)

    Wang, W; Qi, H; Ayhan, B; Kwan, C; Vance, S

    2014-01-01

    Compositional analysis is important for interrogating spectral samples in the direct analysis of materials in agriculture, the environment, archaeology, etc. In this paper, multivariate analysis (MVA) techniques are coupled with laser-induced breakdown spectroscopy (LIBS) to estimate quantitative elemental compositions and determine the type of the sample. In particular, we present a new multivariate analysis method for composition analysis, referred to as spectral unmixing. The LIBS spectrum of a testing sample is considered a linear mixture of one or more constituent signatures that correspond to various chemical elements. The signature library is derived from regression analysis using training samples, or is manually set up with information from an elemental LIBS spectral database. A calibration step is used to make all the signatures in the library homogeneous with the testing sample, so as to avoid inhomogeneous signatures that might be caused by different sampling conditions. To demonstrate the feasibility of the proposed method, we compare it with the traditional partial least squares (PLS) method and the univariate method using a standard soil data set with elemental concentrations measured a priori. The experimental results show that the proposed method holds great potential for reliable and effective elemental concentration estimation
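    The linear-mixture idea behind spectral unmixing can be shown on a two-signature toy problem solved by ordinary least squares. This is a simplified sketch of the concept only, not the authors' calibrated pipeline, and the signatures are invented:

```python
def unmix2(spectrum, sig_a, sig_b):
    """Solve the two-signature linear mixture model
    y = a * sig_a + b * sig_b for (a, b) by ordinary least squares
    (normal equations for two unknowns)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    aa, ab, bb = dot(sig_a, sig_a), dot(sig_a, sig_b), dot(sig_b, sig_b)
    ya, yb = dot(spectrum, sig_a), dot(spectrum, sig_b)
    det = aa * bb - ab * ab
    return (ya * bb - yb * ab) / det, (yb * aa - ya * ab) / det

# Two invented 4-channel elemental signatures and a synthetic 30/70 mix
sig_fe = [1.0, 0.2, 0.0, 0.5]
sig_si = [0.0, 0.6, 1.0, 0.1]
mixed = [0.3 * a + 0.7 * b for a, b in zip(sig_fe, sig_si)]
a, b = unmix2(mixed, sig_fe, sig_si)
print(round(a, 3), round(b, 3))  # → 0.3 0.7
```

    Recovering the mixing coefficients from the measured spectrum is exactly the abundance estimate the record describes, here without the calibration step or non-negativity constraints a real application would add.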

  14. Evaluation of sample extraction methods for proteomics analysis of green algae Chlorella vulgaris.

    Science.gov (United States)

    Gao, Yan; Lim, Teck Kwang; Lin, Qingsong; Li, Sam Fong Yau

    2016-05-01

    Many protein extraction methods have been developed for plant proteome analysis but information is limited on the optimal protein extraction method from algae species. This study evaluated four protein extraction methods, i.e. direct lysis buffer method, TCA-acetone method, phenol method, and phenol/TCA-acetone method, using green algae Chlorella vulgaris for proteome analysis. The data presented showed that phenol/TCA-acetone method was superior to the other three tested methods with regards to shotgun proteomics. Proteins identified using shotgun proteomics were validated using sequential window acquisition of all theoretical fragment-ion spectra (SWATH) technique. Additionally, SWATH provides protein quantitation information from different methods and protein abundance using different protein extraction methods was evaluated. These results highlight the importance of green algae protein extraction method for subsequent MS analysis and identification. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Standard test method for guided bend test for ductility of welds

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2002-01-01

    1.1 This test method covers a guided bend test for the determination of soundness and ductility of welds in ferrous and nonferrous products. Defects, not shown by X rays, may appear in the surface of a specimen when it is subjected to progressive localized overstressing. This guided bend test has been developed primarily for plates and is not intended to be substituted for other methods of bend testing. 1.2 The values stated in inch-pound units are to be regarded as standard. The values given in parentheses are mathematical conversions to SI units that are provided for information only and are not considered standard. Note 1—For additional information see Terminology E 6, and American Welding Society Standard D 1.1. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  16. Review of design analysis and site installation method of ASME vessel of fuel test loop of Hanaro

    International Nuclear Information System (INIS)

    Cho, Sung Won; Kim, Jun Yeon.

    1997-02-01

    The major goal of this report is to keep those advantages under control and to provide basic solutions for field installation. In order to assess technical availability, the scope, limitations and possibilities of the design requirements are carefully considered. Chapter 3 deals with the seismic stress analysis of the vessels, using the manufacturer's finite element analysis data. The test of the manufactured equipment is scheduled for the near future. The evaluation criteria and inspection specifications for the quality release are illustrated in Chapter 4 so that the items can be checked out efficiently. Considerations for field installation are also reviewed in terms of space limits. Following the inspection and test at the manufacturer's shop, the results will be promptly reported. Based on the installation principles and the analysis in this report, an updated procedure and methodology for the installation will be applied to the field in more detail and with better precision in the coming years. (author). 14 refs., 23 tabs., 26 figs

  17. Review of design analysis and site installation method of ASME vessel of fuel test loop of Hanaro

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Won; Kim, Jun Yeon

    1997-02-01

    The major goal of this report is to keep those advantages under control and to provide basic solutions for field installation. In order to assess technical availability, the scope, limitations and possibilities of the design requirements are carefully considered. Chapter 3 deals with the seismic stress analysis of the vessels, using the manufacturer's finite element analysis data. The test of the manufactured equipment is scheduled for the near future. The evaluation criteria and inspection specifications for the quality release are illustrated in Chapter 4 so that the items can be checked out efficiently. Considerations for field installation are also reviewed in terms of space limits. Following the inspection and test at the manufacturer's shop, the results will be promptly reported. Based on the installation principles and the analysis in this report, an updated procedure and methodology for the installation will be applied to the field in more detail and with better precision in the coming years. (author). 14 refs., 23 tabs., 26 figs.

  18. Antimicrobial Testing Methods & Procedures Developed by EPA's Microbiology Laboratory

    Science.gov (United States)

    We develop antimicrobial testing methods and standard operating procedures to measure the effectiveness of hard surface disinfectants against a variety of microorganisms. Find methods and procedures for antimicrobial testing.

  19. Standard Test Methods for Constituent Content of Composite Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 These test methods determine the constituent content of composite materials by one of two approaches. Method I physically removes the matrix by digestion or ignition by one of seven procedures, leaving the reinforcement essentially unaffected and thus allowing calculation of reinforcement or matrix content (by weight or volume) as well as percent void volume. Method II, applicable only to laminate materials of known fiber areal weight, calculates reinforcement or matrix content (by weight or volume), and the cured ply thickness, based on the measured thickness of the laminate. Method II is not applicable to the measurement of void volume. 1.1.1 These test methods are primarily intended for two-part composite material systems. However, special provisions can be made to extend these test methods to filled material systems with more than two constituents, though not all test results can be determined in every case. 1.1.2 The procedures contained within have been designed to be particularly effective for ce...
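    Record 19's Method II computes reinforcement content of a laminate from a known fiber areal weight and the measured thickness. A minimal sketch of that kind of calculation, with units and numbers chosen for this example rather than taken from the standard's own notation:

```python
def fiber_volume_fraction(n_plies, areal_weight, fiber_density, thickness):
    """Method II style calculation: reinforcement volume fraction of a
    laminate from fiber areal weight (g/m^2, assumed units), fiber
    density (g/cm^3) and measured laminate thickness (mm)."""
    # Equivalent solid-fiber thickness per laminate, converted to mm
    fiber_thickness_mm = n_plies * areal_weight / (fiber_density * 1000)
    return fiber_thickness_mm / thickness

# 8 plies of 200 g/m^2 fiber (1.8 g/cm^3) in a 1.5 mm thick laminate
print(round(fiber_volume_fraction(8, 200, 1.8, 1.5), 3))  # → 0.593
```

    As the record notes, a thickness-based calculation of this kind yields reinforcement and matrix content but cannot measure void volume, which requires the digestion/ignition approach of Method I.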

  20. A Simple Method for Causal Analysis of Return on IT Investment

    Directory of Open Access Journals (Sweden)

    Farrokh Alemi

    2011-01-01

    Full Text Available This paper proposes a method for examining the causal relationship between investment in information technology (IT) and an organization's productivity. In this method, first, a strong relationship among (1) investment in IT, (2) use of IT and (3) the organization's productivity is verified using correlations. Second, the assumption that IT investment preceded improved productivity is tested using partial correlation. Finally, the assumption of what would have happened in the absence of IT investment, the so-called counterfactual, is tested by forecasting productivity at different levels of investment. The paper applies the proposed method to investment in the Veterans Health Information Systems and Technology Architecture (VISTA) system. Results show that the causal analysis can be done even with limited data. Furthermore, because the procedure relies on the organization's overall productivity, it might be more objective than when the analyst picks and chooses which costs and benefits should be included in the analysis.
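    The partial correlation used in the paper's second step is a short formula over three pairwise Pearson correlations. A sketch with invented yearly series (not the VISTA data):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for z, the tool used to
    test whether IT investment preceded the productivity gain."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# Invented yearly series: IT investment, IT use, and productivity
invest = [1, 2, 3, 4, 5]
use = [2, 3, 3, 5, 6]
prod = [10, 14, 15, 19, 24]
print(round(partial_corr(use, prod, invest), 2))
```

    A use-productivity correlation that survives controlling for investment supports the claimed ordering; one that vanishes suggests both simply track the spending.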

  1. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  2. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  3. An evaluation of the whole effluent toxicity test method

    International Nuclear Information System (INIS)

    Osteen, D.V.

    1999-01-01

    Whole effluent toxicity (WET) testing has become increasingly important to the Environmental Protection Agency (EPA) and the States in the permitting of wastewater discharges from industry and municipalities. The primary purpose of the WET test is to protect aquatic life by predicting the effect of an effluent on the receiving stream. However, there are both scientific and regulatory concerns that using WET tests to regulate industrial effluents may result in false positives and/or false negatives. In order to realistically predict the effect of an effluent on the receiving stream, the test should be as representative as possible of the conditions in the receiving stream. Studies (Rand and Petrocelli 1985) suggested several criteria for an ideal aquatic toxicity test organism, one of which is that the organism be indigenous to, or representative of, the ecosystem receiving the effluent. The other component needed in the development of a predictive test is the use of the receiving stream water or similar synthetic water as the control and dilution water in the test method. Use of an indigenous species and receiving water in the test should help reduce the variability in the method and allow the test to predict the effect of the effluent on the receiving stream. The experience with toxicity testing at the Savannah River Site (SRS) has yielded inconclusive data because of the inconsistency and unreliability of the results. The SRS contention is that the WET method in its present form does not adequately mimic the actual biological and chemical conditions of the receiving streams and is neither reasonable nor accurate. This paper discusses the rationale for this position in terms of historical permitting requirements, outfall effluent test results, standard test method evaluation, scientific review of alternate test species, and concerns over the test method expressed by other organizations.

  4. Statistical testing and power analysis for brain-wide association study.

    Science.gov (United States)

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and the false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently handle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study: our approach can identify altered functional connectivities in a major depressive disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
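    One of the baseline corrections the study compares itself against, the Benjamini-Hochberg FDR procedure, is simple enough to sketch directly (the p-values below are invented):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k smallest
    p-values, where k is the largest rank with p_(k) <= q * k / m."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])  # indices of rejected hypotheses

p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(benjamini_hochberg(p))  # → [0, 1]
```

    Such generic corrections ignore the spatial smoothness of fMRI data, which is the information the random-field approach in this record exploits to gain power.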

  5. Momentum integral network method for thermal-hydraulic transient analysis

    International Nuclear Information System (INIS)

    Van Tuyle, G.J.

    1983-01-01

    A new momentum integral network method has been developed, and tested in the MINET computer code. The method was developed in order to facilitate the transient analysis of complex fluid flow and heat transfer networks, such as those found in the balance of plant of power generating facilities. The method employed in the MINET code is a major extension of a momentum integral method reported by Meyer. Meyer integrated the momentum equation over several linked nodes, called a segment, and used a segment average pressure, evaluated from the pressures at both ends. Nodal mass and energy conservation determined nodal flows and enthalpies, accounting for fluid compression and thermal expansion
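    The momentum-integral idea the record describes, integrating the momentum equation over a flow segment driven by the pressures at its two ends, can be caricatured for a single incompressible segment. The lumped model and all coefficients below are this sketch's assumptions, not MINET's actual equations:

```python
def segment_flow_history(p_in, p_out, inertia, k_fric,
                         w0=0.0, dt=0.1, steps=200):
    """Explicit time-stepping of a one-segment momentum integral:
        inertia * dW/dt = (p_in - p_out) - k_fric * W * |W|
    where W is the segment flow rate and friction is a single lumped
    quadratic term. Returns the flow history."""
    w, hist = w0, []
    for _ in range(steps):
        w += dt * ((p_in - p_out) - k_fric * w * abs(w)) / inertia
        hist.append(w)
    return hist

# Flow spins up toward the steady state W* = sqrt((p_in - p_out) / k_fric)
hist = segment_flow_history(p_in=200.0, p_out=100.0, inertia=5.0, k_fric=4.0)
print(round(hist[-1], 3))  # → 5.0
```

    In a real network code the segment pressures would themselves come from nodal mass and energy conservation, as the abstract notes; here they are held fixed to isolate the momentum integral.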

  6. Comments on the sequential probability ratio testing methods

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A. [Hungarian Academy of Sciences, Budapest (Hungary). Central Research Inst. for Physics

    1996-07-01

    In this paper the classical sequential probability ratio test (SPRT) is reconsidered. Every individual boundary-crossing event of the SPRT is regarded as a new piece of evidence about the problem under hypothesis testing. The Bayes method is applied for belief updating, i.e. for integrating these individual decisions. The procedure is recommended when the user (1) would like to be informed about the tested hypothesis continuously and (2) would like to reach his final conclusion with a high confidence level. (Author).
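    The classical SPRT that this record builds on is compact enough to sketch for a Bernoulli parameter. The hypotheses and error rates below are this example's choices:

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli parameter,
    H0: p = p0 versus H1: p = p1. Processes observations one at a time and
    stops as soon as the log-likelihood ratio crosses a decision boundary.
    Returns ('H0' | 'H1' | 'continue', number_of_samples_used)."""
    lo = math.log(beta / (1 - alpha))   # accept-H0 boundary
    hi = math.log((1 - beta) / alpha)   # accept-H1 boundary
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lo:
            return "H0", n
        if llr >= hi:
            return "H1", n
    return "continue", len(observations)

# A short run of successes is already decisive for the p = 0.5 hypothesis
print(sprt([1, 1, 1, 1, 1, 1], p0=0.1, p1=0.5))  # → ('H1', 2)
```

    Each boundary crossing here yields one decision; the paper's contribution is to treat a sequence of such decisions as evidence and combine them by Bayesian belief updating.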

  7. Analysis of excess reactivity of JOYO MK-III performance test core

    International Nuclear Information System (INIS)

    Maeda, Shigetaka; Yokoyama, Kenji

    2003-10-01

    JOYO is currently being upgraded to the high-performance irradiation test bed 'JOYO MK-III core'. The MK-III core is divided into two fuel regions with different plutonium contents. To obtain a higher neutron flux, the active core height was reduced from 55 cm to 50 cm, and the reflector subassemblies were replaced by shielding subassemblies in the outer two rows. Twenty of the MK-III outer core fuel subassemblies in the performance test core were partially burned in the transition core. Four irradiation test rigs, which do not contain any fuel material, were loaded in the center of the performance test core. In order to evaluate the excess reactivity of the MK-III performance test core accurately, we applied not only the JOYO MK-II core management code system MAGI, but also the MK-III core management code system HESTIA, the JUPITER standard analysis method and the Monte Carlo method with the JFS-3-J3.2R constant set. The excess reactivity evaluations obtained by the JUPITER standard analysis method were corrected to results based on transport theory with zero mesh size in space and angle. A bias factor based on the MK-II 35th core, whose sensitivity was similar to that of the MK-III performance test core, was also applied, except in the case where an adjusted nuclear cross-section library was used. Exact three-dimensional, pin-by-pin geometry and continuous-energy cross sections were used in the Monte Carlo calculation. The estimated error components associated with cross sections, method correction factors and the bias factor were combined based on Takeda's theory. The independently calculated values agree well and range from 2.8 to 3.4%Δk/kk'. The calculation result of the MK-III core management code system HESTIA was 3.13%Δk/kk'. The estimated errors for the bias factor method range from 0.1 to 0.2%Δk/kk'; the error in the case using the adjusted cross-section library was 0.3%Δk/kk'. (author)

  8. Machine Learning Methods for Production Cases Analysis

    Science.gov (United States)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of internal production network data were used for training and testing of the applied models. The k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics such as precision, recall and accuracy.
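
As a rough illustration of the pipeline the abstract describes, here is a minimal k-Nearest Neighbors classifier evaluated with the quoted metrics on invented "production event" data. A Random Forest would normally come from a library such as scikit-learn; nothing here reflects the authors' actual dataset or descriptors.

```python
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

def precision_recall_accuracy(y_true, y_pred, positive=1):
    """Standard classification metrics for a binary label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall, acc

# Toy "production events": two features, label 1 = hazardous
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1]
test_X = [(0.5, 0.5), (5.5, 5.5), (1, 1), (6, 6)]
test_y = [0, 1, 0, 1]
pred = [knn_predict(train_X, train_y, x) for x in test_X]
print(pred, precision_recall_accuracy(test_y, pred))
```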

  9. Seismic test qualification of electrical equipment - testing methods in use to EDF

    International Nuclear Information System (INIS)

    Fabries, R.

    1981-01-01

    At the beginning, for the 900 MW power plant level, the testing method in use was the single-axis test by sine beats of 10 cycles, according to the specifications of guide IEEE 344-71. When the French guide UTE C20-420 came into force, we had to define another testing method (EDF standard HN20 E52), which uses a single-axis test with either a 5-cycle sine beat or a synthesized time history. We present here the main criteria justifying: the single-axis test; the single-frequency wave (when the Initial Response Spectrum (IRS) presents a narrow band); the use of one 5-cycle sine beat or one synthesized time history; and the need to take the high stress level into account. This oligocyclic fatigue stress explains why one 5-cycle beat may be as severe as one 20-second time history (with the same strong-response spectrum level). We conclude that the duration of the test wave applied to the equipment should be considered a relative parameter only. We also discuss the weight of the SSE tests with respect to the OBE tests, and the precautions to take in order to generate and accurately check the synthesized time history and to choose the test frequencies when the sine beat is used. (orig./HP)

  10. From the Kirsch-Kress potential method via the range test to the singular sources method

    International Nuclear Information System (INIS)

    Potthast, R; Schulz, J

    2005-01-01

    We review three reconstruction methods for inverse obstacle scattering problems. We analyse the relation between the Kirsch-Kress potential method (1986), the range test of Kusiak, Potthast and Sylvester (2003) and the singular sources method of Potthast (2000). In particular, we show that the range test is a logical extension of the Kirsch-Kress method into the category of sampling methods employing the tool of domain sampling. We then show how a multi-wave version of the range test can be set up and work out its relation to the singular sources method. Numerical examples and demonstrations are provided

  11. Analysis of the discontinuous Petrov-Galerkin method with optimal test functions for the Reissner-Mindlin plate bending model

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Niemi, Antti H.

    2014-01-01

    We analyze the discontinuous Petrov-Galerkin (DPG) method with optimal test functions when applied to solve the Reissner-Mindlin model of plate bending. We prove that the hybrid variational formulation underlying the DPG method is well-posed (stable

  12. Partial Discharge Tests using the Cigré II method

    DEFF Research Database (Denmark)

    Casale, M. Di Lorenzo del; Schifani, R.; Holbøll, Joachim

    2000-01-01

    In this paper, the results of an experimental project on insulating material aging, performed in both Denmark and Italy, are reported. This study was concerned with partial discharge (PD) behavior at temperatures between 30 and 80°C using CIGRE method II. The material tested was a commercial polymethylmethacrylate (PMMA), which was chosen not for its good dielectric properties but rather because much of its discharge resistance data at ambient temperature is already well documented. A description is given of the theoretical and experimental methodology followed in this work. Mixed Weibull analysis techniques in terms of the PD amplitude and phase distribution characteristics were employed to distinguish the presence of different aging mechanisms. Such a difference was observed at 30 and at 80°C. At 30°C the analysis inferred a single discharge aging process acting until breakdown, while at 80°C the results...

  13. Semen analysis and sperm function tests: How much to test?

    Directory of Open Access Journals (Sweden)

    S S Vasan

    2011-01-01

    Semen analysis as an integral part of infertility investigations is taken as a surrogate measure for male fecundity in clinical andrology, male fertility, and pregnancy risk assessments. Clearly, laboratory seminology is still very much in its infancy. Inasmuch as the creation of a conventional semen profile will always represent the foundation of male fertility evaluation, the 5th edition of the World Health Organization (WHO) manual is a definitive statement on how such assessments should be carried out and how their quality should be controlled. A major advance in this new edition of the WHO manual, resolving the most salient critique of previous editions, is the development of the first well-defined reference ranges for semen analysis, based on the analysis of over 1900 recent fathers. The methodology used in the assessment of the usual variables in semen analysis is described, as are many of the less common, but very valuable, sperm function tests. Sperm function testing is used to determine whether sperm have the biologic capacity to perform the tasks necessary to reach and fertilize ova and ultimately result in live births. A variety of tests are available to evaluate different aspects of these functions. To accurately use these functional assays, the clinician must understand what the tests measure, what the indications for the assays are, and how to interpret the results to direct further testing or patient management.

  14. Non-contact method of search and analysis of pulsating vessels

    Science.gov (United States)

    Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.

    2018-04-01

    Despite the variety of existing methods for recording the human pulse and a solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods based on advanced image processing has caused a new wave of interest in this issue. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with a laser light source, which made it possible to use other methods of searching for the proposed pulsation zone. During testing of the method, several series of experiments were carried out, both with artificial oscillating objects and with the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.
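
The core signal-processing step behind such i-PPG methods, recovering a pulsation frequency from an intensity time series, might be sketched as follows. The synthetic 30 fps signal with an assumed 1.2 Hz pulsation is invented for illustration; the authors' actual zone-search procedure is not reproduced.

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a signal via FFT, restricted to a
    physiological band of 0.5-4 Hz (30-240 beats per minute)."""
    signal = np.asarray(signal, float) - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 4.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic camera signal: 1.2 Hz pulsation (72 bpm) plus noise, 30 fps, 20 s
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
intensity = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
bpm = 60 * dominant_frequency(intensity, fs)
print(round(bpm, 1))   # close to 72 bpm
```

A 20 s window gives 0.05 Hz frequency resolution, which is why a clean pulsation near 1.2 Hz is recovered almost exactly despite the noise.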

  15. Standard test method for static leaching of monolithic waste forms for disposal of radioactive waste

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method provides a measure of the chemical durability of a simulated or radioactive monolithic waste form, such as a glass, ceramic, cement (grout), or cermet, in a test solution at temperatures <100°C under low specimen surface- area-to-leachant volume (S/V) ratio conditions. 1.2 This test method can be used to characterize the dissolution or leaching behaviors of various simulated or radioactive waste forms in various leachants under the specific conditions of the test based on analysis of the test solution. Data from this test are used to calculate normalized elemental mass loss values from specimens exposed to aqueous solutions at temperatures <100°C. 1.3 The test is conducted under static conditions in a constant solution volume and at a constant temperature. The reactivity of the test specimen is determined from the amounts of components released and accumulated in the solution over the test duration. A wide range of test conditions can be used to study material behavior, includin...

  16. Using non-parametric methods in econometric production analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    2012-01-01

    …by investigating the relationship between the elasticity of scale and the farm size. We use a balanced panel data set of 371 specialised crop farms for the years 2004-2007. A non-parametric specification test shows that neither the Cobb-Douglas function nor the Translog function are consistent with the "true … Econometric estimation of production functions is one of the most common methods in applied economic production analysis. These studies usually apply parametric estimation techniques, which obligate the researcher to specify a functional form of the production function, of which the Cobb … parameter estimates, but also in biased measures which are derived from the parameters, such as elasticities. Therefore, we propose to use non-parametric econometric methods. First, these can be applied to verify the functional form used in parametric production analysis. Second, they can be directly used…

  17. Accelerated Test Method for Corrosion Protective Coatings Project

    Science.gov (United States)

    Falker, John; Zeitlin, Nancy; Calle, Luz

    2015-01-01

    This project seeks to develop a new accelerated corrosion test method that predicts the long-term corrosion protection performance of spaceport structure coatings as accurately and reliably as current long-term atmospheric exposure tests. This new accelerated test method will shorten the time needed to evaluate the corrosion protection performance of coatings for NASA's critical ground support structures. Lifetime prediction for spaceport structure coatings has a 5-year qualification cycle using atmospheric exposure. Current accelerated corrosion tests often provide false positives and negatives for coating performance, do not correlate to atmospheric corrosion exposure results, and do not correlate with atmospheric exposure timescales for lifetime prediction.

  18. Assessment of Random Assignment in Training and Test Sets using Generalized Cluster Analysis Technique

    Directory of Open Access Journals (Sweden)

    Sorana D. BOLBOACĂ

    2011-06-01

    Aim: The appropriateness of the random assignment of compounds to training and validation sets was assessed using the generalized cluster technique. Material and Method: A quantitative structure-activity relationship model using the Molecular Descriptors Family on Vertices was evaluated in terms of the assignment of carboquinone derivatives to training and test sets during the leave-many-out analysis. The assignment of compounds was investigated using five variables: observed anticancer activity and four structure descriptors. Generalized cluster analysis with the K-means algorithm was applied in order to investigate whether the assignment of compounds was proper. The Euclidian distance and maximization of the initial distance, using cross-validation with a v-fold of 10, were applied. Results: All five variables included in the analysis proved to have a statistically significant contribution to the identification of clusters. Three clusters were identified, each of them containing carboquinone derivatives belonging to both the training and the test sets. The observed activity of the carboquinone derivatives proved to be normally distributed within every cluster. The presence of training and test sets in all clusters identified using generalized cluster analysis with the K-means algorithm, and the distribution of observed activity within clusters, sustain a proper assignment of compounds to training and test sets. Conclusion: Generalized cluster analysis using the K-means algorithm proved to be a valid method for assessing the random assignment of carboquinone derivatives to training and test sets.
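
The cluster-membership check described in the Results can be sketched with a plain K-means implementation on invented data. The deterministic initialisation, the two-feature "compounds" and the toy train/test flag are assumptions for illustration, not the study's actual descriptors or settings.

```python
import numpy as np

def kmeans(X, init_idx, iters=100):
    """Plain Lloyd's algorithm with a fixed initialisation (a sketch of the
    K-means step of generalized cluster analysis)."""
    centers = X[list(init_idx)].copy()
    k = len(init_idx)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# Toy compounds: (observed activity, one descriptor), three natural groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(20, 2)) for c in (0.0, 3.0, 6.0)])
is_test = np.tile([False, False, False, True], 15)   # every 4th compound -> test set
labels = kmeans(X, init_idx=(0, 20, 40))
# The split is judged proper if every cluster contains members of both sets
proper = all(np.any(is_test & (labels == j)) and np.any(~is_test & (labels == j))
             for j in range(3))
print(proper)
```

If a cluster contained only training (or only test) compounds, the split would be concentrating a region of chemical space in one set, which is exactly what the study's check guards against.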

  19. Evaluation of the use of nodal methods for MTR neutronic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Reitsma, F.; Mueller, E.Z.

    1997-08-01

    Although modern nodal methods are used extensively in the nuclear power industry, their use for research reactor analysis has been very limited. The suitability of nodal methods for material testing reactor analysis is investigated with the emphasis on the modelling of the core region (fuel assemblies). The nodal approach's performance is compared with that of the traditional finite-difference fine mesh approach. The advantages of using nodal methods coupled with integrated cross section generation systems are highlighted, especially with respect to data preparation, simplicity of use and the possibility of performing a great variety of reactor calculations subject to strict time limitations such as are required for the RERTR program.

  20. Generating and testing methods for consumer-oriented product development

    International Nuclear Information System (INIS)

    2001-10-01

    In order to obtain a good insight into the various design methods that product developers can use to develop and test useful domotics products (domotics: intelligent systems for the home), an inventory has been made of the methods used in the Netherlands. The inventory covers two categories of methods: (1) methods for getting better acquainted with the user and/or the problem, and for generating novel solutions: generative methods; and (2) methods for assessing solutions (through the various phases of the design process): testing methods. The first category concentrates on the design process itself; in other words: how can the designer ensure, during the design process, that (domotics) products will be as usable as possible? The second category aims at testing a design (in whatever form: drawing, prototype, functional computer animation, etc.) with its users. These are methods for assessing a design at various stages of the design process.

  1. Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation

    Directory of Open Access Journals (Sweden)

    Marlen Promann

    2015-03-01

    Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over traditional federated search. Informal, site- and context-specific usability tests have done little to test the rigor of discovery layers against the user goals, motivations and workflows they were designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users' physical interactions (i.e. clicks), and (b) users' cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.

  2. Feasibility analysis of EDXRF method to detect heavy metal pollution in ecological environment

    Science.gov (United States)

    Hao, Zhixu; Qin, Xulei

    2018-02-01

    Changes in heavy metal content in the water environment, soil and plants can reflect changes in heavy metal pollution in the ecological environment, so it is important to monitor the trend of heavy metal pollution using the heavy metal content of water, soil and plants. However, the content of heavy metals in nature is very low, the background elements of water, soil and plant samples are complex, and there are many interfering factors in an EDXRF system that affect the spectral analysis results and reduce the detection accuracy. Through a comparative analysis of several heavy metal detection methods, it is concluded that the EDXRF method is superior to other chemical methods in testing accuracy and feasibility when detecting heavy metal pollution in soil in the ecological environment.

  3. Analyse Factorielle d'une Batterie de Tests de Comprehension Orale et Ecrite (Factor Analysis of a Battery of Tests of Listening and Reading Comprehension). Melanges Pedagogiques, 1971.

    Science.gov (United States)

    Lonchamp, F.

    This is a presentation of the results of a factor analysis of a battery of tests intended to measure listening and reading comprehension in English as a second language. The analysis sought to answer the following questions: (1) whether the factor analysis method yields results when applied to tests which are not specifically designed for this…

  4. An unconventional method of quantitative microstructural analysis

    International Nuclear Information System (INIS)

    Rastani, M.

    1995-01-01

    The experiment described here introduces a simple methodology that could replace the time-consuming and expensive conventional methods of metallographic and quantitative analysis of the effect of thermal treatment on microstructure. The method is ideal for the microstructural evaluation of tungsten filaments and other wire samples, such as copper wire, which can be conveniently coiled. Ten such samples were heat treated by ohmic resistance at temperatures expected to span the recrystallization range. After treatment, the samples were evaluated in the elastic recovery test, and a normalized elastic recovery factor was defined in terms of the measured deflections. It has been shown experimentally that the elastic recovery factor depends on the degree of recrystallization; in other words, this factor is used to determine the fraction of unrecrystallized material. Because the elastic recovery method examines the whole filament rather than just one section through it, as in the metallographic method, it measures the degree of recrystallization more accurately. The method also requires considerably less time and cost than the conventional method

  5. Analysis of Peach Bottom turbine trip tests

    International Nuclear Information System (INIS)

    Cheng, H.S.; Lu, M.S.; Hsu, C.J.; Shier, W.G.; Diamond, D.J.; Levine, M.M.; Odar, F.

    1979-01-01

    Current interest in the analysis of turbine trip transients has been generated by the recent tests performed at the Peach Bottom (Unit 2) reactor. Three tests, simulating turbine trip transients, were performed at different initial power and coolant flow conditions. The data from these tests provide considerable information to aid the qualification of computer codes that are currently used in BWR design analysis. The results of an analysis of a turbine trip transient using the RELAP-3B and BNL-TWIGL computer codes are presented, with specific comparisons of the calculated reactor power and system pressures against the test data. Excellent agreement for all three test transients is evident from the comparisons

  6. Co-ordinated research programme on benchmark study for the seismic analysis and testing of WWER-type nuclear power plants. V. 4D. Paks NPP: Analysis and testing. Working material

    International Nuclear Information System (INIS)

    1996-01-01

    The Co-ordinated research programme on the benchmark study for the seismic analysis and testing of WWER-type nuclear power plants was initiated subsequent to the request from representatives of Member States. The conclusions adopted at the Technical Committee Meeting on Seismic Issues related to existing nuclear power plants, held in Tokyo in 1991, called for the harmonization of methods and criteria used in Member States in issues related to seismic safety. The Consultants' Meeting which followed resulted in a working document for the CRP. It was decided that a benchmark study was the most effective way to achieve the principal objective. Two types of WWER reactors (WWER-440/213 and WWER-1000) were selected as prototypes for the benchmark exercise, to be tested on a full scale using explosions and/or vibration generators. The two prototypes are Kozloduy Units 5/6 for WWER-1000 and Paks for WWER-440/213. This volume of working material contains reports on seismic margin assessment and earthquake-experience-based methods for WWER-440/213 type NPPs; structural analysis and site inspection for site requalification; the structural response of the Paks NPP reactor building; analysis and testing of model worm-type tanks on a shaking table; a vibration test of a worm tank model; and an evaluation of the potential hazard for operating WWER control rods under seismic excitation

  7. Standard Test Method for Saltwater Pressure Immersion and Temperature Testing of Photovoltaic Modules for Marine Environments

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This test method provides a procedure for determining the ability of photovoltaic modules to withstand repeated immersion or splash exposure by seawater as might be encountered when installed in a marine environment, such as a floating aid-to-navigation. A combined environmental cycling exposure with modules repeatedly submerged in simulated saltwater at varying temperatures and under repetitive pressurization provides an accelerated basis for evaluation of aging effects of a marine environment on module materials and construction. 1.2 This test method defines photovoltaic module test specimens and requirements for positioning modules for test, references suitable methods for determining changes in electrical performance and characteristics, and specifies parameters which must be recorded and reported. 1.3 This test method does not establish pass or fail levels. The determination of acceptable or unacceptable results is beyond the scope of this test method. 1.4 The values stated in SI units are to be ...

  8. A new method for pressure test analysis of a vertically fractured well producing commingled zones in bounded square reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Osman, Mohammed E.; Abou-Kassem, J.H. [Chemical and Petroleum Engineering Department, UAE University, Al-Ain (United Arab Emirates)

    1997-07-15

    Although hydraulically or naturally fractured wells located in stratified, bounded reservoirs are common, reliable techniques for analyzing pressure test data from such reservoirs are lacking. This paper presents a mathematical model that describes the pressure behavior of a vertically fractured well located in a stratified, bounded, square reservoir. The fracture can be either a uniform-flux or an infinite-conductivity fracture. It was found that the dimensionless pressure function and its derivative and the fractional production rates from the different layers are mainly controlled by the fracture penetration into the formation, and that transmissibility and storativity affect the fractional production rate and the pressure derivative but have little effect on the dimensionless pressure function. Type curves of dimensionless pressure and dimensionless pressure derivative can be used to evaluate the reservoir characteristics. The selection of the appropriate type curve is guided by the behavior of the layer fractional production rate obtained from a flow-rate survey carried out during well testing. Type curves for uniform-flux and infinite-conductivity fractures exhibit similar features. Two examples are presented to demonstrate the application of the new method of analysis presented in this paper

  9. A survey of residual analysis and a new test of residual trend.

    Science.gov (United States)

    McDowell, J J; Calvin, Olivia L; Klapes, Bryan

    2016-05-01

    A survey of residual analysis in behavior-analytic research reveals that existing methods are problematic in one way or another. A new test for residual trends is proposed that avoids the problematic features of the existing methods. It entails fitting cubic polynomials to sets of residuals and comparing their effect sizes to those that would be expected if the sets of residuals were random. To this end, sampling distributions of effect sizes for fits of a cubic polynomial to random data were obtained by generating sets of random standardized residuals of various sizes, n. A cubic polynomial was then fitted to each set of residuals and its effect size was calculated, yielding a sampling distribution of effect sizes for each n. To test for a residual trend in experimental data, the median effect size of cubic-polynomial fits to sets of experimental residuals can be compared to the median of the corresponding sampling distribution of effect sizes for random residuals using a sign test. An example from the literature, which entailed comparing mathematical and computational models of continuous choice, is used to illustrate the utility of the test.
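
The proposed test can be sketched end-to-end on synthetic data: build the null sampling distribution of cubic-fit effect sizes for random residuals, then sign-test experimental sets against its median. Using R² as the effect size, the trend amplitude, and the set counts are illustrative assumptions, not the authors' exact choices.

```python
import numpy as np
from math import comb

def cubic_effect_size(residuals):
    """R^2 of a cubic-polynomial fit, taken here as the trend effect size."""
    x = np.arange(len(residuals), dtype=float)
    fit = np.polyval(np.polyfit(x, residuals, 3), x)
    ss_res = np.sum((residuals - fit) ** 2)
    ss_tot = np.sum((residuals - np.mean(residuals)) ** 2)
    return 1.0 - ss_res / ss_tot

def null_effect_sizes(n, draws=2000, seed=0):
    """Sampling distribution of the effect size for random residuals of size n."""
    rng = np.random.default_rng(seed)
    return np.array([cubic_effect_size(rng.standard_normal(n)) for _ in range(draws)])

n = 30
null_median = np.median(null_effect_sizes(n))

# Ten "experimental" residual sets with a deliberate cubic trend added
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, n)
sets = [rng.standard_normal(n) + 3.0 * (x ** 3 - 0.5 * x) for _ in range(10)]
above = sum(cubic_effect_size(s) > null_median for s in sets)

# One-sided sign test: under H0 each set exceeds the null median w.p. 1/2
p = sum(comb(10, k) for k in range(above, 11)) / 2 ** 10
print(above, round(p, 4))
```

Random residuals would land above the null median about half the time; the injected cubic trend pushes nearly every set above it, so the sign test rejects randomness.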

  10. Development of standard testing methods for nuclear-waste forms

    International Nuclear Information System (INIS)

    Mendel, J.E.; Nelson, R.D.

    1981-11-01

    Standard test methods for waste package component development and design, safety analyses, and licensing are being developed for the Nuclear Waste Materials Handbook. This paper mainly describes the testing methods for obtaining waste form materials data

  11. Parameter estimation for hydrogen analysis by using transport method

    International Nuclear Information System (INIS)

    Selvi, S.; Can, N.

    1992-01-01

    A transport method is described which greatly reduces the number of calibration standards needed for hydrogen analysis by neutron-induced prompt γ-rays. The counts in the photopeaks from neutron capture in hydrogen for various standard concentrations, the distribution of the source neutron rate entering the thermal group, and the reaction rates in the samples are investigated theoretically using 100-energy-group cross sections and experimental 252Cf spectra for a test configuration. Comparison of the theoretical results with those measured from the test configuration shows good agreement. (author)

  12. Distance-based microfluidic quantitative detection methods for point-of-care testing.

    Science.gov (United States)

    Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James

    2016-04-07

    Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.

  13. A method for evaluating horizontal well pumping tests.

    Science.gov (United States)

    Langseth, David E; Smyth, Andrew H; May, James

    2004-01-01

    Predicting the future performance of horizontal wells under varying pumping conditions requires estimates of basic aquifer parameters, notably transmissivity and storativity. For vertical wells, there are well-established methods for estimating these parameters, typically based on either the recovery from induced head changes in a well or from the head response in observation wells to pumping in a test well. Comparable aquifer parameter estimation methods for horizontal wells have not been presented in the ground water literature. Formation parameter estimation methods based on measurements of pressure in horizontal wells have been presented in the petroleum industry literature, but these methods have limited applicability for ground water evaluation and are based on pressure measurements in only the horizontal well borehole, rather than in observation wells. This paper presents a simple and versatile method by which pumping test procedures developed for vertical wells can be applied to horizontal well pumping tests. The method presented here uses the principle of superposition to represent the horizontal well as a series of partially penetrating vertical wells. This concept is used to estimate a distance from an observation well at which a vertical well that has the same total pumping rate as the horizontal well will produce the same drawdown as the horizontal well. This equivalent distance may then be associated with an observation well for use in pumping test algorithms and type curves developed for vertical wells. The method is shown to produce good results for confined aquifers and unconfined aquifers in the absence of delayed yield response. For unconfined aquifers, the presence of delayed yield response increases the method error.
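
The superposition idea can be sketched numerically: represent the horizontal well as N equal-rate vertical wells and solve for the distance at which a single vertical well pumping the full rate matches their combined drawdown. For simplicity this sketch uses fully penetrating Theis wells rather than the paper's partially penetrating solution, and all geometry and aquifer parameters are invented for illustration.

```python
import math

def well_fn(u, terms=60):
    """Theis well function W(u) = E1(u) via its convergent series."""
    total = -0.5772156649 - math.log(u)
    sign, fact = 1.0, 1.0
    for n in range(1, terms):
        fact *= n
        total += sign * u ** n / (n * fact)
        sign = -sign
    return total

def horizontal_drawdown(r_list, Q, T, S, t):
    """Drawdown at an observation point from a horizontal well represented as
    N equal-rate vertical wells (superposition of Theis solutions)."""
    N = len(r_list)
    return sum(Q / N / (4.0 * math.pi * T) * well_fn(r * r * S / (4.0 * T * t))
               for r in r_list)

def equivalent_distance(r_list, T, S, t):
    """Distance at which ONE vertical well pumping the full rate produces the
    same drawdown as the horizontal well (bisection; drawdown falls with r)."""
    target = horizontal_drawdown(r_list, 1.0, T, S, t)      # rate cancels out
    lo, hi = min(r_list), max(r_list)
    for _ in range(80):
        mid = math.sqrt(lo * hi)                 # geometric bisection
        s_mid = well_fn(mid * mid * S / (4.0 * T * t)) / (4.0 * math.pi * T)
        if s_mid > target:
            lo = mid    # too much drawdown -> equivalent point is farther out
        else:
            hi = mid
    return math.sqrt(lo * hi)

# 100 m horizontal well as five segments; observation well 50 m off-centre
segments = [-40.0, -20.0, 0.0, 20.0, 40.0]       # segment midpoints (m)
r_list = [math.sqrt(d * d + 50.0 * 50.0) for d in segments]
r_eq = equivalent_distance(r_list, T=100.0, S=1e-4, t=1.0)  # T in m^2/d, t in d
print(round(r_eq, 1))
```

The resulting equivalent distance lies between the nearest and farthest segment distances, and can then be fed into standard vertical-well type curves, which is the substitution the paper's method enables.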

  14. Evaluation of methods to leak test sealed radiation sources

    International Nuclear Information System (INIS)

    Arbeau, N.D.; Scott, C.K.

    1987-04-01

    The methods for the leak testing of sealed radiation sources were reviewed. One hundred and thirty-one equipment vendors were surveyed to identify commercially available leak test instruments. The equipment is summarized in tabular form by radiation type and detector type for easy reference. The radiation characteristics of the licensed sources were reviewed and summarized in a format that can be used to select the most suitable detection method. A test kit is proposed for use by inspectors when verifying a licensee's test procedures. The general elements of leak test procedures are discussed

  15. Testing and Validation of the Dynamic Inertia Measurement Method

    Science.gov (United States)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The Dynamic Inertia Measurement (DIM) method uses a ground vibration test setup to determine the mass properties of an object using information from frequency response functions. Most conventional mass properties testing involves using spin tables or pendulum-based swing tests, which for large aerospace vehicles becomes increasingly difficult and time-consuming, and therefore expensive, to perform. The DIM method has been validated on small test articles but has not been successfully proven on large aerospace vehicles. In response, the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) conducted mass properties testing on an "iron bird" test article that is comparable in mass and scale to a fighter-type aircraft. The simple two-I-beam design of the "iron bird" was selected to ensure accurate analytical mass properties. Traditional swing testing was also performed to compare the level of effort, amount of resources, and quality of data with the DIM method. The DIM test showed favorable results for the center of gravity and moments of inertia; however, the products of inertia showed disagreement with analytical predictions.
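
The principle behind DIM is that, in the rigid-body frequency range, the frequency response between applied force and acceleration is governed by the mass properties. As a heavily simplified single-axis illustration (not the actual DIM procedure, which works with full six-degree-of-freedom frequency response functions), mass can be recovered from simultaneous force and acceleration measurements by least squares:

```python
def estimate_mass(forces, accels):
    """Least-squares mass estimate from simultaneous force (N) and
    acceleration (m/s^2) samples, assuming the rigid-body relation F = m*a."""
    num = sum(f * a for f, a in zip(forces, accels))
    den = sum(a * a for a in accels)
    return num / den

# synthetic single-axis shake data for a 250 kg body (illustrative values)
accels = [0.1 * i for i in range(1, 21)]
forces = [250.0 * a for a in accels]
```

With noisy measurements the same least-squares form still applies; the full method generalizes this to the complete rigid-body mass matrix, including the center of gravity and inertia terms.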

  16. Strain measurement in and analysis for hydraulic test of CPR1000 reactor pressure vessel

    International Nuclear Information System (INIS)

    Zhou Dan; Zhuang Dongzhen

    2013-01-01

    The strain measurement in the hydraulic test of a CPR1000 reactor pressure vessel performed at Dongfang Heavy Machinery Co., Ltd. is introduced. The detailed test scheme and method are described and the measured strain and stress results are given. Meanwhile, a finite element analysis was performed for the pressure vessel, the results of which generally matched the measurements. The reliability of the strain measurement was verified and the high strength margin of the vessel was shown, which provides a good reference for follow-up hydraulic tests and strength analysis of reactor pressure vessels. (authors)

  17. Comparison of multianalyte proficiency test results by sum of ranking differences, principal component analysis, and hierarchical cluster analysis.

    Science.gov (United States)

    Škrbić, Biljana; Héberger, Károly; Durišić-Mladenović, Nataša

    2013-10-01

    Sum of ranking differences (SRD) was applied for comparing multianalyte results obtained by several analytical methods used in one or in different laboratories, i.e., for ranking the overall performances of the methods (or laboratories) in simultaneous determination of the same set of analytes. The data sets for testing of the SRD applicability contained the results reported during one of the proficiency tests (PTs) organized by EU Reference Laboratory for Polycyclic Aromatic Hydrocarbons (EU-RL-PAH). In this way, the SRD was also tested as a discriminant method alternative to existing average performance scores used to compare multianalyte PT results. SRD should be used along with the z scores, the most commonly used PT performance statistics. SRD was further developed to handle the same rankings (ties) among laboratories. Two benchmark concentration series were selected as reference: (a) the assigned PAH concentrations (determined precisely beforehand by the EU-RL-PAH) and (b) the averages of all individual PAH concentrations determined by each laboratory. Ranking relative to the assigned values and also to the average (or median) values pointed to the laboratories with the most extreme results, as well as revealed groups of laboratories with similar overall performances. SRD reveals differences between methods or laboratories even if classical test(s) cannot. The ranking was validated using comparison of ranks by random numbers (a randomization test) and using sevenfold cross-validation, which highlighted the similarities among the (methods used in) laboratories. Principal component analysis and hierarchical cluster analysis justified the findings based on SRD ranking/grouping. If the PAH concentrations are row-scaled (i.e., z scores are analyzed as input for ranking) SRD can still be used for checking the normality of errors. Moreover, cross-validation of SRD on z scores groups the laboratories similarly.
The SRD technique is general in nature, i.e., it can
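
The core SRD computation described above — rank the analytes by the reference column, rank them again by each laboratory's results, and sum the absolute rank differences — can be sketched as a minimal stdlib implementation (variable names and the tie-handling details are illustrative):

```python
def ranks(values):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1              # mean rank of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def srd(lab_results, reference):
    """Sum of ranking differences of one laboratory against the reference."""
    rl, rr = ranks(lab_results), ranks(reference)
    return sum(abs(a - b) for a, b in zip(rl, rr))
```

An SRD of zero means the laboratory reproduces the reference ordering exactly; a fully reversed ordering gives the maximum possible SRD, and the randomization test mentioned above judges where a laboratory's SRD falls relative to random rankings.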

  18. ADM guidance-Ceramics: Fracture toughness testing and method selection.

    Science.gov (United States)

    Cesar, Paulo Francisco; Della Bona, Alvaro; Scherrer, Susanne S; Tholey, Michael; van Noort, Richard; Vichi, Alessandro; Kelly, Robert; Lohbauer, Ulrich

    2017-06-01

    The objective is within the scope of the Academy of Dental Materials Guidance Project, which is to provide dental materials researchers with a critical analysis of fracture toughness (FT) tests such that the assessment of the FT of dental ceramics is conducted in a reliable, repeatable and reproducible way. Fracture mechanics theory and FT methodologies were critically reviewed to introduce basic fracture principles and determine the main advantages and disadvantages of existing FT methods from the standpoint of the dental researcher. The recommended methods for FT determination of dental ceramics were the Single Edge "V" Notch Beam (SEVNB), Single Edge Precracked Beam (SEPB), Chevron Notch Beam (CNB), and Surface Crack in Flexure (SCF). SEVNB's main advantage is the ease of producing the notch via a cutting disk, SEPB allows for production of an atomically sharp crack generated by a specific precracking device, CNB is technically difficult, but based on solid fracture mechanics solutions, and SCF involves fracture from a clinically sized precrack. The IF test should be avoided due to heavy criticism that has arisen in the engineering field regarding the empirical nature of the calculations used for FT determination. Dental researchers interested in FT measurement of dental ceramics should start with a broad review of fracture mechanics theory to understand the underlying principles involved in fast fracture of ceramics. The choice of FT methodology should be based on the pros and cons of each test, as described in this literature review. Copyright © 2017 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  19. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use. Vol. 2, Analysis Methods

    DEFF Research Database (Denmark)

    Jensen, Kurt

    This three-volume work presents a coherent description of the theoretical and practical aspects of coloured Petri nets (CP-nets). The second volume contains a detailed presentation of the analysis methods for CP-nets, which allow the modeller to investigate dynamic properties of CP-nets. The main ideas behind the analysis methods are described, as well as the mathematics on which they are based and how the methods are supported by computer tools. Some parts of the volume are theoretical while others are application oriented. The purpose of the volume is to teach the reader how to use the formal analysis methods, which does not require a deep understanding of the underlying mathematical theory.

  20. Development of dissolution test method for a telmisartan/amlodipine besylate combination using synchronous derivative spectrofluorimetry

    Directory of Open Access Journals (Sweden)

    Panikumar Durga Anumolu

    2014-04-01

    Full Text Available The dissolution process is considered an important in vitro tool to evaluate product quality and drug release behavior. Single dissolution methods for the analysis of combined dosage forms are preferred to simplify quality control testing. The objective of the present work was to develop and validate a single dissolution test for a telmisartan (TEL and amlodipine besylate (AML combined tablet dosage form. The sink conditions, stability and specificity of both drugs in different dissolution media were tested to choose a discriminatory dissolution method, which uses an USP type-II apparatus with a paddle rotating at 75 rpm, with 900 mL of simulated gastric fluid (SGF without enzymes as the dissolution medium. This dissolution methodology provided good dissolution profiles for both TEL and AML and was able to discriminate changes in the composition and manufacturing process. To quantify both drugs simultaneously, a synchronous first derivative spectrofluorimetric method was developed and validated. Drug release was analyzed by a fluorimetric method at 458 nm and 675 nm for AML and TEL, respectively. The dissolution method was validated as per ICH guidance.
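
Derivative readout of the kind used above — reading the signal where overlapping bands are resolved by differentiation — can be approximated numerically by central differences. A minimal sketch with a synthetic band centred at 458 nm (the Gaussian shape and width are assumptions for illustration, not the measured spectra):

```python
import math

def first_derivative(wavelengths, intensities):
    """Central-difference first derivative of a spectrum, dI/d(lambda)."""
    wl_out, deriv = [], []
    for i in range(1, len(wavelengths) - 1):
        deriv.append((intensities[i + 1] - intensities[i - 1])
                     / (wavelengths[i + 1] - wavelengths[i - 1]))
        wl_out.append(wavelengths[i])
    return wl_out, deriv

# synthetic emission band centred at 458 nm
wl = list(range(400, 521))
spectrum = [math.exp(-((w - 458) / 15.0) ** 2) for w in wl]
wl_d, deriv = first_derivative(wl, spectrum)
```

The first derivative crosses zero at the band maximum and peaks on the flanks, which is what makes derivative (and synchronous-derivative) measurement useful for quantifying one component in the presence of an overlapping one.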

  1. Methods of Multivariate Analysis

    CERN Document Server

    Rencher, Alvin C

    2012-01-01

    Praise for the Second Edition "This book is a systematic, well-written, well-organized text on multivariate analysis packed with intuition and insight . . . There is much practical wisdom in this book that is hard to find elsewhere."-IIE Transactions Filled with new and timely content, Methods of Multivariate Analysis, Third Edition provides examples and exercises based on more than sixty real data sets from a wide variety of scientific fields. It takes a "methods" approach to the subject, placing an emphasis on how students and practitioners can employ multivariate analysis in real-life sit

  2. Periodic tests: a human factors analysis of documentary aspects

    International Nuclear Information System (INIS)

    Perinet, Romuald; Rousseau, Jean-Marie

    2007-01-01

    Periodic tests are technical inspections aimed at verifying the availability of the safety-related systems during operation. The French licensee, Electricite de France (EDF), manages periodic tests according to procedures, methods of examination and a frequency, which were defined when the systems were designed. These requirements are defined by national authorities of EDF in a reference document composed of rules of testing and tables containing the reference values to be respected. This reference document is analyzed and transformed by each 'Centre Nucleaire de Production d'Electricite' (CNPE) into station-specific operating ranges of periodic tests. In 2003, the IRSN noted that significant events for safety (ESS) involving periodic tests represented more than 20% of ESS between 2000 and 2002. Thus, 340 ESS were related to non-compliance with the conditions of the test and errors in the implementation of the procedures. A first analysis showed that almost 26% of all ESSs from 2000 to 2002 were related to periodic tests. For many of them, the national reference document and the operating ranges of tests were involved. In this context, the 'Direction Generale de la Surete Nucleaire' (DGSNR), requested the 'Institut de Radioprotection et de Surete Nucleaire' (IRSN) to examine the process of definition and implementation of the periodic tests. The IRSN analyzed about thirty French Licensee event reports occurring during the considered period (2000-2002). The IRSN also interviewed the main persons responsible for the processes and observed the performance of 3 periodic tests. The results of this analysis were presented to a group of experts ('Groupe Permanent') charged with delivering advice to the DGSNR about the origin of the problems identified and the improvements to be implemented. The main conclusions of the IRSN addressed the quality of the prescriptive documents. In this context, EDF decided to carry out a thorough analysis of the whole process. The first

  3. Standard Test Method for Measuring Reaction Rates by Analysis of Barium-140 From Fission Dosimeters

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2008-01-01

    1.1 This test method describes two procedures for the measurement of reaction rates by determining the amount of the fission product 140Ba produced by the non-threshold reactions 235U(n,f), 241Am(n,f), and 239Pu(n,f), and by the threshold reactions 238U(n,f), 237Np(n,f), and 232Th(n,f). 1.2 These reactions produce many fission products, among which is 140Ba, having a half-life of 12.752 days. 140Ba emits gamma rays of several energies; however, these are not easily detected in the presence of other fission products. Competing activity from other fission products requires that a chemical separation be employed or that the 140Ba activity be determined indirectly by counting its daughter product 140La. This test method describes both procedure (a), the nondestructive determination of 140Ba by the direct counting of 140La several days after irradiation, and procedure (b), the chemical separation of 140Ba and the subsequent counting of 140Ba or its daughter 140La. 1.3 With suitable techniques, fission neutron fl...
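
The timing logic behind procedure (a) — counting the daughter 140La several days after irradiation — follows from the two-member Bateman equation for parent-daughter decay. A minimal sketch (the 140La half-life of 1.678 days is an assumed literature value, not stated in the record):

```python
import math

T_BA = 12.752   # 140Ba half-life in days (from the test method)
T_LA = 1.678    # 140La half-life in days (assumed literature value)
LAM_BA = math.log(2) / T_BA
LAM_LA = math.log(2) / T_LA

def ba_activity(a0, t):
    """140Ba activity after t days, given initial activity a0."""
    return a0 * math.exp(-LAM_BA * t)

def la_activity(a0_ba, t):
    """140La activity grown in from an initially pure 140Ba source
    (two-member Bateman equation)."""
    return a0_ba * LAM_LA / (LAM_LA - LAM_BA) * (
        math.exp(-LAM_BA * t) - math.exp(-LAM_LA * t))
```

After roughly two weeks the daughter reaches transient equilibrium with the parent (activity ratio approaching the constant lambda_La/(lambda_La - lambda_Ba)), which is why direct counting of 140La several days after irradiation gives a clean measure of the 140Ba produced.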

  4. Application of the X-in-the-Loop Testing Method in the FCV Hybrid Degree Test

    Directory of Open Access Journals (Sweden)

    Haiyu Gao

    2018-02-01

    Full Text Available With the development of fuel cell vehicle technology, an effective testing method that can be applied to develop and verify the fuel cell vehicle powertrain system is urgently required. This paper presents the X-in-the-Loop (XiL) testing method in the fuel cell vehicle (FCV) hybrid degree test to resolve the first and key issue for the powertrain system design, and the test process and scenarios were designed. The hybrid degree is redefined into the static hybrid degree for system architecture design and the dynamic hybrid degree for vehicle control strategy design, and an integrated testing platform was introduced and a testing application was implemented by following the designed testing flowchart with two loops. Experimental validations show that the sizing of the FCE (Fuel Cell Engine), battery pack, and traction motor with the powertrain architecture can be determined, the control strategy can be evaluated seamlessly, and a systematic powertrain testing solution can be achieved through the whole development process. This research has developed a new testing platform and proposed a novel testing method for the fuel cell vehicle powertrain system, which will be a contribution to fuel cell vehicle technology and its industrialization.

  5. Multivariate analysis: models and method

    International Nuclear Information System (INIS)

    Sanz Perucha, J.

    1990-01-01

    Data treatment techniques are increasingly used as computer methods become more widely accessible. Multivariate analysis consists of a group of statistical methods that are applied to study objects or samples characterized by multiple values. The final goal is decision making. The paper describes the models and methods of multivariate analysis.

  6. A robust statistical method for association-based eQTL analysis.

    Directory of Open Access Journals (Sweden)

    Ning Jiang

    Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association study (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for this influence, either by predicting the structure parameters or by correcting inflation in the test statistic due to the stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS arising from population structure by incorporating a control marker into testing for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for a varying effect of population stratification on different regions of the genome under study. The utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers an improved statistical power for detecting genuine genetic association in subpopulations and an effective control of spurious associations stemming from population structure when compared with the two other popularly implemented methods in the GWAS literature.

  7. Importance Analysis of In-Service Testing Components for Ulchin Unit 3

    International Nuclear Information System (INIS)

    Dae-Il Kan; Kil-Yoo Kim; Jae-Joo Ha

    2002-01-01

    We performed an importance analysis of In-Service Testing (IST) components for Ulchin Unit 3 using the integrated evaluation method for categorizing component safety significance developed in this study. The importance analysis using the developed method is initiated by ranking the component importance using quantitative PSA information. The importance analysis of the IST components not modeled in the PSA is performed through the engineering judgment, based on the expertise of PSA, and the quantitative and qualitative information for the IST components. The PSA scope for importance analysis includes not only Level 1 and 2 internal PSA but also Level 1 external and shutdown/low power operation PSA. The importance analysis results of valves show that 167 (26.55%) of the 629 IST valves are HSSCs and 462 (73.45%) are LSSCs. Those of pumps also show that 28 (70%) of the 40 IST pumps are HSSCs and 12 (30%) are LSSCs. (authors)

  8. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation
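
A minimal global sensitivity analysis of the kind described — Monte Carlo sampling of the inputs followed by a rank-based (Spearman) sensitivity measure, which copes with the right-skewed, nonlinear output — can be sketched as follows. The toy model, sample size, and seed are illustrative assumptions, not the MOSEL test models:

```python
import math
import random

def rankdata(v):
    """Simple ranks (1..n); ties are not expected with continuous samples."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation (no-ties form): Pearson on the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    m = (n + 1) / 2
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)
    return cov / var

random.seed(1)
n = 2000
x1 = [random.uniform(0, 1) for _ in range(n)]
x2 = [random.uniform(0, 1) for _ in range(n)]
x3 = [random.uniform(0, 1) for _ in range(n)]
# toy right-skewed, nonlinear model standing in for a repository model
y = [math.exp(4 * a) * b + 0.01 * c for a, b, c in zip(x1, x2, x3)]
rho = [spearman(x, y) for x in (x1, x2, x3)]
```

Ranking the absolute Spearman coefficients identifies the dominant parameters; in this toy model x1 dominates, x2 matters less, and x3 is negligible, which is exactly the kind of screening result a sensitivity analysis of a repository model is meant to deliver.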

  9. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. 
At the end, a recommendation

  10. An experimental-numerical method for comparative analysis of joint prosthesis

    International Nuclear Information System (INIS)

    Claramunt, R.; Rincon, E.; Zubizarreta, V.; Ros, A.

    2001-01-01

    The analysis of mechanical stresses in bones is highly difficult due to their complex mechanical and morphological characteristics. This complexity makes generalist modelling, and conclusions derived from prototype tests, very questionable. In this article a relatively simple systematic method of comparative analysis is presented that allows us to establish some behavioural differences between different kinds of prosthesis. The method, applicable in principle to any joint problem, is based on analysing the perturbations produced in the natural stress state of a bone after insertion of a joint prosthesis, and combines numerical analysis using a 3-D finite element model with experimental studies based on photoelastic coating and electric extensometry. The experimental method is applied to compare two cement-free femoral stems for total hip prostheses of different philosophy: one anatomic, of new generation, with oblique seating on cancellous bone, and the other madreporic, with trochantero-diaphyseal support on cortical bone. (Author) 4 refs

  11. Standard test method for isotopic analysis of hydrolyzed uranium hexafluoride and uranyl nitrate solutions by thermal ionization mass spectrometry

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2005-01-01

    1.1 This method applies to the determination of isotopic composition in hydrolyzed nuclear grade uranium hexafluoride. It covers isotopic abundance of 235U between 0.1 and 5.0 % mass fraction, abundance of 234U between 0.0055 and 0.05 % mass fraction, and abundance of 236U between 0.0003 and 0.5 % mass fraction. This test method may be applicable to other isotopic abundances provided that corresponding standards are available. 1.2 This test method can apply to uranyl nitrate solutions. This can be achieved either by transforming the uranyl nitrate solution to a uranyl fluoride solution prior to the deposition on the filaments or directly by depositing the uranyl nitrate solution on the filaments. In the latter case, a calibration with uranyl nitrate standards must be performed. 1.3 This test method can also apply to other nuclear grade matrices (for example, uranium oxides) by providing a chemical transformation to uranyl fluoride or uranyl nitrate solution. 1.4 This standard does not purport to address al...

  12. Preliminary study of elemental analysis of hydroxyapatite using the neutron activation analysis method

    International Nuclear Information System (INIS)

    Yustinus Purwamargapratala; Rina Mulyaningsih

    2010-01-01

    A preliminary elemental analysis of synthesized hydroxyapatite has been carried out using the neutron activation analysis method. Hydroxyapatite is the main constituent of bones and teeth and can be synthesized from limestone and phosphoric acid. Hydroxyapatite can be used as a substitute material for human and animal bones and teeth. Tests on the metal content are necessary to prevent the risk of damage to bones and teeth due to contamination. In the analysis using the neutron activation analysis method, with samples irradiated at a neutron flux of 10³ n·s⁻¹·cm⁻² for one minute, the impurities Al (48.60±6.47 mg/kg), Cl (38.00±7.47 mg/kg), Mn (1.05±0.19 mg/kg), and Mg (2095.30±203.66 mg/kg) were detected, whereas with irradiation times of 10 minutes and 40 minutes and a decay time of three days, K (103.89±26.82 mg/kg), Br (1617.06±193.66 mg/kg), and Na (125.10±9.57 mg/kg) were found. These results indicate impurities of Al, Cl, Mn, Mg, Br, K and Na, although in very small amounts that do not cause damage to bones and teeth. (author)

  13. The comparison of System 1000 analysis and type testing for neoprene gasket environmental qualification

    International Nuclear Information System (INIS)

    Park, Kyung Heum; Kim, Jong Seok; Jeong, Sun Chul; Jang, Kyung Nam; Hwang, Sung Phil

    2010-01-01

    The typical environmental qualification (EQ) is to ensure that equipment will operate on demand to meet system performance requirements during normal and abnormal service conditions. There are four environmental qualification methods: type testing, operating experience, analysis, and a combination of these. Generally, the American EQ does not include mechanical equipment such as pumps and valves in the EQ equipment list, because the EQ standard 10CFR50.49 limits EQ equipment to electrical equipment. On the other hand, the Canadian EQ includes mechanical equipment such as pumps and valves in the EQ component list (Canadians usually call the American 'equipment' 'components'), because the Canadian EQ standard CSA N290.13-05 does not limit EQ equipment to electrical equipment. The System 1000 program is a typical Canadian EQ analysis method using mathematical modeling and comparison with established engineering information and manufacturers' data. Most Canadian nuclear power utilities, such as NB Power, Hydro Quebec and OPG, use the System 1000 program to evaluate the design life of their EQ components. To qualify a pump, all the non-metallic parts in the pump were listed, and many gaskets made of neoprene were found. These neoprene gaskets were qualified both by analysis using the System 1000 program and by type testing. This paper presents the qualification results of the neoprene gaskets for both type testing and analysis using the System 1000 program.

  14. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  15. Analysis apparatus and method of analysis

    International Nuclear Information System (INIS)

    1976-01-01

    A continuous streaming method developed for the execution of immunoassays is described in this patent. In addition, a suitable apparatus for the method was developed whereby magnetic particles are automatically employed for the consecutive analysis of a series of liquid samples via the RIA technique.

  16. Some methods of analysis and diagnostics of corroded components from nuclear power plant

    International Nuclear Information System (INIS)

    Mogosan, S.; Radulescu, M.; Fulger, M.; Stefanescu, D.

    2010-01-01

    In nuclear power plants (NPPs) it is necessary to ensure long and safe operation, since the maintenance of these very complex installations and equipment is difficult and expensive. In this regard, the Analysis and Diagnostics Laboratory for Corroded Metal Components in Nuclear Facilities (LADICON) was accredited by RENAR and notified by CNCAN (National Commission for Nuclear Activities Control) as a testing laboratory for nuclear-grade materials. As part of the investigation and evaluation of the corrosion behaviour of these materials, two types of test methods are used: long-duration corrosion tests, such as autoclaving at high temperature and pressure in different chemical media specific to NPPs, and accelerated methods, such as electrochemical techniques and accelerated chemical tests. This paper presents some methods of analysis of material corrosion and methods of assessment of the corrosion of structural materials exposed to the specific operating conditions and environment in NPPs. The electrochemical measurements show the following advantages: a) they allow a direct method to accelerate the corrosion processes without altering the environment, b) they can be used as a nondestructive tool for assessing the rate of corrosion, and c) they offer the possibility of conducting such investigations in-situ and ex-situ. By corroborating the environment chemistry with the results of the investigations of the films formed on the samples, obtained by the methods above, it is possible to identify the types of corrosion of the materials and sometimes even the corrosion processes and mechanisms. (authors)

  17. Reliability Analysis and Test Planning using CAPO-Test for Existing Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Engelund, S.; Faber, Michael Havbro

    2000-01-01

    Evaluation of the reliability of existing concrete structures often requires that the compressive strength of the concrete is estimated on the basis of tests performed with concrete samples from the structure considered. In this paper the CAPO-test method is considered. The different sources of uncertainty related to this method are described. It is shown how the uncertainty in the transformation from the CAPO-test results to estimates of the concrete strength can be modeled. Further, the statistical uncertainty is modeled using Bayesian statistics. Finally, it is shown how reliability-based optimal planning of CAPO-tests can be performed taking into account the expected costs due to the CAPO-tests and possible repair or failure of the structure considered. An illustrative example is presented where the CAPO-test is compared with conventional concrete cylinder compression tests performed on cores.
  18. Performance Analysis of Unsupervised Clustering Methods for Brain Tumor Segmentation

    Directory of Open Access Journals (Sweden)

    Tushar H Jaware

    2013-10-01

    Full Text Available Medical image processing is one of the most challenging and emerging fields of neuroscience. The ultimate goal of medical image analysis in brain MRI is to extract important clinical features that would improve methods of diagnosis and treatment of disease. This paper focuses on methods to detect and extract brain tumors from brain MR images. MATLAB is used to design a software tool for locating brain tumors based on unsupervised clustering methods. A K-Means clustering algorithm is implemented and tested on a database of 30 images. A performance evaluation of the unsupervised clustering methods is presented.
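A minimal sketch of the K-Means idea behind such intensity-based segmentation, run here on a synthetic one-dimensional "slice" rather than the authors' MATLAB tool or 30-image database; all intensity values are invented for illustration:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain K-Means on pixel intensities -- the unsupervised clustering
    step used to separate background, tissue, and candidate tumor.
    Centers are seeded at spread percentiles so each intensity class
    gets a nearby starting center."""
    values = np.asarray(values, float)
    centers = np.percentile(values, np.linspace(5, 95, k))
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# Synthetic intensities: dark background, mid-grey tissue, bright lesion
rng = np.random.default_rng(1)
img = np.concatenate([np.full(500, 10.0), np.full(400, 120.0), np.full(100, 240.0)])
img = img + rng.normal(0.0, 5.0, img.size)
labels, centers = kmeans_1d(img, k=3)
```

The brightest cluster would then be post-processed (e.g. by size and location) to isolate the tumor region.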

  19. Spectral analysis of surface waves method to assess shear wave velocity within centrifuge models

    Science.gov (United States)

    Murillo, Carol Andrea; Thorel, Luc; Caicedo, Bernardo

    2009-06-01

    The method of the spectral analysis of surface waves (SASW) is tested out on reduced scale centrifuge models, with a specific device, called the mini Falling Weight, developed for this purpose. Tests are performed on layered materials made of a mixture of sand and clay. The shear wave velocity VS determined within the models using the SASW is compared with the laboratory measurements carried out using the bender element test. The results show that the SASW technique applied to centrifuge testing is a relevant method to characterize VS near the surface.

  20. Regional frequency analysis of extreme rainfalls using partial L moments method

    Science.gov (United States)

    Zakaria, Zahrahtul Amani; Shabri, Ani

    2013-07-01

    An approach to regional frequency analysis based on L moments and LH moments is revisited in this study. Subsequently, an alternative regional frequency analysis using the partial L moments (PL moments) method is employed, and a new relationship for homogeneity analysis is developed. The results are then compared with those obtained using the methods of L moments and LH moments of order two. The Selangor catchment, consisting of 37 sites and located on the west coast of Peninsular Malaysia, is chosen as a case study. PL moments for the generalized extreme value (GEV), generalized logistic (GLO), and generalized Pareto distributions were derived and used to develop the regional frequency analysis procedure. The PL moment ratio diagram and the Z test were employed to determine the best-fit distribution. Comparison of the three approaches showed that the GLO and GEV distributions are suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation used for performance evaluation shows that the method of PL moments outperforms the L and LH moments methods for the estimation of large-return-period events.
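The ordinary L-moments that this family of methods builds on can be sketched with the standard probability-weighted-moment estimators (this shows plain L moments only, not the partial or LH variants the paper develops, and the rainfall series is hypothetical):

```python
import numpy as np

def sample_l_moments(x):
    """First three sample L-moments via unbiased probability-weighted
    moments b0, b1, b2 -- the building blocks of L-moment regional
    frequency analysis.  Returns (l1, l2, t3) = (mean, L-scale, L-skewness)."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

# Hypothetical annual-maximum rainfall series (mm) for one site
rain = [45, 60, 52, 88, 71, 95, 58, 120, 66, 74]
l1, l2, t3 = sample_l_moments(rain)
```

Ratios such as t3 plotted across sites give the L-moment ratio diagram used to pick the best-fit regional distribution.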

  1. Comparison and transfer testing of multiplex ligation detection methods for GM plants

    Directory of Open Access Journals (Sweden)

    Ujhelyi Gabriella

    2012-01-01

    Full Text Available Abstract Background With the increasing number of GMOs on the global market, the maintenance of European GMO regulations is becoming more complex. For the analysis of a single food or feed sample it is necessary to assess the sample for the presence of many GMO targets simultaneously at a sensitive level. Several methods have been published regarding DNA-based multidetection. The multiplex ligation detection methods that have been described use the same basic approach: (i) hybridisation and ligation of specific probes, (ii) amplification of the ligated probes, and (iii) detection and identification of the amplified products. Although they all share this same basis, the published ligation methods differ radically. The present study investigated with real-time PCR whether these different ligation methods have any influence on the performance of the probes. The sensitivity and specificity of the padlock probes (PLPs) with the best-performing ligation protocol were also tested, and the selected method was initially validated in a laboratory exchange study. Results Of the ligation protocols tested in this study, the best results were obtained with the PPLMD I and PPLMD II protocols, and no consistent differences between these two protocols were observed. Both protocols are based on padlock probe ligation combined with microarray detection. Twenty PLPs were tested for specificity and the best probes were subjected to further evaluation. Up to 13 targets were detected specifically and simultaneously. During the interlaboratory exchange study similar results were achieved by the two participating institutes (NIB, Slovenia, and RIKILT, the Netherlands). Conclusions From the comparison of ligation protocols it can be concluded that the two protocols perform equally well on the basis of the selected set of PLPs. Using the most suitable parameters, the multiplicity of one of the methods was tested and 13 targets were successfully and specifically detected.

  2. Uniform approximation is more appropriate for Wilcoxon Rank-Sum Test in gene set analysis.

    Directory of Open Access Journals (Sweden)

    Zhide Fang

    Full Text Available Gene set analysis is widely used to facilitate biological interpretations in the analyses of differential expression from high throughput profiling data. Wilcoxon Rank-Sum (WRS test is one of the commonly used methods in gene set enrichment analysis. It compares the ranks of genes in a gene set against those of genes outside the gene set. This method is easy to implement and it eliminates the dichotomization of genes into significant and non-significant in a competitive hypothesis testing. Due to the large number of genes being examined, it is impractical to calculate the exact null distribution for the WRS test. Therefore, the normal distribution is commonly used as an approximation. However, as we demonstrate in this paper, the normal approximation is problematic when a gene set with relative small number of genes is tested against the large number of genes in the complementary set. In this situation, a uniform approximation is substantially more powerful, more accurate, and less intensive in computation. We demonstrate the advantage of the uniform approximations in Gene Ontology (GO term analysis using simulations and real data sets.
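The normal approximation that the paper critiques can be sketched as follows: compute the rank-sum of a small gene set against a large background and standardize it with the usual mean and variance. The gene scores below are simulated, and this shows only the standard normal-approximation z-statistic, not the paper's uniform approximation:

```python
import numpy as np

def wrs_z(in_set_scores, out_set_scores):
    """Wilcoxon Rank-Sum statistic for a gene set vs. its complement,
    standardized with the normal approximation (no tie handling --
    the simulated scores are continuous)."""
    n1, n2 = len(in_set_scores), len(out_set_scores)
    allv = np.concatenate([in_set_scores, out_set_scores])
    ranks = allv.argsort().argsort() + 1      # 1-based ranks
    w = ranks[:n1].sum()                      # rank-sum of the gene set
    mu = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    return (w - mu) / np.sqrt(var)

rng = np.random.default_rng(0)
gene_set = rng.normal(2.0, 1.0, 5)        # 5 up-regulated genes (simulated)
background = rng.normal(0.0, 1.0, 2000)   # large complement set
z = wrs_z(gene_set, background)
```

With n1 = 5 against n2 = 2000 the exact null of W is far from normal in the tails, which is exactly the regime where the paper's uniform approximation is argued to be more accurate.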

  3. State of the art in non-animal approaches for skin sensitization testing: from individual test methods towards testing strategies.

    Science.gov (United States)

    Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J

    2016-12-01

    The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress has been made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods mostly address one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1), protein binding; KE2, keratinocyte activation; KE3, dendritic cell activation; and KE4, T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to predict skin sensitization hazard accurately, sometimes even more accurately than the currently used animal test.

  4. Nuclear-physical methods in macro- and microanalytical investigations of contamination with radionuclides at Semipalatinsk Nuclear Test Site

    International Nuclear Information System (INIS)

    Solodukhin, V.P.

    2005-01-01

    A complex of nuclear-physical methods developed in the Institute of Nuclear Physics of Kazakhstan National Nuclear Center for the investigations of the rate, character and peculiarities of contamination with radionuclides of the Semipalatinsk Nuclear Test Site (SNTS) is presented. The developed method combines both macroinvestigations (radionuclide analysis, NAA, XRFA, ESR- and NGR-spectroscopy) and microinvestigations (MS, micro-PIXE, electron microscopy). The results of the investigations at the main SNTS test sites 'Opytnoye pole' and 'Degelen' are presented. (author)

  5. Evaluation of core physics analysis methods for conversion of the INL advanced test reactor to low-enrichment fuel

    International Nuclear Information System (INIS)

    DeHart, M. D.; Chang, G. S.

    2012-01-01

    Computational neutronics studies to support the possible conversion of the ATR to LEU are underway. Simultaneously, INL is engaged in a physics methods upgrade project to put into place modern computational neutronics tools for future support of ATR fuel cycle and experiment analysis. A number of experimental measurements have been performed in the ATRC in support of the methods upgrade project, and are being used to validate the new core physics methods. The current computational neutronics work is focused on performance of scoping calculations for the ATR core loaded with a candidate LEU fuel design. This will serve as independent confirmation of analyses that have been performed previously, and will evaluate some of the new computational methods for analysis of a candidate LEU fuel for ATR. (authors)

  6. The Fusion of Financial Analysis and Seismology: Statistical Methods from Financial Market Analysis Applied to Earthquake Data

    Science.gov (United States)

    Ohyanagi, S.; Dileonardo, C.

    2013-12-01

    As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using the candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located off the shore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spread of the candlestick prior to the occurrence of events greater than Mw 6.0. A second pattern is convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle and agree with the characteristics of candlestick chart and Bollinger Band analysis. These results show a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods, supporting the application of these financial analysis methods to the analysis of earthquake occurrence.
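The Bollinger Band construction itself is just a rolling mean bracketed by rolling standard deviations. A minimal sketch, applied here to invented monthly event counts rather than the Sanriku catalog, with a hypothetical 12-month window and the conventional k = 2 band width:

```python
import numpy as np

def bollinger(series, window=12, k=2.0):
    """Bollinger Bands: rolling mean +/- k rolling standard deviations,
    applied to earthquake activity instead of prices."""
    s = np.asarray(series, float)
    mid = np.array([s[i - window:i].mean() for i in range(window, len(s) + 1)])
    sd = np.array([s[i - window:i].std(ddof=0) for i in range(window, len(s) + 1)])
    return mid, mid + k * sd, mid - k * sd

# Hypothetical monthly counts of M>=4 events: a quiet period, then a burst
counts = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3, 5, 4, 3, 4, 12, 15]
mid, upper, lower = bollinger(counts, window=12)
```

A narrow band (upper close to lower) flags the kind of convergence the study associates with an impending change in the activity trend, while a count breaking above the upper band marks a burst.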

  7. Experimental Validation of the Dynamic Inertia Measurement Method to Find the Mass Properties of an Iron Bird Test Article

    Science.gov (United States)

    Chin, Alexander W.; Herrera, Claudia Y.; Spivey, Natalie D.; Fladung, William A.; Cloutier, David

    2015-01-01

    The mass properties of an aerospace vehicle are required by multiple disciplines in the analysis and prediction of flight behavior. Pendulum oscillation methods have been developed and employed for almost a century as a means to measure mass properties. However, these oscillation methods are costly, time consuming, and risky. The NASA Armstrong Flight Research Center has been investigating the Dynamic Inertia Measurement, or DIM method as a possible alternative to oscillation methods. The DIM method uses ground test techniques that are already applied to aerospace vehicles when conducting modal surveys. Ground vibration tests would require minimal additional instrumentation and time to apply the DIM method. The DIM method has been validated on smaller test articles, but has not yet been fully proven on large aerospace vehicles.

  8. Application of case analysis teaching method in nursing teaching in Department of Internal Medicine

    Directory of Open Access Journals (Sweden)

    Zhang-xiu SHENG

    2014-04-01

    Full Text Available Objective: To adapt to modern vocational education teaching ideas, stimulate students' interest in learning, cultivate students' comprehensive quality, and improve students' active participation and their understanding, analysis and problem-solving skills. Methods: Case-analysis teaching methods were used at different stages of the course: case introduction before class, case analysis during and after class, and whole-chapter case analysis after class. Results and Conclusion: Using the case-analysis teaching method at different stages of the course engages students in active learning, stimulates their interest in learning, activates the classroom atmosphere, trains independent thinking, strengthens problem-solving ability, improves students' self-learning ability, raises their participation and their awareness in analysis and judgment, strengthens exam ability, and improves students' test scores and the effect of teaching nursing in the Department of Internal Medicine.

  9. Testing of the Defense Waste Processing Facility Cold Chemical Dissolution Method in Sludge Batch 9 Qualification

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Pareizs, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Young, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Brown, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-05-10

    For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) tests the applicability of the digestion methods used by the DWPF Laboratory for elemental analysis of Sludge Receipt and Adjustment Tank (SRAT) Receipt samples and SRAT Product process control samples. DWPF SRAT samples are typically dissolved using a method referred to as the DWPF Cold Chemical or Cold Chem Method (CC), (see DWPF Procedure SW4- 15.201). Testing indicates that the CC method produced mixed results. The CC method did not result in complete dissolution of either the SRAT Receipt or SRAT Product with some fine, dark solids remaining. However, elemental analyses did not reveal extreme biases for the major elements in the sludge when compared with analyses obtained following dissolution by hot aqua regia (AR) or sodium peroxide fusion (PF) methods. The CC elemental analyses agreed with the AR and PF methods well enough that it should be adequate for routine process control analyses in the DWPF after much more extensive side-by-side tests of the CC method and the PF method are performed on the first 10 SRAT cycles of the Sludge Batch 9 (SB9) campaign. The DWPF Laboratory should continue with their plans for further tests of the CC method during these 10 SRAT cycles.

  10. A method for express estimation of the octane number of gasoline using a portable spectroimpedance meter and statistical analysis methods

    Directory of Open Access Journals (Sweden)

    Mamykin A. V.

    2017-10-01

    Full Text Available The authors propose a method for determining the electro-physical characteristics of electrically insulating liquids, using different types of gasoline as an example. The method is based on spectral impedance measurements of a capacitor-type electrochemical cell filled with the liquid under study. The application of a sinusoidal test voltage in the frequency range 0.1–10 Hz provides more accurate measurements than known traditional methods. A portable device for measuring the total electrical resistance (impedance) of dielectric liquids was designed and constructed. An approach for express estimation of the octane number of automobile gasoline using spectroimpedance measurements and multivariate statistical data analysis methods has been proposed and tested.
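The liquid-filled capacitor cell can be idealized as a resistance and capacitance in parallel, so the impedance spectrum the meter records over 0.1–10 Hz follows Z(w) = R / (1 + jwRC). This sketch uses that textbook model with hypothetical R and C values; the actual cell geometry and values in the paper are not given here:

```python
import numpy as np

def cell_impedance(freq_hz, r_ohm, c_farad):
    """Idealized parallel-RC model of the liquid-filled capacitor cell:
    Z(w) = R / (1 + j*w*R*C), with w = 2*pi*f."""
    w = 2.0 * np.pi * np.asarray(freq_hz, float)
    return r_ohm / (1.0 + 1j * w * r_ohm * c_farad)

freqs = np.logspace(-1, 1, 21)                         # 0.1 to 10 Hz
z = cell_impedance(freqs, r_ohm=1e9, c_farad=100e-12)  # hypothetical cell
mags = np.abs(z)
```

At the low end of the band the magnitude approaches the leakage resistance R of the liquid, which is the quantity that correlates with composition; fitting |Z(w)| across the band is what feeds the statistical octane-number model.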

  11. Multivariate analysis methods in physics

    International Nuclear Information System (INIS)

    Wolter, M.

    2007-01-01

    A review of multivariate methods based on statistical training is given. Several multivariate methods useful in high-energy physics analysis are discussed. Selected examples from current research in particle physics are discussed, both from on-line trigger selection and from off-line analysis. Statistical training methods are also presented and some new applications are suggested [ru]

  12. Analysis of selected Halden overpressure tests using the FALCON code

    Energy Technology Data Exchange (ETDEWEB)

    Khvostov, G., E-mail: grigori.khvostov@psi.ch [Paul Scherrer Institut, CH 5232 Villigen PSI (Switzerland); Wiesenack, W. [Institute for Energy Technology – OECD Halden Reactor Project, P.O. Box 173, N-1751 Halden (Norway)

    2016-12-15

    Highlights: • We analyse four Halden overpressure tests. • We determine a critical overpressure value for lift-off in a BWR fuel sample. • We show the role of bonding in over-pressurized rod behaviour. • We analytically quantify the degree of bonding via its impact on cladding elongation. • We hypothesize on an effect of circumferential cracks on thermal fuel response to overpressure. • We estimate a thermal effect of circumferential cracks based on interpretation of the data. - Abstract: Four Halden overpressure (lift-off) tests using samples with uranium dioxide fuel pre-irradiated in power reactors to a burnup of 60 MWd/kgU are analyzed. The FALCON code coupled to a mechanistic model, GRSW-A for fission gas release and gaseous-bubble swelling is used for the calculation. The advanced version of the FALCON code is shown to be applicable to best-estimate predictive analysis of overpressure tests using rods without, or weak pellet-cladding bonding, as well as scoping analysis of tests with fuels where stronger pellet-cladding bonding occurs. Significant effects of bonding and fuel cracking/relocation on the thermal and mechanical behaviour of highly over-pressurized rods are shown. The effect of bonding is particularly pronounced in the tests with the PWR samples. The present findings are basically consistent with an earlier analysis based on a direct interpretation of the experimental data. Additionally, in this paper, the specific effects are quantified based on the comparison of the data with the results of calculation. It is concluded that the identified effects are largely beyond the current traditional fuel-rod licensing analysis methods.

  13. ASTM test methods for composite characterization and evaluation

    Science.gov (United States)

    Masters, John E.

    1994-01-01

    A discussion of the American Society for Testing and Materials is given. Under the topic of composite materials characterization and evaluation, general industry practice and test methods for textile composites are presented.

  14. Standard test methods for bend testing of material for ductility

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 These test methods cover bend testing for ductility of materials. Included in the procedures are four conditions of constraint on the bent portion of the specimen; a guided-bend test using a mandrel or plunger of defined dimensions to force the mid-length of the specimen between two supports separated by a defined space; a semi-guided bend test in which the specimen is bent, while in contact with a mandrel, through a specified angle or to a specified inside radius (r) of curvature, measured while under the bending force; a free-bend test in which the ends of the specimen are brought toward each other, but in which no transverse force is applied to the bend itself and there is no contact of the concave inside surface of the bend with other material; a bend and flatten test, in which a transverse force is applied to the bend such that the legs make contact with each other over the length of the specimen. 1.2 After bending, the convex surface of the bend is examined for evidence of a crack or surface irregu...
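For the semi-guided bend described above, the tensile strain at the outer (convex) surface is commonly approximated from the specimen thickness t and the inside bend radius r as e = t / (2r + t); this is the general thin-beam approximation, offered here as an illustration rather than a formula quoted from the standard:

```python
def bend_outer_strain(thickness_mm, inside_radius_mm):
    """Approximate outer-fibre tensile strain in a bend test:
    e = t / (2r + t), where t is specimen thickness and r is the
    inside bend radius (thin-beam approximation)."""
    t, r = thickness_mm, inside_radius_mm
    return t / (2.0 * r + t)

# Example: a 2 mm sheet bent to a 4 mm inside radius
strain = bend_outer_strain(2.0, 4.0)  # 2 / (8 + 2) = 0.2, i.e. 20 % strain
```

Tightening the bend (smaller r) raises the outer-fibre strain, which is why the specified inside radius sets the severity of the ductility test.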

  15. Multiple predictor smoothing methods for sensitivity analysis: Example results

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Helton, Jon C.

    2008-01-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present

  16. Direct methods of soil-structure interaction analysis for earthquake loadings (III)

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, J B; Lee, S R; Kim, J M; Park, K R; Choi, J S; Oh, S B [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1995-06-15

    In this study, direct methods for the seismic analysis of soil-structure interaction systems have been studied. A computer program, 'KIESSI-QK', has been developed based on the finite element technique coupled with an infinite element formulation. A substructuring method isolating the displacement solution of the near-field soil region was adopted. The computer program was verified using a free-field site response problem. The post-correlation analysis of the forced vibration tests after backfill for the Hualien LSST project has been carried out. Seismic analyses of the Hualien and Lotung LSST structures have also been performed using the developed computer program 'KIESSI-QK'.

  17. Development of an Automated LIBS Analytical Test System Integrated with Component Control and Spectrum Analysis Capabilities

    International Nuclear Information System (INIS)

    Ding Yu; Tian Di; Chen Feipeng; Chen Pengfei; Qiao Shujun; Yang Guang; Li Chunsheng

    2015-01-01

    The present paper proposes an automated Laser-Induced Breakdown Spectroscopy (LIBS) analytical test system, which consists of a LIBS measurement and control platform based on a modular design concept and LIBS qualitative spectrum analysis software developed in C#. The platform provides flexible interfacing and automated control; it is compatible with component models from different manufacturers and is constructed in modular form for easy expandability. For peak identification, a more robust method with improved stability is achieved by applying additional smoothing to the calculated slope before peaks are identified. For element identification, an improved main-lines analysis method, which examines all elements with peaks in the spectrum so that elements without strong spectral lines are not omitted, is applied to the tested LIBS samples; this method also increases identification speed. In this paper, practical applications have been carried out. According to the tests, the analytical test system is compatible with components of various models made by different manufacturers. It can automatically control components to acquire experimental data and perform filtering, peak identification, qualitative analysis, etc. on spectral data. (paper)

  18. Influence of Specimen Preparation and Test Methods on the Flexural Strength Results of Monolithic Zirconia Materials.

    Science.gov (United States)

    Schatz, Christine; Strickstrock, Monika; Roos, Malgorzata; Edelhoff, Daniel; Eichberger, Marlis; Zylla, Isabella-Maria; Stawarczyk, Bogna

    2016-03-09

    The aim of this work was to evaluate the influence of specimen preparation and test method on the flexural strength results of monolithic zirconia. Different monolithic zirconia materials (Ceramill Zolid (Amann Girrbach, Koblach, Austria), Zenostar ZrTranslucent (Wieland Dental, Pforzheim, Germany), and DD Bio zx² (Dental Direkt, Spenge, Germany)) were tested with three different methods: 3-point, 4-point, and biaxial flexural strength. Additionally, different specimen preparation methods were applied: either dry polishing before sintering or wet polishing after sintering. Each subgroup included 40 specimens. The surface roughness was assessed using scanning electron microscopy (SEM) and a profilometer, whereas monoclinic phase transformation was investigated with X-ray diffraction. The data were analyzed using a three-way Analysis of Variance (ANOVA) with respect to the three factors: zirconia, specimen preparation, and test method. One-way ANOVA was conducted for the test method and zirconia factors within each combination of the two other factors. A 2-parameter Weibull distribution assumption was applied to analyze the reliability under different testing conditions. In general, the 4-point test method yielded the lowest flexural strength values, and specimens polished after sintering showed higher strength values than those polished before sintering. The Weibull moduli ranged from 5.1 to 16.5. Specimens polished before sintering showed higher surface roughness values than specimens polished after sintering. In contrast, no strong impact of the polishing procedures on the monoclinic surface layer was observed. No impact of the zirconia material on flexural strength was found. The test method and the preparation method significantly influenced the flexural strength values.
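The 2-parameter Weibull analysis used for the reliability assessment can be sketched with the common median-rank linear-regression estimator; the strength values below are invented for illustration and the paper's own fitting procedure may differ:

```python
import numpy as np

def weibull_fit(strengths):
    """2-parameter Weibull fit by linear regression of
    ln(-ln(1 - F)) on ln(sigma), with median-rank plotting positions.
    Returns (m, sigma0): Weibull modulus and characteristic strength."""
    s = np.sort(np.asarray(strengths, float))
    n = s.size
    f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    y = np.log(-np.log(1.0 - f))
    x = np.log(s)
    m, c = np.polyfit(x, y, 1)                    # slope = modulus
    return m, np.exp(-c / m)                      # sigma0 where y = 0

# Hypothetical flexural strengths (MPa) for one zirconia subgroup
data = [810, 890, 945, 1010, 1060, 1120, 1180, 1240, 1300, 1400]
m, sigma0 = weibull_fit(data)
```

A higher modulus m means less scatter, i.e. more reliable strength; this is the quantity reported as ranging from 5.1 to 16.5 across the test conditions.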

  19. 49 CFR Appendix B to Part 178 - Alternative Leakproofness Test Methods

    Science.gov (United States)

    2010-10-01

    49 CFR Part 178, Appendix B, Alternative Leakproofness Test Methods (2010-10-01). In addition to the method prescribed in § 178.604 of this subchapter, the following leakproofness test methods...

  20. Modal testing and analysis of NOVA laser structures

    International Nuclear Information System (INIS)

    Burdick, R.B.; Weaver, H.J.; Pastrnak, J.W.

    1984-09-01

    NOVA, currently the world's most powerful laser system, is an ongoing project at the Lawrence Livermore National Laboratory in California. The project seeks to develop a feasible method of achieving a controlled fusion reaction, initiated by multiple laser beams targeted on a tiny fuel pellet. The NOVA system consists of several large steel-framed structures, the largest of which is the Target Chamber Tower. In conjunction with design engineers, the tower was first modelled and analyzed by sophisticated finite element techniques. A modal test was then conducted on the tower structure to evaluate its vibrational characteristics and seismic integrity, as well as for general comparison to the finite element results. This paper discusses the procedure used in the experimental modal analysis and the results obtained from that test.

  1. Choreographer Pre-Testing Code Analysis and Operational Testing.

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, David J. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Harrison, Christopher B. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Perr, C. W. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Hurd, Steven A [Sandia National Laboratories (SNL-CA), Livermore, CA (United States)

    2014-07-01

    Choreographer is a "moving target defense system", designed to protect against attacks aimed at IP addresses without corresponding domain name system (DNS) lookups. It coordinates actions between a DNS server and a Network Address Translation (NAT) device to regularly change which publicly available IP addresses' traffic will be routed to the protected device versus routed to a honeypot. More details about how Choreographer operates can be found in Section 2: Introducing Choreographer. Operational considerations for the successful deployment of Choreographer can be found in Section 3. The Testing & Evaluation (T&E) for Choreographer involved three phases: Pre-testing, Code Analysis, and Operational Testing. Pre-testing, described in Section 4, involved installing and configuring an instance of Choreographer and verifying it would operate as expected for a simple use case. Our findings were that it was simple and straightforward to prepare a system for a Choreographer installation, as well as to configure Choreographer to work in a representative environment. Code Analysis, described in Section 5, consisted of running a static code analyzer (HP Fortify) and conducting dynamic analysis tests using the Valgrind instrumentation framework. Choreographer performed well, such that only a few errors that might possibly be problematic in a given operating situation were identified. Operational Testing, described in Section 6, involved operating Choreographer in a representative environment created through Emulytics. Depending upon the amount of server resources dedicated to Choreographer vis-à-vis the amount of client traffic handled, Choreographer had varying degrees of operational success. In an environment with a poorly resourced Choreographer server and as few as 50-100 clients, Choreographer failed to properly route traffic over half the time. Yet, with a well-resourced server, Choreographer handled over 1000 clients without misrouting.

  2. Failure analysis on false call probe pins of microprocessor test equipment

    Science.gov (United States)

    Tang, L. W.; Ong, N. R.; Mohamad, I. S. B.; Alcain, J. B.; Retnasamy, V.

    2017-09-01

    A study has been conducted to investigate failure analysis of the probe pins of microprocessor test modules. The `health condition' of a probe pin is determined by its resistance value. A test module powered at 5 V from an Arduino UNO and using the four-wire ohm measurement method is implemented in this study to measure the resistance of the probe pins of a microprocessor. Probe pins from a scrapped computer motherboard are used as the test sample. The functionality of the test module was validated with a pre-measurement experiment via VEE Pro software. The experimental work demonstrated that the implemented test module has the capability to identify a probe pin's `health condition' based on the measured resistance value.
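    The four-wire idea behind the test module can be sketched briefly: a known current is forced through the pin while separate sense leads measure the voltage, so lead resistance drops out. The resistance threshold below is an illustrative value, not the study's calibration.

```python
def pin_resistance(v_sense: float, i_force: float) -> float:
    """Four-wire (Kelvin) measurement: resistance from the sensed voltage
    across the pin divided by the forced current through it."""
    return v_sense / i_force

def pin_health(r_ohms: float, r_max: float = 2.0) -> str:
    """Classify a probe pin by resistance; the 2-ohm limit is hypothetical."""
    return "good" if r_ohms <= r_max else "degraded"
```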

  3. Bridging the Gap Between Validation and Implementation of Non-Animal Veterinary Vaccine Potency Testing Methods

    Directory of Open Access Journals (Sweden)

    Alistair Currie

    2011-11-01

    Full Text Available In recent years, technologically advanced high-throughput techniques have been developed that replace, reduce or refine animal use in vaccine quality control tests. Following validation, these tests are slowly being accepted for use by international regulatory authorities. Because regulatory acceptance itself has not guaranteed that approved humane methods are adopted by manufacturers, various organizations have sought to foster the preferential use of validated non-animal methods by interfacing with industry and regulatory authorities. After noticing this gap between regulation and uptake by industry, we began developing a paradigm that seeks to narrow the gap and quicken implementation of new replacement, refinement or reduction guidance. A systematic analysis of our experience in promoting the transparent implementation of validated non-animal vaccine potency assays has led to the refinement of our paradigmatic process, presented here, by which interested parties can assess the local regulatory acceptance of methods that reduce animal use and integrate them into quality control testing protocols, or ensure the elimination of peripheral barriers to their use, particularly for potency and other tests carried out on production batches.

  4. Bridging the Gap Between Validation and Implementation of Non-Animal Veterinary Vaccine Potency Testing Methods.

    Science.gov (United States)

    Dozier, Samantha; Brown, Jeffrey; Currie, Alistair

    2011-11-29

    In recent years, technologically advanced high-throughput techniques have been developed that replace, reduce or refine animal use in vaccine quality control tests. Following validation, these tests are slowly being accepted for use by international regulatory authorities. Because regulatory acceptance itself has not guaranteed that approved humane methods are adopted by manufacturers, various organizations have sought to foster the preferential use of validated non-animal methods by interfacing with industry and regulatory authorities. After noticing this gap between regulation and uptake by industry, we began developing a paradigm that seeks to narrow the gap and quicken implementation of new replacement, refinement or reduction guidance. A systematic analysis of our experience in promoting the transparent implementation of validated non-animal vaccine potency assays has led to the refinement of our paradigmatic process, presented here, by which interested parties can assess the local regulatory acceptance of methods that reduce animal use and integrate them into quality control testing protocols, or ensure the elimination of peripheral barriers to their use, particularly for potency and other tests carried out on production batches.

  5. New methods of testing nonlinear hypothesis using iterative NLLS estimator

    Science.gov (United States)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper discusses methods of testing nonlinear hypotheses using the iterative Nonlinear Least Squares (NLLS) estimator. Takeshi Amemiya [1] explained this method; in the present paper, however, a modified Wald test statistic due to Engle, Robert [6] is proposed to test a nonlinear hypothesis using the iterative NLLS estimator. An alternative method for testing nonlinear hypotheses using an iterative NLLS estimator based on nonlinear studentized residuals is also proposed. In addition, an innovative method of testing nonlinear hypotheses using the iterative restricted NLLS estimator is derived. Pesaran and Deaton [10] explained methods of testing nonlinear hypotheses. This paper uses the asymptotic properties of the nonlinear least squares estimator proposed by Jenrich [8]. The main purpose of this paper is to provide innovative methods of testing nonlinear hypotheses using the iterative NLLS estimator, the iterative NLLS estimator based on nonlinear studentized residuals, and the iterative restricted NLLS estimator. Eakambaram et al. [12] discussed least absolute deviation estimation versus nonlinear regression models with heteroscedastic errors, and also studied the problem of heteroscedasticity with reference to nonlinear regression models with suitable illustration. William Grene [13] examined the interaction effect in nonlinear models discussed by Ai and Norton [14] and suggested ways to examine the effects that do not involve statistical testing. Peter [15] provided guidelines for identifying composite hypotheses and addressing the probability of false rejection for multiple hypotheses.
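    As a minimal illustration of the Wald-type testing idea (a generic sketch, not the paper's derivation), one can fit a nonlinear model by NLLS and test a restriction on a single parameter using the estimated covariance matrix; with one restriction the Wald statistic is asymptotically chi-square with 1 degree of freedom.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

# Synthetic data from y = a*exp(b*x) + noise; model and data are illustrative.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 60)
y = 2.0 * np.exp(0.7 * x) + rng.normal(scale=0.05, size=x.size)

def model(x, a, b):
    return a * np.exp(b * x)

# NLLS fit: popt holds the estimates, pcov their estimated covariance.
popt, pcov = curve_fit(model, x, y, p0=(1.0, 0.5))

# Wald test of H0: b = 0 (one restriction -> chi-square, 1 df).
b0 = 0.0
wald = (popt[1] - b0) ** 2 / pcov[1, 1]
p_value = chi2.sf(wald, df=1)
```

With the strong signal in this synthetic sample the null b = 0 is decisively rejected.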

  6. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: A practical review for clinical researchers-Part II. general guidance and tips

    International Nuclear Information System (INIS)

    Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi; Park, Seong Ho; Lee, June Young

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of the statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article can serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies.

  7. Verification of the analytical fracture assessments methods by a large scale pressure vessel test

    Energy Technology Data Exchange (ETDEWEB)

    Keinanen, H; Oberg, T; Rintamaa, R; Wallin, K

    1988-12-31

    This document deals with the use of fracture mechanics for the assessment of reactor pressure vessels. Tests have been carried out to verify analytical fracture assessment methods. The analysis focuses on flaw dimensions and the scatter band of material characteristics. Results are provided and compared with experimental ones. (TEC).

  8. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Myoung Keon [Agency for Defense Development, Daejeon (Korea, Republic of); Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-10-15

    This paper provides compressive failure strength values for composite laminates developed using a regression analysis method. The composite material in this document is a carbon/epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight distinct laminates laid up with standard angle layers (0°, +45°, −45°, and 90°). The ASTM D 6484 standard test method was used. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being two ply orientations (0° and ±45°).
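    A minimal sketch of such a regression setup, with invented data (the paper's measurements are not reproduced here): laminate compressive strength is modeled as a linear function of the 0° and ±45° ply fractions, and the coefficients are estimated by least squares.

```python
import numpy as np

# Hypothetical ply fractions for eight laminates (rows) and a synthetic
# "true" strength relation; units and numbers are illustrative only.
p0  = np.array([0.50, 0.25, 0.25, 0.10, 0.40, 0.30, 0.20, 0.60])  # 0-deg fraction
p45 = np.array([0.25, 0.50, 0.25, 0.60, 0.40, 0.20, 0.50, 0.20])  # +/-45 fraction
strength = 900.0 * p0 + 400.0 * p45 + 150.0                        # noiseless demo

# Design matrix with intercept; solve the least-squares problem.
X = np.column_stack([np.ones_like(p0), p0, p45])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

def predict(f0: float, f45: float) -> float:
    """Predicted strength for a laminate with the given ply fractions."""
    return coef[0] + coef[1] * f0 + coef[2] * f45
```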

  9. Development of Compressive Failure Strength for Composite Laminate Using Regression Analysis Method

    International Nuclear Information System (INIS)

    Lee, Myoung Keon; Lee, Jeong Won; Yoon, Dong Hyun; Kim, Jae Hoon

    2016-01-01

    This paper provides compressive failure strength values for composite laminates developed using a regression analysis method. The composite material in this document is a carbon/epoxy unidirectional (UD) tape prepreg (Cycom G40-800/5276-1) cured at 350°F (177°C). The operating temperature range is −60°F to +200°F (−55°C to +95°C). A total of 56 compression tests were conducted on specimens from eight distinct laminates laid up with standard angle layers (0°, +45°, −45°, and 90°). The ASTM D 6484 standard test method was used. The regression analysis was performed with the response variable being the laminate ultimate fracture strength and the regressor variables being two ply orientations (0° and ±45°).

  10. Reliability Analysis Of Fire System On The Industry Facility By Use Fameca Method

    International Nuclear Information System (INIS)

    Sony T, D.T.; Situmorang, Johnny; Ismu W, Puradwi; Demon H; Mulyanto, Dwijo; Kusmono, Slamet; Santa, Sigit Asmara

    2000-01-01

    FAMECA is one of the analysis methods for determining system reliability in an industrial facility. The analysis follows a procedure of identifying component functions and determining failure modes, severity levels, and the effects of failure. The reliability value is determined from a combination of three factors: severity level, component failure value, and component criticality. A reliability analysis of the fire system of an industrial facility has been performed with the FAMECA method. The critical components identified are the pump, air release valve, check valve, manual test valve, isolation valve, control system, etc.
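    The combination of severity, failure likelihood, and criticality can be illustrated with a classic FMECA-style risk priority number. This is a generic sketch, not necessarily the paper's exact weighting scheme, and the component scores are invented.

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk priority number: product of the three 1-10 ratings;
    higher means more critical."""
    return severity * occurrence * detection

# Hypothetical ratings for a few fire-system components.
components = {
    "pump": (9, 4, 3),
    "check valve": (6, 3, 4),
    "isolation valve": (7, 2, 5),
}

# Rank components from most to least critical.
ranked = sorted(components, key=lambda c: rpn(*components[c]), reverse=True)
```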

  11. Improvement in post test accident analysis results prediction for the test no. 2 in PSB test facility by applying UMAE methodology

    International Nuclear Information System (INIS)

    Dubey, S.K.; Petruzzi, A.; Giannotti, W.; D'Auria, F.

    2006-01-01

    This paper deals with improving the post-test accident analysis results for test no. 2, 'Total loss of feed water with failure of HPIS pumps and operator actions on primary and secondary circuit depressurization', carried out on the PSB integral test facility in May 2005. This is one of the most complicated tests conducted in the PSB test facility. The prime objective of the test is to support verification of accident management strategies for NPPs and to verify the correct operation of safety systems that operate only during accidents. The objective of this analysis is to assess the capability to reproduce the phenomena occurring during the selected test and to quantify the accuracy of the calculation, qualitatively and quantitatively, for the best-estimate code RELAP5/mod3.3 by systematically applying the procedures of the Uncertainty Methodology based on Accuracy Extrapolation (UMAE), developed at the University of Pisa. To achieve these objectives, qualification of the test facility nodalization at both the 'steady-state level' and the 'on-transient level' is demonstrated. For 'steady-state level' qualification, compliance with the acceptance criteria established in UMAE has been checked for geometrical details and thermal-hydraulic parameters. The following steps have been performed for the qualitative 'on-transient level' qualification: visual comparison of experimental and calculated time trends of relevant parameters; comparison of the experimental and calculated time sequences of significant events; identification/verification against the CSNI phenomena validation matrix; and use of Phenomenological Windows (PhW), with identification of Key Phenomena and Relevant Thermal-hydraulic Aspects (RTA). A successful application of the qualitative process constitutes a prerequisite to the application of the quantitative analysis. 
For quantitative accuracy of code prediction Fast Fourier Transform Based

  12. Test-based approach to cable tray support system analysis and design: Behavior and test methods

    Energy Technology Data Exchange (ETDEWEB)

    Reigles, Damon G., E-mail: dreigles@engnovex.com [engNoveX, Inc., 19C Trolley Square, Wilmington, DE 19806 (United States); Brachmann, Ingo; Johnson, William H. [Bechtel Nuclear, Security & Environmental, 12011 Sunset Hills Rd, Suite 110, Reston, VA 20190 (United States); Gürbüz, Orhan [Tobolski Watkins Engineering, Inc., 4125 Sorrento Valley Blvd, Suite B, San Diego, CA 92121 (United States)

    2016-06-15

    Highlights: • Describes dynamic response behavior of unistrut type cable tray supports. • Summarizes observations from past full-scale shake table test programs. • Outlines testing methodologies necessary to identify key system parameters. - Abstract: Nuclear power plant safety-related cable tray support systems subjected to seismic loadings were originally understood and designed to behave as linear elastic systems. This behavioral paradigm persisted until the early 1980s when, due to evolution of regulatory criteria, some as-installed systems needed to be qualified to higher seismic motions than originally designed for. This requirement prompted a more in-depth consideration of the true seismic response behavior of support systems. Several utilities initiated extensive test programs, which demonstrated that trapeze strut-type cable tray support systems exhibited inelastic and nonlinear response behaviors with plastic hinging at the connections together with high damping due to bouncing of cables in the trays. These observations were used to demonstrate and justify the seismic adequacy of the aforementioned as-installed systems. However, no formalized design methodology or criteria were ever established to facilitate use of these test data for future evaluations. This paper assimilates and reviews the various test data and conclusions for the purpose of developing a design methodology for the seismic qualification of safety-related cable tray support systems.

  13. The analysis of beryllium-copper diffusion joint after HHF test

    International Nuclear Information System (INIS)

    Guiniatouline, R.N.; Mazul, I.V.; Rubkin, S.Y.

    1995-01-01

    The development of beryllium-copper joints that can withstand relevant ITER divertor conditions is one of the important tasks at present. One of the main problems for beryllium-copper joints is the intermetallic layers; the strength and lifetime of the joints depend significantly on the width and content of these layers. The objective of this work is to study the diffusion joint of TGP-56 beryllium to OFHC copper after thermal response and thermal cycling tests with a beryllium-copper mockup. The HHF tests were performed at an e-beam facility (EBTS, SNLA). The following methods were used for the analyses: roentgenographic analysis, X-ray spectrum analysis, and fractographic analysis. The investigation comprised analysis of the Be-Cu diffusion boundary, obtained at the cross-section of one of the tiles; analysis of the debonded surfaces of a few beryllium tiles and the corresponding copper parts; and analysis of the upper surface of one of the tiles after the HHF tests. The results of this work showed that the combined roentgenographic and elemental analyses indicated the following phases in the diffusion zone: Cu2Be (∼170 μm), CuBe (∼30 μm), CuBe2 (∼1 μm), and a solid solution of copper in beryllium. The phases Cu2Be and CuBe and the solid solution of copper in beryllium were identified using quantitative microanalysis, and the phases CuBe, CuBe2, Cu, and Be by roentgenographic analysis. The source of fracture (the initial crack) is located in the central part of the tiles; the crack was caused by residual stresses during cooling of the mock-up after fabrication and developed by slow elastic-plastic growth during thermal fatigue testing. The analysis gives important data about joint quality and may also be used for any type of joint and its comparison for ITER applications

  14. Development testing of the chemical analysis automation polychlorinated biphenyl standard analysis method during surface soils sampling at the David Witherspoon 1630 site

    International Nuclear Information System (INIS)

    Hunt, M.A.; Klatt, L.N.; Thompson, D.H.

    1998-02-01

    The Chemical Analysis Automation (CAA) project is developing standardized, software-driven, site-deployable robotic laboratory systems with the objective of lowering the per-sample analysis cost, decreasing sample turnaround time, and minimizing human exposure to hazardous and radioactive materials associated with DOE remediation projects. The first integrated system developed by the CAA project is designed to determine polychlorinated biphenyl (PCB) content in soil matrices. Demonstration and development testing of this system were conducted in conjunction with surface soil characterization activities at the David Witherspoon 1630 Site in Knoxville, Tennessee. The PCB system consists of five hardware standard laboratory modules (SLMs), one software SLM, the task sequence controller (TSC), and the human-computer interface (HCI). Four of the hardware SLMs comprised a four-channel Soxhlet extractor, a high-volume concentrator, a column cleanup, and a gas chromatograph. These SLMs performed the sample preparation and measurement steps within the total analysis protocol. The fifth hardware module was a robot that transports samples between the SLMs and the required consumable supplies to the SLMs. The software SLM is an automated data interpretation module that receives raw data from the gas chromatograph SLM and analyzes the data to yield the analyte information. The TSC is a software system that provides the scheduling, management of system resources, and coordination of all SLM activities. The HCI is a graphical user interface that presents the automated laboratory to the analyst in terms of the analytical procedures and methods. Human control of the automated laboratory is accomplished via the HCI. Sample information required for processing by the automated laboratory is entered through the HCI. Information related to the sample and the system status is presented to the analyst via graphical icons.

  15. Accident analysis of HANARO fuel test loop

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. Y.; Chi, D. Y

    1998-03-01

    A steady-state fuel test loop will be installed in HANARO to support the development and improvement of advanced fuels and materials through irradiation tests. The HANARO fuel test loop was designed to match CANDU and PWR fuel operating conditions. The accident analysis was performed with the RELAP5/MOD3 code based on the FTL system design, and the detailed engineering specifications of the in-pile test section and out-pile systems were determined. The accident analysis results for the FTL system can be used by fuel and material designers to plan irradiation testing programs. (author). 23 refs., 20 tabs., 178 figs.

  16. 77 FR 8865 - Recent Postings of Broadly Applicable Alternative Test Methods

    Science.gov (United States)

    2012-02-15

    ... Applicable Alternative Test Methods AGENCY: Environmental Protection Agency (EPA). ACTION: Notice of availability. SUMMARY: This notice announces the broadly applicable alternative test method approval decisions... INFORMATION CONTACT: An electronic copy of each alternative test method approval document is available on the...

  17. Study of the location of testing area in residual stress measurement by Moiré interferometry combined with hole-drilling method

    Science.gov (United States)

    Qin, Le; Xie, HuiMin; Zhu, RongHua; Wu, Dan; Che, ZhiGang; Zou, ShiKun

    2014-04-01

    This paper investigates the effect of the location of the testing area in residual stress measurement by Moiré interferometry combined with the hole-drilling method. The selection of the location of the testing area is analyzed from theory and experiment. In the theoretical study, the factors that affect the released surface radial strain εr were analyzed on the basis of the formulae of the hole-drilling method, and the relations between those factors and εr were established. By combining Moiré interferometry with the hole-drilling method, the residual stress of an interference-fit specimen was measured to verify the theoretical analysis. According to the analysis results, the testing area that minimizes the error of strain measurement is determined. Moreover, if the orientation of the maximum principal stress is known, the strain can be measured with higher precision by the Moiré interferometry method.

  18. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Science.gov (United States)

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
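    A minimal sketch of the gene-sampling approach with an absolute gene statistic (a simple mean-|statistic| set score is assumed here; the paper's exact statistics may differ): the null distribution is built by repeatedly drawing random gene sets of the same size.

```python
import numpy as np

def gene_set_p(stats: np.ndarray, member_idx: np.ndarray,
               n_perm: int = 2000, seed: int = 0) -> float:
    """Gene-sampling p-value for a gene set.

    The set score is the mean absolute gene statistic of its members;
    the null is formed by resampling random gene sets of the same size.
    """
    rng = np.random.default_rng(seed)
    observed = np.abs(stats[member_idx]).mean()
    null = np.array([
        np.abs(rng.choice(stats, size=member_idx.size, replace=False)).mean()
        for _ in range(n_perm)
    ])
    # Add-one correction avoids a p-value of exactly zero.
    return (1 + np.sum(null >= observed)) / (n_perm + 1)
```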

  19. A standard analysis method (SAM) for the automated analysis of polychlorinated biphenyls (PCBs) in soils using the chemical analysis automation (CAA) paradigm: validation and performance

    International Nuclear Information System (INIS)

    Rzeszutko, C.; Johnson, C.R.; Monagle, M.; Klatt, L.N.

    1997-10-01

    The Chemical Analysis Automation (CAA) program is developing a standardized modular automation strategy for chemical analysis. In this automation concept, analytical chemistry is performed with modular building blocks that correspond to individual elements of the steps in the analytical process. With a standardized set of behaviors and interactions, these blocks can be assembled in a 'plug and play' manner into a complete analysis system. These building blocks, which are referred to as Standard Laboratory Modules (SLMs), interface to a host control system that orchestrates the entire analytical process, from sample preparation through data interpretation. The integrated system is called a Standard Analysis Method (SAME). A SAME for the automated determination of polychlorinated biphenyls (PCBs) in soils, assembled in a mobile laboratory, is undergoing extensive testing and validation. The SAME consists of the following SLMs: a four-channel Soxhlet extractor, a high-volume concentrator, column clean up, a gas chromatograph, a PCB data interpretation module, a robot, and a human-computer interface. The SAME is configured to meet the requirements specified in the U.S. Environmental Protection Agency's (EPA) SW-846 Methods 3541/3620A/8082 for the analysis of PCBs in soils. The PCB SAME is described along with the developmental test plan. Performance data obtained during developmental testing are also discussed.

  20. Overview of multi-input frequency domain modal testing methods with an emphasis on sine testing

    Science.gov (United States)

    Rost, Robert W.; Brown, David L.

    1988-01-01

    An overview of the current state-of-the-art multiple-input, multiple-output modal testing technology is presented. A very brief review of current time-domain methods is given. A detailed review of frequency- and spatial-domain methods is presented, with an emphasis on sine testing.

  1. Solution identification and quantitative analysis of fiber-capacitive drop analyzer based on multivariate statistical methods

    Science.gov (United States)

    Chen, Zhe; Qiu, Zurong; Huo, Xinming; Fan, Yuming; Li, Xinghua

    2017-03-01

    A fiber-capacitive drop analyzer is an instrument which monitors a growing droplet to produce a capacitive opto-tensiotrace (COT). Each COT is an integration of fiber light-intensity signals and capacitance signals and can reflect the unique physicochemical properties of a liquid. In this study, we propose a solution identification and concentration quantification method based on multivariate statistical methods. Eight characteristic values are extracted from each COT. A series of COT characteristic values of training solutions at different concentrations composes a data library for that kind of solution. A two-stage linear discriminant analysis is applied to analyze the different solution libraries and establish discriminant functions. Test solutions can be discriminated by these functions. After determining the variety of a test solution, the Spearman correlation test and principal components analysis are used to filter and reduce the dimensions of the eight characteristic values, producing a new representative parameter. A cubic spline interpolation function is built between the parameters and concentrations, from which we can calculate the concentration of the test solution. Methanol, ethanol, n-propanol, and saline solutions are taken as experimental subjects in this paper. For each solution, nine or ten different concentrations are chosen as the standard library, and the other two concentrations compose the test group. Using the methods mentioned above, all eight test solutions are correctly identified and the average relative error of the quantitative analysis is 1.11%. The proposed method is feasible; it enlarges the applicable scope of recognizing liquids based on the COT and improves the precision of concentration quantification.
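    The final quantification step can be sketched on its own. The parameter values below are invented for illustration; only the spline mapping from the representative COT parameter to concentration follows the described approach.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical training library: known concentrations (%) and the
# representative COT parameter measured at each one.
conc  = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0])
param = np.array([1.00, 1.08, 1.15, 1.31, 1.62, 1.90, 2.21])

# Build the inverse map: COT parameter -> concentration.
to_conc = CubicSpline(param, conc)

# Estimate the concentration of an unknown sample from its parameter.
estimate = float(to_conc(1.45))
```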

  2. Standard Test Method for Contamination Outgassing Characteristics of Spacecraft Materials

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2009-01-01

    1.1 This test method covers a technique for generating data to characterize the kinetics of the release of outgassing products from materials. This technique will determine both the total mass flux evolved by a material when exposed to a vacuum environment and the deposition of this flux on surfaces held at various specified temperatures. 1.2 This test method describes the test apparatus and related operating procedures for evaluating the total mass flux that is evolved from a material being subjected to temperatures that are between 298 and 398 K. Pressures external to the sample effusion cell are less than 7 × 10⁻³ Pa (5 × 10⁻⁵ torr). Deposition rates are measured during material outgassing tests. A test procedure for collecting data and a test method for processing and presenting the collected data are included. 1.3 This test method can be used to produce the data necessary to support mathematical models used for the prediction of molecular contaminant generation, migration, and deposition. 1.4 Al...

  3. Nonlinear analysis of rc members using hardening plasticity and arc-length method

    International Nuclear Information System (INIS)

    Memon, B.A.; Su, X.

    2005-01-01

    A general framework for three-dimensional nonlinear finite element analysis of reinforced concrete is presented. To make the computations robust and reliable and the analysis more realistic, hardening plasticity with the arc-length method as a path-following technique is used to model the material-nonlinear behavior of reinforced concrete. Hardening plasticity has the advantage over other plasticity formulations that it allows the framework to be extended to the analysis of the softening region. Concrete is treated as an eight-node isoparametric element, and reinforcement is modeled as a line element embedded in the body of the isoparametric concrete element. Different methods of scaling stresses back to the yield surface are tested and their performance is compared. Severe convergence problems are encountered as the solution process approaches singularity points, especially limit points, along the load-displacement curve in nonlinear analysis. To overcome this problem, the cylindrical arc-length method is used. The method not only handles singularity points but also deals with the load-step-size problem. While marching along the load-displacement path, singularity points are identified by a singularity indicator; for this purpose, various singularity test functions are implemented. Although most of the individual techniques are already well established, the framework is a completely new one. A computer implementation of the proposed framework is written in FORTRAN. Numerical examples are solved to illustrate the validity of the proposed framework, and the outcome is compared with experimental observations. The two sets of results are found to be in good agreement. (author)

  4. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor, k_eff, has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k_eff values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments and the development of theoretical methods to predict the experimental observables

  5. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

    The 252Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor k_eff has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k_eff values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments oriented toward particular applications, including dynamic experiments, and the development of theoretical methods to predict the experimental observables

  6. Testing for Granger Causality in the Frequency Domain: A Phase Resampling Method.

    Science.gov (United States)

    Liu, Siwei; Molenaar, Peter

    2016-01-01

    This article introduces phase resampling, an existing but rarely used surrogate data method for making statistical inferences of Granger causality in frequency domain time series analysis. Granger causality testing is essential for establishing causal relations among variables in multivariate dynamic processes. However, testing for Granger causality in the frequency domain is challenging due to the nonlinear relation between frequency domain measures (e.g., partial directed coherence, generalized partial directed coherence) and time domain data. Through a simulation study, we demonstrate that phase resampling is a general and robust method for making statistical inferences even with short time series. With Gaussian data, phase resampling yields satisfactory type I and type II error rates in all but one condition we examine: when a small effect size is combined with an insufficient number of data points. Violations of normality lead to slightly higher error rates but are mostly within acceptable ranges. We illustrate the utility of phase resampling with two empirical examples involving multivariate electroencephalography (EEG) and skin conductance data.
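The core of the phase resampling method is generating surrogate series that keep the amplitude spectrum (and hence the autocovariance) of the data while randomizing the Fourier phases, which destroys any directed (Granger-causal) structure under the null. A minimal single-series sketch with NumPy (the full method applies this jointly to multivariate data):

```python
import numpy as np

def phase_surrogate(x, rng=None):
    """Phase-randomized surrogate of a 1-D series: same amplitude
    spectrum as x, uniformly random phases. DC and (for even length)
    Nyquist bins are kept real so the inverse transform is real."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    n = x.size
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0              # DC component stays real
    if n % 2 == 0:
        phases[-1] = 0.0         # Nyquist bin stays real for even n
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=n)
```

A Granger-causality statistic computed on many such surrogates provides the null distribution against which the observed statistic is compared.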

  7. Cost-Effectiveness Analysis of Helicobacter pylori Diagnostic Methods in Patients with Atrophic Gastritis

    Directory of Open Access Journals (Sweden)

    Fumio Omata

    2017-01-01

    Full Text Available Background. There are several diagnostic methods for Helicobacter pylori (H. pylori) infection. A cost-effectiveness analysis is needed to decide on the optimal diagnostic method. The aim of this study was to determine a cost-effective diagnostic method in patients with atrophic gastritis (AG). Methods. A decision-analysis model including seven diagnostic methods was constructed for patients with AG diagnosed by esophagogastroduodenoscopy. Expected values of cost and effectiveness were calculated for each test. Results. If the prevalence of H. pylori in the patients with AG is 85% and that of CAM-resistant H. pylori is 30%, histology, stool H. pylori antigen (SHPAg), bacterial culture (BC), and urine H. pylori antibody (UHPAb) were dominated by serum H. pylori IgG antibody (SHPAb), rapid urease test (RUT), and urea breath test (UBT). Among the three undominated methods, the incremental cost-effectiveness ratios (ICERs) of RUT versus SHPAb and of UBT versus RUT were $214 and $1914, respectively. If the prevalence of CAM-sensitive H. pylori was less than 55%, BC was not dominated, but its H. pylori eradication success rate was 0.86. Conclusions. RUT was the most cost-effective method at the current prevalence of CAM-resistant H. pylori. BC could not be selected because of its poor effectiveness even when CAM-resistant H. pylori exceeded 45%.
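The dominance and ICER logic used in such decision-analysis models can be sketched briefly. This is a generic illustration with hypothetical numbers, not the study's model, and it applies only simple (strong) dominance, ignoring extended dominance:

```python
def cost_effectiveness(options):
    """options: {name: (cost, effectiveness)}. Removes strongly
    dominated options (costlier but no more effective) and returns
    [(name, icer_vs_previous_frontier_option)] sorted by cost.
    The cheapest option has no comparator, so its ICER is None."""
    items = sorted(options.items(), key=lambda kv: kv[1][0])
    frontier, best_eff = [], float("-inf")
    for name, (cost, eff) in items:
        if eff <= best_eff:
            continue  # dominated: costs more, no more effective
        frontier.append((name, cost, eff))
        best_eff = eff
    result, prev = [], None
    for name, cost, eff in frontier:
        icer = None if prev is None else (cost - prev[0]) / (eff - prev[1])
        result.append((name, icer))
        prev = (cost, eff)
    return result
```

For example, with hypothetical tests A ($100, 0.80), B ($120, 0.90), and C ($150, 0.85), C is dominated by B, and B's ICER versus A is $200 per unit of effectiveness.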

  8. DWPF Sample Vial Insert Study-Statistical Analysis of DWPF Mock-Up Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.P. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1997-09-18

    This report is prepared as part of Technical/QA Task Plan WSRC-RP-97-351, which was issued in response to Technical Task Request HLW/DWPF/TTR-970132 submitted by DWPF. Presented in this report is a statistical analysis of DWPF Mock-up test data for evaluation of two new analytical methods which use insert samples from the existing HydragardTM sampler. The first is a new hydrofluoric acid based method called the Cold Chemical Method (Cold Chem) and the second is a modified fusion method. Either new DWPF analytical method could result in a two- to three-fold improvement in sample analysis time. Both new methods use the existing HydragardTM sampler to collect a smaller insert sample from the process sampling system. The insert testing methodology applies to the DWPF Slurry Mix Evaporator (SME) and the Melter Feed Tank (MFT) samples. The insert sample is named after the initial trials, which placed the container inside the sample (peanut) vials. Samples in small 3 ml containers (inserts) are analyzed by either the Cold Chem method or a modified fusion method. The current analytical method uses a HydragardTM sample station to obtain nearly full 15 ml peanut vials. The samples are prepared for Inductively Coupled Plasma (ICP) analysis by a multi-step process of drying, vitrification, grinding and finally dissolution by either mixed acid or fusion. In contrast, the insert sample is placed directly in the dissolution vessel, thus eliminating the drying, vitrification and grinding operations for the Cold Chem method. Although the modified fusion still requires drying and calcine conversion, the process is rapid due to the decreased sample size and because no vitrification step is required. A slurry feed simulant material was acquired from the TNX pilot facility from the test run designated as PX-7. The Mock-up test data were gathered on the basis of a statistical design presented in SRT-SCS-97004 (Rev. 0). Simulant PX-7 samples were taken in the DWPF Analytical Cell Mock

  9. Validation of qualitative microbiological test methods

    NARCIS (Netherlands)

    IJzerman-Boon, Pieta C.; van den Heuvel, Edwin R.

    2015-01-01

    This paper considers a statistical model for the detection mechanism of qualitative microbiological test methods with a parameter for the detection proportion (the probability to detect a single organism) and a parameter for the false positive rate. It is demonstrated that the detection proportion
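The two-parameter detection mechanism described here can be illustrated with one common parameterization (an assumption for illustration, not necessarily the exact model of the cited paper): each of N organisms is detected independently with probability p, and the test can also fire spuriously with false positive rate q. If N is Poisson with mean lam, then E[(1 - p)^N] = exp(-p * lam), giving a closed form for the positive-outcome probability:

```python
import math

def p_positive(lam, p_detect, fp_rate):
    """Probability that a qualitative microbiological test is positive
    when the sample contains Poisson(lam) organisms, each detected
    independently with probability p_detect, and the test has false
    positive rate fp_rate:
        P(positive) = 1 - (1 - fp_rate) * exp(-p_detect * lam)
    A sketch of one common parameterization of such detection models."""
    return 1.0 - (1.0 - fp_rate) * math.exp(-p_detect * lam)
```

At lam = 0 the expression reduces to the false positive rate, and for heavily contaminated samples it approaches 1, which is the qualitative behavior a validation study checks against dilution-series data.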

  10. Standard test methods for chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade mixed oxides ((U, Pu)O2)

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 These test methods cover procedures for the chemical, mass spectrometric, and spectrochemical analysis of nuclear-grade mixed oxides, (U, Pu)O2, powders and pellets to determine compliance with specifications. 1.2 The analytical procedures appear in the following order: Sections Uranium in the Presence of Pu by Potentiometric Titration Plutonium by Controlled-Potential Coulometry Plutonium by Amperometric Titration with Iron (II) Nitrogen by Distillation Spectrophotometry Using Nessler Reagent 7 to 14 Carbon (Total) by Direct Combustion-Thermal Conductivity 15 to 26 Total Chlorine and Fluorine by Pyrohydrolysis 27 to 34 Sulfur by Distillation-Spectrophotometry 35 to 43 Moisture by the Coulometric, Electrolytic Moisture Analyzer 44 to 51 Isotopic Composition by Mass Spectrometry Rare Earths by Copper Spark Spectroscopy 52 to 59 Trace Impurities by Carrier Distillation Spectroscopy 60 to 69 Impurities by Spark-Source Mass Spectrography 70 to 76 Total Gas in Reactor-Grade Mixed Dioxide P...

  11. Round Robin Posttest analysis of a 1/10-scale Steel Containment Vessel Model Test

    International Nuclear Information System (INIS)

    Komine, Kuniaki; Konno, Mutsuo

    1999-01-01

    NUPEC and the U.S. Nuclear Regulatory Commission (USNRC) have been jointly sponsoring the 'Structural Behavior Test' at Sandia National Laboratories (SNL) under the Cooperative Containment Research Program. One of these tests used a mixed-scale SCV model, 1/10 scale in geometry and 1/4 scale in shell thickness. Round Robin analyses of the 1/10-scale Steel Containment Vessel (SCV) Model Test were carried out among seven organizations from five countries to identify an adequate analytical method. As one of the sponsors, the Nuclear Power Engineering Corporation (NUPEC) filled the important role of performing a posttest analysis of the SCV model. This paper describes NUPEC's analytical results in the Round Robin posttest analysis. (author)

  12. STEM - software test and evaluation methods: fault detection using static analysis techniques

    International Nuclear Information System (INIS)

    Bishop, P.G.; Esp, D.G.

    1988-08-01

    STEM is a software reliability project with the objective of evaluating a number of fault detection and fault estimation methods which can be applied to high integrity software. This report gives some interim results of applying both manual and computer-based static analysis techniques, in particular SPADE, to an early CERL version of the PODS software containing known faults. The main results of this study are as follows. The scope for thorough verification is determined by the quality of the design documentation; documentation defects become especially apparent when verification is attempted. For well-defined software, the thoroughness of SPADE-assisted verification for detecting a large class of faults was successfully demonstrated. For imprecisely-defined software (not recommended for high-integrity systems) the use of tools such as SPADE is difficult and inappropriate. Analysis and verification tools are helpful because of their reliability and thoroughness; however, they are designed to assist, not replace, a human in validating software. Manual inspection can still reveal errors (such as errors in specification and errors of transcription of system constants) which current tools cannot detect. There is a need for tools to automatically detect typographical errors in system constants, for example by reporting outliers to patterns. To obtain the maximum benefit from advanced tools, they should be applied during software development (when verification problems can be detected and corrected) rather than retrospectively. (author)
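The suggestion of "reporting outliers to patterns" in system constants can be sketched with a crude statistical check. This is an illustrative assumption about how such a tool might work, not a description of SPADE: flag constants whose modified z-score (based on the median absolute deviation, which is robust to the outlier itself) exceeds a threshold.

```python
import statistics

def flag_outlier_constants(constants, threshold=3.5):
    """Flag numeric constants far from the group's median using the
    modified z-score 0.6745 * (x - median) / MAD. A rough sketch of
    'reporting outliers to patterns'; a real tool would also compare
    textual patterns such as digit counts and units."""
    med = statistics.median(constants)
    mad = statistics.median(abs(c - med) for c in constants)
    if mad == 0:
        return []  # values (nearly) identical: nothing to flag
    return [c for c in constants
            if abs(0.6745 * (c - med) / mad) > threshold]
```

A transcribed table of calibration constants around 1.0 containing a stray 50.0 (e.g. a misplaced decimal point) would be reported, while ordinary spread is left alone.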

  13. 46 CFR 57.06-3 - Method of performing production testing.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false Method of performing production testing. 57.06-3 Section 57.06-3 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) MARINE ENGINEERING WELDING AND BRAZING Production Tests § 57.06-3 Method of performing production testing. (a) Except as...

  14. Basic methods of isotope analysis

    International Nuclear Information System (INIS)

    Ochkin, A.V.; Rozenkevich, M.B.

    2000-01-01

    The bases of the most widely applied methods of isotope analysis are briefly presented. The possibilities and analytical characteristics of mass-spectrometric, spectral, radiochemical and special methods of isotope analysis, including the application of magnetic resonance, chromatography and refractometry, are considered [ru

  15. Intercomparison of analysis methods for seismically isolated nuclear structures. Part 1: Advanced test data and numerical methods. Working material

    International Nuclear Information System (INIS)

    1993-01-01

    The purpose of the meeting was to review proposed contributions from CRP participating organizations, to discuss in detail the experimental data on seismic isolators, to review the numerical methods for the analysis of the seismic isolators, and to perform a first comparison of the calculation results. The aim of the CRP was to validate reliable numerical methods used for detailed evaluation of the dynamic behaviour both of isolation devices and of isolated nuclear structures of different nuclear power plant types. The full maturity of seismic isolation for nuclear applications was stressed, as well as the excellent behaviour of isolated structures during the recent earthquakes in Japan and the USA. Participants from Italy, the USA, Japan, the Russian Federation, the Republic of Korea, the United Kingdom, India and the European Commission presented overview papers on their present programs and the status of their contributions to the CRP

  16. Intercomparison of analysis methods for seismically isolated nuclear structures. Part 1: Advanced test data and numerical methods. Working material

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-07-01

    The purpose of the meeting was to review proposed contributions from CRP participating organizations, to discuss in detail the experimental data on seismic isolators, to review the numerical methods for the analysis of the seismic isolators, and to perform a first comparison of the calculation results. The aim of the CRP was to validate reliable numerical methods used for detailed evaluation of the dynamic behaviour both of isolation devices and of isolated nuclear structures of different nuclear power plant types. The full maturity of seismic isolation for nuclear applications was stressed, as well as the excellent behaviour of isolated structures during the recent earthquakes in Japan and the USA. Participants from Italy, the USA, Japan, the Russian Federation, the Republic of Korea, the United Kingdom, India and the European Commission presented overview papers on their present programs and the status of their contributions to the CRP.

  17. Regional analysis of annual maximum rainfall using TL-moments method

    Science.gov (United States)

    Shabri, Ani Bin; Daud, Zalina Mohd; Ariff, Noratiqah Mohd

    2011-06-01

    Information related to the distribution of rainfall amounts is of great importance for the design of water-related structures. One of the concerns of hydrologists and engineers is the choice of probability distribution for modeling regional data. In this study, a novel approach to regional frequency analysis using L-moments is revisited. Subsequently, an alternative regional frequency analysis using the TL-moments method is employed. The results from both methods were then compared. The analysis was based on daily annual maximum rainfall data from 40 stations in Selangor, Malaysia. TL-moments for the generalized extreme value (GEV) and generalized logistic (GLO) distributions were derived and used to develop the regional frequency analysis procedure. The TL-moment ratio diagram and the Z-test were employed in determining the best-fit distribution. Comparison between the two approaches showed that the L-moments and TL-moments produced equivalent results. The GLO and GEV distributions were identified as the most suitable for representing the statistical properties of extreme rainfall in Selangor. Monte Carlo simulation was used for performance evaluation, and it showed that the method of TL-moments was more efficient for lower quantile estimation compared with the L-moments.
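The sample L-moments underlying this kind of frequency analysis are simple to compute from probability-weighted moments; TL-moments generalize them by trimming the extreme order statistics before weighting (the trimmed version is omitted here for brevity). A minimal sketch of the ordinary (untrimmed) estimators:

```python
def sample_l_moments(data):
    """First three sample L-moments via the unbiased
    probability-weighted moments b0, b1, b2 (0-based order statistics):
        l1 = b0 (location), l2 = 2*b1 - b0 (scale),
        l3 = 6*b2 - 6*b1 + b0 (l3/l2 is the L-skewness).
    TL-moments would additionally trim the smallest/largest values."""
    x = sorted(data)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    b2 = sum(i * (i - 1) * x[i] for i in range(n)) / (n * (n - 1) * (n - 2))
    return b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
```

For a symmetric sample such as [1, 2, 3, 4, 5], the third L-moment vanishes, as expected for zero L-skewness; annual-maximum rainfall series typically show positive L-skewness instead.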

  18. Gene set analysis: limitations in popular existing methods and proposed improvements.

    Science.gov (United States)

    Mishra, Pashupati; Törönen, Petri; Leino, Yrjö; Holm, Liisa

    2014-10-01

    Gene set analysis is the analysis of a set of genes that collectively contribute to a biological process. Most popular gene set analysis methods are based on empirical P-values that require large numbers of permutations. Despite numerous gene set analysis methods developed in the past decade, the most popular methods still suffer from serious limitations. We present a gene set analysis method (mGSZ) based on the Gene Set Z-scoring function (GSZ) and asymptotic P-values. Asymptotic P-value calculation requires fewer permutations, and thus speeds up the gene set analysis process. We compare the GSZ scoring function with seven popular gene set scoring functions and show that GSZ stands out as the best scoring function. In addition, we show improved performance of the GSA method when the max-mean statistic is replaced by the GSZ scoring function. We demonstrate the importance of both gene and sample permutations by showing the consequences in the absence of one or the other. A comparison of asymptotic and empirical methods of P-value estimation demonstrates a clear advantage of asymptotic P-values over empirical P-values. We show that mGSZ outperforms the state-of-the-art methods based on two different evaluations. We compared mGSZ results with permutation and rotation tests and show that rotation does not improve our asymptotic P-values. We also propose well-known asymptotic distribution models for three of the compared methods. mGSZ is available as an R package from cran.r-project.org.
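The basic idea of a gene set z-score can be sketched without the full GSZ machinery. The following is a deliberately simplified illustration, not the mGSZ scoring function: compare the mean score of the set's genes with the mean and variance expected for a random set of the same size drawn without replacement from all genes.

```python
import math

def gene_set_z(scores, set_idx):
    """Simplified gene-set z-score (a sketch, not mGSZ's GSZ):
    standardize the set's mean gene score against the exact mean and
    variance of the mean of m draws without replacement from the full
    score list (finite-population correction (n - m)/(n - 1))."""
    n, m = len(scores), len(set_idx)
    mu = sum(scores) / n
    var_pop = sum((s - mu) ** 2 for s in scores) / n
    set_mean = sum(scores[i] for i in set_idx) / m
    var_mean = var_pop / m * (n - m) / (n - 1)
    return (set_mean - mu) / math.sqrt(var_mean)
```

A set enriched for high-scoring genes yields a large positive z, and asymptotic normal P-values can then replace expensive empirical permutation counts, which is the speed advantage the abstract describes.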

  19. TRACG post-test analysis of panthers prototype tests of SBWR passive containment condenser

    International Nuclear Information System (INIS)

    Fitch, J.R.; Billig, P.F.; Abdollahian, D.; Masoni, P.

    1997-01-01

    As part of the validation effort for application of the TRACG code to the Simplified Boiling Water Reactor (SBWR), calculations have been performed for the various test facilities which are part of the SBWR design and technology certification program. These calculations include post-test calculations for tests in the PANTHERS Passive Containment Condenser (PCC) test program. Sixteen tests from the PANTHERS/PCC test matrix were selected for post-test analysis. This set includes three steady-state pure-steam tests, nine steady-state steam-air tests, and four transient tests. The purpose of this paper is to present and discuss the results of the post-test analysis. The authors include a brief description of the PANTHERS/PCC test facility and test matrix, a description of the PANTHERS/PCC post-test TRACG model and the manner in which the various types of tests in the post-test evaluation were simulated, and a presentation of the results of the TRACG simulation.

  20. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    El-Bordany Ayman; Yun, Won Young

    2014-01-01

    It reads inputs, computes new states, and updates outputs for each scan cycle. The Korea Nuclear Instrumentation and Control System (KNICS) project has recently developed a fully digitalized Reactor Protection System (RPS) based on PLD. As a digital system, this RPS is equipped with dedicated software. The reliability of this software is crucial to NPP safety, where a malfunction may cause irreversible consequences and affect the whole system as a Common Cause Failure (CCF). To guarantee the reliability of the whole system, the reliability of this software needs to be quantified. There are three representative methods for software reliability quantification, namely the Verification and Validation (V and V) quality-based method, the Software Reliability Growth Model (SRGM), and the test-based method. An important concept of the guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. Actually, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here the internal state of the software, which can be changed by past inputs, is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of software, preceding research insists that the inputs should be in trajectory form. However, in this approach, there are two critical problems. One is the length of the trajectory input. The input trajectory should be long enough to cover the failure mechanism, but how long is long enough is not clear. What is worse, to cover some accident scenarios, one set of inputs should represent dozens of hours of successive values
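The scan-cycle execution model and the Context of Software (CoS) idea can be made concrete with a toy sketch. The `trip_latch` step function below is a hypothetical example of protection logic, not the KNICS RPS software: because the latch state persists across cycles, the same input value produces different outputs depending on the input history, which is why single-point test inputs are insufficient and trajectory inputs are needed.

```python
def run_trajectory(step, initial_state, inputs):
    """Drive a scan-cycle program through a trajectory of inputs.
    step maps (state, input) -> (new_state, output); the evolving
    state is the 'Context of Software' the abstract describes."""
    state, outputs = initial_state, []
    for inp in inputs:
        state, out = step(state, inp)
        outputs.append(out)
    return outputs

def trip_latch(tripped, reading, limit=100):
    """Hypothetical protection-logic step: latch a trip once any
    reading exceeds the limit, and stay tripped thereafter."""
    tripped = tripped or reading > limit
    return tripped, tripped
```

Feeding the trajectory [50, 120, 60] yields outputs [False, True, True]: the reading 60 is below the limit, yet the output is True because an earlier input changed the CoS.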